bbullock wrote:
> The problem that keeps me awake at night now is that we have
> manufacturing machines wanting to use TSM for their backups. In the past
> they have used small DLT libraries locally attached to the host, but that's
> labor intensive and they want to take advantage of o
bbullock <[EMAIL PROTECTED]> writes:
> Jeff,
> You hit the nail on the head of what is the biggest problem I face
> with TSM today. Excuse me for being long-winded, but let me explain the boat
> I'm in, and how it relates to many small files.
> Any other options or sugges
> -Original Message-
> From: bbullock [SMTP:[EMAIL PROTECTED]]
> Sent: Monday, February 26, 2001 4:32 PM
> To: [EMAIL PROTECTED]
> Su
Ben Bullock
UNIX Systems Manager
(208) 368-4287
> -Original Message-
> From: Lambelet,Rene,VEVEY,FC-SIL/INF.
> [mailto:[EMAIL PROTECTED]]
> Sent: Monday, February 26, 2001 1:13 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Performance Large Files vs. Small Files
>
> -Original Message-
> From: bbullock [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, February 20, 2001 11:22 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Performance Large Files vs. Small Files
>
> Jeff,
> You hit the nail on the head of what is the biggest problem I fa
[mailto:[EMAIL PROTECTED]] On Behalf Of
Thomas A. La Porte
Sent: Wednesday, February 21, 2001 2:39 PM
To: [EMAIL PROTECTED]
Subject: TSM Pricing [was Re: Performance Large Files vs. Small Files]
We, too, were a bit rudely awakened by this new pricing structure
when we purchased our upgrade to 4.1
Jeff,
Regarding solution #4, the last time the performance of lots of
small files was discussed on the list, I thought that there might be
an opportunity here for someone to make an add-on product
(SSSI maybe?). This product would do client-side aggregation with
tar or zip as a frontend and tsm out t
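As a rough illustration only, a minimal wrapper along these lines could do that aggregation by hand today. The paths are made up, it assumes the backup-archive client command "dsmc" is installed and configured, and Python is just a convenient way to sketch it:

    import os
    import subprocess
    import tarfile

    SMALL_FILE_DIR = "/data/smallfiles"       # hypothetical tree full of tiny files
    AGGREGATE_TAR = "/stage/smallfiles.tar"   # one large object to hand to TSM instead

    # Roll the small files into a single tar file so the server stores one object
    # (one database entry) rather than millions of tiny ones.
    with tarfile.open(AGGREGATE_TAR, "w") as tar:
        tar.add(SMALL_FILE_DIR, arcname=os.path.basename(SMALL_FILE_DIR))

    # Back up only the aggregate with a selective backup.
    subprocess.run(["dsmc", "selective", AGGREGATE_TAR], check=True)

The obvious trade-off is restores: you get the whole tar back and have to pull individual files out of it yourself.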
> From: "bbullock" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: 21 February 2001 1:16
> Subject: Re: Performance Large Files vs. Small Files
>
>
> > Point well taken, Steve. Your classification of the nature of the
> > data is basically correct except for a twist. On t
t we have paid in the past for the same
>functionality.
>
>
>Ben Bullock
>UNIX Systems Manager
>
>
>> -Original Message-
>> From: Suad Musovich [mailto:[EMAIL PROTECTED]]
>> Sent: Wednesday, February 21, 2001 4:37 AM
>> To: [EMAIL PROTECTED
> Sent: Wednesday, February 21, 2001 4:37 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Performance Large Files vs. Small Files
>
>
> On Tue, Feb 20, 2001 at 03:21:34PM -0700, bbullock wrote:
> ...
> > How many files? Well, I have one Solaris-based host that generates
> > 500,000 new files a day in a
On Tue, Feb 20, 2001 at 03:21:34PM -0700, bbullock wrote:
...
> How many files? Well, I have one Solaris-based host that generates
> 500,000 new files a day in a deeply nested directory structure (about 10
> levels deep with only about 5 files per directory). Before I am asked, "no,
> they
Hello,
did you check the -fromdate and -fromtime (and -totime and -todate) restore
parameters?
Regards
Petr
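For example, on the command-line client something along these lines limits a restore to files backed up within a window; the path here is made up, and the date format depends on the client's DATEFORMAT/locale settings:

    dsmc restore "/data/smallfiles/*" -subdir=yes -fromdate=02/20/2001 -todate=02/21/2001

-fromtime and -totime can narrow the window further within those dates.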
- Original Message -
From: "bbullock" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: 21 February 2001 1:16
Subject: Re: Performance Large Files vs. Small Files
> To: [EMAIL PROTECTED]
> Subject: Re: Performance Large Files vs. Small Files
>
>
> Ben
> >>> bbullock <[EMAIL PROTECTED]> 21/02/2001 8:21:34 >>>
> >>>Big Snip
> This one nightmare host now has over 20 million files (and an
Ben
>>> bbullock <[EMAIL PROTECTED]> 21/02/2001 8:21:34 >>>
>>>Big Snip
This one nightmare host now has over 20 million files (and an
unknown number of directories) across 10 filesystems. We have found from
experience that any more than about 500,000 files in any filesystem means a
full
IX Consultant / Senior Storage Administrator (TSM)
> ITS Unix Systems Support
> Coles Myer Ltd.
-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 21 February 2001 9:22
To: [EMAIL PROTECTED]
Subject: Re: Performance Large Files vs. Small Files
Je
> -Original Message-
> From: Jeff Connor [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, February 15, 2001 12:01 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Performance Large Files vs. Small Files
>
>
> Diana,
>
> Sorry to chime in late on this but you've hit
Also, what size (and how many) processors are in the Windows NT/W2K
machine?
I have seen non-server Windows NT clients halve their processing time when a
user upgrades from Pentium to Pentium II or III, etc.
Client processor muscle matters! My first Mac clients only got
25 MB/hour!
> Sent: Thursday, February 15, 2001 8:01 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Performance Large Files vs. Small Files
>
> Diana,
>
> Sorry to chime in late on this but you've hit a subject I've been
> struggling with for quite some time.
>
Diana,
Sorry to chime in late on this but you've hit a subject I've been
struggling with for quite some time.
We have some pretty large Windows NT file and print servers using MSCS.
Each server has lots of small files (1.5 to 2.5 million) and total disk
space (the D: drive) between 150GB and 200GB
Subject: Re: Performance Large Files vs. Small Files
...
>We are trying to complete an incremental backup of an NT Server with about 3
>million small objects (according to TSM) in many, many folders and it can't
>even get done in 12 hours.
To the excellent responses alread
>I believe you change the buffer pool size in the dsmserv.opt file and use an
>entry as follows:
>BUFPOOLSIZE 16384
>The accepted way to do this is to note your current buffer pool size and then
>double it. Watch your cache hit percentage for a day or two and then double
>again to achieve the opti
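As a made-up illustration of that doubling approach: if dsmserv.opt currently says
BUFPOOLSIZE 16384, change it to BUFPOOLSIZE 32768, let the server pick up the change,
and watch the cache hit percentage reported by "q db f=d" for a day or two; repeat the
doubling until the hit percentage stops climbing.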
To: [EMAIL PROTECTED]
cc:(bcc: George Lesho/Partners/AFC)
Fax to:
Subject: Re: Performance Large Files vs. Small Files
After performing the command _q db f=d_ you will see a screen like this:
Available Space (MB): 12,000
Assigned Capacity (MB): 12,000
...
>We are trying to complete an incremental backup of an NT Server with about 3
>million small objects (according to TSM) in many, many folders and it can't
>even get done in 12 hours.
To the excellent responses already posted regarding the TSM db being in the
middle of the operations, I could onl
Diana,
At the start of the backup session the server sends a list of all
'current' files to the client. If, on the client, the
RESOURCEUTILIZATION parameter is set to a number greater than 2 then the
task of processing the list may be split over a number of processes (and
the task of backing-up
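For reference, RESOURCEUTILIZATION lives in the client options file (dsm.opt on Windows,
the dsm.sys stanza on UNIX clients); the value below is only an example, and the usual
documented range is roughly 1-10:

    RESOURCEUTILIZATION 5

Higher values let the client start more producer/consumer sessions, at the cost of more
concurrent sessions against the server.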
>I'm assuming that the MASTER LIST of these 1000 files, their creation, modify
>date, # of generations kept, how many there currently are, etc. are kept in the
>TSM Server Database. If we watch an incremental backup, via the GUI, we are
>seeing that for this client the COMPARE time is horrible.
Diana,
Each file backed up requires a database update to record the backup. If
each file is large the transaction rate is slower than if each file is
small due to the time it takes to send the file to TSM.
The only optimization is to tune the database & logs by spreading the
load across multipl
> Does anyone have a TECHNICAL reason why I can back up 30GB of 2GB files
> that are stored in one directory so much faster than 30GB of 2KB files
> that are stored in a bunch of directories?
>
> I know that this is the case, I just would like to find out why. If the
> amount of data is the same a
Imagine it strictly from a database perspective.
Scenario 1: 15 files, 2GB each
Scenario 2: 15728640 files, 2KB each
In scenario one, your loop is essentially like this:
numfiles = 15;
for (i = 0; i < numfiles; i++) {
    insert file characteristics into database;
    request data be sent;
}
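A back-of-the-envelope model (Python, with made-up per-file overhead and throughput
numbers, purely to show the shape of the problem) makes the contrast obvious:

    # Assumed, illustrative constants: per-file database/compare overhead vs. raw data rate.
    PER_FILE_OVERHEAD_S = 0.01   # seconds of metadata work per file (assumption)
    THROUGHPUT_MB_S = 10.0       # MB/s the data path can sustain (assumption)

    def backup_time(num_files, file_size_mb):
        transfer_s = num_files * file_size_mb / THROUGHPUT_MB_S
        overhead_s = num_files * PER_FILE_OVERHEAD_S
        return transfer_s, overhead_s

    for label, n, size_mb in [("15 x 2GB", 15, 2048), ("15,728,640 x 2KB", 15_728_640, 0.002)]:
        transfer_s, overhead_s = backup_time(n, size_mb)
        print(f"{label}: transfer {transfer_s/3600:.1f} h, per-file overhead {overhead_s/3600:.1f} h")

With those invented numbers the 2GB files are almost all data transfer and negligible
overhead, while the 2KB files spend the overwhelming majority of their elapsed time on
per-file bookkeeping.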
After performing the command _q db f=d_ you will see a screen like this:
Available Space (MB): 12,000
Assigned Capacity (MB): 12,000
Maximum Extension (MB): 0
Maximum Reduction (MB): 2,684
Page Size (bytes): 4,096
Total Usable Pages: 3,0
I don't have the fine-detail reason, but the "in a nutshell" reason is that for every
file backed up there is a compare between the *SM server DB and the client as to file
name and date to see if it needs to be backed up or not. (That to me is the main reason
for lots of small files taking longer t