Luke wrote, in part:
> Is there a way to identify how the addition of a node affects the
> TSM database size? I can capture the number of files and the amount of
> data that each additional node adds to the TSM server, but what I'm
> really looking for is a way to separate how a particular node affects
> the database size. Even better would be to identify how much a
> particular type of workstation affects the TSM database.
Some of my Win workstations have a couple of hundred files and a
few megabytes in *SM and others have a couple of hundred thousand
and many gigabytes. Further, some of my workstations have nothing
change in a week and others have tens of thousands of files and
gigabytes change in a day.
The good news is that DB size can be estimated, if you can make
good guesses as to what will be backed up and how long everything
must be saved.
I estimate that the DB will require 1 kilobyte per DB entry, where a
DB entry is one backed-up object version (an object being a file or
subdirectory). Yes, I know it really takes less space, but this is an
estimate and I want my estimate to be a bit conservative.
Now, to really restate the obvious: the number of objects on a
workstation is probably not equal to the number of *SM DB objects!
A DB object is something that has been backed up. Forget that the
object contents rattle around the backup server, for this discussion.
It's only important to know that an object has been backed up to the
*SM server.
From the backup server point of view, the number of objects in the DB
is affected by the various backup policies. Keep up to "n" copies of a
changing file ... then up to "n" DB objects will be required. Most files
will have 1 DB object, but some, such as "Joe's Daily Diary.doc" will
have "n" objects. Keep erased files for "m" days ... then up to "m"
objects may exist in your DB for those daily logs named "11July",
"12July", etc. and erased after a few days. If you are backing up
"cache" files, such as from a web browser, your old cache entries will
be saved for an additional "m" days. If your workstations have lots of
changing subdirectories, then it might be worth noting that
subdirectories are backed up using the longest retention in the policy.
That is, if you have a management class with a 10-year retention, then
all your workstations (using the same policy) will have directories
retained for 10 years, even if a workstation uses only a management
class with a 30-day retention.
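Putting the "up to n versions" and "erased files kept m days" rules together, an upper bound on a node's DB object count can be sketched like this. The function and its numbers are hypothetical, and the parameters only loosely echo the copy-group settings:

```python
def estimate_db_objects(active_files, versions_per_file,
                        erased_per_day, retain_erased_days):
    """Upper bound on DB objects for one node: live versions, plus erased
    files still inside the retain-window, each of which may hold up to
    versions_per_file objects."""
    live = active_files * versions_per_file
    still_retained = erased_per_day * retain_erased_days * versions_per_file
    return live + still_retained

# 50,000 active files at up to 3 versions, 100 files erased per day,
# erased files kept for 30 days:
print(estimate_db_objects(50_000, 3, 100, 30))  # 159000 objects
```

At 1 KB per object, that hypothetical node would account for roughly 159 MB of DB space.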
From the backup client point of view, the number of DB objects will be
affected somewhat by how often backups are run, but mostly by the
file system and file include/exclude criteria. Certainly, a workstation
which has numerous OS and PP installs will use more DB entries than
one that is static.
Finally, if you have a workstation that you believe is typical, back it up
for a while. The number of objects backed up, and therefore the DB
space requirement, may be viewed with a QUERY OCCUPANCY
command.
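As a sketch of what to do with that number: the "Number of Files" figures reported by QUERY OCCUPANCY can be fed through the same 1 KB-per-object estimate. The node name and file counts below are made up for illustration:

```python
# Hypothetical QUERY OCCUPANCY "Number of Files" figures for one node,
# one entry per filespace/storage-pool row.
occupancy_files = {
    "WS-TYPICAL": [182_340, 4_112],
}

for node, counts in occupancy_files.items():
    total = sum(counts)
    # 1 KB per object, per the estimate above.
    print(f"{node}: {total} objects, about {total / 1024:.1f} MB of DB space")
```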
Hope this helps, wayne
Wayne T. Smith [EMAIL PROTECTED]
ADSM Technical Coordinator - UNET University of Maine System