Jeff,
Regarding solution #4: the last time the performance of lots of
small files was discussed on the list, I thought that there might be
an opportunity here for someone to make an add-on product
(SSSI maybe?). This product would do client-side aggregation, with
tar or zip as a frontend and TSM out the back, which is what the TDP
products do (on the backend).
The value-add over just doing tar or zip yourself would
be keeping a local database (storing much less data than TSM does)
that could locate which blob a file is in and bring it back.
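A minimal sketch of what that local index might look like, assuming a SQLite table mapping each file path to the tar blob that contains it (all function and table names here are hypothetical, not part of any actual product):

```python
import sqlite3
import tarfile

def create_index(db_path):
    # Hypothetical local index: maps each archived file to the tar blob holding it.
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS blob_index (
                       file_path TEXT PRIMARY KEY,
                       blob_name TEXT NOT NULL)""")
    return con

def aggregate(con, files, blob_name):
    # Bundle many small files into one tar blob and record each member's location.
    with tarfile.open(blob_name, "w") as tar:
        for f in files:
            tar.add(f)
            con.execute("INSERT OR REPLACE INTO blob_index VALUES (?, ?)",
                        (str(f), blob_name))
    con.commit()
    # The single blob would then be handed to TSM (e.g. via 'dsmc archive').

def locate(con, file_path):
    # Answer "which blob is this file in?" from the small local database,
    # without querying the TSM server database at all.
    row = con.execute("SELECT blob_name FROM blob_index WHERE file_path = ?",
                      (str(file_path),)).fetchone()
    return row[0] if row else None
```

Restoring one file would then mean recalling just the one blob TSM knows about and extracting the member named in the index.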
--
--------------------------
Bill Colwell
C. S. Draper Lab
Cambridge, Ma.
[EMAIL PROTECTED]
--------------------------
In <[EMAIL PROTECTED]>, on 02/21/01
at 01:05 PM, bbullock <[EMAIL PROTECTED]> said:
> Jeff,
> You hit the nail on the head of what is the biggest problem I face
>with TSM today. Excuse me for being long-winded, but let me explain the boat
>I'm in, and how it relates to many small files.
> (snip)
>4. Use TSM as a disaster recovery solution (with a short 30 day retention)
>and have a process tar up all the 30-day old files into one large file, then
>have TSM do an archive and delete of the .tar file. This would mean we only track 1
>large tar file per day over the 5-year retention period (about 1,800 files). This is
>the option we are currently pursuing.
> Any other options or suggestions from the group? Any other backup
>solutions you have in place for tracking many files over longer periods of
>time?
> If you made it this far through this long e-mail, thanks for letting
>me drone on.
>Thanks,
>Ben Bullock
>UNIX Systems Manager
>Micron Technology
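For what it's worth, the aggregation step in Ben's option #4 above could be sketched roughly like this (a hypothetical nightly job; the 30-day cutoff is from his mail, but the paths, function names, and the exact `dsmc archive` invocation are my assumptions):

```python
import os
import subprocess
import tarfile
import time
from pathlib import Path

THIRTY_DAYS = 30 * 24 * 3600  # Ben's short backup-retention window, in seconds

def bundle_old_files(src_dir, tar_path, now=None):
    # Tar up every file older than 30 days into one large tar file,
    # returning the list of originals that were bundled.
    now = time.time() if now is None else now
    bundled = []
    with tarfile.open(tar_path, "w") as tar:
        for f in Path(src_dir).rglob("*"):
            if f.is_file() and now - f.stat().st_mtime > THIRTY_DAYS:
                tar.add(f)
                bundled.append(f)
    return bundled

def archive_and_clean(tar_path, bundled):
    # Hand the single tar file to TSM for the 5-year archive,
    # then delete the originals and the local tar copy.
    subprocess.run(["dsmc", "archive", str(tar_path)], check=True)
    for f in bundled:
        f.unlink()
    os.remove(tar_path)
```

Run daily, that leaves TSM tracking one tar file per day instead of millions of small files, at the cost of recalling a whole blob to restore one member.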