We, too, were a bit rudely awakened by this new pricing structure
when we purchased our upgrade to 4.1.
From my perspective, the most agonizing aspect of the structure
is the need to pay for a tape library access license (I forgot
what the "feature" is actually called) for every physical server
that accesses a tape library, rather than for each tape library.
We did the math, and it almost made more sense for us to purchase
a new S80 class machine that could effectively handle multiple
TSM instances, rather than purchase additional tape library
access licenses to continue to run on our two separate F50
machines. That makes very little sense when the price point of a
"feature" outweighs the cost of the hardware needed to implement that
feature, particularly a feature so fundamental to the running of the
application: I can't imagine
trying to run a large-scale ADSM implementation without a 3494
(or similar) tape library.
-- Tom
Thomas A. La Porte
DreamWorks SKG
[EMAIL PROTECTED]
On Wed, 21 Feb 2001, bbullock wrote:
> That is also an option that we have not considered. It actually
>sounds like a good one. I'll bring it up to management.
> The only problem I see in this is the $$$ needed to purchase a TSM
>server license for every manufacturing host we try to back up.
>        I don't know about your shops, but with the newest "point-based"
>pricing structure that Tivoli implemented back in November (I believe it was
>about then), they are now wanting to charge you more $$$ to run the TSM
>server on a multi-CPU Unix host than on a single NT host. In our shop,
>where we run TSM on beefy S80s, that means a price increase many times
>larger than what we have paid in the past for the same functionality.
>
>
>Ben Bullock
>UNIX Systems Manager
>
>
>> -----Original Message-----
>> From: Suad Musovich [mailto:[EMAIL PROTECTED]]
>> Sent: Wednesday, February 21, 2001 4:37 AM
>> To: [EMAIL PROTECTED]
>> Subject: Re: Performance Large Files vs. Small Files
>>
>>
>> On Tue, Feb 20, 2001 at 03:21:34PM -0700, bbullock wrote:
>> ...
>> >    How many files? Well, I have one Solaris-based host that
>> > generates 500,000 new files a day in a deeply nested directory
>> > structure (about 10 levels deep, with only about 5 files per
>> > directory). Before I am asked: "no, they are not able to change the
>> > directory or file structure on the host; it runs proprietary
>> > applications that can't be altered". They are currently keeping
>> > these files on the host for about 30 days and then deleting them.
>> >
>> >    I have no problem moving the files to TSM on a nightly basis;
>> > we have a nice big network pipe and the files are small. The problem
>> > is with the TSM database growth, and the number of files per
>> > filesystem (stored in TSM). Unfortunately, the directories are not
>> > shown when you do a 'q occ' on a node, so there is actually a
>> > "hidden" number of database entries taking up space in my TSM
>> > database that is not readily apparent when looking at the output of
>> > 'q node'.
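A rough illustration of how those hidden directory entries add up, in
Python. The ~600 bytes of database space per stored object below is an
assumed rule-of-thumb figure, not something taken from the post:

    # Back-of-the-envelope estimate of TSM database growth for the
    # workload described above.
    files_per_day = 500000
    files_per_dir = 5           # "about 5 files per directory"
    retention_days = 30         # files kept on the host ~30 days
    bytes_per_object = 600      # assumed average DB space per object

    dirs_per_day = files_per_day // files_per_dir      # ~100,000 new directories/day
    objects_per_day = files_per_day + dirs_per_day     # files plus "hidden" directory entries
    total_objects = objects_per_day * retention_days   # ~18,000,000 objects tracked
    db_gb = total_objects * bytes_per_object / 2**30   # ~10 GB of database space

    print(total_objects, round(db_gb, 1))

In other words, the directories alone add roughly 20% more database
entries on top of the 15 million file objects, and none of them show up
in 'q occ'.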
>>
>> Why not put a TSM server on the Solaris box and back it up to one of
>> the other servers as virtual volumes? It would redistribute the
>> database to the Solaris host, and the data would be kept as a large
>> object on the tape-attached TSM server.
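For reference, the virtual-volume setup being suggested involves roughly
the definitions below. This is only a sketch: the server names, password,
and address are placeholders, and the exact parameters should be checked
against the TSM Administrator's Reference.

On the tape-attached (target) server, register the Solaris server as a
node of type SERVER:

    register node solaris_tsm <password> type=server

On the Solaris (source) server, define the target server, a SERVER-type
device class that points at it, and a storage pool whose "volumes" are
really objects stored on the target:

    define server tapesrv serverpassword=<password> hladdress=<target-ip> lladdress=1500
    define devclass remoteclass devtype=server servername=tapesrv
    define stgpool remotepool remoteclass maxscratch=50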
>>
>> I also remember reading about grouping files together as a single
>> object. I can't remember if it did selective groups of files or just
>> whole filesystems.
>>
>> Cheers, Suad
>> --
>>
>
>