> Bug your EMC rep for "Backup to disk guide with IBM Tivoli Storage
> Manager". Though, you're already doing most of their recommended
> practices.
>
> They claim that "Lab testing had shown that Solaris and AIX may
> perform slower than other operating systems", which is amusing /
> irritating.
I did find and read that guide, although the information now seems a little dated (10/20/2003). It looks like it hasn't been updated since EMC came out with their own VTL, which would make sense if they are pushing people towards that solution for disk-based backups. The whitepaper isn't available on Powerlink, but it is available on a few other sites found by searching. If you are aware of a newer or updated version than the one I have, please let me know. I also read and recommend the engineering whitepaper "EMC CLARiiON Best Practices for Fibre Channel Storage".

> How many striped LVs did you end up with? I have hesitated about
> making big striped LVs out of fear that I'll bottleneck my clients
> because I'll only have "a few" threads free to process data.
> Consequently, my striping has been in terms of defining N DISK volumes
> per stgpool, where N is the number of underlying RAIDs.
>
> But that is a pain in the patoot in many ways, especially for
> reorganization.
>
> I've considered making a bunch of striped LVs (as in, a hundred or so)
> and doing it that way, but I figured that would add up to thrashing
> the heads.

I just made one large striped LV. I was leaning towards the multiple-LUN, multiple-LV route as well so I would have multiple queues, threads, etc. to the disk; however, EMC's recommendation is to use one very large LUN per RAID group for sequential disk backups to SATA disks, and I concur. I tested performance with 4 LUNs on a single SATA RAID group and it was abysmal, and it got worse as the night went on - I actually had to suffer through a night's backup with that configuration, and backups were still running at 10am the next day. I think if you had one or two other LUNs on the same RAID group that were assigned to different hosts which were not all active at the same time, then it probably wouldn't be as much of a problem. If I had more RAID groups to work with, then I probably would have created more LVs, i.e.
2 RAID groups per LV. Most likely that would only be possible where the disk is the permanent onsite storage for TSM and multiple terabytes are required, since many more SATA drive heads would be needed and multiple RAID groups could then be configured. Two 4+1 RAID3 groups of 250GB SATA drives give you a usable capacity of only about 2TB raw (roughly 1.8TB after formatting), so you would need gobs of disk to get many RAID groups and LVs.
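For the capacity math, a quick back-of-the-envelope check using the numbers above (a 4+1 RAID3 group has 4 data drives plus 1 parity drive, so only the data drives count toward usable space):

```shell
# 4+1 RAID3: 4 data drives + 1 parity drive per group.
drive_gb=250        # per-drive capacity from the discussion above
data_per_group=4    # data drives in a 4+1 group
groups=2
usable_gb=$((drive_gb * data_per_group * groups))
echo "usable: ${usable_gb} GB"   # prints "usable: 2000 GB", i.e. about 2TB raw
```

Formatted capacity will come in somewhat lower than the raw figure once filesystem overhead is taken out.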
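If you do go the 2-RAID-groups-per-LV route on AIX, the setup is just a volume group over the two hdisks that back the LUNs, with a striped LV on top. A sketch - the hdisk names, VG/LV names, LP count, stripe size, and mount point are all assumptions, not my actual configuration:

```shell
# Hypothetical example: hdisk4/hdisk5 are the two LUNs, one per RAID group.
mkvg -y tsmdiskvg hdisk4 hdisk5            # volume group spanning both LUNs
mklv -y tsmdisklv -t jfs2 -S 64K tsmdiskvg 1000 hdisk4 hdisk5
                                           # striped LV: -S sets the stripe
                                           # size, 1000 = logical partitions
crfs -v jfs2 -d tsmdisklv -m /tsmdisk -A yes   # filesystem for TSM volumes
mount /tsmdisk
```

The stripe size is worth tuning against your RAID group's element size rather than taking the 64K above at face value.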
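For the sequential-backup side, the filesystem on the big striped LV can then back a TSM FILE device class and a sequential storage pool. A sketch of the administrative commands - the class/pool names, directory, password placeholder, and sizes are all made up for illustration:

```shell
# Run against a dsmadmc administrative session; all names are hypothetical.
dsmadmc -id=admin -password=xxxxx \
  "define devclass satafile devtype=file maxcapacity=20G directory=/tsmdisk mountlimit=20"
dsmadmc -id=admin -password=xxxxx \
  "define stgpool satapool satafile maxscratch=150"
```

Using MAXSCRATCH with a FILE class lets TSM create volumes on demand, which avoids the per-volume reorganization pain mentioned above with predefined DISK volumes.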