This is a real catch-22. BIG virtual volumes mean less TSM database space tied up tracking tons of little virtual volumes (and fewer files on the remote server), BUT big virtual volumes also mean lots of data transferred over the network during reclamation (sigh).
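For reference, the size cap comes from the MAXCAPACITY parameter on the DEVTYPE=SERVER device class you use for virtual volumes. A minimal sketch; the server name, password, and addresses below are just placeholders:

    define server remotesrv serverpassword=xxxxx hladdress=remote.example.com lladdress=1500
    define devclass remoteclass devtype=server servername=remotesrv maxcapacity=10g mountlimit=2
    define stgpool remotepool remoteclass maxscratch=200

Changing MAXCAPACITY on the device class only affects volumes created after the change; existing virtual volumes keep the size they were written at.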
Also remember, a virtual volume is only as big as the smaller of (A) what the current work process writes and (B) the defined maximum capacity. So even if you define a MAXCAPACITY of 10 GB, if you do something like a "backup stg" and that process only writes 2 GB, then the virtual volume will be closed at 2 GB; no additional data will be written to that virtual volume.

In general, if you know you will be putting large amounts of data off to virtual volumes that is likely to all expire together, such as a backup of an archive storage pool that holds large DB archives, then really big virtual volumes are the way to go, since there will be very little reclamation ever needed (the data is likely to expire in large sets).

Just my 2 cents worth...

Dwight

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 10:02 AM
To: [EMAIL PROTECTED]
Subject: Virtual Volumes...

Greetings all.

For those of you using virtual volumes, how large do you make the volumes? When I started doing this, I had the following opinions:

+ The server volumes should be significantly smaller than the remote physical storage. It would be a real pain for most of the virtual volumes to be spread over different tapes; multiple physical mounts for each virtual mount? Ugh.

+ The server volumes should be large enough not to be a huge load. Each virtual volume is a "file" on the hosting server. For each TB of data, that's 1000 files if volumes are 1 GB; actually probably more like 1300, with reclamation at 50% (see the arithmetic sketch after this message). You certainly don't want them as small as 100 MB.

I selected 1 GB for my virtual volumes, but am starting to re-think the issue, maybe going for 10 GB or so. Anyone want to share thought processes?

- Allen S. Rout
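A quick back-of-envelope check of those file counts, assuming a 50% reclamation threshold leaves volumes averaging roughly 75% full:

    1 TB / 1 GB (all volumes full)       = 1000 files on the hosting server
    1 TB / (0.75 * 1 GB) avg utilization ~ 1333 files (Allen's "more like 1300")
    1 TB / (0.75 * 10 GB)                ~  134 files with 10 GB volumes

So going from 1 GB to 10 GB volumes cuts the file and database-entry count by roughly a factor of ten, at the price Dwight describes: each reclamation pass moves correspondingly more data over the network.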