Bill,

You really only need to set the reuse delay long enough to ensure that you have at least one good DB backup before reusing the volume. That's why all of my onsite tapes carry a 2-day reuse delay. Being able to set a fractional value would be ideal (we really only need about 1.25 days), but the feature only supports integers.
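For reference, the setting described above is applied with the UPDATE STGPOOL administrative command (a sketch; the pool name TAPEPOOL is illustrative, and the 2-day value matches the onsite policy mentioned here):

```
update stgpool TAPEPOOL reusedelay=2
query stgpool TAPEPOOL format=detailed
```

Volumes emptied by reclamation then sit in pending status for the delay period before returning to scratch.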
The idea is that if you have to roll back to your NEWEST DB backup, everything is where TSM expects to find it. If you have to go back farther than that, then you're effectively in DR mode, and you declare your primary tapes "destroyed" and restore the volumes, as has been mentioned in the forum already.

Tab Trepagnier
TSM Administrator
Laitram, L.L.C.


Bill Mansfield <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
06/09/2004 07:02 AM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
Subject: Reclamation data loss scenario

We were documenting some TSM server recovery scenarios the other day, and came up with one I haven't seen discussed before. Here's the scenario:

1. Client backups finish, storage pool backups finish, TSM DB backup finishes, prepare run.
2. Expiration runs.
3. Primary tapepool reclamation reclaims tapes A and B, moving their contents to tape C (was scratch).
4. Diskpool migration runs and starts moving data to tapepool tapes A and B.
5. DISK FAILURE wipes out the database and log (but not the storage pools).
6. Get new disk, restore the server from the TSM DB backup tape.

At this point everything looks OK, but I actually have two "corrupt" tapepool tapes (A and B), and I'm not too sure the diskpool is any good either.

The question is: what do we need to put in the recovery procedure to handle this? I can probably prevent it by setting REUSEDELAY on the tapepool, but we're short on tape slots most of the time, and letting the reclaimed tapes pend until the oldest DB backup expires, as we do with our copypool tapes, would overflow the library.

I know I can audit and recover the tapes from the copypool, but the problem is that I have hundreds of tapes, insufficient time and tape drives to audit them all, and no sure way to tell which might need auditing.

BTW, we've taken steps to better protect the TSM DB and log disk, but this scenario lingers like a bad smell.

Thanks in advance!

Bill Mansfield
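For reference, the two recovery paths discussed in this thread map onto standard TSM administrative commands, roughly as follows (a sketch; the volume name A00001 is illustrative). The first line audits a suspect primary volume against the restored database; the second pair marks a volume destroyed and rebuilds it from its copypool copies:

```
audit volume A00001 fix=yes

update volume A00001 access=destroyed
restore volume A00001
```

RESTORE VOLUME depends on the data having been backed up to a copy storage pool beforehand, which the scenario above (storage pool backups finish before the DB backup) satisfies.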