I call them "stuck filling tapes", and have a script that runs every day that keeps them under control. Any filling tape that hasn't been written to in 6 days gets a MOVE DATA run on it. It also occurs on file pool volumes. We're on TSM server v5.5.2, and v5.5.5 on our dedicated library managers.
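For illustration, the MOVE DATA half of that daily pass could be sketched like this. This is a minimal sketch, not the actual script: it assumes the select is run with -tab and -dataonly=yes so each result line is tab-separated with volume_name in the second column (matching the column order of the select below), and gen_movedata is a helper name made up here.

```shell
#!/bin/sh
# Sketch: turn tab-separated select output into MOVE DATA commands.
# Assumes column 2 of each input line is volume_name (an assumption
# based on the column order of the select shown in this post).
gen_movedata() {
    awk -F'\t' 'NF >= 2 { printf "move data %s wait=yes\n", $2 }'
}

# In a real daily script the generated commands would be fed back to
# the admin client, e.g. (commented out here, since it needs a server):
#   gen_movedata < stuck_vols.txt | while read cmd; do
#       dsmadmc -se=$i -id=$adminid -password=$adminpwd "$cmd"
#   done
```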
Below is the select I use to find them.

    sftdays=6
    dsmadmc -se=$i \
        -id=$adminid \
        -password=$adminpwd \
        -dataonly=yes \
        -tab \
        "select '${i}', \
         volume_name, \
         stgpool_name, \
         status, \
         devclass_name, \
         access, \
         last_write_date, \
         cast((current_timestamp - last_write_date)days as decimal(6,0)), \
         est_capacity_mb, \
         pct_utilized, \
         cast((est_capacity_mb * pct_utilized / 100) as decimal(8,1)), \
         pct_reclaim \
         from volumes \
         where status = 'FILLING' \
         and access != 'UNAVAILABLE' \
         and stgpool_name not like '%ARCH%' \
         and cast((current_timestamp - last_write_date)days as decimal(6,0)) > $sftdays \
         order by volume_name"

From: "Allen S. Rout" <a...@ufl.edu>
To: ADSM-L@VM.MARIST.EDU
Date: 02/06/2012 04:20 PM
Subject: Excessive number of filling tapes...
Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>

So, I've got an offsite machine which exists to accept remote virtual volumes. For years now, the filling volumes have behaved in a way I thought I understood. The tapes are collocated by node, and there are about 20 server nodes which write to it.

My number of filling volumes has rattled around 50-60 for years; I interpret this as basic node collocation, plus occasional additional tapes allocated when more streams than tapes are writing at a time. So some of the servers have just one filling tape, some have two, and the busiest of them might have as many as 6 (my drive count). Add a little error for occasionally reclaiming a still-filling volume, and that gives me a very clear sense of what's going on, and I can just monitor scratch count.

Right now, I have 190 filling volumes. None of them has data from more than one client. I have some volumes RO and filling, and am looking into that, but it's 20 of them, not enough to account for this backlog. Those are also the only vols in error state.
I've been rooting through my actlogs looking for warnings or errors, but I've never had occasion to introspect about how TSM picks which tape to call for when it's going to write. It's always Just Worked.

Does this ring any bells for anyone? Any dumb questions I've forgotten to ask? I don't hold much hope for getting a good experience out of IBM support on this.

- Allen S. Rout