Lucian, what was the command you used to check in those WORM scratch tapes?

Did you use "CHECKLABEL=YES"? With LTO WORM media it is mandatory to use that parameter during checkin.
https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.9/srv.solutions/c_tapeops_worm_checkin_ulw.html

Regards,
Uwe
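For reference, a checkin of WORM scratch cartridges might look roughly like the command below (the library name is taken from this thread; whether SEARCH=YES, SEARCH=BULK or an explicit VOLRANGE is appropriate depends on where the cartridges were loaded, so treat this as a sketch rather than the exact command to run):

checkin libvolume LIBIBM3500 search=yes status=scratch checklabel=yes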
-----Original Message-----
From: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU> on behalf of Lucian Vlaicu
Sent: Wednesday, 25 March 2020 20:52
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Still have issue with not able to use Scratch LV tapes

Not sure if it is relevant, but yesterday all of these were scratch:

tsm: ARDTSM1>q libvol

Library Name   Volume Name   Status    Owner     Last Use   Home      Device
                                                            Element   Type
------------   -----------   -------   -------   --------   -------   ------
LIBIBM3500     RDP000LV      Scratch                        2,734     LTO
LIBIBM3500     RDP003LV      Private   ARDTSM1   Data       2,743     LTO
LIBIBM3500     RDP004LV      Private   ARDTSM1   Data       2,755     LTO
LIBIBM3500     RDP005LV      Private   ARDTSM1   Data       2,760     LTO
LIBIBM3500     RDP006LV      Private   ARDTSM1   Data       2,762     LTO
LIBIBM3500     RDP007LV      Private   ARDTSM1   Data       2,777     LTO
LIBIBM3500     RDP008LV      Private   ARDTSM1   Data       2,794     LTO
LIBIBM3500     RDP009LV      Private   ARDTSM1   Data       2,823     LTO
LIBIBM3500     RDP010LV      Private   ARDTSM1   Data       2,824     LTO
LIBIBM3500     RDP011LV      Private   ARDTSM1   Data       2,904     LTO
LIBIBM3500     RDP012LV      Private   ARDTSM1   Data       3,900     LTO
LIBIBM3500     RDP013LV      Private   ARDTSM1   Data       3,934     LTO
LIBIBM3500     RDP014LV      Private   ARDTSM1   Data       3,935     LTO

Now I see that only the first one is scratch and the rest are Private. How did they end up Private when they were never used?
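On the question of how the volumes turned Private, a sketch of how one could check, using the first affected volume from the listing above (the -3 day window is only an example; adjust it to when the status changed). If a volume turns out to hold no data and is not defined to any storage pool, UPDATE LIBVOLUME can set it back to scratch:

q actlog begindate=-3 search=RDP003LV
q volume RDP003LV f=d
update libvolume LIBIBM3500 RDP003LV status=scratch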
On 3/25/2020 3:22 PM, Sasa Drnjevic wrote:
>> Maximum Scratch Volumes Allowed: 999
>> Number of Scratch Volumes Used: 0
> Obviously not a problem....
>
> Some other possibilities:
>
> - how many scratch tapes are available in total?
> - what is the size of the data in the STDHRDISKW pool?
> - do you have any idea of the file sizes in the STDHRDISKW pool?
> - ownership and permissions of the tape devices?
>
> For example, on Linux it should be:
>
> crw-rw-rw- 1 tsmadm tsmadm 238, 1024 Feb 24 14:51 /dev/IBMtape0n
>
> Rgds,
>
> --
> Sasa Drnjevic
> www.srce.unizg.hr/en/
>
> On 25.3.2020. 14:07, Lucian Vlaicu wrote:
>> tsm: ARDTSM1>q stgpool STDHRLTO5W f=d
>>
>> Storage Pool Name: STDHRLTO5W
>> Storage Pool Type: Primary
>> Device Class Name: ULTRIUM5W
>> Estimated Capacity: 0.0 M
>> Space Trigger Util:
>> Pct Util: 0.0
>> Pct Migr: 0.0
>> Pct Logical: 0.0
>> High Mig Pct: 90
>> Low Mig Pct: 70
>> Migration Delay: 0
>> Migration Continue: Yes
>> Migration Processes: 1
>> Reclamation Processes: 1
>> Next Storage Pool:
>> Reclaim Storage Pool:
>> Maximum Size Threshold: No Limit
>> Access: Read/Write
>> Description:
>> Overflow Location:
>> Cache Migrated Files?:
>> Collocate?: Group
>> Reclamation Threshold: 50
>> Offsite Reclamation Limit:
>> Maximum Scratch Volumes Allowed: 999
>> Number of Scratch Volumes Used: 0
>> Delay Period for Volume Reuse: 0 Day(s)
>> Migration in Progress?: No
>> Amount Migrated (MB): 0.00
>> Elapsed Migration Time (seconds): 0
>> Reclamation in Progress?: No
>> Last Update by (administrator): ADMIN
>> Last Update Date/Time: 03/23/20 22:30:50
>> Storage Pool Data Format: Native
>> Copy Storage Pool(s):
>> Active Data Pool(s):
>> Continue Copy on Error?: Yes
>> CRC Data: No
>> Reclamation Type: Threshold
>> Overwrite Data when Deleted:
>> Deduplicate Data?: No
>> Processes For Identifying Duplicates:
>> Duplicate Data Not Stored:
>> Auto-copy Mode: Client
>> Contains Data Deduplicated by Client?: No
>>
>> tsm: ARDTSM1>
>>
>> On 3/25/2020 3:01 PM, Sasa Drnjevic wrote:
>>> q stgpool NEXTPOOL f=d
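Two further checks that might help here, sketched below: the pool above uses device class ULTRIUM5W, so it is worth confirming it is really defined with WORM=YES, and counting the scratch cartridges the library actually knows about. The SELECT against the LIBVOLUMES table is an assumption about the exact column and value spelling on this server level, so verify it before relying on it:

q devclass ULTRIUM5W f=d
select count(*) from libvolumes where library_name='LIBIBM3500' and status='Scratch'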