[Bug?] Strange PCT_RECLAIM statistic
TSM server 5.5.2.1 on AIX 5.3 - watch this:

==
tsm: SM>q pr

 Process  Process Description   Status
  Number
     791  Space Reclamation     Volume GU0405 (storage pool L32POOL),
                                Moved Files: 1482, Moved Bytes:
                                5,144,170,486, Unreadable Files: 0,
                                Unreadable Bytes: 0. Current Physical
                                File (bytes): 25,750,124
                                Current input volume: GU0405.
                                Current output volume: GU0049.

tsm: SM>q vol GU0405 f=d

                   Volume Name: GU0405
             Storage Pool Name: L32POOL
             Device Class Name: LTO3_2
            Estimated Capacity: 574.0 G   <==
       Scaled Capacity Applied:
                      Pct Util: 9.0       <==
                 Volume Status: Full
                        Access: Read/Write
        Pct. Reclaimable Space: 100.0     <==
               Scratch Volume?: No
               In Error State?: No
      Number of Writable Sides: 1
       Number of Times Mounted: 13
             Write Pass Number: 1
     Approx. Date Last Written: 05/28/09 11:49:18
        Approx. Date Last Read: 10/07/09 11:53:03
           Date Became Pending:
        Number of Write Errors: 0
         Number of Read Errors: 0
               Volume Location:
 Volume is MVS Lanfree Capable: No
Last Update by (administrator): MOELLER
         Last Update Date/Time: 05/18/09 11:52:47
          Begin Reclaim Period:
            End Reclaim Period:
  Drive Encryption Key Manager: None

tsm: SM>q cont GU0405 > 0.GU0405
Output of command redirected to file '0.GU0405'
[[ listing 27837 backup objects from a single MAC client ]]
==

The reclamation process ended successfully quite some time later, having moved >27k files, >55 GB. Not exactly what I had in mind when I set the storage pool's reclamation threshold to 99 (instead of 100) ...

--
W. J. Moeller, GWDG Goettingen, Germany
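For comparison, a hedged way to look for the same anomaly on other tapes: the server's VOLUMES table exposes the PCT_RECLAIM statistic the subject line refers to, alongside PCT_UTILIZED. The threshold value here is illustrative, not anything from the original post.

```sql
-- Sketch of a server SELECT listing volumes the server reports as
-- highly reclaimable, with their utilization, to spot other volumes
-- showing high PCT_RECLAIM together with non-trivial PCT_UTIL.
SELECT volume_name, stgpool_name, pct_utilized, pct_reclaim
  FROM volumes
 WHERE pct_reclaim > 99
 ORDER BY pct_reclaim DESC
```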
Exclude DB2 folders from TSM server backup?
Hello... should the TSM client on my TSM server be instructed to exclude the various folders used by the DB2 database? Such as C:\TSMData\DB001\...\*, or D:\TSMData\ActLog\...\*, or E:\TSMData\ArcLog\...\*. The client is not automatically excluding these, and they do not fail to back up (they are not locked). But they are redundant, and I assume that restoring them is not a legitimate way to restore the TSM database.

Thanks...

Ken
Wisconsin Housing and Economic Development Authority
Re: Exclude DB2 folders from TSM server backup?
That's correct. You should exclude all other TSM disk volumes too, like the recovery log and storage pool volumes.

TSM List Server Mailbox wrote:
> Hello... should the TSM client on my TSM server be instructed to
> exclude the various folders used by the DB2 database? [...]

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine
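A minimal sketch of what that could look like in the client options file. The paths are the ones from the original question; the extra storage pool path and the use of `exclude.dir` (rather than per-file excludes) are my assumptions.

```text
* Hypothetical dsm.opt fragment: keep the B/A client away from the
* TSM server's own database, log, and storage pool files.
exclude.dir "C:\TSMData\DB001"
exclude.dir "D:\TSMData\ActLog"
exclude.dir "E:\TSMData\ArcLog"
* Storage pool volumes, wherever they live (path is a placeholder):
exclude.dir "F:\TSMPools"
```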
What is the best way to identify what tape volumes a backup object is residing on.
For security reasons, we're trying to identify what tapes a file is residing on. I've found the object_id of the file we're looking for, 1524734887, and started a search of the CONTENTS table using the following SQL command:

SELECT * FROM CONTENTS WHERE NODE_NAME='UCSFSCL1_2' AND OBJECT_ID=1524734887 > /home/bkunst/qcontents.out

This search has been running for over 24 hours now. Is there a quicker/better way to search for the volume_id for this file?

Thanks,

--
Brian Kunst
Storage Administrator
UW Technology
Re: What is the best way to identify what tape volumes a backup object is residing on.
Instructions here:

http://www-01.ibm.com/support/docview.wss?uid=swg21114873

On Wed, Oct 7, 2009 at 2:18 PM, Brian G. Kunst wrote:
> For security reasons, we're trying to identify what tapes a file is
> residing on. [...]
Re: What is the best way to identify what tape volumes a backup object is residing on.
I don't have the manual pulled up, but I believe that there is a way to do a preview restore from the client that will prompt for tapes that are required to do the restore. I would tackle it that way.

Gary
Itrus Technologies

On Oct 7, 2009, at 1:18 PM, Brian G. Kunst wrote:
> For security reasons, we're trying to identify what tapes a file is
> residing on. [...]
Re: What is the best way to identify what tape volumes a backup object is residing on.
Have you considered just doing a restore? If it's just one file, that is definitely the fastest way to find out. Just specify an alternate destination.

On 7 okt 2009, at 20:18, Brian G. Kunst wrote:
> For security reasons, we're trying to identify what tapes a file is
> residing on. [...]

--
Met vriendelijke groeten,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622
Re: What is the best way to identify what tape volumes a backup object is residing on.
Brian,

"q noded ucsfscl1_2 stg=poolname" will give you the potential volumes. You can add these as a limiter to your select: "and volume_name in (list from q noded)".

Fred Johanson
TSM Administrator
University of Chicago
773-702-8464

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Brian G. Kunst
Sent: Wednesday, October 07, 2009 1:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] What is the best way to identify what tape volumes a backup object is residing on.

For security reasons, we're trying to identify what tapes a file is residing on. [...]
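Putting Fred's two pieces together, a hedged sketch of the narrowed query. The volume names are illustrative placeholders standing in for the output of "q nodedata"; CONTENTS can still be slow to scan even with the limiter.

```sql
-- Assumed volume list from "q nodedata ucsfscl1_2 stg=poolname";
-- substitute the real names. Limiting on VOLUME_NAME lets the server
-- skip volumes that hold no data for this node.
SELECT volume_name, node_name, filespace_name, file_name
  FROM contents
 WHERE node_name = 'UCSFSCL1_2'
   AND volume_name IN ('VOL001', 'VOL002')   -- placeholders
   AND object_id = 1524734887
```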
Re: What is the best way to identify what tape volumes a backup object is residing on.
The very fastest way to find out is to initiate a retrieval to a throwaway output, and look at what tape/tapes is/are mounted: you'll have an answer within seconds. (Never do a Select on a large database Contents table unless you are very desperate.)

Richard Sims
Re: What is the best way to identify what tape volumes a backup object is residing on.
On 7 okt 2009, at 20:28, Gary Bowers wrote:
> I don't have the manual pulled up, but I believe that there is a way

I was just browsing the publib...

> to do a preview restore from the client that will prompt for tapes
> that are required to do the restore. I would tackle it that way.

I was expecting the same, but I can't find it, so maybe that was wishful thinking... Maybe in a next release. Of course, one could quite easily code this using the API, if you really wanted to.

> Gary
> Itrus Technologies

--
Met vriendelijke groeten,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622
Re: What is the best way to identify what tape volumes a backup object is residing on.
Thanks Wanda, that worked. Thanks everyone else too. I'm kicking myself now for not just doing a test restore.

--
Brian Kunst
Storage Administrator
UW Technology

> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
> Of Wanda Prather
> Sent: Wednesday, October 07, 2009 11:26 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] What is the best way to identify what tape
> volumes a backup object is residing on.
>
> Instructions here:
>
> http://www-01.ibm.com/support/docview.wss?uid=swg21114873
> [...]
Re: What is the best way to identify what tape volumes a backup object is residing on.
Ok, so I went and looked it up. The way we used to do this was to kick off the restore with the "tapeprompt" option. This would cause the client to pause before the tape was mounted, and prompt if you wanted to mount the tape. If you select no, then it does not restore anything, and goes on to the next tape. Just pick no on each one, and write down the tapes it asks for. The "sho bfo" works too, but can be annoying if you want to restore a whole directory.

Hope someone finds this useful. :)

Gary
Itrus Technologies

On Oct 7, 2009, at 1:44 PM, Remco Post wrote:
> I was expecting the same, but I can't find it, so maybe that was
> wishful thinking... [...]
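A hedged sketch of that approach from the command-line client. The filespec and destination are placeholders; TAPEPROMPT is a documented B/A client option, but the exact invocation here is only an illustration.

```shell
# Hypothetical dsmc invocation: restore to a throwaway destination with
# tape prompting on, answer "no" at each mount prompt, and note the
# volume names the client asks for.
dsmc restore "/home/user/secret.dat" "/tmp/throwaway/" -tapeprompt=yes
```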
Encrypted Server to Server Communications?
Good afternoon fellow TSMers,

Does anyone know if you can use SSL for server-to-server communications in the TSM versions that do SSL (TSM 5.5.x and above, I think)? I was just reading through the 6.1 documentation and it isn't clear to me that it can be done. They talk about backup client to server, but not server to server. Has anyone tried that?

Also, I am disheartened by the fact that CAD doesn't work over SSL, after Tivoli/IBM pushed it pretty hard these past few years as the first choice in configuring scheduling operations. Also disheartening is that SSL only works for Windows and AIX.

I welcome your thoughts.

Respectfully,

Marc Taylor
Re: Encrypted Server to Server Communications?
On 7 okt 2009, at 22:54, Taylor, Marc wrote:
> Good afternoon fellow TSMers,

Good evening ;-)

> Does anyone know if you can use ssl for server to server
> communications in the TSM versions that do SSL (TSM 5.5.x and above,
> I think)? [...] Also disheartening is that SSL only works for Windows
> and AIX.

It's a pity that you missed out on the TSM symposium. I don't recall exactly, but expanding SSL is at least on some developers' minds, though it might not be implemented on other platforms next year or even in 2011. As for server-to-server, there are no plans that I know of. So if you really need to encrypt that path now, use a (V)PN.

--
Met vriendelijke groeten,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622
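One hedged sketch of that tunnelling idea, using stunnel rather than a full VPN. The use of stunnel, the host name, and both port numbers are my assumptions, not anything TSM provides natively; the source server's server definition would point at the local accept address instead of the remote server directly.

```text
; Hypothetical stunnel.conf fragment on the source TSM server.
; Plaintext server-to-server traffic enters at 127.0.0.1:1500 and
; leaves the box wrapped in TLS toward the remote tunnel endpoint.
[tsm-server-to-server]
client  = yes
accept  = 127.0.0.1:1500
connect = target-tsm.example.org:11500
```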
Expiration of Exchange
We use Copy Services instead of the TDP for Exchange. We want to keep backups for 30 days, which does work. The .edb files are daily marked as inactive and roll off as expected. But an examination by Mr. Exchange shows that there are .log files which are never marked as inactive and, thus, are immortal; so far to the sum of 50 TB on site (and the same offsite). We obviously missed something in configuration, but what?

To complicate matters, we tried to modify the client to allow deletion of backups (Mr. Exchange discovered on his own that "del ba *log todate=current_date-30" will get rid of the unwanted) but keep getting the "client is accessing the server" message, on an empty machine. While waiting to figure this out, we could do "del vol xxx discarddat=y" on all those volumes more than 5 weeks old, but there must be some way to prevent this in the future.
Re: Expiration of Exchange
My guess is that you are mounting up the filesystem and backing up the files directly. The log files in Exchange are probably getting a new name as they are truncated, which means that there are no versions of the files, only a single version that goes from active to inactive when it is deleted from the server.

Check your "Retain Only" parameter of the TSM Exchange management class. Make sure that this is set to 30 days and not 365 or nolimit. This should delete older files, but only from the date they get marked inactive.

Gary
Itrus Technologies

On Oct 7, 2009, at 8:33 PM, Fred Johanson wrote:
> We use Copy Services instead of the TDP for EXCHANGE. We want to
> keep backups for 30 days, which does work. [...]
Re: Expiration of Exchange
Gary,

This is the copygroup definition:

  VERE  nol
  VERD  NOL
  RETE  30
  RETO  60

Yet we have backups from May '08.

-----Original Message-----
From: ADSM: Dist Stor Manager [ads...@vm.marist.edu] On Behalf Of Gary Bowers [gbow...@itrus.com]
Sent: Wednesday, October 07, 2009 8:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Expiration of Exchange

My guess is that you are mounting up the filesystem, and backing up the files directly. [...]
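For reference, a hedged sketch of inspecting those settings from an administrative client; the domain, policy set, and management class names are placeholders. Note that RETEXTRA and RETONLY only ever apply to inactive versions: a file that is never deleted (or is renamed away) on the client side keeps one active version forever, regardless of these values, which fits the renamed-.log behavior Gary describes.

```text
* Hypothetical dsmadmc query; substitute real names for the
* domain (EXCH_DOM), policy set (ACTIVE), and class (EXCH_MC).
query copygroup EXCH_DOM ACTIVE EXCH_MC format=detailed
```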
Re: AW: TSM 6.1 and the ever expanding DB
OK, here's my good news/bad news for the informal poll:

Production 6.1 server: new TSM 6.1.2 installation with a wompin powerful Win2K3 box, lots of external SAN disk.

Running 76 schedules of daily incrementals:
- Most clients are relatively small Win2K3 servers, all TSM client 6.1
- A few Linux clients TSM 5.4 & 5.5, Solaris clients TSM 5.4 & 5.5, SQL client 5.5

Retention is nolim, 30 days (but only about half the clients have been running 30 days yet). DB is 25 GB. Occupancy 29M objects, 7 TB total data in primary storage pools. (Since this is a new TSM customer, didn't have to face any issues with doing export/import. That may be why I haven't had to deal with database swell.)

No problems with server crashes since upgrade from 6.1.1 to 6.1.2 (in spite of the fact that I almost daily get 1 or 2 client failures saying the client backup died because the server active log is full, which it isn't).

The problem is sizing the active logs, which do not work as documented. (Documentation says the active logs will get created in 512M chunks up to the ACTIVELOGSIZE limit, but it goes higher than that, and when it starts running up, it's explosive.) NOBODY should run at the default log size of 2048. My active log size is 80GB.

I chose to go with V6 for a particular reason with this customer. Still probably the best choice for this customer for internal site reasons. After this experience, here's what I'm telling my other customers:

The good news: There is nothing I've encountered that makes me worry about data integrity. The sucker works. The bugs I'm still encountering are things we can work around for a while. (I would probably feel differently if it kept crashing on me.)

The bad news: There are messy, annoying, time-consuming issues that you will run into, especially if you are a power user:

- There are error messages that are not documented to any degree of usefulness. (Fond memories of ANR? Say hello to his little friend, ANR0162W "Supplemental data base diagnostic information"... A simple SQL query syntax error generates enough output to send zillions of bits to an untimely death in the bit bucket. But for real errors, the info provided is useless.)
- Just the fact that the stable of scripts I normally use for checking and diagnosing TSM systems don't format in columns from the command line anymore has taken a surprising amount of time to mess with.
- SQL time-interval queries don't appear to work at all any more. Certainly don't work as documented (gotta spend some time on that one..)
- There are errors in the documentation, some obvious, some subtle.
- There are bugs, and we'll be finding them for a while yet.
- et cetera
- et cetera

So, if you have a REASON to go to the 6.1 server, do it (at least on Windows - sounds like Linux is having uglier issues...). For example, if I had the opportunity to take advantage of 6.1 dedup, I'd say do it in a heartbeat. Just understand that dealing with 6.1 still requires patience and time, and plan accordingly. It will NOT be like previous smooth upgrades in the 5.x series.

If you don't have a reason you need to go to 6.1 server, stick with 5.5 while the rest of us bleed, and spend your time taking advantage of all the cool new things in the 6.1 clients:

- 6.1 for Windows has much improved integration with VCB
- 6.1 -snapdiff for NetApp is HUGE
- 6.1 for Exchange has item-level restores now
- 6.1 ISC is a significant improvement over previous versions

You can use all those with your 5.5 server. They should keep you busy for a while.

Wanda
(send Bandaids)

On Mon, Oct 5, 2009 at 11:50 AM, Allen S. Rout wrote:
> >> On Fri, 2 Oct 2009 09:49:52 -0600, Kelly Lipp said:
> >
> > That last paragraph made my head hurt! I had the "opportunity" to
> > take a database class in college. Didn't want to know it then,
> > don't want to know it now.
>
> There was a "Know-nothing" political party, once.. :P
>
> > I'll echo Rick's comments: you pioneers, you go! Those arrows don't
> > hurt that much. That which doesn't kill you makes you and all of us
> > stronger.
>
> But that logic works even if we _do_ die.
>
> - Allen S. Rout
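A hedged sketch of the server-option change Wanda describes. ACTIVELOGSIZE is specified in megabytes, so 81920 matches her 80 GB figure; the directory paths are placeholders, and whether 80 GB is right for any other site is an open question per her own caveats.

```text
* Hypothetical dsmserv.opt fragment for a TSM 6.1 server.
* 2048 MB is the default the post warns against; 81920 MB = 80 GB.
ACTIVELOGSIZE      81920
ACTIVELOGDIRECTORY /tsm/activelog
ARCHLOGDIRECTORY   /tsm/archlog
```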