Re: TSM 4.2
Joe,

Take a look at: http://www.tivoli.com/products/documents/updates/storage_mgr_42_enhancements.html and you'll see that the 4.2 server is around too for all of the normal server platforms, including NT - we've just installed it onto one of our AIX test boxes and would be most interested in hearing of others' experiences.

Rgds,

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Internet: [EMAIL PROTECTED]

Joe Cascanette <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 23-07-2001 19:05:00 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: TSM 4.2

I am assuming you are talking about the client?! The latest version for the server on NT is 4.1.4; however, the client is 4.2. I am using the 4.2 client on most of my Windows 2000 servers, and am taking a close look at the journal option. But so far so good, no problems.

Joe Cascanette

-Original Message- From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]] Sent: Monday, July 23, 2001 1:01 PM To: [EMAIL PROTECTED] Subject: TSM 4.2

Is anyone running TSM 4.2, on what platform, and have you seen any problems?

Geoff Gill TSM Administrator NT Systems Support Engineer SAIC E-Mail: [EMAIL PROTECTED] Phone: (858) 826-4062 Pager: (888) 997-9614
Re: Licenses and 4.1
Hi,

TSM licensing changed from 3.7 to 4.x, and clients are now distinguished by whether they are Managed LAN or Managed SAN clients. In AIX one installs the tivoli.tsm.licenses fileset - this dumps a whole load of '.lic' files into the TSM server directory - I prefer to move them myself into a 'licenses' subdir. Then a 'register license file=10mgsyslan.lic' (or file=licenses/10mgsyslan.lic if you moved them into the licenses dir) should license you for 10 normal LAN clients. This is how things work for the UNIX server (well, AIX anyhow)...

Rgds,

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Internet: [EMAIL PROTECTED]

Francisco Reyes <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 24-07-2001 13:17:24 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Licenses and 4.1

On TSM 4.1 how do I add licenses for my clients? I don't have any of the additional services such as HSM. I purchased licenses for Netware, NT and Unix. How do I register those? The manual only lists how to register additional services and options, but nothing on how to register clients.
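For reference, a minimal sketch of the registration from an admin command-line session - the server name in the prompt is just an example, and the .lic file names are the ones shipped into the server directory:

  tsm: SERVER1> register license file=10mgsyslan.lic
  tsm: SERVER1> register license file=licenses/10mgsyslan.lic   (if you moved the .lic files into a 'licenses' subdir)
  tsm: SERVER1> query license                                   (confirm the licensed quantities took)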
Re: Several WebSessions killing the Server
Hi Guys, Would persuading all Web Admin Client sessions to go via the TSM SWAP (Secure Web Admin Proxy) prevent this from happening? The clients would no longer be connecting directly to the TSM server - it's really very simple to set up, and probably less of a risk than upgrading IE on your client machines. Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Internet: [EMAIL PROTECTED] Richard Sims <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 26-07-2001 13:35:20 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: Several WebSessions killing the Server >sometimes when I try an action over the Admin. Web client, several >WebSessions are opened and can't be canceled. And the >real problem is: these sessions kill my server, an OS/390 R10 TSM >Server V 4.1, which starts using between 10 and 40% of the CPU. Robert - See past discussions of this problem at www.adsm.org. Customers have reported it in conjunction with use of Internet Explorer. Going to a higher level of IE resolved the problem according to their reports.
Re: AIX-Tape-Messages
Hi, We assumed this was as a result of mis-matched drive microcode and Atape drivers. Upgrading the drive microcode and atape lpp's to the latest levels fixed this instantly and gave us meaningful messages once again. Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Internet: [EMAIL PROTECTED] David Longo <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 06-08-2001 14:58:04 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: AIX-Tape-Messages Yes, we have had that. (We have 3575 library and use the Atape driver). Updates and some new fileset installs overwrite the message catalog for Atape messages. The solution is to reinstall the Atape driver. Hopefully you still have the install file, if not download the current version. You don't have to delete and reinstall, just use installp with the -F option, this does a force overwrite of installed package. It keeps the device definitions (actually I think it deletes and recreates them as they were). For safety sakes I would make sure no tape activity is going on in *SM and print out lscfg before doing install. It only takes about 1 -2 minutes. IBM is aware of this problem, I don't know if there is a planned fix. David B. Longo System Administrator Health First, Inc. 3300 Fiske Blvd. Rockledge, FL 32955-4305 PH 321.434.5536 Pager 321.634.8230 Fax:321.434.5525 [EMAIL PROTECTED] >>> [EMAIL PROTECTED] 08/06/01 09:45AM >>> Hy all, I have a problem with the messages in errpt on AIX 4.3.3 Since we installed Maintenance-Level 6 on AIX 4.3.3 messages from the tape-drives in our tape-library (3490) look something strange. This does not only happens with new messages, also old (before the maintenance-install)message are not more interpreted. The messages look like this DE9A52D1 0806150301 I S rmt2 AAA1 D1A1AE6F 0806144101 I H rmt5 AAA0 Does anybody know how to fix ? Thanks Christoph "MMS " made the following annotations on 08/06/01 10:03:08 -- This message is for the named person's use only. It may contain confidential, proprietary, or legally privileged information. No confidentiality or privilege is waived or lost by any mistransmission. If you receive this message in error, please immediately delete it and all copies of it from your system, destroy any hard copies of it, and notify the sender. You must not, directly or indirectly, use, disclose, distribute, print, or copy any part of this message if you are not the intended recipient. Health First reserves the right to monitor all e-mail communications through its networks. Any views or opinions expressed in this message are solely those of the individual sender, except (1) where the message states such views or opinions are on behalf of a particular entity; and (2) the sender is authorized by the entity to give such views or opinions. ==
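A rough sketch of the force reinstall David describes above, assuming the Atape install image is sitting in /tmp (the image file name here is made up - use whichever level you downloaded):

  lsdev -Cc tape                                      # note the existing drive definitions
  lscfg -vl rmt0                                      # (repeat per drive) keep a record before the install
  installp -acFd /tmp/Atape.x.x.x.x.bin Atape.driver  # -F forces overwrite of the already-installed fileset
  errpt | head                                        # tape entries should come out readable again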
Re: TSM 4.2 (AIX), licensing..
Tom, Yes, apparently this is a known bug - you should have a look at the latest patch (4.2.0.1) , rather than the flat 4.2 version, although I'm of the understanding that the licensing problem is still an issue here too... There'll be a fix for this along soon - can anyone add any more to this? Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Internet: [EMAIL PROTECTED] Tom Tann{s <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 15-08-2001 14:46:22 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: TSM 4.2 (AIX), licensing.. Hello! I upgraded a server from 4.1 to 4.2 today. I've only managed to register one 50mgsyslan.lic whith the command register lic file=50mgsyslan.lic number=12 additional attempts toregister more result in the same.. tsm: SUMO>register license file=50mgsyslan.lic ANR2852I Current license information: ANR2827I Server is licensed to support Managed System for LAN for a quantity of 60. ANR2853I New license information: ANR2827I Server is licensed to support Managed System for LAN for a quantity of 60. tsm: SUMO> (Its 60 now because I successfully registered one 10mgsyslan.lic) After several attempts I took a look at the nodelock-file, and this file seems to be updated correctly, with one entry for each of my attempts.. . . . . # Managed System for LAN 50 Licen 6fb1ea8d2ebc.a3.89.a3.25.04.00.00.00 8umtikm47qkykpffafnaa "" "4.2" #[admin_comment] "" "" "0" "0" "0" # Managed System for LAN 50 Licen 6fb1ea8d2ebc.a3.89.a3.25.04.00.00.00 8umtikm47qkykpffafnaa "" "4.2" #[admin_comment] "" "" "0" "0" "0" So... Could this be a bug, or am I missing something here?
Re: ANS4031E error - TSM Client 4.2 and RH 7
Andy, You suggest below that TSM Client 4.2 for Linux requires Red Hat 7.x or higher. However, looking in the Readme file in the rpm distribution it suggests that the only real stipulation is that the Linux kernel version is 2.2.13 or higher, or 2.4.0 or higher. I have clients in my environment at Red Hat 6.2 with the Kernel at v2.2.14-5.0 (i.e. meeting the above requirements) and so far they seem to function fine with TSM Client 4.2. Is this combo stable to run, or is there something I'm missing here? Thanks for your help! Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet/Sametime: [EMAIL PROTECTED] Andrew Raibeck/Tucson/IBM@[EMAIL PROTECTED]> on 23-08-2001 14:36:46 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: ANS4031E error $)CI'm not sure why you aren't seeing any files get backed up (maybe none are eligible for backup?), but APAR IC29686 addresses the ANS4031E message. Here is the description of the APAR: = BEGIN APAR DESCRIPTION = ERROR DESCRIPTION: Customer tries to back up a fully qualified path that is too long. This will cause the object to fail. When the backup is completed, the summary does not show any failed objects even though ones with paths that are too long will fail. Directly after the summary statistics there is a message issued: ANS4031E Error processing '/': destination directory path length exceedssystem maximum. - These objects should be logged as failed objects in the summary. LOCAL FIX: None. = BEGIN APAR DESCRIPTION = You may have a recursive symbolic link, which could cause TSM to think that there is a path that is too long (the '/' is a "red herring"). From what I understand about this problem, the problem object is skipped, but just not logged as an error. This is fixed in the 4.2 client, but 4.2 requires Red Hat Linux 7.0 or 7.1. If you need further assistance, please contact IBM/Tivoli support and provide them with your symptoms, and mention APAR IC29686. Regards, Andy Andy Raibeck IBM Tivoli Systems Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence. 1h@N?1 <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 08/23/2001 05:40 Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject:ANS4031E error When I issued 'dsmc incr', the following error occured and there is no backup data.. Anybody know why?? Appreciate any suggestions.. client os: Red Hat Linux release 6.2 (Zoot) Kernel 2.2.14-5.0smp on an i686 tsm server ver: 4.1.3 tsm client: 4.1 ANS4031E Error processing '/': destination directory path length exceeds system maximum [root@host herbdb]# dsmc incr Tivoli Storage Manager Command Line Backup Client Interface - Version 4, Release 1, Level 0.0 (C) Copyright IBM Corporation, 1990, 2000, All Rights Reserved. 
Node Name: JOINWEB01
Session established with server SERVER1: Solaris 2.6
  Server Version 4, Release 1, Level 3.0
  Server date/time: 08/23/2001 18:03:52  Last access: 08/23/2001 17:45:37

Incremental backup of volume '/'
Incremental backup of volume '/boot'
ANS1898I * Processed 6,000 files *
ANS1898I * Processed 8,000 files *
ANS1898I * Processed 9,000 files *
ANS1898I * Processed 9,500 files *
Successful incremental backup of ''
ANS1898I * Processed 10,000 files *

Total number of objects inspected: 10,090
Total number of objects backed up: 0
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Total number of bytes transferred: 0
Data transfer time: 0.00 sec
Network data transfer rate: 0.00 KB/sec
Aggregate data transfer rate: 0.00 KB/sec
Objects compressed by: 0%
Elapsed processing time: 00:00:24

ANS4031E Error processing '/': destination directory path length exceeds system maximum
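As an aside, checking a client box against the Readme prerequisites mentioned above only takes a couple of standard commands - TIVsm-BA is what my 4.2 Linux client package is called, so check the name on your own install:

  uname -r            # kernel needs to be 2.2.13 or higher, or 2.4.0 or higher
  rpm -q glibc        # glibc 2.1.2 or higher (or 2.2)
  rpm -q libstdc++    # libstdc++ 2.9.0 or higher
  rpm -q TIVsm-BA     # the installed backup-archive client level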
Re: TSM not backing up file systems in AIX
David, Might be worth checking that there are no DOMAIN statements in the clients' dsm.opt explicitly stating which filesystems to back up, and not including those excluded filesystems. Does the user that you run the backup as (presumably root...?) have access to those filesystems? David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre "Pace, David K" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 24-08-2001 19:01:35 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: TSM not backing up file systems in AIX Hay TSM'ers. I have some AIX 4.3.3 servers that TSM is backing up. I have found that 3 file systems are not being backed up and have not be backed up since February. There is no entry in the exclude list to keep these from being backed up. Any thoughts as to what would keep TSM for backing up specific file systems? Any thoughts as to anything to try? Dave Pace Pier1 imports.
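For reference, a minimal sketch of what an explicit domain list looks like in the client options (dsm.sys stanza on AIX - the filesystem names are only examples). With a list like this in place, any filesystem not named in it simply never gets backed up, whereas the default of DOMAIN ALL-LOCAL picks up all local filesystems:

  * in the dsm.sys server stanza (or dsm.opt on some platforms)
  DOMain  /  /usr  /home
  * ...or the default behaviour:
  DOMain  ALL-LOCAL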
Re: Archive recovery
Jennifer, You can, if you're a) lucky and b) a bit naughty. We've achieved such feats in extreme circumstances in the past by building a second TSM server partition and restoring an old TSM database backup from our main TSM server, and then trying to restore from that. If you're fortunate, the tape that the server will request the restore from, although 'expired' in the current database, will not have been written over with fresh data since then, and you'll be able to satisfy your restore. It helps if you have a large number of scratchtapes and not too high a turnaround. Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED] "Page, Jennifer" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 28-08-2001 15:50:49 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Archive recovery looked in archive list but did not find and answer, new to list.. Can we restore data from a tape that has been deleted from the ADSM database, but we still have physical copy of? Thanks, Jennifer
TSM 4.1.3 Server - ExpQuiet Yes problems?
Hi *TSMers,

Has anyone else come across this? Since upgrading from 3.7.2/3 to the 4.1.3 server on AIX, 'expire inventory' has been spilling its rather noisy output into the activity log, even though it is set to be 'quiet'. The dsmserv.opt has 'expquiet yes' in there, and a 'q opt' reports that the TSM server does think it should be being quiet. Bouncing the dsmserv process and changing the option to no and back to yes again have all failed... This has happened on all of the servers that we've upgraded and also on all that we have installed from scratch, and is rather a nuisance as some of our servers have an awful lot of client filespaces. Has anyone else come across this?

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Tel: 02392-56 0218 Mob: 07711 120 931 Internet: [EMAIL PROTECTED]

"Page, Jennifer" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 28-08-2001 16:57:47 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: SUMMARY: Archive recovery

Thanks. That's the answer we got from IBM as well. IBM gave us the following plan (if we chose to implement):
1) back up the current copy of the ADSM database
2) restore the old version of the database
3) get the file from tapes
4) restore today's version of the ADSM database back
Thanks again for the prompt reply!

-Original Message- From: Prather, Wanda [mailto:[EMAIL PROTECTED]] Sent: Tuesday, August 28, 2001 10:38 AM To: [EMAIL PROTECTED] Subject: Re: Archive recovery

No. To do that you would have to restore your TSM data base back to a time when it still contained the pointers to that tape.

-Original Message- From: Page, Jennifer [mailto:[EMAIL PROTECTED]] Sent: Tuesday, August 28, 2001 10:51 AM To: [EMAIL PROTECTED] Subject: Archive recovery

Looked in the archive list but did not find an answer; new to the list. Can we restore data from a tape that has been deleted from the ADSM database, but we still have a physical copy of? Thanks, Jennifer
Re: Archive recovery
Richard, >>> And you never want to tell anyone that you can do this. If they know you >>> can they'll want you to. >> On the other hand, letting the organization know that you, and only you, can >> perform awesome feats can only help advance your salary. Hmm, this would normally be true, except that - I work for IBM! 'Nuff said I think! I'm sure fellow IBM'ers on this list will know what I mean... Only joking :o) >> Just be sure to thwart >> requests for such feats by also advising that actually performing the feat will >> be infeasible for the organization because of the costs involved - which they >> hopefully won't associate with your salary and thus reduce it to make the >> feat feasible. Kinda sounds like a Simpsons episode. :-$ Simpsons episode? Now that sounds more like the IBM that I work for! Once again, only joking! >> Richard Sims, BU David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED]
Re: locate files in the tsm database
Hi Henrik, Try something simple, like: select NODE_NAME, FILESPACE_NAME, FILE_NAME, FILE_SIZE from CONTENTS where FILE_NAME like '% init.dat' This should find all instances of 'init.dat' whether backups or archives. I'm sure you could probably format this better, but the bare bones are there :o) Anyone else with any advances on the above? Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED] Henrik Ursin <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 30-08-2001 09:21:52 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: locate files in the tsm database A filesystem on a tsm node contained a lot of different init.dat files (all erased). Is it possible to make a query in the tsm database to find out where these files are positioned in the filesystem - some kind of select command? Med venlig hilsen / Regards Henrik UrsinTlf./Phone +45 35878934 Fax+45 35878990 Email [EMAIL PROTECTED] Mail: UNI-C DTU, bygning 304 DK-2800 Lyngby
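If the CONTENTS query proves slow (it walks the storage pool volume contents), an alternative is the BACKUPS table, which also gives you the directory (HL_NAME) - i.e. where each init.dat sat in the filesystem. The column names below are from memory on a 4.x server, so treat them as an assumption and check them first with: select colname from columns where tabname='BACKUPS'

  select NODE_NAME, FILESPACE_NAME, HL_NAME, LL_NAME, STATE, BACKUP_DATE from BACKUPS where NODE_NAME='MYNODE' and LL_NAME='init.dat'

(MYNODE being whichever node the filesystem lives on; inactive versions of the erased files should show up with STATE=INACTIVE_VERSION.)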
Re: locate files in the tsm database
Hi, Depends how many clients you have I guess - if you're backing up one or two clients then I couldn't agree more, but if you're looking after SP's then the gui approach might not be so appropriate. But hey, each to their own!!! Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED] "Prather, Wanda" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 30-08-2001 16:11:57 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: locate files in the tsm database I think it's a lot faster to let the client find it. Start the client (dsm on AIX) Click RESTORE Pull down VIEW, Active and Inactive Click the SEARCH icon (the magnifying glass on the AIX client) Type in the name of the filesystem as "start path" Type in the name of the file to search for (you can search on partial names) Click SEARCH Let the client do the walking! -Original Message- From: David McClelland [mailto:[EMAIL PROTECTED]] Sent: Thursday, August 30, 2001 4:41 AM To: [EMAIL PROTECTED] Subject: Re: locate files in the tsm database Hi Henrik, Try something simple, like: select NODE_NAME, FILESPACE_NAME, FILE_NAME, FILE_SIZE from CONTENTS where FILE_NAME like '% init.dat' This should find all instances of 'init.dat' whether backups or archives. I'm sure you could probably format this better, but the bare bones are there :o) Anyone else with any advances on the above? Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED] Henrik Ursin <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 30-08-2001 09:21:52 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: locate files in the tsm database A filesystem on a tsm node contained a lot of different init.dat files (all erased). Is it possible to make a query in the tsm database to find out where these files are positioned in the filesystem - some kind of select command? Med venlig hilsen / Regards Henrik UrsinTlf./Phone +45 35878934 Fax+45 35878990 Email [EMAIL PROTECTED] Mail: UNI-C DTU, bygning 304 DK-2800 Lyngby
Re: TSM & AIX 5L
Pétur, Regarding your comment on 'I don't know if Tivoli supports AIX 5L', it certainly does for TSM. Whereas I understand the BA client version is the same on 4.3.x and 5.x, there is an entirely different set of AIX 5L lpp's which ship with TSM Server 4.2, covering the server code itself for 64bit platforms, as well as device support for both 32 and 64bit. Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Tel: 02392-56 0218 Mob: 07711 120 931 Internet: [EMAIL PROTECTED] Pétur Eyþórsson <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 10-09-2001 12:17:43 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Hi Tyagi, I don´t know if Tivoli Supports AIX 5L but if you want to download products simply point youre browser at ftp://service.boulder.ibm.com/Storage/Tivoli-storage-magaement/maintenance ore something like that. Kveðja/Regards Pétur Eyþórsson Tæknimaður/Technician Kerfisfræðingur IR IBM SUPPORT Microsoft Certified System Engineer Nýherji HfSími TEL: +354-569-7700 Borgartún 37 105 Iceland URL:http://www.nyherji.is - Original Message - From: "Sandeep Tyagi" <[EMAIL PROTECTED]> To: <[EMAIL PROTECTED]> Sent: Friday, September 07, 2001 3:11 PM > Hello, > > Can anubody please tell me the location from where I can download the > binaries of TSM client 4.2 for AIX 5.1 ? > >Sandeep K Tyagi
Re: Ask again.
Sean,

Sounds like you'll be wanting to include:

  passwordaccess generate

in your dsm.sys file - this 'remembers' your password and automatically generates a new one when required, thus removing the necessity for the client to prompt you for a password. Is this what you meant?

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED]

Sean McNamara <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 03-10-2001 16:20:33 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Ask again.

Good Morning, I asked this question a bit earlier in the week and did not get much of a response. I am simply trying to do a "dsmc incr" on a server and it asks for a user id (interactively). I am attempting to run this command as part of a scheduled job and have not been able to figure out how to pass it a "return" in my unix script. Do I have to pass a return in the script or can I set an option to avoid the user id request?

> Any ideas?
> Sean
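For completeness, a minimal dsm.sys stanza sketch showing where the option sits (the server name and address are placeholders). The first interactive dsmc session after adding it will prompt once for the node password and store it; after that, scheduled or scripted backups run unprompted:

  SErvername          TSMSRV1
     COMMMethod       TCPip
     TCPPort          1500
     TCPServeraddress tsmsrv1.example.com
     PASSWORDAccess   generate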
Re: Ask again.
Dwight,

I guess it just depends upon whether you might have a security concern with hard-coding passwords into scripts - it will also cause a problem when a password expires on the server (the expiration period can of course be set), after which you'll have to update the password either in each one of your scripts or on the server. It all comes down to how stringent your security regulations are, I guess... With 'passwordaccess generate' in your dsm.sys you'll never have to faff about again!

Rgds,

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED]

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: Ask again.

just add the -pass=blah to your dsmc incr and it shouldn't ask for the id... dsmc incr -pass=blah now this would be with passwordaccess prompt which is what we run our unix clients with... Dwight

-Original Message- From: Sean McNamara [mailto:[EMAIL PROTECTED]] Sent: Wednesday, October 03, 2001 10:21 AM To: [EMAIL PROTECTED] Subject: Ask again.

Good Morning, I asked this question a bit earlier in the week and did not get much of a response. I am simply trying to do a "dsmc incr" on a server and it asks for a user id (interactively). I am attempting to run this command as part of a scheduled job and have not been able to figure out how to pass it a "return" in my unix script. Do I have to pass a return in the script or can I set an option to avoid the user id request?

> Any ideas?
> Sean
Re: ibm3493(4)
Hi 3494-ers, I'm pretty sure than TCPCFG is your friend here - in service mode on the LM PC you'll need to open a service window (i.e. an OS/2 prompt), from where you'll type in TCPCFG (or is it TCPCONFIG? It's one of the two!). This will bring up a comprehensive TCP settings applet with about all the tweakable TCP/IP settings you'll be needing to change. I don't recall which settings you need to change in the Library Manager application though ... Hope this helps some... David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED] Lloyd Dieter <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 04-10-2001 22:01:51 Please respond to [EMAIL PROTECTED] Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: ibm3493(4) Pretty sure you have to be in service mode to do that. You may want to let your CE do it, as you can get into menus where you can easily get into trouble, but usually the service password is left at "service". This will let you get into the menu to change the IP for the LM. As a side note, if you have any other controllers in the 3494 (like an A50/60, perhaps), you may also need to change the IP on those as well, depending on how they talk to the LM. If they are doing ARTIC and not ethernet to the LM, this will not apply. Good luck (& be careful) -Lloyd Gerald Wichmann wrote: > > Anyone know how to change the IP on an ibm3494 tape library? Currently > our TSM server was on 10.100.2.1/16 and we've moved it to > 217.16.217.1/24.. I need to update the tape library and change it's IP > to that subnet, then the TSM server to look to that new IP instead of > the old one. > > I'm not really sure how to do that on an ibm3494.. > > Gerald -- - Lloyd Dieter- Senior Technology Consultant Synergy, Inc. http://www.synergyinc.cc [EMAIL PROTECTED] Main:716-389-1260fax:716-389-1267 -
re - TSM Secure Web Admin Proxy
Hi,

I posted this a few months ago, but never heard any replies until this week, when someone mailed me as they too had the same problem and wondered if I had a response. So, here goes again - does anyone else actually use this?

> Hi All,
> Do many people out in *SM-land make use of the Secure Web Admin Proxy (SWAP) for TSM?
> For those who do, have you too come across the same problem when upgrading a TSM server to 4.x that the graphics on the web browser now no longer appear when using the SWAP which shipped with TSM 3.7?
> In an attempt to thwart this, I installed the new version of SWAP which shipped with the TSM 4.2 set - upon installing this I see that there are specific options for which versions of TSM servers you are likely to be administering - probably something to do with enhanced functionality (and thus extra icons and graphics required in the Web Admin Client) in later versions of the server.
> However, in spite of a seemingly smooth install, I can now no longer log on to *anything* *anywhere*... I am presented with an 'Invalid Login - The userid or password that was specified is not valid'. Looking on the TSM server I see an 'ANR0459W Signon for administrator DAVIDS refused - invalid administrator name and/or password submitted.'. Two things struck me as 'odd' at this point - 1. my login is correct, as I use this very same login from other admin clients to the same server without problems, and 2. this isn't a 'normal' 'session xx for administrator yy refused - invalid password submitted' refused-access error.
> I have also checked that the 'proxy' administrator is set up and functional too.
> Any ideas? Has anyone got the TSM SWAP working with 4.x (4.1.3 and 4.2 in our case) servers?

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED]
Re: TDS v2.1.1 installation
Hi Finn, If you've installed from the flat TDS CD's you won't find the TDS for SMA (Storage Management Analysis) bits (i.e. guides) on there - you'll need a separate CD, which will have the Decision Support Loader (a special application which pulls data from the TSM servers' databases and populates an RDBMS) and sql schema's to build the tables into the database. I did install this and had it working last year (memory's a bit sketchy now, but I recall it was a bit of a faff), getting all kinds of stats from our TSM servers, but I think that the packaging has now changed. In our latest CD bundle from Tivoli we got a 'Tivoli Storage Resource Reporting' CD which I believe may contain what you are looking for... It may be worth looking at service.boulder.ibm.com ftp site too as I remember there to be some folders on there which may have what you're looking for... There's also some Redbooks - Tivoli Storage Manager Reporting - SG24-6109-00, and lots of installation documentation on the CD. Hope this helps, Rgds, David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED] "Leijnse, Finn F SITI-ISES-31" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 14/11/2001 12:00:40 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: TDS v2.1.1 installation Hi, I have just installed TDS v2.1.1 on a WinNT system as we want to test some different reporting tools for our TSM servers and I want to know if I am right to think that I need a discovery guide to get my reporting on the way? Is a discovery guide like a template for the TSM database? From that point on I can start filling cubes? On the CD I have just installed I cannot find any discovery guides... > met vriendelijke groeten, regards et salutations, > Finn Leijnse > > ISES/31 - Central Data Storage Management > Shell Services International bv. > email: [EMAIL PROTECTED] >
Hanging client + 'Destroy Mutex failed: 16' on AIX ba client
Guys,

I've never seen this before in our environments, and then I see it twice in one day on two separate systems... TSM Client for AIX 3.7.1.0 (on 4.3.3 - SP2 CWS) - Server for AIX 4.1.3.0 (4.3.3 SP2 WH2 Wide). We have the client scheduler running on these boxes, but suddenly backups stopped running for no apparent reason, when the client appeared to hang... Invoking 'dsmc' from the command line, followed by 'q se' or 'q files' etc., is fine, but as soon as I try an 'inc' it seems to hang, with no data being sent to the server. The session on the server sits in idle until it eventually times out. Obviously the server is contactable from the client, as the 'q se' and 'q files' commands work fine; it's only when a backup attempt is made that I get problems. I tried (out of desperation) a 'sel' of a filespace, a 'show locks' on the server (none), a stop/start of the TSM server process, checked the dsm.sys and dsm.opt, even uninstalled and re-installed the clients (at the same level), but all with the same response / lack of response. Checking the dsmerror.log reveals:

  Destroy Mutex failed: 16

on each occasion. I'm not entirely sure if this is an error generated by my Ctrl-C'ing out of the client and severing the server connection (if there is one at this point), or a result of the underlying error. I guess I could check... Anyone come across this before? What on earth is this 'mutex destroy' which is failing (I've had a quick check through adsm.org and seen it mentioned in confusing despatches), and how can I fix it?

Rgds,

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Mail Point SGJ3, IBM, North Harbour, Portsmouth PO6 3AU, England Tel: 02392-56 0218 Mob: 07711 120 931 Internet: [EMAIL PROTECTED]
Re: Script to cancel certain sessions
Niklas/Dwight,

Assuming you're using a UNIX box, here are a couple of things to make this simpler:

o) If you put a '-comma' after your -id= -pass= then the session number won't split across two lines when you get to big session numbers.
o) Also, it's probably simpler to strip out the ',' using a 'sed s/,//g'.

Using this, we get:

#!/bin/ksh
for i in `dsmadmc -id= -passw= -comma q se | grep $1 | grep ^\"[0-9] | sed s/,//g | cut -d'"' -f2`
do
    echo "dsmadmc -id= -passw= can se $i"
done

This gives a list of session numbers belonging to whatever pattern you pass into the script. I've put an 'echo' in just so you don't blat the wrong session to begin with - when you're happy it works, take the echo out so that the 'dsmadmc ... can se $i' actually runs, and you're away! So:

  ./my_script NT_CLIENTS

would kill off all of your naughty NT client sessions which were left hanging around - not a bad thing at all! Any offers?

Rgds,

David McClelland --- Tivoli Storage Management Team IBM EMEA Technical Centre, Internet: [EMAIL PROTECTED]

"Cook, Dwight E (SAIC)" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 29/11/2001 14:45:34 Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] cc: Subject: Re: Script to cancel certain sessions

Well, just remember that if you want to cancel either a session or a process, you can't have any commas in the number, so you can't just do a query and cut the number out. Here is an example of how I deal with process numbers:

#!/bin/ksh
for PROCNUM in $(dsmadmc -id=someid -pass=somepass q pro | cut -c1-8 | grep ^' ' | grep [0-9])
do
    echo $PROCNUM | grep , 1>/dev/null 2>&1
    if [ $? -eq 0 ] ; then
        FIRST=$(echo $PROCNUM | cut -d',' -f1)
        SECOND=$(echo $PROCNUM | cut -d',' -f2)
        PROCNUM=$FIRST$SECOND
    fi
    echo dsmadmc -id=someid -pass=somepass cancel process $PROCNUM
done
exit

so just change the above to do a "q session", grep for your node name(s), cut the first 6 characters and use that to cancel the sessions. now if your sessions get above 99,999 you will have problems because the session number itself will be across two lines of output... hope this helps later dwight

-Original Message- From: Niklas Lundstrom [mailto:[EMAIL PROTECTED]] Sent: Thursday, November 29, 2001 6:29 AM To: [EMAIL PROTECTED] Subject: Script to cancel certain sessions

Hello TSM:ers

I'm trying to write a script that should cancel the sessions for certain servers if their backup still runs at 9 am, but I'm stuck. How can I do that? It should be automated by a command script.

Regards Niklas Lundström Föreningssparbanken IT 08-5859 5164
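An alternative sketch that skips the screen-scraping by asking the SESSIONS table directly - the column names (SESSION_ID, CLIENT_NAME) are from memory, so check them with a 'select * from sessions' before relying on this, and fill in your own -id/-pass:

#!/bin/ksh
# list (and, once the echo is removed, cancel) sessions for the client name passed as $1
for i in $(dsmadmc -id=admin -pass=secret -comma \
           "select session_id from sessions where client_name='$1'" | grep -E '^"?[0-9]' | tr -d '",')
do
    echo dsmadmc -id=admin -pass=secret cancel session $i
done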
Re: TSM on Linux RedHat 6.2
Janeth, I remember Andy Raibeck and I covering exactly this about TSM and Redhat 6.2 in the list a couple of years ago - here's the link: http://msgs.adsm.org/cgi-bin/get/adsm0108/735.html Unless anything has changed since then - Andy? Hope this helps! David McClelland Global Management Systems Reuters Ltd -Original Message- From: Lopez Janeth [mailto:[EMAIL PROTECTED] Sent: 01 July 2003 13:13 To: [EMAIL PROTECTED] Subject: TSM on Linux RedHat 6.2 I found the following information in the IBM Tivoli WEB When will Tivoli Storage Manager (TSM) support Linux RedHat 6.2? The TSM client will probably work under Linux RedHat 6.2 because it does not appear from Redhat's site that the Kernel changed; however, it is not officially supported and not on the roadmap to be supported. This is a quote from the technical evangelist in response to the question: The LINUX client will be supported on SuSE 6.3, RedHat 6.1, Caldera 2.3, and TurboLinux 6.0 and the Client Manual said: Software Requirements The backup-archive client requires the following software to run: Linux kernel 2.2.13 or higher Linux kernel 2.4.0 or higher glibc 2.1.2 or higher, glibc 2.2 libstdc++2.9.0 or higher X Window System X11R6 (for end user GUI only) RPM 3.0.0 or higher, 4.0 The following Linux distributions meet these requirements: SUSE 7.0, 7.1 Red Hat 7.0, 7.1 Caldera Linux 2.4 Turbo Linux 6.0 Now, if the kernel de Linux RedHat 6.2 is 2.4 , Tivoli support Linux RedHat 6.2? Janeth --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Forcing backup clients to disk...
Guys, I'm probably missing something quite obvious or fundamental here, so forgive if this sounds like a silly question... We're limited to quite a small disk storage pool on one of our NT TSM Servers. Clients backing up to this eventually fill the disk storage pool, and then begin backing up directly to tape. Meanwhile, a migration process is also underway, trying to migrate data from the disk pool onto tape, thus contending with our limited number of tape drives. My question is whether it is possible to prevent the clients from backing up to the successor (i.e. tape) storage pool, and force them into a media wait state on the disk storage pool, so that they will only continue when the migration processes have freed sufficient space for them to carry on backing up to the disk storage pool. Any ideas? Am I indeed forgetting something really basic...? Rgds, David McClelland --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: LTO throughput - real world experiences
Michael, Regarding automation of collecting throughput performance stats directly from SAN switches, I would suggest, although I haven't tried it, that automation is possible through the SNMP agent on most switches. IBM 2109's, for example, I recall have a fairly comprehensive set of data available reporting throughput per port, and an SNMP get script could pull these stats at a given interval to a machine elsewhere on the network. IBM/Tivoli Netview I seem to remember has performance monitoring via MIB's in-built and can automate this fairly easily, even displaying the data in a chart if necessary. That being said, I'm sure there are dozens of other SAN management tools out there which perform similar performance/stat/event gathering functions - it's just a question of how complex they are, and how much you'd want to pay for them! Rgds, David McClelland Global Management Systems Reuters, London -Original Message- From: Wheelock, Michael D [mailto:[EMAIL PROTECTED] Sent: 08 July 2003 18:41 To: [EMAIL PROTECTED] Subject: [spam] Re: LTO throughput - real world experiences Hi, If you have fibre connected LTO drives, one really simple way is to connect to the web console of the switch and monitor the performance of the ports on the switch. This isn't easily automatable, but it is a very accurate representation of what you are seeing in terms of throughput. Michael Wheelock Integris Health of Oklahoma -Original Message- From: Shawn Price [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 08, 2003 12:36 PM To: [EMAIL PROTECTED] Subject: Re: LTO throughput - real world experiences What is the best way to determine what your throughput is for each process? Are you just going by the activity log? Thanks! Shawn >>> [EMAIL PROTECTED] 07/08/03 1:34 PM >>> >I'm curious as to what kind of MB/sec throughput people are seeing with TSM and LTO drives. It varies drastically for us based upon the objects being moved, of course. 10-15MB/s / drive is normal for us overall. >How many MB/sec does a migration process produce in your environment? We achieve about 12-14MB/s per drive when performing migration. >Does anyone have any DB's streaming directly to LTO and some figures? Appreciate any feedback Yes, our Exchange boxes stream nicely at about 15MB/s per drive. HTH! Chris Murphy IT Network Analyst Idaho Dept. of Lands Office: (208) 334-0293 [EMAIL PROTECTED] This e-mail may contain identifiable health information that is subject to protection under state and federal law. This information is intended to be for the use of the individual named above. If you are not the intended recipient, be aware that any disclosure, copying, distribution or use of the contents of this information is prohibited and may be punishable by law. If you have received this electronic transmission in error, please notify us immediately by electronic mail (reply). -- -- Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
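I haven't scripted this against a 2109 myself, but as a sketch of the polling idea using the net-snmp command-line tools and the standard interface MIB counters - the switch name, port index and 'public' read community below are all assumptions to substitute, and counter wrap is ignored:

#!/bin/ksh
# crude per-port throughput sampler: read the 64-bit output octet counter twice, a minute apart
SWITCH=san2109a ; PORT=5 ; COMMUNITY=public
C1=$(snmpget -v2c -c $COMMUNITY -Oqv $SWITCH IF-MIB::ifHCOutOctets.$PORT)
sleep 60
C2=$(snmpget -v2c -c $COMMUNITY -Oqv $SWITCH IF-MIB::ifHCOutOctets.$PORT)
echo "port $PORT: roughly $(( (C2 - C1) / 60 / 1048576 )) MB/s out over the last minute"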
MSCS Win2K, TSM Journal Backups and SAN attached disks!!!
*SMers, I understand from searching back on the list that this one might be a bit of a hot potato, but here goes anyway: We'd ideally like to set up TSM Journaling on a Win2K MSCS Cluster using TSM 4.2 Client with SAN attached disk. So, my questions are: o) Does the TSM Journal Engine support SAN-attached disks - I recall something about the Win32 api ReadDirectoryChangesW only monitoring 'local' file system changes - does this preclude SAN attached disk? o) Would this work in an MSCS cluster? Has anyone done this, or tried to? Any advice? Any help greatly received, as always! Rgds, David McClelland Global Management Systems Reuters Ltd --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: Error in deleting filespace
Hi Zosi, Two quick questions which might help here: O - what level is your TSM server at the moment? O - have you upgraded your TSM server from an earlier version recently? Rgds, David McClelland Global Management Systems Reuters Ltd -Original Message- From: Zosimo Noriega [mailto:[EMAIL PROTECTED] Sent: 23 July 2003 05:42 To: [EMAIL PROTECTED] Subject: Error in deleting filespace Hi all, I got the error in deleting filespace, please see the log from activity log. I hope everybody can help me. thanks. Zosi Noriega 07/23/03 08:33:59 ANR2017I Administrator ZBN3669 issued command: DELETE FILESPACE SAPPLIBP1 3 NAMETYPE=FSID TYPE=ANY DATA=ANY WAIT=NO 07/23/03 08:33:59 ANR0984I Process 2546 for DELETE FILESPACE started in the BACKGROUND at 08:33:59. 07/23/03 08:33:59 ANR0802I DELETE FILESPACE Registries (fsId=3) (backup/arc- hive data) for node SAPPLIBP1 started. 07/23/03 08:33:59 ANR0800I DELETE FILESPACE Registries (fsId=3) for node SAPPLIBP1 started as process 2546. 07/23/03 08:33:59 ANR0609I DELETE FILESPACE started as process 2546. 07/23/03 08:33:59 ANR0104E imutil.c(7529): Error 2 deleting row from table "Expiring.Objects". 07/23/03 08:33:59 ANRD imfsdel.c(1863): ThreadId<79> Error 19 deleting group leader 0 48984968. 07/23/03 08:33:59 ANR0985I Process 2546 for DELETE FILESPACE running in the BACKGROUND completed with completion state FAILURE at 08:33:59. --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: Lotus Notes Non-TDP backups
Stefan, Gordon,

Urrgh - no! As soon as you try to restore any of these files which will have changed during the backup, even with open file support, you'll more than likely get a corrupt .nsf database! Notes .nsf files are pretty sensitive: any change somewhere in one part of the db will have repercussions elsewhere in the db, and before you know it you won't be able to open up the .nsf at all, and will get 'b-tree structure invalid' or similar complaints from Notes. You need to have the Notes server process 'down' in order to quiesce the databases and prevent them from being written to before backing them up.

The *usual* way of handling Notes backups without using TDP is to use a 'backup' server - the concept works like this: You have a separate Notes server (i.e. a 'backup' Notes server) which contains replicas of the databases on the live Notes servers. Using Notes replication, all changes to the live databases are replicated to the replicas on the backup server. At a time controlled by you, you take the Notes server process down on the backup server (as no users connect directly to the backup Notes server, there will be no outage) and then perform the backups of the now quiesced .nsf files using the normal TSM BA client (a rough sketch of such a wrapper follows at the end of this note). Once the backup is complete, bring up the Notes server on the backup server and begin replication with the live servers to bring the backup .nsf's up to date again. Depending upon hardware, you can have many live Notes servers' worth of .nsf's contained on a single backup Notes server - just ensure you have enough time to replicate the data from live to backup server.

In terms of recoveries, as the backup Notes server is down during backups, you might want to have an additional Notes partition somewhere on a backup server which you can use as a 'recovery server' - a Notes server which is *always* up, regardless of whether a backup is taking place. Users can connect to this directly and pull back any recovered .nsf databases, or even just documents from a .nsf.

Hope this helps :o)

David McClelland Global Management Systems Reuters Ltd

-Original Message- From: Stefan Holzwarth [mailto:[EMAIL PROTECTED] Sent: 29 July 2003 07:06 To: [EMAIL PROTECTED] Subject: AW: Lotus Notes Non-TDP backups

I would try openfile support in 5.2. First tests look quite good. Regards Stefan Holzwarth

-Original Message- From: Gordon Woodward [mailto:[EMAIL PROTECTED] Sent: Tuesday, 29 July 2003 04:01 To: [EMAIL PROTECTED] Subject: Lotus Notes Non-TDP backups

We currently have over 160Gb of Notes mail databases that need to be backed up nightly. Due to incompatibilities with the Notes TDP, our version of TSM (v4.2.2.5) and the way compaction runs on our Notes servers, we have to use the normal Tivoli backup client to back up the mailboxes. It takes about 12 hours for all the databases to get backed up each night, but the vast amount of this time seems to be spent trying and then retrying to send mailboxes to the TSM server.
A typical schedule log looks like this: 28-07-2003 19:51:53 Retry # 2 Normal File--> 157,548,544 \\sdbo5211\d$\notes\data\mail\beggsa.nsf [Sent] 28-07-2003 19:52:28 Normal File-->70,778,880 \\sdbo5211\d$\notes\data\mail\bingleyj.nsf [Sent] 28-07-2003 19:54:05 Retry # 1 Normal File--> 349,437,952 \\sdbo5211\d$\notes\data\mail\bignasck.nsf [Sent] 28-07-2003 19:55:10 Normal File--> 131,072,000 \\sdbo5211\d$\notes\data\mail\Bishnic.nsf Changed 28-07-2003 19:56:58 Normal File--> 265,289,728 \\sdbo5211\d$\notes\data\mail\bellm.nsf [Sent] 28-07-2003 19:58:08 Retry # 1 Normal File--> 131,072,000 \\sdbo5211\d$\notes\data\mail\Bishnic.nsf [Sent] 28-07-2003 20:00:46 Normal File--> 387,186,688 \\sdbo5211\d$\notes\data\mail\BLACKAD.NSF Changed 28-07-2003 20:03:52 Normal File--> 367,263,744 \\sdbo5211\d$\notes\data\mail\BERNECKC.NSF Changed 28-07-2003 20:06:18 Retry # 1 Normal File--> 387,186,688 \\sdbo5211\d$\notes\data\mail\BLACKAD.NSF [Sent] 28-07-2003 20:10:11 Normal File--> 1,011,613,696 \\sdbo5211\d$\notes\data\mail\binneyk.nsf Changed 28-07-2003 20:11:52 Retry # 2 Normal File--> 953,942,016 \\sdbo5211\d$\notes\data\mail\andrewsj.nsf [Sent] 28-07-2003 20:12:01 Retry # 1 Normal File--> 367,263,744 \\sdbo5211\d$\notes\data\mail\BERNECKC.NSF [Sent] 28-07-2003 20:12:05 Normal File-->10,485,760 \\sdbo5211\d$\notes\data\mail\bousran.nsf [Sent] 28-07-2003 20:13:40 Normal File--> 720,633,856 \\sdbo5211\d$\notes\data\mail\BLACKC.NSF Changed 28-07-2003 20:18:58 Retry # 3 Normal File--> 1,863,057,408 \\sdbo5211\d$\notes\data\dbecna.nsf Changed Is there anything we can do reduce the window for this backup? Both the TSM server and our Notes server have dedicated 1Gb links so bandwidth isn't a problem. The Backup Copy Group for
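For what it's worth, the quiesce/backup/restart step on the backup Notes server can be as simple as a cron-driven wrapper along these lines - stop_domino.sh and start_domino.sh are placeholders for whatever your site uses to stop and start the Domino server process, and /notes/data is an assumed data directory:

#!/bin/ksh
# runs on the backup (replica) Notes server only - the live servers stay up throughout
/usr/local/bin/stop_domino.sh                 # quiesce: with the server task down, nothing writes to the .nsf files
dsmc incremental /notes/data/ -subdir=yes     # ordinary BA client backup of the now-quiet replicas
rc=$?
/usr/local/bin/start_domino.sh                # bring it back up and let replication catch the replicas up
exit $rc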
FW: Lotus Notes Non-TDP backups
Stefan Fair enough, and if you've made proven progress in your company of using the openfile snapshot feature (which I personally have not yet played with) then good stuff! My only experience (using the below configuration) was in an AIX Notes server environment which was a) pretty large and b) pretty active. In the amount of time taken for the ba client to have piped a 100MB or larger (sometimes well into GB for some users) .nsf mailfile or database to its TSM server it would the majority of the time have gotten written to and caused an inconsistency, which for a user's mailfile backup just wasn't worth the risk. Nor could we afford downtime on the live service. Admittedly, it certainly wasn't a cheap solution, requiring lots of extra hardware and support, but our guys looked into using TDP for Domino and it just wasn't even slightly feasible in the size of our environment at the time (using 3494/3590 and local SSA disk as we were), with projections for simple restores taking *so* many tape mounts and *so* much time. So, in summary - whatever works for your scale of environment is good, but just ensure that *plenty* of testing is carried out and carries on being carried out to ensure that your restores are good ones. After all how many times have we said to our customers, "Oh yes, the backups are running fine!" and then muttered under our breath, "it's the restores that are going to be the problem..." ;o) All the best, David (now using Outlook instead of Notes!) McClelland Global Management Systems Reuters Ltd -Original Message- From: Stefan Holzwarth [mailto:[EMAIL PROTECTED] Sent: 29 July 2003 13:49 To: [EMAIL PROTECTED] Subject: AW: Lotus Notes Non-TDP backups Hi David, as i understood the openfile feature a snapshot is made for the whole filesystem. Therefore there should be no problem with db-consistency between db-files if they live all on the same volume. Since in my company our lotus db files have proofen some kind of robustness (we only have a small domino environment) i can not total agree with your absolute no to this topic. Domino uses an underlaying simple database that has to maintain some robustnes towards sudden failures like power off, lost connectivity to the db on a networkshare or some bluescreens. From the other side if an openfile agent waits (configurable) for seconds for inactivity there should not occur a cut through a write operation. I'm sure there are better and more saver ways doing backups of Domino, but most need more efforts or resources. Kind regards, Stefan Holzwarth -Ursprüngliche Nachricht- Von: David McClelland [mailto:[EMAIL PROTECTED] Gesendet: Dienstag, 29. Juli 2003 10:44 An: [EMAIL PROTECTED] Betreff: Re: Lotus Notes Non-TDP backups Stefan, Gordon, Urrgh - no! As soon as you try to restore any of these files which will have changed during the backup, even with open file support, you'll more than likely get a corrupt .nsf database! Notes .nsf files are pretty sensitive and any change somewhere in one part of the db will have repercussions elsewhere in the db and before you know it you won't be able to open up the .nsf at all, and will get 'b-tree structure invalid' or similar complaints from Notes. You need to have the Notes server process 'down' in order to quiece the databases and prevent them from being written to before backing them up. The *usual* way of handling Notes backups without using TDP is to use a 'backup' server - the concept works like this: You have a separate Notes server (i.e. 
a 'backup Notes server) which contains replicas of the databases on the live Notes servers. Using Notes replication, all changes to the live databases are replicated to the replicas on the backup server. At a time controlled by you, you take the Notes server process down on the backup server (as no users connect directly to the backup Notes server, there will be no outage) and then perform the backups of the now quiesced .nsf files using the normal TSM BA client. Once the backup is complete, bring up the Notes server on the backup server and begin replication with the live servers to the backup .nsf's up to date again. Depending upon hardware, you can have many live Notes server's worth of .nsf's contained on a single backup Notes server - just ensure you have enough time to replicate the data from live to backup server. In terms of recoveries, as the backup Notes server is down during backups, you might want to have an additional Notes partition somewhere on a backup server which you can use as a 'recovery server' - a Notes server which is *always* up, regardless of whether a backup is taking place. Users can connect to this directly and pull back any recovered .nsf databases, or even just documents from a .nsf. Hope this helps :o) David McClelland Global Managemen
Re: Accidentally issued delete volume ...
Arnaud, Depending upon how much you want the files on this volume back, it *is* possible, but not using any cosy commands like restore volume etc. As TSM only deletes references from the database and not the actual data from the tapes, providing you haven't overwritten the data yet on the volume that you have erased you'll probably be okay. To be extremely brief, you'd install an additional instance of TSM Server on your server (make sure you use a different TCPPort etc. in your dsmserv.opt!) and restore a TSM database backup from before you deleted the tape. You would then be able to get hold of the data on the tape, and if nothing else restore it to a temporary staging area... There's been mention of this on the list a few times before - this process is also useful for getting back expired data, depending upon how dynamic your tape pool usage is... Rgds, David McClelland Global Management Systems Reuter Ltd -Original Message- From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED] Sent: 13 August 2003 13:27 To: [EMAIL PROTECTED] Subject: Accidentally issued delete volume ... Hi List, Quick question : I accidentally issued "delete volume" on a false primary volume (typing error), after I realised my mistake I cancelled the job, but some files where already deleted. Is there a chance, using "restore volume", to get this data back, or does TSM consider those files as deleted in copy pool too ? TIA. Regards. Arnaud =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= | Arnaud Brion, Panalpina Management Ltd., IT Group | | Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland | | Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01 | =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
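Purely as a sketch of that 'second instance' approach - the directory, port, date and volume sizes below are made up, and it assumes the temporary instance can see the database backup volumes and a copy of the production devconfig file:

  mkdir /tsm_recover && cd /tsm_recover
  cp /usr/tivoli/tsm/server/bin/dsmserv.opt .       # then edit: a different TCPPORT (e.g. 1502) plus its own VOLHIST/DEVCONFIG entries
  cp /usr/tivoli/tsm/server/bin/devconfig.out .     # so the restore knows about the DB backup volumes
  /usr/tivoli/tsm/server/bin/dsmfmt -m -log log1.dsm 17      # scratch recovery log volume for the temporary instance
  /usr/tivoli/tsm/server/bin/dsmfmt -m -db db1.dsm 1000      # scratch DB volume - at least as big as the original DB
  /usr/tivoli/tsm/server/bin/dsmserv format 1 log1.dsm 1 db1.dsm
  /usr/tivoli/tsm/server/bin/dsmserv restore db todate=08/12/2003   # point-in-time restore from before the delete volume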
Re: Recover TSM Database with ERROR
Eric,

Hmn - looks like you'll need to add some entries to the dsmserv.opt file on your TSM server. This is the TSM server options file, and the TSM server process re-reads it every time you start it up. The entries you need might, in the simplest form, look like this:

  VOLHIST volhist.out
  DEVCONFIG devconfig.out

These will allow you to dump the volume history and device configuration files in the TSM server directory (a quick sketch of the follow-up commands is below). This should get you on your way for now - all of this info is in the TSM Server Admin Guide though.

David McClelland Global Management Systems Reuters Ltd

-Original Message- From: TechnicalLib [mailto:[EMAIL PROTECTED] Sent: 22 August 2003 10:06 To: [EMAIL PROTECTED] Subject: Re: Recover TSM Database with ERROR

Oscar, The following is the output while I execute the 'backup devconfig' command. Would you please give me more advice on the following error? Thank you.

==output of the 'backup devconfig' command ==
tsm: TSM>backup devconfig
ANR2017I Administrator ADMIN issued command: BACKUP DEVCONFIG
Do you wish to proceed? (Yes/No) yes
ANR2017I Administrator ADMIN issued command: BACKUP DEVCONFIG
ANR1434W No files have been identified for automatically storing device configuration information.
ANR2395I BACKUP DEVCONFIG: Device configuration files have NOT been defined for automatic recording - specify a file name for device configuration information.
ANS8001I Return code 3.
ANR2017I Administrator ADMIN issued command: ROLLBACK
tsm: TSM>

- Original Message - From: Oscar Kolsteren <[EMAIL PROTECTED]> To: <[EMAIL PROTECTED]> Sent: Friday, August 22, 2003 4:48 PM Subject: Re: Recover TSM Database with ERROR

Hi Eric, did you make a devconfig and volhist backup after the DB backup and before the DB restore? - backup devconfig - backup volhist If you didn't, start up TSM (if you still can!!) and make those backups. Then the restore will succeed. Good luck, Oscar

-Original Message- From: TechnicalLib [mailto:[EMAIL PROTECTED] Sent: Friday, 22 August 2003 10:41 To: [EMAIL PROTECTED] Subject: Recover TSM Database with ERROR

Hello, all. For now, I can back up the TSM database successfully; however, I cannot do a successful database restore, and the following is the command result. Please see it.

===Backup Database successfully===
tsm: TSM>backup db type=full devclass=file_device_class
ANR2017I Administrator ADMIN issued command: BACKUP DB type=full devclass=file_device_class
ANR0984I Process 3 for DATABASE BACKUP started in the BACKGROUND at 00:32:00.
ANR2280I Full database backup started as process 3.
ANS8003I Process number 3 started.
tsm: TSM>ANR8340I FILE volume /yszhang/tsmdb/61537520.DBB mounted.
ANR1360I Output volume /yszhang/tsmdb/61537520.DBB opened (sequence number 1).
ANR4554I Backed up 512 of 557 database pages.
ANR1361I Output volume /yszhang/tsmdb/61537520.DBB closed.
ANR4550I Full database backup (process 3) complete, 557 pages copied.
ANR0985I Process 3 for DATABASE BACKUP running in the BACKGROUND completed with completion state SUCCESS at 00:32:01.

==The following is where I want to recover the TSM database, with the error messages==
# ./dsmserv restore db
ANR7800I DSMSERV generated at 08:00:25 on Dec 7 2000.
Tivoli Storage Manager for AIX-RS/6000 Version 4, Release 1, Level 2.0
Licensed Materials - Property of IBM 5698-TSM (C) Copyright IBM Corporation 1990,2000. All rights reserved. U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corporation.
ANR0900I Processing options file /usr/tivoli/tsm/server/bin/dsmserv.opt. ANR000W Unable to open default locale message catalog, /usr/lib/nls/msg/C/. ANR8200I TCP/IP driver ready for connection with clients on port 1500. ANR0200I Recovery log assigned capacity is 108 megabytes. ANR0201I Database assigned capacity is 116 megabytes. ANR0306I Recovery log volume mount in progress. ANR1437E No device configuration files could be used. # 'No device configuration files could be used.' what is the meaning ?? and what should I do in this case ? Thank you ! Best Regards! Eric -- Eric Zhang (Yongsheng Zhang) Beijing Visionsky Information Technology Co.,Ltd. Tel: 8610-88091533/4/5 Ext. 212 Fax: 8610-88091539 Mobile Phone: 13601030319 E-mail: [EMAIL PROTECTED] Web Site: http://www.visionsky.com.cn Room 830,Building B,Corporate Squrare, NO.35 Finance Street Xicheng district, Beijing, 100032, P.R.China - ATTENTION: The information in this electronic mail message is private and confidential, and only intended for the addressee. Should you receive this message by mistake, you are hereby notified that any disclosure, reproduction, distribution or use of this message is strictly prohibited. Pleas
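To make that concrete, the volume history option is VOLUMEHISTORY in full (I don't believe VOLHIST on its own is recognised), and once both options are in dsmserv.opt the files can also be produced on demand. A minimal sketch, with file names purely illustrative:

# dsmserv.opt
VOLUMEHISTORY  /usr/tivoli/tsm/server/bin/volhist.out
DEVCONFIG      /usr/tivoli/tsm/server/bin/devconfig.out

# after restarting the server, or to write them immediately to a named file:
tsm: TSM> backup devconfig filenames=/usr/tivoli/tsm/server/bin/devconfig.out
tsm: TSM> backup volhistory filenames=/usr/tivoli/tsm/server/bin/volhist.out

It's the devconfig file that 'dsmserv restore db' is complaining about in the ANR1437E above - it needs the DEFINE DEVCLASS for file_device_class from there before it can read the database backup volume. If the server really can't be started to run those backups, a devconfig file can be hand-built containing that one DEFINE DEVCLASS statement, but that is very much the last resort.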
Re: *Real* admin interface (Was: q vol f=g ??!?)
Thomas, And if I recall correctly, at this point, the next poster usually says: "But you *can* still use the old ADSMv3.1 admin GUI with current server versions...". I know I do for lots of things such as tape/volume manipulation and viewing filespace listings etc, and it does the job more than adequately. To be frank, performing most of the operations that feature in server versions > 3.1, I would really only want to do from the command line anyway. All the same, I would *love* to see an updated version... Rgds, David McClelland Global Management Systems Reuters Ltd -Original Message- From: Thomas Rupp, Vorarlberger Illwerke AG [mailto:[EMAIL PROTECTED] Sent: 22 August 2003 14:36 To: [EMAIL PROTECTED] Subject: *Real* admin interface (Was: q vol f=g ??!?) Hello, this "the old admin GUI is much better than the Web interface" subject pops up now and then. The poster of the first message always dreams of a Windows or Java GUI that supports the latest TSM server (btw I'm dreaming too). A few minutes later the list gets drowned by "me too" messages. I think there was/is a SHARE requirement for a *real* admin interface (can you filter your tape volumes with the web interface?). I don't understand why Tivoli isn't listening to their customers. Tivoli should start a survey on how many customers would like to have such an animal and on what platform. Based on this results it should be easy to provide a GUI for the platform users want. So please Tivoli, LISTEN! Ok, enough grumbling for today. Have a nice weekend Thomas Rupp Vorarlberger Illwerke AG --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
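On the volume-filtering example, the admin command line handles it comfortably; a couple of sketches (pool name invented):

q vol stgpool=TAPEPOOL status=filling
select volume_name, status, pct_utilized, access from volumes where stgpool_name='TAPEPOOL' and pct_utilized<50 order by pct_utilized

which is exactly the sort of thing the web interface still makes hard work of.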
Re: TSM & SQL databases
Hi Ruth, In response: Q >>> Once the SQL backup is complete there is a need to delete this backup file from the server to ensure there is adequate database space prior to the next scheduled database backup. I'm currently using a different backup product that actually backs up the file and then deletes the file from the server [...] We are looking at moving this server to TSM, however from what I can see TSM does not do any type of file deletion from the client. A >>> Sounds like a prime candidate for a TSM Archive with the '-deletefiles' option - once the file has been sent off to TSM storage, it gets deleted from the client, thus freeing up space and meeting your requirement. With a TSM 'archive' you can specify exactly how long, in days, that you wish to retain this archived file for (in the archive copygroup 'retver' setting). Q >>> Is there anyone using TSM to backup SQL databases and if so how do you delete the files from the client once they are backed up?? A >>> Yes, TDP for SQL is product you'll be looking at (or ITSM for Databases or any number of nominal variations - we just call them TDP's, for Tivoli Data Protection as they were once known). This backs up your MSSQL database at an API level, directly from the database, without any intermediate files/exports/dumps etc required, and therefore no need to worry about deleting any files afterwards. Hope that helps - there's lots in the docs, including a whole redbook entitled "Using Tivoli Data Protection for Microsoft SQL Server.pdf" which you will find by a quick search on http://publib-b.boulder.ibm.com/Redbooks.nsf/Portals/Tivoli Rgds, David McClelland Tivoli Storage Manager Certified Consultant Operations Backup and Recovery Projects Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Ruth Peters <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 11/18/2004 10:54 AM Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To [EMAIL PROTECTED] cc Subject TSM & SQL databases I have a windows 2000 server with a SQL database. The database is backed up using SQL's enterprise utility. Once the SQL backup is complete there is a need to delete this backup file from the server to ensure there is adequate database space prior to the next scheduled database backup. I'm currently using a different backup product that actually backs up the file and then deletes the file from the server. We are looking at moving this server to TSM, however from what I can see TSM does not do any type of file deletion from the client. The other option would be to use TDP for SQL to manage the database backups, however there is additional cost with that. Is there anyone using TSM to backup SQL databases and if so how do you delete the files from the client once they are backed up?? TSM SERVER AIX 5.2.3 TSM Version 5.2.3 WINDOWS 2000 SERVER TSM 5.2.0.1 Service Pack Level 4.0 SQL Server 2000 Standard edition Service Pack 3 for SQL Ruth Peters Dasd Storage Administrator Watkins Motor Lines, Inc. - IT/HDQ [EMAIL PROTECTED] (863)688.6662 x5452 **NOTICE*** This e-mail, including any attachments, is intended for the receipt and use by the intended addressee(s) only and may contain privileged, confidential, work-product and/or trade secret information of a proprietary nature. 
If you are not an intended recipient of this e-mail, you are hereby notified that any unauthorized use, distribution or re-transmission of this e-mail or any attachment(s) is strictly prohibited and that all rights of the sender and/or intended recipients are hereby reserved without prejudice thereto. This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you receive this email in error, please immediately notify the sender. Please note that this financial institution neither accepts nor discloses confidential member account information via email. This includes password related inquiries, financial transaction instructions and address changes. - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
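If the dump-then-archive route is taken, the scheduled client command might look something like this (the path and management class name are invented for the sketch - the archive copy group behind that management class carries the RETVER you want):

rem archive last night's SQL dump files, then remove them from the client
dsmc archive "D:\sqldumps\*.bak" -subdir=yes -archmc=SQLDUMP_1YR -deletefiles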
TDP SQL - 'set backups' / out of sync TDP backups
Guys, To begin, a familiar story for many of you I'm sure - we have a customer who has MSSQL databases, and wants TDP backups going back a month or so. Easy, bread and butter stuff. Happy with this, they want their backups from every Friday to be retained for 4 weeks, and from every 4th Friday to last for 7 years. As the TDP stores the SQL backups in a backup copygroup, we don't have that level of flexibility easily built in to the tool to provide what is essentially more of an 'archive' type request. The normal response at this point is to 'use different node names - ABC123_WEEKLY or ABC123_MONTHLY', and this is indeed what I have done frequently before. However, I can't help feeling this isn't quite perfect, having to make our non TSM savvy client (on a remote site) faff around with different node names and explain to them why TSM has to be handled in this way. It's not that big a deal really, but I'm aiming for simplification here. Now, my question is whether anyone is achieving the fulfilment of such requirements in another way, for example using 'set backups'. According to the docs, 'set backups are intended to be used in unusual one-of-a-kind situations [...] Because set backups are always uniquely named (like log backups), they do not participate in expiration due to version limit [...] The reason for using a set backup is if you do not want the backup to be part of your normal expiration process.' Sounds like a possibility - anyone using these already? We've a similar requirement coming in for Informix backups - again, from what I've seen of ONBAR so far, having out-of-sync weekly/monthly/yearly backups could be a challenge when using the same node name. With Oracle backups, we've managed to overcome this by customising the RMAN backup piece tags, and expiring manually from RMAN based upon these to identify logs, weekly and monthly backup pieces etc - works very smoothly indeed. Your thoughts, especially on a Friday afternoon, are always much appreciated :O) Rgds, David McClelland Tivoli Storage Manager Certified Consultant Operations Backup and Recovery Projects Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
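For reference, the 'different node names' workaround usually boils down to a second client options file and a second scheduled command - something like the sketch below, where the node, path and database names are purely illustrative and dsm_monthly.opt points at a node (ABC123_MONTHLY) whose backup copy group keeps versions for the 7 years:

rem normal nightly run uses the default dsm.opt / node ABC123
tdpsqlc backup MyDatabase full
rem 4th-Friday run uses the long-retention node instead
tdpsqlc backup MyDatabase full /tsmoptfile=c:\progra~1\tivoli\tsm\tdpsql\dsm_monthly.opt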
Re: Remote Backups....
Monte, I echo everything that Juraj and Joe suggest, stressing the importance of nailing down your customers'/business' recovery requirements, and then designing a backup solution around those, rather than the other way around. I trust that when you say 'NT' servers, you mean Win2K and above, otherwise TSM Journalling Engine isn't going to work for you. In the event of a 'disaster, would you really be required to restore *all* of the data, or would the server be rebuilt from the OS/image upwards, and only key data need to be recovered. I commonly ask the question 'do we really need to have 1000's versions of notepad.exe in our TSM server'. Look at the main recovery scenarios that you are putting in a backup solution for: single file corruption/deletion should be possible in a short amount of time. Directory deletion again should be fine for any moderately sized directories. Otherwise, just work out the maths and ask the business if they can wait x hours for a full 40GB restore. Assuming 33% compression and a real-life average of just above 15KB/s (might be more if you're lucky) over your 256Kb line, you're looking at a maximum of 80MB/hour restore rate. If your 40GB server happens to be at the end of this one, you have a little over 20 days to get back your 40GB! Or perhaps my maths is a little skewed here... Rgds, David McClelland Tivoli Storage Manager Certified Consultant Operations Backup and Recovery Projects Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Salak Juraj Sent: 24 November 2004 16:51 To: [EMAIL PROTECTED] Subject: AW: Remote Backups perfect! in addition to this points, do make some planning & tests for restore. Basically, your problem is full restore. Either you can afford to wait long enoung to restore over remote line, (learning about "restart restore" may be important) or produce backup sets and send them per post to the remote location. best regards Juraj > -Ursprüngliche Nachricht- > Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im Auftrag > von Joe Crnjanski > Gesendet: Mittwoch, 24. November 2004 17:25 > An: [EMAIL PROTECTED] > Betreff: Re: Remote Backups > > -Use compression > > -Use sub-file backup. (doesn't work on files larger than 2GB; > otherwise enormous improvements) > > -Encryption doesn't hurt if you are moving the data over public > network > (Internet)- will be improved in TSM 5.3 > > -Don't backup system object every day (around 200MB-300MB). > Make additional schedule for system object and C drive (maybe on > weekends) > > -Choose carefully what you need to backup (include/exclude) > > Regards, > Joe C. > > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf > Of Michael, Monte > Sent: Tuesday, November 23, 2004 3:32 PM > To: [EMAIL PROTECTED] > Subject: Remote Backups > > Fellow TSM administrators: > > My company is currently looking at backing up approximately 40 NT > servers at our remote locations, back to our local data center. Each > location has around 10gb - 40gb of storage, and very minimal daily > change activity on the files. Some of the locations are 256k data > lines, and some are t1 lines. > > Does anyone have a list of best practices? What are some of the > options that you have found to improve the process of remote backups > via TSM to a central location. Any help and input that you can > provide is much appreciated. 
> > > Thank You, > > > > Monte Michael > > > This communication is for use by the intended recipient and contains > information that may be privileged, confidential or copyrighted under > applicable law. If you are not the intended recipient, you are hereby > formally notified that any use, copying or distribution of this > e-mail, in whole or in part, is strictly prohibited. Please notify > the sender by return e-mail and delete this e-mail from your system. > Unless explicitly and conspicuously designated as "E-Contract > Intended", this e-mail does not constitute a contract offer, a > contract amendment, or an acceptance of a contract offer. > This e-mail does not constitute a consent to the use of sender's > contact information for direct marketing purposes or for transfers of > data to third parties. > > Francais Deutsch Italiano Espanol Portugues Japanese Chinese > Korean > > http://www.DuPont.com/corp/email_disclaimer.html > - Visit our
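For what it's worth, the arithmetic behind those numbers, assuming the 256Kb line:

256 Kb/s                ~ 32 KB/s theoretical, call it ~22 KB/s sustained in real life
22 KB/s x 3600          ~ 80 MB/hour restore rate
40 GB                   ~ 41,000 MB; at 33% compression, ~27,000 MB over the wire
27,000 MB / 80 MB/hour  ~ 340 hours, i.e. roughly two weeks
(without the compression credit it's nearer 510 hours - the 20-odd days quoted above)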
Successes with Solaris/Veritas disk performance with TSM
Guys, Not so much a question, but some sharing of my experiences with using TSM on Solaris with Veritas Volume Manager - I hope these may help some others who might be having similar experiences, and perhaps there might be some improvements on what we've been doing. The background: various SAN-attached TSM Servers (versions 5.1.6.7 - 5.2.3.3) running on Solaris with Veritas Volume Manager and Veritas Cluster Services on Sun V480/V440 hardware. *Very* poor performance when backing-up/archiving from clients to our SAN diskpool - even on an uncontended 100Mb link, we were seeing only between 1.5MB/s and 3MB/s throughput to disk. So, performance troubleshooting time. When pointing the client to a tape pool instead (LTO2), the data flew through at the full 11MB/s (i.e. the 100Mb/s LAN was my bottleneck). When creating a temporary disk stgpool in /tmp on the TSM server, the data flew in as well at 11MB/s (/tmp is a memory area on Solaris, virtual disk), so it wasn't TSM writing to any old disk that was the problem. Writing to a local disk (/opt) and not our SAN disk, we were still seeing 1.5MB/s, so it didn't appear to be a SAN/FC/HBA related issue either. Finally, FTP from client to TSM server was consistently rating at the full 11MB/s over the LAN, which suggested that it was *something* to do with the way that TSM was interacting with the disk layer, rather than general slow disk performance. Anyway, a little Veritas Volume Manager tuning followed, and the following setting was applied:

vxtunefs -s -o discovered_direct_iosz=512 (e.g. vxtunefs -s -o discovered_direct_iosz=512 /stgpool/tsma)

Our 'discovered_direct_iosz' was previously around 256000. This was applied (after some trial and error and help from a VxVM man here), and our disk write performance has picked up no end (from a local client, backing up a file in /tmp, seeing 40MB/s plus instead of 1.5MB/s!) and, so I understand from our support guys, so has everything else. Bear in mind that, in order to ensure this vxtune setting survives restarts/failovers, you'll need a file called /etc/vx/tunefstab containing a line like the following for each filesystem you want to apply it to:

/dev/vx/dsk/tsmlog_itsma_dg/tsmdb_itsma_vol02 discovered_direct_iosz=512

I hope this helps someone out there - does anyone else have any improvements on the above or experiences of similar tweaks they'd like to share with the list? I'd like to try going with raw volumes next. Rgds, David McClelland Tivoli Storage Manager Certified Consultant Infrastructure Backup and Recovery Development Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
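For completeness, running vxtunefs against the mount point with no -o should (if memory serves) print the current values back, which makes a handy sanity check after a remount or cluster failover:

vxtunefs /stgpool/tsma | grep discovered_direct_iosz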
Re: TDP for Oracle Error !!!!!!
Hi Josi, Has this ever worked before for you? Or has it suddenly stopped working? Things to check - are all of your tdpo.opt, dsm.opt, inclexcl.txt, dsm.sys and agent.lic files readable by the oracle user (or whoever you run your RMAN commands as)? Do `tdpoconf showenv` and `tdpoconf password` still work for you? Can you locate your 'tdpoerror.log' and publish this to us? Do a 'find / -name tdpoerror.log -print' for it if you don't know where it normally resides, as it's not always obvious... Rgds, _______ David McClelland IBM Certified Deployment Professional - TSM 5.2 Backup and Recovery Infrastructure Development Shared Infrastructure Development Reuters Ltd 85 Fleet Street London, EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Jose Antonio Atala Olaechea Sent: 15 March 2005 20:34 To: ADSM-L@VM.MARIST.EDU Subject: TDP for Oracle Error !! Hi TSM'rs When I tried to do a full backup using the TDP for Oracle the following error message appear: RMAN> run 2> { 3> allocate channel t1 type 'sbt_tape' parms 4> 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo.opt)'; 5> backup 6> filesperset 15 7> format 'df_%t_%s_%p' 8> (database); 9> release channel t1; 10> } 11> RMAN-00571: === RMAN-00569: === ERROR MESSAGE STACK FOLLOWS === RMAN-00571: === RMAN-03009: failure of allocate command on t1 channel at 03/12/2005 11:38:35 ORA-19554: error allocating device, device type: SBT_TAPE, device name: ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer Linux Error: 106: El otro extremo ya estaonectado Additional information: 7011 ORA-19511: Error received from media manager layer, error text: SBT error = 7011, errno = 106, sbtopen: system error Recovery Manager complete. In the same way when I tried to do a archive log backup using the TDP for Oracle the following message appear: RMAN> run { allocate channel t1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo.opt)'; backup filesperset 5 format 'al_%t_%s_%p' (archivelog all delete input); release channel t1; } 2> 3> 4> 5> 6> 7> 8> 9> 10> RMAN-00571: === RMAN-00569: === ERROR MESSAGE STACK FOLLOWS === RMAN-00571: === RMAN-03009: failure of allocate command on t1 channel at 03/15/2005 10:46:26 ORA-19554: error allocating device, device type: SBT_TAPE, device name: ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer Linux Error: 106: El otro extremo ya estaonectado Additional information: 7011 ORA-19511: Error received from media manager layer, error text: SBT error = 7011, errno = 106, sbtopen: system error My environment is: Linux RHAS 3.0 Version Kernell Linux 2.4.21-27 compat-gcc-c++-7.3-2.96.128 acl-2.2.3-1 Oracle 9i - 9206 TSM SERVER 5.2 CLIENTE TSM 5.3 TDP 5.2 reagards Josi Antonio Atala Olaechea _ Charla con tus amigos en lmnea mediante MSN Messenger: http://messenger.latam.msn.com/ - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
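If it helps, the checks above as commands, run as the Oracle user and using the tdpo.opt path from Josi's RMAN script:

export TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo.opt
tdpoconf showenv      # does the API/TDPO environment resolve cleanly?
tdpoconf password     # is the stored password still good?
find / -name tdpoerror.log -print 2>/dev/null   # track down the error log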
TSM 5.3, AES-128 encryption and API/TDP backups
Hi Guys, Just a quicky - is anyone out here using the 128-bit AES encryption capabilities of the 5.3 API to encrypt TDPOracle on Solaris or TDPSQL backup? I believed this was possible in 5.3, but I'm not having many (or in fact any) hits in the online docs (or IBM.com or ADSM.org) trying to find out how to get this working, only how to get the BA client to encrypt via encryptiontype, include.encrypt etc. As ever, any help or pointers gratefully received. Many thanks, David McClelland IBM Certified Deployment Professional TSM 5.2 Tivoli Storage Manager Certified Consultant Infrastructure Backup and Recovery Development Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
FW: TSM 5.3, AES-128 encryption and API/TDP backups
An update, as I've sorted it for now - it is kinda buried away, but searching on 'transparent encryption' in the TSM 5.3 API manual reaps dividends: http://publib.boulder.ibm.com/infocenter/tivihelp/index.jsp?topic=/com.i bm.itsmc.doc/ansa77.htm I've got this working for TDPSQL on Win32 now (although the only simple way I could think of verifying it was working was to enable tracing on traceflags encrypt and encryptdetail and checking for AES128 comms), and will take a look at the Solaris TDPO as soon as I can. The storage of the encryption key within the TSM server database (i.e. not on the client itself as with the 'traditional' TSM encryption) is interesting, and does make me think about the possibility of managing offsite TSM DB backups separately (i.e. different physical location) from our offsite data tapes... At least, that's what I think security might pick up on when we run this past them... Rgds, David McClelland Reuters -----Original Message- From: David McClelland Sent: 22 April 2005 14:38 To: 'ADSM: Dist Stor Manager' Subject: TSM 5.3, AES-128 encryption and API/TDP backups Hi Guys, Just a quicky - is anyone out here using the 128-bit AES encryption capabilities of the 5.3 API to encrypt TDPOracle on Solaris or TDPSQL backup? I believed this was possible in 5.3, but I'm not having many (or in fact any) hits in the online docs (or IBM.com or ADSM.org) trying to find out how to get this working, only how to get the BA client to encrypt via encryptiontype, include.encrypt etc. As ever, any help or pointers gratefully received. Many thanks, David McClelland IBM Certified Deployment Professional TSM 5.2 Tivoli Storage Manager Certified Consultant Infrastructure Backup and Recovery Development Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
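For anyone trying the same thing, the client options involved boil down to something like this in the dsm.opt used by the Data Protection/API client - the option names are the documented ones, but the include pattern below is the broad catch-all form, so check the DP manual for the exact object naming if you only want certain objects encrypted:

* ENCRYPTKEY GENERATE is the 'transparent' flavour: the key is generated
* automatically and held in the TSM server database rather than on the client
ENCRYPTIONTYPE  AES128
ENCRYPTKEY      GENERATE
INCLUDE.ENCRYPT \...\*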
Re: TDP for SQL restore seems to hang
How big is very big? If we're talking very large, what happens during a restore is that it will appear to 'hang' whilst MSSQL server formats the volumes it needs. This is nothing to do with TSM, which will only kick into action when MSSQL server has finished formatting its volumes. If you've a big database, and slow disks, this might take a while... Up your timeout, monitor activity on your SQL server and try again... Rgds, David McClelland Tivoli Storage Manager Certified Consultant Infrastructure Backup and Recovery Development Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Loon, E.J. van - SPLXM Sent: 28 April 2005 15:17 To: ADSM-L@VM.MARIST.EDU Subject: TDP for SQL restore seems to hang Importance: High Hi *SM-ers! I have a few SQL clients which seem to "hang" when trying to restore a large database. Restoring a small one (Northwind) works fine, but when they want to restore a larger database TDP just says "Waiting for TSM server.." and nothing else happens until the session is cancelled by the server after 3600 seconds. The client has a session in the SendW state, but the amount of bytes transferred is minimal. The input tape also gets mounted. The actlog just contains the following lines: 28-04-2005 13:18:11 ANE4991I (Session: 714765, Node: KL1012EZ-SQL) TDP MSSQL Win32 ACO3003 Data Protection for SQL: Starting full restore of backup object SCReport to database SCReport on server KL1012EZ. (SESSION: 714765) 28-04-2005 14:20:04 ANR0481W Session 714765 for node KL1012EZ-SQL (TDP MSSQL Win32) terminated - client did not respond within 3600 seconds. (SESSION: 714765) The tdpsql.log also doesn't show any cause, just that the restore is canceled due to the session cancel by the server: 04/28/2005 15:19:25 ACO5436E A failure occurred on stripe number (0), rc = 418 04/28/2005 15:19:25 ANS1017E (RC-50) Session rejected: TCP/IP connection failure 04/28/2005 15:19:26 Restore of SCReport failed. Does anybody know how I can find out why the restore is not working? Thanks in advance! Kindest regards, Eric van Loon KLM Royal Dutch Airlines ** For information, services and offers, please visit our web site: http://www.klm.com. This e-mail and any attachment may contain confidential and privileged material intended for the addressee only. If you are not the addressee, you are notified that no part of the e-mail or any attachment may be disclosed, copied or distributed, and that any other action related to this e-mail or attachment is strictly prohibited, and may be unlawful. If you have received this e-mail by error, please notify the sender immediately by return e-mail, and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for the incorrect or incomplete transmission of this e-mail or any attachments, nor responsible for any delay in receipt. ** - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
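On the 'up your timeout' point, the knob in question is the server's COMMTIMEOUT option, which can be changed on the fly from the admin command line (3600 = one hour, the value being discussed here):

q opt commtimeout
setopt commtimeout 3600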
Re: [spam] Excluding "Command Line" verbiage
Dave, With the '-dataonly=yes' directive - eg. dsmadmc -dataonly=yes -id=id -passw=passw "q proc" If I recall, you'll need to be at client level 5.2 or above for this to work. David McClelland Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Dave Zarnoch Sent: 11 May 2005 14:21 To: ADSM-L@VM.MARIST.EDU Subject: [spam] Excluding "Command Line" verbiage Folks, Sorry if this is a FAQ.. How do I exclude the "Command Line Administrative Interface" verbiage when I run a command? Thanks! DaveZ - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
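A slightly fuller example of the sort of thing this makes possible when feeding server output into scripts (id and password invented):

dsmadmc -id=admin -password=xxxxx -dataonly=yes -displaymode=list "q proc"
dsmadmc -id=admin -password=xxxxx -dataonly=yes -commadelimited "select node_name,platform_name from nodes"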
Re: ANR0481W and COMMTIMEOUT setting
Neil, Rob et al, Similarly with MSSQL, particularly during restores of large databases where SQL Server (completely aside from TDPS) goes away and formats its database files leaving the TDPS session open and idle - I tend to recommend an hour (3600) for COMMTIMEOUT for this reason. Shame we can't have a 'global' COMMTIMEOUT setting which can be overridden by a node, group or domain level COMMTIMEOUT setting - that way I could have all of my 'TDPS_DOMAIN' nodes with a longer timeout that my normal BA client backups... Rgds, David McClelland Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Neil Rasmussen Sent: 11 May 2005 21:08 To: ADSM-L@VM.MARIST.EDU Subject: Re: ANR0481W and COMMTIMEOUT setting I noticed that the node in your example is a TDP SQL node (if I am reading correctly). It is not uncommon for nodes that are TDPs to require longer timeouts. What happens is that databases will start up a session with the TSM Server and then will go off and collect the information to send - depending on the amount of processing that occurs, which quite often correlates to the size of the database, can take a *very* long time. For instance, I know that with Oracle/TDP Oracle during an incremental of larger databases, Oracle may spend a long time trying to locate changed blocks. I have seen these times take longer than 30 minutes (although not common). The short of it is that you may need a time out of 1800+ to accomodate these databases. Regards, Neil Rasmussen Software Development Data Protection for Oracle Andrew Raibeck/Tucson/[EMAIL PROTECTED] Sent by: "ADSM: Dist Stor Manager" 05/11/2005 12:49 PM Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: ANR0481W and COMMTIMEOUT setting 1800 seconds is a *very* long commtimeout setting. Assuming the clients in question are on relatively fast networks (e.g., not dial-up), I would tend to suspect that a problem in the network (though I could not tell you what that problem is), especially if it continues to occur. Regards, Andy Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence. "ADSM: Dist Stor Manager" wrote on 2005-05-11 12:29:27: > Thanks John, appreciate the info. > > John Naylor wrote, on 05/11/05 01:42: > > Robert, > > My commtimeout is set 3600 with no problems Where you are seeing > > hits where you had none before you may want to consider/alleviate > > the cause per the description in Admin Ref > > > > Specifies how long the server waits for an expected client message during > > an > > operation that causes a database update. If the length of time > > exceeds this time-out, the server ends the session with the client. > > You may want to increase the > > time-out > > value to prevent clients from timing out. 
Clients may time out if there is > > a heavy > > network load in your environment or they are backing up large files > > John > > > > > > > > > > > > robert moulton <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor > > Manager" > > 10/05/2005 20:07 > > Please respond to > > "ADSM: Dist Stor Manager" > > > > > > To > > ADSM-L@vm.marist.edu > > cc > > > > Subject > > ANR0481W and COMMTIMEOUT setting > > > > > > > > > > > > > > Greetings ADSM List - Seeking your advice and/or a nudge toward applicable > > documentation ... > > > > TSM Version 5, Release 2, Level 4.0 > > AIX 5.2 > > > > We're seeing an increasing number of these ANR0481W messages in > > server > > logs: > > > > ANR0481W Session 38 for node M_SQLSAN01_CL_U (WinNT) terminated > > - client did not respond within 1800 seconds. > > > > As you can see our COMMTIMEOUT setting is 1800 seconds. Are there risks > > involved with boosting it even higher? > > > > Thanks in advance for your advice. > > > > Robert Moulton > > University of Washington > > Computing & Communications > > > > > > > > > > ** The information in this E-Mail is confidential and may be legally > > privileged. It may not represent the views of Scottish and Southern > > Energy Group. > > It is intended solely for the addres
Re: Re Windows 2000 client reconfiguration
Hi Farren, Been here before ourselves... might be interesting/useful to work out why the TSM client believes the file has changed. Run a backup of the files that you believe it should *not* be backing up but is, but with a trace enabled (hmn, I forget the exact traceflag we used now - might be worth you taking a look at Richard Sims' (not-so!)Quick Facts for the correct one) and this will tell you which attribute it is that it thinks has changed, be it NT permissions, modified date etc... I remember uncovering a somewhat undocumented '-testflag SKIPNTSECURITYCHANGES' during this saga last year which did exactly what the name suggests... Hope that helps point you in the right direction... Rgds, David McClelland Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Farren Minns Sent: 12 May 2005 09:08 To: ADSM-L@VM.MARIST.EDU Subject: Re Windows 2000 client reconfiguration Morning all TSMers Running TSM 5.1.6.2 on a Solaris server. Attached to 1*3494 library with two*3590H1A drives. I have a possible problem here. One of the sys admins for the Windows 2000 servers has informed me that they are going to need to replace an entire Windows 2000 server due to severe hardware issues that they have been experiencing. No amount of support has fixed the problem and hence the drastic move. The server has got some 820,000 files on it amounting to approximately 450GB. Here is what we want to do. Configure a new server and copy the data across in such a way that it doesn't look like it's changed. The new server will have the exact same Node name, file system layout etc. I don't really want to be faced with backing up the entire server all over again as we are getting low on both tape space in the library and database space. This was not something I had foreseen. >From what I have been told, early tests have not been promising and TSM still thinks files have changed even if the last change date/time etc has not altered. Does anyone have any experience with this or any advice they can give that may help us avoid a long backup that will hog system resources? Many thanks in advance Farren Minns Solaris System Admin / Oracle DBA IT - Hosting Services John Wiley & Sons, Ltd ## The information contained in this e-mail and any subsequent correspondence is private and confidential and intended solely for the named recipient(s). If you are not a named recipient, you must not copy, distribute, or disseminate the information, open any attachment, or take any action in reliance on it. If you have received the e-mail in error, please notify the sender and delete the e-mail. Any views or opinions expressed in this e-mail are those of the individual sender, unless otherwise stated. Although this e-mail has been scanned for viruses you should rely on your own virus check, as the sender accepts no liability for any damage arising out of any bug or virus infection. ## - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
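A sketch of the sort of trace run being suggested - the paths are made up, the SERVICE traceflag is the general-purpose one support normally asks for, and the testflag name is as remembered above, so do verify both against QuickFacts or with support before relying on them:

rem trace one of the suspect files and look for the attribute comparison
dsmc incremental "D:\data\suspectfile.dat" -traceflags=service -tracefile=c:\temp\tsmtrace.out

rem the undocumented security-attribute bypass mentioned above - use with care
dsmc incremental "D:\data\*" -subdir=yes -testflag=skipntsecuritychanges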
Re: Path from NAS to existing Library/Drives
Hi Fred, Which NAS are you using? Things to double-check here are that your datamover definitions are correct (i.e. correct HLA, LLA, username and password). I witnessed similar errors (EMC Celerra) when I was given an incorrect IP address for the datamover, defined my TSM datamover against this and was unable to define any paths with the 'see previous error messages' error reported (after a 300 or so second wait). Once I'd got to the bottom of this and made sure the datamover had the correct IP address, the path definitions went through successfully almost immediately. Good luck. David McClelland Customer Domain Expert - Transactions Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of fred johanson Sent: 20 May 2005 20:22 To: ADSM-L@VM.MARIST.EDU Subject: Path from NAS to existing Library/Drives We've attached a new NAS to an existing TSM system, with a 3584 and 6 3592s. Following the steps in Chapter 6 of the Admin Guide, everything went smoothly until step 6, Defining Tape Drives and Paths. Since the drives already exist in TSM it looks like all I should have to do is define the path from NAS to drive: def path nasbox tsmdrive srct=datamover destt=drive devi=name-supplied-by hardware-guy. This produced ANR1763E, i.e., Command Failed - see previous error messages. But there are none of those. I tried variations of case for the device name with the same result. However, when I went to the WebAdmin and tried, leaving off the device name and using the AutoDetect button, I got a success message - for all six drives. But q path f=d shows nothing in the device name. So, in my confusion, I ask, did I miss something when I used the CLI? or is there something amiss in the Admin Guide? are the paths really there and usable? Fred Johanson ITSM Administrator University of Chicago 773-702-8464 - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
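For comparison, this is roughly the shape of the definitions involved - every name, address and device below is a site-specific placeholder, and the DEVICE= value is the tape device name as the NAS head itself sees it (the 'name-supplied-by-hardware-guy'):

define datamover nasbox type=nas hladdress=10.1.2.3 lladdress=10000 userid=ndmp password=secret dataformat=celerradump
define path nasbox tsmdrive1 srctype=datamover desttype=drive library=lib3584 device=c436t0l0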
Re: [spam] Encryption
Hi Eric, You're in luck, as TSM offers various options for encrypting data as it is sent from the client. Up until TSM 5.3, you were limited to 56bit DES for BA Client backups where you do have to manage the keys yourself on the client. At TSM 5.3 and above, you can have up to 128bit AES backups at the API level as well (in other words, your TDP backups can be encrypted too) - these can be managed using 'Transparent Encryption' which means that you no longer have to manage keys at the client side as they're stored on the TSM server along with the data. Hope that helps, David McClelland Customer Domain Expert - Transactions Shared Infrastructure Development Reuters 85 Fleet Street London EC4P 4AJ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Jones, Eric J Sent: 25 May 2005 02:33 To: ADSM-L@VM.MARIST.EDU Subject: [spam] Encryption Good Evening. Running TSM 5.2.2 on AIX 5.2 Clients are a mix ofSolaris 7,8,9 AIX 4.2, AIX 5.2, Windows NT, Windows 2000 and Windows 2003 most running TSM 5.2.2. I've been reading the forums and was thinking I would probably not have to worry about this until now. I was asked to check and see what it would take to encrypt our data. I have 2 questions. 1: Is it a problem to use an encryption device to encrypt the data before it is sent to the TSM server?I know I would have to have the encryption key to restore the data but I was wondering if there were any problems that I would face. 2: Can TSM encrypt the data? I've read 1 article that indicated it was in TSM 5.3 but I did not see much on 5.2.2 which we are running. Are there any potential problems with using TSM to encrypt if it is possible? I know if you loose the key your done but other than that. Thanks for all the help, Eric - Visit our Internet site at http://www.reuters.com To find out more about Reuters Products and Services visit http://www.reuters.com/productinfo Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
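As a concrete example of the pre-5.3, client-managed flavour (the file pattern is invented, and note the key lives on the client here, so losing it means losing the data):

* dsm.opt (Windows client in this example) - DES56, key kept on the client
ENCRYPTKEY      SAVE
INCLUDE.ENCRYPT D:\payroll\...\*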
TSM Server and DS4300 setup
Hi Guys, Good to be back after a little break away... I see things are as busy in *SM-land as ever. 2 x TSM Servers (5.2.4.5) running on AIX 5.2 ML04 on two p570 LPARs, each with 2 CPUs and 4GB RAM. 2 x DS4300 arrays, each populated with 14 x 73GB disks. Kind of a newbie question in a way - it's not very often some of us get to build a system from scratch, mainly having to firefight/keep running systems that are already installed. I have around 2TB of DS4300 disk spread across the two DS4300s, half of which I can use for my 2 TSM servers, which equates to roughly 500GB of raw storage per TSM server instance. I'd like to poll for opinions on what the best setup for these might be. I've read and understand the 'best practices' about multiple (4 - 16) volumes for the DB and a single volume for the log (sequential access etc.) - opinions vary about the use of JFS and RLVs for these and also for disk stgpool volumes. However, translating these thoughts into an optimal configuration for implementation with a DS4300 is something that I've not done before, especially given the abstraction layers of DS4300 caching etc. So, my question is whether anyone has implemented TSM with the DS4300 (aka FAStT600) before, and if so, what do you recommend to be the optimal DS4300 disk layout to fit in with TSM's requirements? Remember, I've two DS4300s, so I could split my two TSM servers between the two as necessary. Many thanks for your thoughts guys. David McClelland
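Whatever the array layout ends up being, the TSM side of carving it up looks broadly like this - the sizes and paths are placeholders, with the general aim of several smaller DB volumes, one big log volume and a handful of large stgpool volumes:

# database volumes - several of them, spread across filesystems/LUNs
dsmfmt -m -db /tsm/db/vol01.dsm 4096
define dbvolume /tsm/db/vol01.dsm

# a single recovery log volume (sequential access pattern, 13GB overall maximum)
dsmfmt -m -log /tsm/log/vol01.dsm 4096
define logvolume /tsm/log/vol01.dsm

# pre-formatted disk storage pool volumes
dsmfmt -m -data /tsm/stg/vol01.dsm 32768
define volume diskpool /tsm/stg/vol01.dsm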
ATL/Quantum L500 tape library - only supported on Linux???
Hi all, I'm looking at possibly using an ATL/Quantum PowerStor L500 tape library (2 or 3 DLT drives, 10 or so slots) as a test library for a test/development TSM installation. However, checking the TSM supported devices matrix, it seems that this ATL only appears in the Linux supported devices list, and not in the AIX/Solaris/Windows list - is anyone already using, or has anyone already used, an ATL/Quantum L500 with AIX/Solaris? And if so, why doesn't it appear in the supported devices matrix anymore? Seems as though there are a few hits on the list as to people who may have been using this ATL on these platforms in the past... Cheers for any insight guys, David McClelland
Re: server 5.1.9.0 rec log problem - multiple servers?
Alex, Be more explicit in your dsmserv extend log command - give the full path, e.g. >>> dsmserv extend log d:\tsmdata\server1\extrareclogspace 800 at the moment, it's looking for it in your PWD, not where you defined it in the first place. Take onboard Wanda's note about breaking the 13GB log size too - good luck, let us know how you get on. Rgds, David McClelland -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Alexander Lazarevich Sent: Tuesday, August 23, 2005 10:54 AM To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] server 5.1.9.0 rec log problem - multiple servers? TSM 5.1.9.0 server on win2K server. I've got a very confusing problem, and our server is totally dead at the moment so I'm in a bit of an emergency. We've been running TSM 5.1.6.5 for 2-3 years, I recently (8 months ago) upgraded to 5.1.9.0, no problems. Last night, we ran out a tapes, and therefor the automatic backup of the database couldn't happen, and therefor the rec log filled up to maximum, and the server died and cannot restart until I increase the size of the log. Easy enough, I've had to do this before, but the standard commands are not working: dsmftm -log D:\tsmdata\server1\extrareclogspace 8000 that works fine, create/formats the new volume, but then: dsmserv extend log extrareclogspace 8000, dies with: C:\Program Files\tivoli\tsm\server1>dsmserv extend log extrareeclogspace 8000 ANR0900I Processing options file c:\program files\tivoli\tsm\server1\dsmserv.opt ANR7800I DSMSERV generated at 10:06:54 on Mar 18 2004. Tivoli Storage Manager for Windows Version 5, Release 1, Level 9.0 Licensed Materials - Property of IBM 5698-ISE (C) Copyright IBM Corporation 1999,2002. All rights reserved. U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corporation. ANR0200I Recovery log assigned capacity is 12000 megabytes. ANR0201I Database assigned capacity is 32000 megabytes. ANR0306I Recovery log volume mount in progress. ANR9969E Unable to open volume C:\PROGRAM FILES\TIVOLI\TSM\SERVER1\EXTRAREECLOGSPACE. The most likely reason is that another TSM server is running and has the volume allocated. ANRD admstart.c(3483): ThreadId<23> Error 31 from lvmAddVol. ANR7835I The server thread 1 (tid 1448) terminated in response to server shutdown. ANR7835I The server thread 22 (tid 1548) terminated in response to server shutdown. ANR7835I The server thread 23 (tid 1096) terminated in response to server shutdown. ANR0991I Server shutdown complete. The problem is, we don't run two servers. We've got one server, that's all we've ever had, that's all we want. And this brings me back to something I've noticed since I first installed the server, which I thought was normal: In C:\Program Files\tivoli\tsm\ there are two server folders. a "server" and a "server1". In the "server" folder that's where most of the executables live, like dsmserv.exe and dsmsvc.exe. In "server1" folder there are .opt .log and .bat files, among others. But definately, it seems that the guts of the server live in "server" folder. However, the TSM management console window lists "TSM Server1" as the real server, it doesn't even see "Server". Whenever I have to deal with the MMC window, which I try to avoid at all costs, it only ever lists Server1, never just Server. 
As far as I remember, Server and Server1 have always both existed from when I first setup TSM 5.1.6.5 and it seemed wierd to me, but everything worked, so I thought Server1 and Server were somehow intertwinned, but they were basically the same server. Maybe that is wrong, and only now is the problem showing itself. Anyone have an idea? Again, we are down 100% until I can increase the reclog size. Thanks in advance, Alex If you are not an intended recipient of this e-mail, please notify the sender, delete it and do not read, act upon, print, disclose, copy, retain or redistribute it. Click here for important additional terms relating to this e-mail. http://www.ml.com/email_terms/
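Spelled out with a fresh volume name (illustrative) and full paths, the sequence is roughly:

dsmfmt -m -log d:\tsmdata\server1\extrareclog2 1000
dsmserv extend log d:\tsmdata\server1\extrareclog2 1000

run from the instance directory (c:\program files\tivoli\tsm\server1) so that the right dsmserv.opt is picked up. Two things to watch, though: the failing command quoted above has an extra 'e' in it (extrareeclogspace) compared with the extrareclogspace volume that was actually formatted; and, per Wanda's note, with the assigned log capacity already at 12000MB there's only around 1GB or so of headroom before the 13GB recovery log ceiling, so extending by the full 8000MB won't fly. On Windows, if memory serves, dsmserv also accepts a '-k <keyname>' argument to pick the server instance's registry key, which is where the 'Server' vs 'Server1' naming comes from.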
Re: message numbers not available
Hi Goran, This might help - double-check with `lslpp -l "tivoli.tsm*"` or similar that your tivoli.tsm.msg message filesets are all at the same/correct level and not still at the old level... David goc <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 30/08/2005 15:46 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] message numbers not available hi all, after upgrading from 5.2.4 to 5.3.1 i get tsm: TSM03>disa ses << Message number 2553 not available for language /USR/LIB/NLS/MSG/EN_US/ >> and other similar messages ! did i forget something ? thanks its on AIX 5.2 ml4 p640 model thanks goran
Re: message numbers not available
Goran, Did you rebuild the table of contents with an `inutoc .` in the directory in which you put the tsm messages fileset? David goc <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 31/08/2005 09:19 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] message numbers not available hi, sorry i forgot to ask, how to install just messages ? it seems that smitty menus are having problem with this fileset ... i get No installable software products were found on the media. message ?! i puted just tivoli.tsm.msg.EN_US.server 5.3.0.0 fileset into install directory ... what now ? thanks goran - Original Message - From: "Richard Sims" <[EMAIL PROTECTED]> To: Sent: Tuesday, August 30, 2005 5:00 PM Subject: Re: message numbers not available > Goran - From my notes in ADSM QuickFacts: > > Message number not available for language EN_US > These errors are generally seen when the TSM messages filesets > are not > at the same level as the TSM Server. As a result, certain > messages do > not exist in the message repository and cannot be displayed > within TSM. > In AIX, issue the 'lslpp -l tivoli.tsm.*' command to list all of > the TSM > filesets currently installed. Ensure that the messages filesets > are at > least at the same maintenance level as the server runtime fileset. > > Richard Sims > > On Aug 30, 2005, at 10:46 AM, goc wrote: > >> hi all, >> after upgrading from 5.2.4 to 5.3.1 >> i get >> tsm: TSM03>disa ses >> << Message number 2553 not available for language /USR/LIB/NLS/MSG/ >> EN_US/ >> >> >> and other similar messages ! >> >> did i forget something ? >> >> thanks >> >> its on AIX 5.2 ml4 p640 model >> >> thanks >> goran >> >
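For the record, the usual sequence for dropping just the messages fileset onto AIX is along these lines (the directory is wherever the fileset was unpacked):

cd /tmp/tsm531
inutoc .                                        # rebuild the .toc so smitty/installp can see it
installp -acgXd . tivoli.tsm.msg.EN_US.server   # apply and commit
lslpp -l "tivoli.tsm.msg*"                      # confirm it now matches the server level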
Re: TSM client wildcard
It's a little messy, perhaps the same result could be achieved, using the *same* single schedule which runs a script on each host - this .bat or .pl file might have the dsmc archive invocation specific to that host, containing the appropriate drive letter (hey, if you're good with perl, you could even script that and make it generic/identical across hosts too). It's not pretty, but it'll work... David McClelland Andrew Raibeck <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 14/09/2005 15:08 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] TSM client wildcard You can only wildcard file names in the archive, incremental, or selective command file specifications: Invalid: dsmc archive *:\abc\* Invalid: dsmc archive c:\abc*\* Valid: dsmc archive c:\abc1\* c:\abc2\* d:\abc\* Thus you will need to spell out the drive letters in your file specs, i.e., objects="c:\saq\prd\ed\donnee\backup\* e:\saq\prd\ed\donnee\backup\* g:\saq\prd\ed\donnee\backup\* h:\saq\prd\ed\donnee\backup\*" And yes, if the drive letters differ on each machine, you will need separate schedules for each machine. Regards, Andy Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence. "ADSM: Dist Stor Manager" wrote on 2005-09-14 06:04:07: > I have to archive 4 servers that have data on same directory but > different disk letter.I would like to do only one schedule. > > I tried this object on a schedule define on TSM server. TSM has an > error with this wildcard, somebody did something like that ??? > > > Client schedules : ARCHIVE_EDD_WIN > > Policy Domain Name > Schedule Name ARCHIVE_EDD_WIN > Description Archive pour EDD 2 ans > Action ARCHIVE > Options -deletefiles -archm=archive_2ans -subdir=yes > Objects ?:\saq\prd\ed\donnee\backup\* > Priority 5 > Start date 2005-09-09 > Start time 08:45:00 > Duration 1 > Duration units HOURS > Period 1 > Period units DAYS > Day of Week ANY > Expiration - > Last Update Date/Time 2005-09-14 08:34:33.00 > Last Update by (administrator) SISUTCB > Managing profile - > > > > >Chantal Boileau > >Analyste Informatique Stockage > >Infrastucture des Serveurs > >SAQ > >* : 514-253-6232 > >* : 514-864-8471 > >* mailto:[EMAIL PROTECTED] > >* :http://www.saq.com > > > > > >
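A throwaway sketch of such a wrapper as a .bat, using the path and management class from Chantal's schedule (the drive-letter list is what would differ per host, which is the whole point; the schedule itself then becomes ACTION=COMMAND pointing at the script):

@echo off
rem archive_edd.bat - one copy per host, edit the drive list to suit
for %%d in (C E G H) do (
  dsmc archive "%%d:\saq\prd\ed\donnee\backup\*" -subdir=yes -archmc=archive_2ans -deletefiles
)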
Re: restore node
Hi Mark, The 'restore node' command on the TSM server is reserved for use when restoring NDMP/NAS data to a datamover (e.g. EMC Celerra, Netapps filer etc), and shouldn't be confused with trying to restore 'normal' backup archive client data. Here's a link which explains this command: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp?topic=/com.ibm.itsmmsmunn.doc/anrsrf53348.htm You won't be able to perform your restore from the server, only from the client via the local CLI, the local GUI or the TSM Web Client. Hope that helps, David McClelland Mark Strasheim <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 20/09/2005 09:42 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] restore node Aloha when i try this command - i get this error message. restore node MOBILE / FILELIST=TIVsm-webadmin-5.2.1-0.noarch.rpm ANR1641E RESTORE NODE: The node MOBILE has a type that is not allowed for this command. ANS8001I Return code 3. has a type ??? what type ? can i change that type? From the client "dsmc" this command can be called res /usr/test/* and everything works as i wish. How can i run this restore command from the server? with regards MNibble -- -- definitiv! business applications GmbH & Co. KG Fresnostrasse 14 - 18 · DE-48159 Münster Tel. +49 (0) 251 21092 - 23 · Fax +49 (0) 251 21092 - 29 <mailto:[EMAIL PROTECTED]> mailto:[EMAIL PROTECTED] · <http://www.definitiv-ba.de/> http://www.definitiv-ba.de --
Re: Different Management Policy (Completed!)
Hi Sam, I have to say, everything looks good to me from what I see below - what are you looking at to check that your files are getting bound to the default management class and not to the 1YEAR class after all? What do you see if you perform a dsmc restore -pick -inactive against a file matching /BACKUP/outgoing/.../* or take a look at the 'show versions' command on the TSM server ('SHow Versions NodeName FileSpace' should do for you it I think). Rgds, David McClelland Sam Rudland <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 20/09/2005 10:08 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Different Management Policy (Completed!) Hi all, Accidently sent the mail before without adding all the info! Apologies for the resend. I am having probably a hopefully simple problem with management classes. I backup a server and it goes to the default management class, which has a retention of ninety days. There is one set of data on this server that I want backed up to a different management class so it is retained for a year instead of ninety days. Here is the dsm.sys entry on the client: include/BACKUP/outgoing/.../* CR_ONE_YEAR When I run a q inclexcl on the client I get the following: tsm> q inclexcl *** FILE INCLUDE/EXCLUDE *** Mode Function Pattern (match from top down) Source File - -- - Excl Filespace /FMS/fmsprod/gbls dsm.sys Excl Filespace /IBS/ibsprod/gbls dsm.sys Excl Directory /dev Server Excl Directory /unix Server Excl All /.../tmp/.../* Server Excl All /.../oradata/.../* Server Excl All /.../core Server Incl All /BACKUP/outgoing/.../* dsm.sys Excl All /BACKUP/online/.../* dsm.sys No DFS include/exclude statements defined. And on the server here is a q mgmt: tsm: BKP>q mgmt standard standard cr_one_year f=d Policy Domain Name: STANDARD Policy Set Name: STANDARD Mgmt Class Name: CR_ONE_YEAR Default Mgmt Class ?: No Description: Management Class For Critical Systems Space Management Technique: None Auto-Migrate on Non-Use: 0 Migration Requires Backup?: Yes Migration Destination: CRDATATAPE Last Update by (administrator): ADMIN Last Update Date/Time: 2005.06.24 09:38:24 Managing profile: And here is query of the backup copygroup for thius mgmt class: tsm: BKP>q copygroup PolicyPolicyMgmt Copy Versions Versions Retain Retain DomainSet Name Class Group Data DataExtra Only NameName NameExists Deleted Versions Version - - - - --- STANDARD ACTIVECR_ONE_Y- STANDARD 77 40 366 EAR STANDARD STANDARD CR_ONE_Y- STANDARD 77 40 366 EAR Does anyone have any idea what I am doing wrong? When I look on the server for the files it has it only shows the last ninety days still. Thanks! Sam - ATTENTION: The information in this electronic mail message is private and confidential, and only intended for the addressee. Should you receive this message by mistake, you are hereby notified that any disclosure, reproduction, distribution or use of this message is strictly prohibited. Please inform the sender by reply transmission and delete the message without copying or opening it. Messages and attachments are scanned for all viruses known. If this message contains password-protected attachments, the files have NOT been scanned for viruses by the ING mail domain. Always scan attachments before opening them. -
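The quickest check from the client end is the Mgmt Class column that dsmc query backup reports for each version, e.g. (file name made up):

dsmc query backup "/BACKUP/outgoing/somefile.dat" -inactive

or, server-side, something like this (mind that the backups table can be slow to query on a big server):

select ll_name, class_name, backup_date from backups where node_name='YOURNODE' and filespace_name='/BACKUP' and hl_name like '/outgoing/%'

where CLASS_NAME is the management class each object is currently bound to.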
Re: Which Tape Technology?
Hi Rick, I haven't time to respond to all of the below, but on your point on LTO drives/streaming: >>> With any of the new tape drives, I'm concerned with throughput issues. The >>> newest drives (3592 and LTO3) are so fast that I wonder if it becomes a >>> problem keeping data streaming to them. The capacity is great, but if I >>> can't keep them spinning I wonder if the new drives could cause more >>> problems than they solve. The good news about LTO2 and LTO3 drives (as opposed to older LTO1s) is that they have an adaptive ability to match the speed at which they spin to as close a rate as possible as the data coming in to reduce the 'backhitch effect' - I think one of the terms for this is DSM or Digital Speed Matching. Additionally, they have increased buffer sizes (64 or 128MB? Maybe more now). I personally haven't ever done a comparison between small/large file access times and 3590/LTO-based drives - but I'm sure someone out there will have... David McClelland Richard Rhodes <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 22/09/2005 12:56 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Which Tape Technology? Hi Everyone, Currently our tape environment consists of IBM 3494 libraries with 3590H (60gb) drives. It's possible that we may need to greatly expand our environment with some new libraries and drives. This has brought up a discussion about what tape technology we would use. Our environment consists of a mix of large file backups (Oracle databases), small file backups (Netware servers) and lots of stuff in between. If we have to do this, I really can't see purchasing more 3590H drives with cartridges that are only 60gb. I would think we would want to go to the newest 3592 drives or LTO2/LTO3. What are your thoughts/comments/experiences with . . . . . 1) Given our mix of large and small file backups, would LTO tape drives work as well as our current 3590's? 2) Does anyone have any experience using IBMs newest 3592 tape drives? 3) If LTO, is LTO3 the way to go, or stick with older LTO2? 4) Or, should we stick with 3590 drives? With any of the new tape drives, I'm concerned with throughput issues. The newest drives (3592 and LTO3) are so fast that I wonder if it becomes a problem keeping data streaming to them. The capacity is great, but if I can't keep them spinning I wonder if the new drives could cause more problems than they solve. Of course, another option would be a VTL or just local DISK on the TSM server. We've done some initial pricing of some configurations (tape libraries/drives/tapes, disk, and vtl) and so far tape is the least expensive. Just looking for others experiences with this kind of decision . . . . Thanks! Rick - The information contained in this message is intended only for the personal and confidential use of the recipient(s) named above. If the reader of this message is not the intended recipient or an agent responsible for delivering it to the intended recipient, you are hereby notified that you have received this document in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify us immediately, and delete the original message.
Re: TSM
Johnny, I'd recommend this one as a good all round starting point: "IBM Tivoli Storage Management Concepts This IBM Redbook describes the features and functions of IBM Tivoli Storage Manager. It introduces Tivoli Storage Management concepts for those new to storage management, in general, and to IBM Tivoli Storage Manager, in particular." http://www.redbooks.ibm.com/abstracts/sg244877.html?Open Rgds, David McClelland johnny cochran <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 22/09/2005 15:38 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] TSM Looking for a good beginners TSM guide link / redbook _ On the road to retirement? Check out MSN Life Events for advice on how to get there! http://lifeevents.msn.com/category.aspx?cid=Retirement
Re: Sun Clusters?
Hi Matthew, Take a look at the Redbook below from earlier this year - you don't say if you're using Veritas Cluster Services in your Sun environment, but this doc should be a help: http://www.redbooks.ibm.com/abstracts/sg246679.html?Open Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Large, M (Matthew)" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 26/09/2005 14:56 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Sun Clusters? Hi all, TSM 5.2.4 I'm having some ongoing problems with a SUN cluster - I've defined a node for each local disk, and a 'floating' node for the clustered services, but there's no clear documentation, from what I can find, on how to setup TSM to backup the SUN cluster. Is there any documentation you know of which explains how to set up TSM on a SUN clustered resource? Many Thanks, Matthew TSM Consultant ADMIN ITI Rabobank International 1 Queenhithe, London EC4V 3RL _ This email (including any attachments to it) is confidential, legally privileged, subject to copyright and is sent for the personal attention of the intended recipient only. If you have received this email in error, please advise us immediately and delete it. You are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. Although we have taken reasonable precautions to ensure no viruses are present in this email, we cannot accept responsibility for any loss or damage arising from the viruses in this email or attachments. We exclude any liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided in this email or its attachments, unless that information is subsequently confirmed in writing. If this email contains an offer, that should be considered as an invitation to treat. _
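For what it's worth, the usual pattern is one dsm.sys stanza and node name per cluster resource group, with the domain limited to the shared filesystems that move with that group - a sketch only, with stanza names, node names and paths as placeholders rather than anything from Matthew's environment:

   SErvername tsm_local
     COMMMethod        TCPip
     TCPServeraddress  tsmserver.example.com
     NODename          sunnode1_local
     PASSWORDAccess    generate
     DOMain            / /usr /var

   SErvername tsm_cluster_rg1
     COMMMethod        TCPip
     TCPServeraddress  tsmserver.example.com
     NODename          cluster_rg1
     PASSWORDAccess    generate
     PASSWORDDIR       /shared/rg1/tsm
     DOMain            /shared/rg1

The cluster framework then starts a scheduler for the 'floating' node on whichever physical host currently owns the resource group, pointing it at the second stanza via its own dsm.opt (SErvername tsm_cluster_rg1, selected with the DSM_CONFIG environment variable) and keeping the saved password on the shared disk so either host can authenticate.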
Re: LTO1, LTO2 & LTO3 tapes in 3584 library
Hi David, >>> As I had suspected, LTO3 volser is treated DIFFERENTLY >>> than LTO1/LTO2! LTO3 ONLY uses 8 char volser on TSM, >>> LTO1/2 can use 6 or 8! How about that, sports fans? I'm a little confused - I haven't been following this thread, but I know in my recently installed LTO3-populated 3584's (only LTO3's, not a mix of LTO2's and LTO1's) here in London, using Atape 9.3.0.5 on AIX 5.2ML04 and with TSM 5.2.4.5 I can see 6 character volsers (configurable from the physical library control panel). In fact, when I first checked tapes in, I saw 8 character volsers - after changing the setting on the library control panel, they became six characters instead (actually, I had to re-check them in though...). Shoot me down if I'm getting the wrong end of the stick here... Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom mail: [EMAIL PROTECTED] int: 7-439306 ext: +44 (0) 207 021 9306 mob: +44 (0) 7711 120 931 David Longo <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 26/09/2005 20:28 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] LTO1, LTO2 & LTO3 tapes in 3584 library Well, folks, I had promised an update a couple of weeks ago, but was delayed with non TSM things here and going back and forth with IBM/Tivoli on my PMR. I had a hard time getting answers, but finally got some. As I had suspected, LTO3 volser is treated DIFFERENTLY than LTO1/LTO2! LTO3 ONLY uses 8 char volser on TSM, LTO1/2 can use 6 or 8! How about that, sports fans? IBM Technote 1217789, just released, explains this and has a table for LTO, 3592 and respective WORM classes, and the TSM Server platforms showing what you get depending on what you have. Also I noticed with my TSM 5.2.6.0 on AIX, if you use the Web Admin for DEF DEVCLASS or UPD DEVCLASS for LTO, the "Recording Format" pulldown does not have Ultrium3 or ULtrium3C as an option - bug. If you use the CLI, you can use those options. (I can see Andy Raibeck smilling!). A little more later after I do some testing. David B. Longo System Administrator Health First, Inc. 3300 Fiske Blvd. Rockledge, FL 32955-4305 PH 321.434.5536 Pager 321.634.8230 Fax: 321.434.5509 [EMAIL PROTECTED] ## This message is for the named person's use only. It may contain confidential, proprietary, or legally privileged information. No confidentiality or privilege is waived or lost by any mistransmission. If you receive this message in error, please immediately delete it and all copies of it from your system, destroy any hard copies of it, and notify the sender. You must not, directly or indirectly, use, disclose, distribute, print, or copy any part of this message if you are not the intended recipient. Health First reserves the right to monitor all e-mail communications through its networks. Any views or opinions expressed in this message are solely those of the individual sender, except (1) where the message states such views or opinions are on behalf of a particular entity; and (2) the sender is authorized by the entity to give such views or opinions. ##
Re: busy rootvg during restarts
Michael, I'd think about checking/tuning the size of your BUFPOOL in TSM; make sure that it isn't so large that it's requesting lots of RAM which in turn is hitting into your paging space... As the other guys have said, double check with the AIX perf tools such as topas and check where your paging space is (lsps -a) etc. HTH, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Wheelock, Michael D" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 29/09/2005 16:23 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] busy rootvg during restarts Hi, Every time I halt and restart my tsm server, the rootvg disk becomes extraordinarily busy. I have moved all of the database, and logs to another location. The only file that I know of that is still on rootvg (besides the normal binaries and config files) is the volhist file. Does TSM hammer that volhist file on startup? Michael Wheelock Integris Health ** This e-mail may contain identifiable health information that is subject to protection under state and federal law. This information is intended to be for the use of the individual named above. If you are not the intended recipient, be aware that any disclosure, copying, distribution or use of the contents of this information is prohibited and may be punishable by law. If you have received this electronic transmission in error, please notify us immediately by electronic mail (reply).
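A quick way to sanity-check this (standard 5.x option and command names; the value shown is only an example):

   * dsmserv.opt - buffer pool size is specified in KB
   BUFPoolsize 262144

   tsm: SERVER1>query db format=detailed

Watch the 'Cache Hit Pct.' (ideally up around 98-99%) and compare the buffer pool size against the machine's real memory; on the AIX side, 'lsps -a' shows paging space usage and the pi/po columns of 'vmstat 5' show paging activity while the server restarts.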
Re: Tivoli Continuous Data Protection for Files
Richard, Certainly take a look at the newer 5.3 TSM client versions of journaling, if you haven't already, as there are many improvements over 5.2's and below at this level (including a new B-tree based journal DB which is much more reliable and isn't limited to 2GB in size). Otherwise, can you tell us what problems are you encountering with journaling? CDP sits on top of the same Win32 API that the TSM Journal engine does - I personally haven't tried it on large file servers, only on local workstations. I guess you'd need to make sure you tune/allocate enough disk space for the \RealTimeBackup local backup area for a start (from the CDP 'gui' in the Configure tree). As a matter of interest, is anyone else using CDP in this way yet (I know it's only been out and about externally for a few weeks). If you're not familiar with it, take a look here for more details: http://www-306.ibm.com/software/tivoli/products/continuous-data-protection/ Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Dearman, Richard" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 04/10/2005 17:31 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Tivoli Continuous Data Protection for Files Has anyone used " Tivoli Continuous Data Protection for Files" product on a file server with a very large amount of files and directories. I have a server with millions of files and directories that I can not get backed because the TSM client either runs out of memory and shuts down trying to do an incremental, I tried journaling but it keeps failing as well. This product seems to be journaling continuously which would be good so the journal would not fill up and fail but can it complete the initial filesytem backup. Thanks **EMAIL DISCLAIMER*** This email and any files transmitted with it may be confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the intended recipient or the individual responsible for delivering the e-mail to the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is strictly prohibited. If you have received this e-mail in error, please delete it and notify the sender or contact Health Information Management 312.413.4947.
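For reference, the 5.3-style journal engine is driven from tsmjbbd.ini in the client directory - a minimal sketch, with the journal directory and drive letters as placeholders:

   [JournalSettings]
   JournalDir=c:\tsmjournal

   [JournaledFileSystemSettings]
   JournaledFileSystems=d: e:

The initial full incremental still has to complete once, though, so the out-of-memory problem on that first pass needs addressing separately (e.g. memoryefficientbackup yes, or splitting the filesystem across several schedules).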
Re: Tivoli Continuous Data Protection for Files
> Actually, it is my understanding that CDP does *not* use the same Windows > API function (ReadDirectoryChangesW) that JBB uses. Rather, it is a kernel > filter. I stand corrected - now I've just gotta go and figure out what a kernel filter is (that'll be a different mailing list I'm sure)... :o) David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Andrew Raibeck <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/10/2005 16:33 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] Tivoli Continuous Data Protection for Files > CDP sits on top of the same Win32 API that the TSM Journal engine > does Actually, it is my understanding that CDP does *not* use the same Windows API function (ReadDirectoryChangesW) that JBB uses. Rather, it is a kernel filter. Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] IBM Tivoli Storage Manager support web page: http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence.
Re: Backing up VMware
Hi Joni, Take a quick look at this .pdf on the subject, as presented at the Oxford Symposium last month. http://tsm-symposium.oucs.ox.ac.uk/papers/How%20to%20Restore%20a%20Server%20Within%20Minutes%20using%20vmware%20and%20TSM%20(Matthias%20Fay).pdf Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Joni Moyer <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 20/10/2005 19:40 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Backing up VMware Hello, Our environment will soon include backing up VMware. Is anyone currently backing up such an environment? And if so, are there any best practices or how to guides with using a TSM client? Any help is appreciated! Joni Moyer Highmark Storage Systems Work:(717)302-6603 Fax:(717)302-5974 [EMAIL PROTECTED]
Re: TSM Performance with 3584 LTO-2 Drives
Hi Jim, If in doubt, back to basics - quite simply, I'd start by performing a simple backup or archive of a large file (or set of large files) of several GB's (use the 'lmktemp' command in AIX to create if you haven't got big files handy) locally from your TSM server's client (i.e. take network out of the equation), and send it to a management class with a copygroup defined to send the data directly to one of your tape drives. In this way, without too much messing around, you'll quickly and easily see much more of a 'raw' performance figure as to what your tape drive can handle with TSM is writing to it (ok, balanced against how quickly the source file is being read from disk on the TSM server/client), and be able to use that as a benchmark to work out the ballpark of where your performance bottleneck lies (e.g. a SAN, zoning issue, TSM migration or tape mount issue). HTH, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom John Schneider <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 21/10/2005 05:13 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] TSM Performance with 3584 LTO-2 Drives You may have stated this earlier in the thread, but could you clarify this statement? > I've double-checked the zoning, and everything seems as it should be > (server and disk in one zone and server and tape drives in another zone). Does the server have separate HBAs for the disk and tape traffic? I mean, certain HBAs do nothing but disk, and separate HBAs do nothing but tape? That is a TSM requirement. Best Regards, John D. Schneider Technology Consultant - Backup, Recovery, and Archive Practice EMC² Corporation, 600 Emerson Road, Suite 400, St. Louis, MO 63141 Phone: 314-989-3839 Cell: 314-225-9997 Email: [EMAIL PROTECTED] -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Bill Kelly Sent: Thursday, October 20, 2005 8:36 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] TSM Performance with 3584 LTO-2 Drives On Thu, 20 Oct 2005, Chet Osborn wrote: > Thanks for the replies, but no luck yet. Another possibility...have a look at APAR IC46349: When running a move data or migration to a collocated storage pool bad performance may be seen, with 5.3.1.0 server. . The bad performance appears to be triggered by a combination of the number of volumes in the source and target storage pools and the number of files and filespaces involved in the specific operation. I'm not sure how bad 'bad performance' is; it's all pretty vague-sounding. But, this is fixed at 5.3.2.0, so it might be worth a try. Regards, Bill > > The data being migrated all belonged to a single node, and only two > tape mounts were involved. > > The drive firmware is up to date. I'll be damned if I can figure out > how to determine what the 3584 library firmware level is or how to > download it. The device driver (Atape) software is up to data as of a > month o\r so ago. > > I've double-checked the zoning, and everything seems as it should be > (server and disk in one zone and server and tape drives in another zone). > > At 02:23 PM 10/20/2005, you wrote: > >Another factor to consider: does the tape pool in question have > >collocation turned on? 
If so, then depending on the number of tapes in > >the pool, the type of collocation in effect and the number of client nodes > >or filespaces to be migrated, there could be a very large number of tape > >mounts occurring. With only two drives, and depending on the mount > >retention period specified on the device class, I could believe that an > >awful lot of that 10.5 hours might've been spent fiddling around with tape > >mounts, idle drives, etc., and not actually writing data. > > > >Regards, > >Bill > > > >Bill Kelly > >Auburn University OIT > >334-844-9917 > > > > > It also wouldn't hurt to verify that your library and drive code > > > (firmware) are up-to-date. > > > > > > -Original Message- > > > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > > > Jim Skinner > > > Sent: Thursday, October 20, 2005 11:36 AM > > > To: ADSM-L@VM.MARIST.EDU > > > Subject: Re: [ADSM-L] TSM Performance with 3584 LTO-2 Drives > > > > > > I believe the first thing to check is the zoning of the fiber channel > > > network. TSM server and disk in one zone and in a different zone put tsm > > > server and tape. We had a similar proble
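A minimal sketch of the raw-throughput test suggested at the top of this thread (the staging path and the TAPEBENCH management class are placeholders - the class just needs its copy group destination pointed at the tape pool):

   # create a ~4GB file of test data on the TSM server's own client
   dd if=/dev/zero of=/stage/tsmbench.dat bs=1024k count=4096

   # send it straight to tape via the archive copy group and time it
   time dsmc archive /stage/tsmbench.dat -archmc=TAPEBENCH

File size in MB divided by elapsed seconds gives a rough MB/s figure to set against the drive's native rate - bearing in mind that a file of zeros compresses extremely well, so with drive compression on, the apparent rate will flatter the hardware; a less compressible source file gives a more honest number.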
Re: EXPORT TO SERVER differences in sizes of moved data ?
Hi Zoltan, I've witnessed this too in the past when using EXPORT NODE and comparing occupancy between the two servers - I might expect the source server's data to occupy more space, and the target to occupy less space if, on the source server, data has been 'maturing' there and there might be some extra space taken by expired objects contained within larger aggregates (in the TSM internal storage sense) - during the export of the node's data, this effect would have been negated as the data is contained within newly created aggregates on the target server. It seems like a reasonable explanation to me - anyone with a more intricate knowledge of TSM aggregation internals able to offer a more detailed reasoning? Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Zoltan Forray/AC/VCU <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 24/10/2005 15:23 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] EXPORT TO SERVER differences in sizes of moved data ? I just moved a node from an MVS/zOS TSM server we are phasing out, to our AIX TSM server. I am wondering why the occupancy sizes are different. The file counts are the same. MVS TSM server (v5.2.4.2) Backup Files 1,027,310 Backup Size 383.2GB Archive Files 732,454 Archive Data 39.5GB AIX TSM server (v5.3.1.3) Backup Files 1,027,310 Backup Size 377.7GB Archive Files 732,454 Archive Data 39.3GB Difference 422.8GB (MVS) vs 417.0GB (AIX) I have 6TB to move and am a little concerned about these "losses" !
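One way to see the effect described above is to compare logical against physical occupancy for the node on the source server before the export (NODENAME is a placeholder; the column names are from the standard 5.x OCCUPANCY table - run as one line from an admin session):

   select node_name, stgpool_name, sum(num_files), sum(physical_mb), sum(logical_mb) from occupancy where node_name='NODENAME' group by node_name, stgpool_name

Where physical MB is noticeably higher than logical MB, the difference is empty space inside aggregates which the export does not carry across - reclamation, or a MOVE DATA with reconstruction, would shrink it in place on the source.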
Re: Client connect via Firewall
Tim, One thing to double check with the network guys is that your firewall config is 'bidirectional' and allows connections to be initiated by either party (e.g. those inbound initiated by the TSM server are allowed through to the TSM client, as well as those outbound initiated by the TSM client to the TSM server). It's a common configuration issue I've witnessed before where firewalls don't let 'outside' hosts initiate connections with a host on the 'inside', only the other way around. Can cause problems in 'prompted' environments, and also with SAN Storage Agents. HTH, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom

Tim Brown <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 24/10/2005 16:00 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Client connect via Firewall

I have a client that connects via a firewall; I have port 1500 open. The client is coded for "SCHEDMODE POLLED". This has been working for quite some time. I have tried to change to "SCHEDMODE PROMPTED" but the session never starts ?? Tim Brown Systems Specialist Central Hudson Gas & Electric 284 South Ave Poughkeepsie, NY 12601 Email: [EMAIL PROTECTED] Phone: 845-486-5643 Fax: 845-486-5921
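For the prompted case specifically, the client options involved (values below are examples only) and the firewall rule they imply look like this:

   * dsm.opt on the client
   SCHEDMODE         PROMPTED
   TCPCLIENTADDRESS  192.168.10.25    * address the TSM server should call back on
   TCPCLIENTPORT     1501             * port the client scheduler listens on

   * firewall: allow TSM server -> client on TCP 1501, in addition to
   * the existing client -> server rule on TCP 1500

If the server-to-client direction cannot be opened at all, polled mode (or, at 5.2 and above, the SESSIONINITiation settings) is the fallback.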
Re: HELP:windows schedule question
Hi Liming, Include exclude lists are a common area of confusion for new users to TSM - on the one hand they look as though they should be quite straightforward; on the next glance they might appear overly complicated; on final study, one understands the rationale behind why they necessarily are how they are and realises just how powerful and flexible they can be. If I understand your question correctly, I'll attempt to answer it in two ways - firstly, RTFM :o) See following links to the TSM Windows Backup Archive Client

http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans652.htm#idx129
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans653.htm#idx130
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans653.htm#inexsec
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans655.htm#idx202

Next, some things you should know about exclude.fs: it takes precedence over any other 'normal' excludes or includes, as does exclude.dir etc - so logically, it will always be processed first, irrespective of the actual order in your list. However, and most importantly here, EXCLUDE.FS is, I believe, a UNIX client only option, and not valid on Windows clients such as your example below. To achieve what you're looking for, try the following (deleting your existing includes and excludes from your dsm.opt first of all):

exclude "d:\*"
exclude "d:\...\*"
include "d:\270.jpg"

Hope that helps, Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom

liming <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 31/10/2005 09:26 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] HELP:windows schedule question

Hi all, I'm confused by the Windows TSM schedule; my tsm client option file is as follows:

TCPSERVERADDRESS 192.192.192.23
PASSWORDACCESS GENERATE
EXCLUDE.fs "d:\"
include "c:\dsm.opt" standard
DOMAIN D:
SUBFILEBACKUP NO
BACKUPREGISTRY NO
DFSBACKUPMNTPNT NO
schedmode prompted
INCLUDE "d:\270.jpg" STANDARD

I only want to backup the file d:\270.jpg, but every time the schedule backs up all the files in filesystem d:, what's wrong with it? Thanks!
Re: HELP:windows schedule question
Hi Liming, Hmn, interesting - the inclexcl list looks fine to me (although you've a duplicate exclude for d:\...\* but that shouldn't make a difference). Are you *sure* it's continuing to backup all of the *files* or might it be the *directories* that you are seeing backed up? These will all get backed up using the inclexcl list above, but not their file contents. There was a discussion into this (and possible ways around it) in linked from Andy Raibeck's post on this same thread yesterday. Might you be able to send details of the schedule (f=d) as defined on the TSM server pls? Might also be worth mentioning the TSM client level etc as well. Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom liming <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 01/11/2005 03:10 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] HELP:windows schedule question Thanks David and Andy, I had inserted some lines into dsm.opt: exclude "d:\...\*" exclude.dir "d:\em\" exclude "d:\*" The client dsmc query inclexcl can get some lines as follow: Incl All d:\270.jpg dsm.opt Incl All c:\dsm.opt dsm.opt Excl All d:\* dsm.opt Excl All d:\...\* dsm.opt Excl All d:\...\* dsm.opt Excl Directory d:\em\ dsm.opt I had reboot the windows client,but the schedule still backup all files in d: drive. Thanks - Original Message - From: "David McClelland" <[EMAIL PROTECTED]> To: Sent: Monday, October 31, 2005 6:06 PM Subject: Re: [ADSM-L] HELP:windows schedule question Hi Liming, Include exclude lists are a common area of confusion for new users to TSM - on the one hand they look as though they should be quite straightforward; on the next glance they might appear overly complicated; on final study, one understands the rationale behind why they necessarily are how they are and realises just how powerful and flexible they can be. If I understand your question correctly, I'll attempt to answer it in two ways - firstly, RTFM :o) See following links to the TSM Windows Backup Archive Client http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans652.htm#idx129 http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans653.htm#idx130 http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans653.htm#inexsec http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans655.htm#idx202 Next, some things you should know about exclude.fs: it takes precedence over any other 'normal' excludes or includes, as does exclude.dir etc - so logically, it will always be processed first, irrespective of actually order in your list. However, and most importantly here, EXCLUDE.FS is, I believe, a UNIX client only option, and not valid on Windows clients such as your example below. To achieve what you're looking for, try the following (deleting your existing includes and excludes from your dsm.opt first of all): exclude "d:\*" exclude "d:\...\*" include "d:\d270.jpg" Hope that helps, Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery ? Storage Services IBM Global Services ? 
IBM United Kingdom liming <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 31/10/2005 09:26 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] HELP:windows schedule question Hi all,I'm confused by windows TSM schedule,my tsm client option file as fellow: TCPSERVERADDRESS 192.192.192.23 PASSWORDACCESS GENERATE EXCLUDE.fs "d:\" include "c:\dsm.opt" standard DOMAIN D: SUBFILEBACKUP NO BACKUPREGISTRY NO DFSBACKUPMNTPNT NO schedmode prompted INCLUDE "d:\270.jpg" STANDARD I only want to backup the file d:\270.jpg,but everytime the schedule backup all the files in filesystem d:,what's wrong with it?Thanks!
Re: Include/Exclude list and missed files
Hi John, In response to the first question, nope, I don't believe there is an easy way to do this from the server side (short of an SQL query that'll murder the server - anyone?). By far the simplest option generally is to pull the dsmsched.log (or other backup log output depending upon your scheduling mechanism) from the client. Regarding your second question, you don't state it explicitly below but it's best to check - I presume all the remaining shares you specify in your cloptset *are* getting backed up successfully? If so, is there anything particularly different about this share as it appears to the client? Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom

"John E. Vincent" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 01/11/2005 11:17 Please respond to adsm-l-alias To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Include/Exclude list and missed files

Hi all, I ran into an issue over the weekend that I can't seem to figure out. We had one of our developers delete a folder from one of our Windows shares. I attempted to recover and realized that TSM had not been backing up this particular share! They ended up spending all weekend recreating the documents and I felt like a boob. This is really a two part question. The first is this: Does anyone have a query I can run against the database directly that will give me a list of all files backed up for a node during the last session? This will help as part of normal reporting. Previously we've been relying on checking for errors on our jobs but obviously this won't tell me WHAT was backed up. The second part is the client option set for the server I'm pasting below. Can anyone tell me why the \\clanas01\devteam share was NOT getting backed up based on this? I know I'm missing something simple but I want to make sure something else hasn't been getting missed. I created a test directory with a few different filetypes under that share and none of them got backed up last night.
The server is running on Win2k and is TSM Version 5, Release 2, Level 3.0 Thanks in advance, John Optionset: CLANAS01 Description: Specific option set for the NAS Last Update by (administrator): ADMIN Managing profile: Option: CHANGINGRETRIES Sequence number: 0 Override: Yes Option Value: 3 Option: DOMAIN Sequence number: 0 Override: Yes Option Value: \\clanas01\backup Option: DOMAIN Sequence number: 1 Override: Yes Option Value: \\clanas01\devteam Option: DOMAIN Sequence number: 2 Override: Yes Option Value: \\clanas01\documentation Option: DOMAIN Sequence number: 3 Override: Yes Option Value: \\clanas01\HPNT Option: DOMAIN Sequence number: 4 Override: Yes Option Value: \\clanas01\InstallPoint Option: DOMAIN Sequence number: 5 Override: Yes Option Value: \\clanas01\Paul Option: DOMAIN Sequence number: 6 Override: Yes Option Value: \\clanas01\Reporting Option: DOMAIN Sequence number: 7 Override: Yes Option Value: \\clanas01\SHR_CIO Option: DOMAIN Sequence number: 8 Override: Yes Option Value: \\clanas01\Users Option: DOMAIN Sequence number: 9 Override: Yes Option Value: \\clanas01\VSS-Webloan Option: DOMAIN Sequence number: 10 Override: Yes Option Value: \\clanas01\distribution Option: INCLEXCL Sequence number: 0 Override: Yes Option Value: "EXCLUDE '\\clanas01\users\...\NTUSER.DAT'" Option: INCLEXCL Sequence number: 1 Override: Yes Option Value: "EXCLUDE.DIR '\\clanas01\users\...\Recent\'" Option: INCLEXCL Sequence number: 2 Override: Yes Option Value: "EXCLUDE.COMPRESSION '*:\...\*.zoo'" Option: INCLEXCL Sequence number: 3 Override: Yes Option Value: EXCLUDE.COMPRESSION '*:\...\*.zip' Option: INCLEXCL Sequence number: 4 Override: Yes Option Value: EXCLUDE.COMPRESSION '*:\...\*.arc' Option: INCLEXCL Sequence number: 5 Override: Yes Option Value: EXCLUDE.COMPRESSION '*:\...\*.arj' Option: INCLEXCL Sequence number: 6 Override: Yes Option Value: EXCLUDE.COMPRESSION '*:\...\*.avi' Option: INCLEXCL Sequence number: 7 Override: Yes Option Value: EXCLUDE.COMPRESSION '*:\...\*.bz2' Option: INCLEXCL Sequence number: 8 Override: Yes Option Value: EXCLUDE.COMPRESSION
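If you do want to try the server-side route mentioned above (with the caveat that it is heavy on a large server), the BACKUPS table can be queried for a node's recently backed-up objects - the date predicate and its format are illustrative and may need adjusting:

   select filespace_name, hl_name, ll_name, backup_date from backups where node_name='CLANAS01' and backup_date>='2005-10-31'

For routine reporting, the dsmsched.log from the client remains the cheaper option.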
Re: TDP for ORACLE question
Hi Marty, What's the level of TSM API client code on your test and production system? Any output from any other logs (dsierror.log etc). Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Lurz, Marty W." <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 09/11/2005 13:25 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] TDP for ORACLE question We are trying to implement encryption for our Oracle database backups. We are running Oracle 9i with TSM 5.3.0 . We created a test database, turned on encryption, run a database backup and do a restore validate on it with out any issues. I can turn on an API trace to verify that we are in fact encrypting the data. If I run the same scenario on our production database, I receive the folloiwng error message in the RMAN error log: RMAN-3002 failure of restore command at ORA-06510 PL/SQL: unhandled user-defined exception The only difference I can see is that in the first senario we are using TDP for Oracle: version 2.2.0.0 and for the production database we are using TDP for ORACLE: version 2.2.0.2 Does anyone have any experioence with this? Thanks, Marty
Re: TDP for Exchange problem
Hi Nicolas, Might be worth double-checking the admin privileges of the Windows user account you're running under - iirc, the installation of the Tivoli Data Protection for Exchange client must be performed by a user with Domain Administrator privileges (in my own personal notes I have Exchange Administrator privs noted down as well). Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Nicolas Muurmans <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 09/11/2005 14:06 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] TDP for Exchange problem I have a problem with my TDP for exchange installation. When i start the TDP agent i get this error message: ACN5237E Unable to communicate with the Microsoft Exchange Server. The Exchange serve IS installed and is working fine. I first thought that it was an API error in the Exchange server, but when i use NTbackup it manages to back up the exchange server just fine. So i think the problem is with the TDP agent. I would be grateful for any thoughts on this? Many Thanks Nicolas Muurmans
Re: Windows 2000 backup problem
Hi Gary, >>> I've looked in the client manual for how to set up a trace to try and find the problem, but no mention of trace flags. Where is this information? Try in the IBM Tivoli Storage Manager Information Center > ITSM Problem Determination Guide > Tracing (link below): >>> http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmm.doc/update/main.html Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Lee, Gary D." <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/12/2005 13:39 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Windows 2000 backup problem Have tsm server 5.2.4.0 on solaris 8. Client 5.2.0 on a windows 2000 machine. All of a sudden, the backup starts, then times out after around 4 hours when the 75 minute idle timeout has been reached. This began November 15, but the user claims that nothing has changed on the 2000 machine. I've looked in the client manual for how to set up a trace to try and find the problem, but no mention of trace flags. Where is this information? Thanks for the help. Gary Lee Senior System Programmer Ball State University -- No virus found in this outgoing message. Checked by AVG Free Edition. Version: 7.1.362 / Virus Database: 267.13.11/191 - Release Date: 12/2/2005
Re: TDP for Mail / EXCH
Hi Goran, Hmn, from your dsm.opt file, looks like you're running in a cluster... are you running the TDP with /EXCSERVER= ? Check out this link for more details about running in an MSCS: >>> http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp?topic=/com.ibm.itsmfm.doc/ab5ex00160.htm Hope that helps, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom goc <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 06/12/2005 14:23 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] TDP for Mail / EXCH hi all, i'm pretty new with TDP for "anything" and of course i landed into problems right away ... after installing TDP and creating node and stuff and running TDP i got ACN5237E Unable to communicate with the Microsoft Exchange Server i'm obviosly missing something basicaly but i dont know what ? tdpexc.cfg - BUFFers 3 BUFFERSIze 1024 LOGFile tdpexc.log LOGPrune 60 MOUNTWait Yes TEMPLOGRestorepath LASTPRUNEDate 12/06/2005 12:29:28 LANGuage ENU dsm.opt -- NODename EXVS01 CLUSTERnode yes COMPRESSIon Off PASSWORDAccess Generate COMMMethod TCPip TCPPort 1500 TCPServeraddress 10.243.113.120 TCPWindowsize 63 TCPBuffSize 32 thanks in advance, my windows guys will be happy :-)
Re: Can't get Data path Failover working ...
Hi Arnaud, Have you double and triple-checked that the key you entered in was correct *and* the hex chars are in lower case (I think that's the right way around)? Note that you can configure all of your valid drives at once, instead of individually, to support DPF with "/usr/lpp/Atape/instAtape -a". Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom PAC Brion Arnaud <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 06/12/2005 16:16 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Can't get Data path Failover working ... Hi List, I'm trying to activate the data path failover feature we bought with our brand new 3584 library (12 LTO3 drives), without success ... O.S. is AIX 5.3.0.0 ml 02.- Atape driver is at 9.6.0.0 The actual (test) setup looks like : 6 drives are connected thru fcs2 and fcs3, the other ones thru fcs4 : fcs2 Available 06-08 FC Adapter fcs3 Available 09-08 FC Adapter fcs4 Available 0A-08 FC Adapter rmt1 Available 06-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt2 Available 06-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt3 Available 06-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt4 Available 09-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt5 Available 09-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt6 Available 09-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt7 Available 0A-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt8 Available 0A-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt9 Available 0A-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt10 Available 0A-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt11 Available 0A-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt12 Available 0A-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt13 Available 06-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt14 Available 06-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt15 Available 06-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt16 Available 09-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt17 Available 09-08-02 IBM 3580 Ultrium Tape Drive (FCP) rmt18 Available 09-08-02 IBM 3580 Ultrium Tape Drive (FCP) The zoning on our Cisco MDS has been defined so that both FCS2 and FCS3 are able to see the first 6 drives (therefore 18 drives are seen by the system) Following IBM's Redbook "implementing IBM tape in UNIX systems - sg246502", I have installed the DPF license key : dpf_keys Installed dpf keys: values = "" (hidden to protect the key) Now , each time I try to enable DPF for a drive, I get following error : chdev -l rmt1 -a alt_pathing=yes Method error (/etc/methods/chgAtape): 0514-018 The values specified for the following attributes are not valid: Feature Code 1681 License key not found for Alternate Pathing Feature Run dpf_keys -a key to install a license key Anyone having an idea of what I'm missing ? Thanks in advance ! Regards. Arnaud ** Panalpina Management Ltd., Basle, Switzerland, CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01 Direct: +41 (61) 226 19 78 e-mail: [EMAIL PROTECTED] **
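Once the key installs cleanly, a couple of quick checks (the device name is an example):

   dpf_keys                            # list the installed DPF license keys
   /usr/lpp/Atape/instAtape -a         # enable alternate pathing on all configured drives
   lsattr -El rmt1 | grep alt_pathing  # should now report alt_pathing yes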
Re: Restore very slow
Hi Chris, Sounds like the classic 'classic restore' versus a 'no query restore'. You say you're running a point in time restore which will invalidate the no query restore and will force the slower classic restore operation - particularly slower when you've several millions of objects to trawl through... You're seeing it in another way (nmon), but if you've an otherwise quiet TSM server, you'll see from a `q db f=d` that your 'Total Buffer Requests:' value will be shooting through the roof during this (on a quiet server you might expect it to increase on average by between 5 - 10 every second or so). There is some stuff from the admin guide on this, and what determines when a restore is a valid 'no query restore' candidate: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmc.doc/ans5135.htm Also, check Richard Sims' TSM QuickFacts for "No Query Restore" for more practical info. I've been here before too - I'm afraid it was just a case of being patient (only after my initial "nothing's happening" panic though!). Good luck. Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Christoph Pilgram <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 07/12/2005 15:27 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Restore very slow Hi, Since this morning (5 hours) I am running a PIT-restore for a user-profile on a w2k-client (TSM V 5.2.2.0). TSM-Server is a IBM p450 with AIX 5.2 ML6 and TSM-server-version 5.2.4.3. There are about 11 Mio files stored on the backup-server from this node. Since begin of rhe restore no access on any tape has been performed, no directory is created on client. The server works hard on the database (nmon). Nothing changes in bytes-sent (3.0 M) / bytes recvd (1.5 K) for the session. The client started backups about 4 weeks ago so there are not too many incrementals since then. The data are spread over 2 tapes (IBM-3592). I now stopped the restore and started a new one to restore only one empty directory. The behaviour is the same, haevy working on db-volumes but nothing comes back. Has anybody seen something like that and knows how to get the data back ? Thanks for help Chris
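To illustrate the difference (paths and dates are placeholders; the date/time format follows the client's locale settings): a plain latest-version restore such as the first command below can use the no-query protocol, whereas options like -pick, -inactive or -pitdate/-pittime, as in the second, force the classic restore and the long database trawl described above:

   dsmc restore "/home/userprofile/*" -subdir=yes -replace=all

   dsmc restore "/home/userprofile/*" -subdir=yes -pitdate=12/01/2005 -pittime=18:00:00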
Re: File Device Class and File System Fragmentation with JFS2
Hi Allen and Andy, >>> Here is my thinking. From what I gather about scratch volumes, they are >>> opened, filled with what data they are recieving, and closed - but not >>> allocated full size. Yep, that's correct with FILE vols - I'm watching it happen right now here on one of my servers. Perhaps there might be another variable here in the fragmentation discussion which depends upon how you plan to use you FILE storage pool - long-term/indefinite storage or simply transitory use before migrating off to tape where the scratch vols will be automatically deleted and the filesystem emptied daily or more frequently. Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Andrew Carlson <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 09/12/2005 14:06 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] File Device Class and File System Fragmentation with JFS2 Allen S. Rout wrote: >I would think that the importance of contiguous placement would be very >strongly correlated with the disk tech. If I go to FILE devclasses on my SSA, >I would think preallocating would be indicated. If I were deplying on >something more abstracted (shark, netapp, etc) I would hope that the >inefficient placement would be swamped by the cache. > >Would your predefined vols be smaller or larger than your scratch vols? I'm >very interested in your thinking here. > > > Here is my thinking. From what I gather about scratch volumes, they are opened, filled with what data they are recieving, and closed - but not allocated full size. So, if I predefine, I might want to make my volumes smaller, since there is more potential wasted space.
Re: TDPO and different Management Class
Muthu, As the other guys have pointed out, there should be a single management class with the well-known set-in-stone backup copygroup retention policies as a destination for DP for Oracle backups. Take a look in the IBM Redbook "Backing Up Oracle using Tivoli Storage Management" >>> http://www.redbooks.ibm.com/redbooks/pdfs/sg246249.pdf around page 42. There are various methods one can use to keep Oracle backups for longer periods - the method suggested below of performing an 'export' dump of the data into a self-describing format from Oracle before TSM archiving it off for a set period of time is very valid, especially for longer retention periods: if for regulatory purposes one must keep data for, say 7 years, 10 years or even longer, this is particularly recommended as the SDF file should be readable by whichever generation of Oracle one happens to be using in the far-flung future, or even by another third-party application altogether. However, within Oracle/RMAN, there is a way of achieving this requirement using 'format' to modify the object name at the point of backup - for example, one might format one's monthly backups to have 'monthly' in their name. Your RMAN backuppiece expiration routine can then generate lists of 'expiration candidates' based upon the name of the backup object, ensuring that any backuppieces with 'monthly' in their title are expired on a different basis to those with, for example, 'daily' or 'weekly' in their name. This setup does require a little bit of logic and working out, but can work very well. Hope that helps, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Kurt Beyers <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 16/12/2005 06:12 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] TDPO and different Management Class Hi Muthu, The retention time of Oracle backups is completely controlled by Oracle. Any Oracle backup always receive a unique name in TSM and will thus never expire. The backupsets must be deleted with RMAN scripts and RMAN will delete then the corresponding entries in the TSM database too. If you want to keep certain backups like 1YEAR and 5YEAR, I just would take a full export of the database and include it then in the FS backup of TSM. There you can store it to the corresponding management class. best regards, Kurt From: ADSM: Dist Stor Manager on behalf of Muthukumar Kannaiyan Sent: Thu 12/15/2005 23:46 To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] TDPO and different Management Class Hi All, Our Oracle DBAs want to backup their database under different management class. Do we need to take case anything special while creating management class such as ONEYEAR and FIVEYEAR. Separately, how do we specify management class while backing up from RMAN? Regards Muthu 202-458-8340 - Work
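A rough sketch of the naming approach (not a tested script - the channel parms path and the format string are illustrative only):

   run {
     allocate channel t1 type 'sbt_tape'
       parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
     backup database format 'monthly_%d_%u_%t';
     release channel t1;
   }

The site's RMAN maintenance/expiration script then builds its delete-candidate list from the piece names - everything beginning 'daily_' on one cycle, 'monthly_' on a much longer one - which is the name-based expiration logic described above.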
Re: VTL experiences?
>> My concern on the EMC VTL side is where the compression is being done? Is it software based compression or hardware based? I might be a little out of date with some of the latest iterations of vendors' offerings, but I think that the Quantum range of VTLs are the only ones offering hardware/in-line compression at the moment - is/has anyone used these and can offer a benchmark/judgement/experiences on how/whether compression affects throughput when off-loaded in hardware? David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Dearman, Richard" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/01/2006 16:14 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] VTL experiences? I have been using file device class disk backups for the past 5 years via san storage and just have not been satisfied with the storage systems. I have asked others in this forum before whether they experience problems with san based storage when using TSM because TSM seems to push storage systems with such high amounts of I/O that the controllers cann't handle the speed very well and the controller hangs or even crashes regardless of the storage vendor. I also have totally threw out the idea of using a large san storage system to share amongst other servers and applications with TSM because TSM will hog the san storage controller and cause problems on the other attached systems. My main reasons for looking at VTL was speed I heard was very good pushing large amounts of data, it is a separate disk based storage system for TSM only to use and compression. My concern on the EMC VTL side is where the compression is being done? Is it software based compression or hardware based? Our IBM rep stated that IBM's TS7510 is currently using software based compression and using compression will tax the controllers cpu and actually recommended not to use it. He also said they are currently testing hardware based compression and will be coming out with the TS7510 using hardware based compression that will off load the cpu cycles from the linux management server onto an adapter and compression will be much faster and less taxing to the management servers. The VTL systems just seem to be faster, more stable, allows compression so you get a bigger bang for you buck and manages like a tape library all give the VTL an advantage over device class FILE based storage. -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of TSM_User Sent: Thursday, January 05, 2006 12:18 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: VTL experiences? Much of the documentation out there will tell you that the benefit of the VTL is the speed. It is true the VTL is very fast. Some of that same documentation talks about not turning on the virtual compression because it will slow the speed. I've seen in cut the speed in half. But... I've seen a VTL with Virtualized compression turned on still operate as fast as real tape. I make this point because I think you should use the virtualized compression. This way the same 5 TB's of disk space you were using for your file device class might yield 10 to 15 TB's or more of backup space under the VTL. True you could turn on client side compression. But, I like the compression being done on the back end so that there is no stress put on the servers that are backing up themselves. 
One thing I should also clear up. In my last post I mentioned IBM with SATA disk. I got a friendly reminder from EMC that the CDL's have been shipping with SATA disk since this past November. That also reminded me that the IBM VTL called the TS7510 is using the IBM DS line of disk which has been out for some time now. I remember EMC making the same note when it first came out with the CDL. See the disk subsystem's under both the IBM and EMC VTLs have been out for some time. So just like EMC correctly noted when the CDL first came out you should note today about the IBM TS7510. They really are not new products when it comes to the disk subsystem. In both cases you could choose to purchase the disk subsystems used by the VTLs directly from either IBM or EMC and use them with a file device class. Granted I realize that both EMC and IBM have a specific configuration of their disk subsystems that they put under their VTLs. In my own experience I've used a file device class with TSM V5.2 and earlier and an EMC CDL. I liked the CDL a great deal. We had the same class of EMC disk behind a clarion setup to use a file device class. The same amount of disk behind the CDL performed better. I believe part of the reason is the logic in the FalconStor software. It uses disk for it
Re: IBM 3581 Autoloader question
Hi Tom, Yes, the IBM3581 Autoloader is fully compatible with TSM for use as a library, with or without barcode reader - I'm using two with a TSM server at the moment (although personally I'd never spec less than a two drive library for use with TSM as tape management can get rather more complicated in single-drive libraries). The 3581 can operate in both 'sequential' mode *and* 'random access' mode where the application controls which tape gets mounted - it is this latter one that TSM works with. On AIX, simply use the 'Atape' driver to see the /dev/smc and /dev/rmt devices - check out the following links for more info.

IBM TotalStorage Tape Libraries Guide for Open Systems - http://www.redbooks.ibm.com/redbooks/pdfs/sg245946.pdf
Implementing IBM Tape in UNIX Systems - http://www.redbooks.ibm.com/redbooks/pdfs/sg246502.pdf
Device Driver Installation & User's Guide - ftp://ftp.software.ibm.com/storage/devdrvr/Doc/IBM_ultrium_tape_IUG.pdf

Hope this helps, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom

Tomáš Hrouda <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 09/01/2006 09:45 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] IBM 3581 Autoloader question

Hi all, I need advice with the use of an IBM3581 7-slot Autoloader. As I know it is without barcode reader (in my case), but I think it is optional. Is it possible to use it as a "library" in TSM (I mean mount any volume that I need) or does it only work in sequential (cycle) mode? Does this autoloader in general have a changer device in the system, or not? Many thanks for your comments. Tom
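Once Atape presents the /dev/smc changer and the /dev/rmt drive, the TSM-side definitions follow the normal SCSI library pattern - a sketch with placeholder names and device files:

   define library 3581lib libtype=scsi
   define path server1 3581lib srctype=server desttype=library device=/dev/smc0
   define drive 3581lib drive1
   define path server1 drive1 srctype=server desttype=drive library=3581lib device=/dev/rmt0
   define devclass ltoclass devtype=lto library=3581lib format=drive

Without a barcode reader, volume labelling and check-in is done with LABEL LIBVOLUME prompting for the volume names rather than reading them from barcodes.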
Re: ANR1639I
Hi, I see this in a number of scenarios - questions to ask are: are there multiple interfaces in this machine/these machines? Have there been any network routing changes which might coincide with these? It is part of a cluster? Take a look a the following to see if they match: http://www-1.ibm.com/support/docview.wss?uid=swg21214023 http://www-1.ibm.com/support/docview.wss?uid=swg21142513 Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Dksh Cssc <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 11/01/2006 05:02 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] ANR1639I Hi all, 1. Could anyone enligthen me the attributes of some of the nodes that were changed automatically without human intervention? 2. Checked from the TSM messages information that this message will not bring any harm but would like to know why it happened and what are the things that could affect it from happening? 01/11/06 10:10:54 ANR1639I Attributes changed for node KULDB21P: TCP Address from to 10.208.14.26. (SESSION: 249836) This message repeated for a few days, not only on this particular node but other nodes also. Thanks and Warmest Regards,
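A quick check of what the server currently holds for the node (node name taken from the message above):

   query node KULDB21P format=detailed

Compare the 'TCP/IP Name' and 'TCP/IP Address' fields against the interfaces on the client itself - on multi-homed or clustered machines the recorded address simply follows whichever interface the most recent session arrived from, which is what ANR1639I is logging.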
Re: Pre-fetching a restore?
Hi Jim, In response to the first part of your question: > Here's what I mean: let's say I know that there is going to be some > filesystem maintenance on a client. Since we've been burned by that > kind of operation in the past, I'd like TSM to prefetch the > appropriate data from tape and have it ready to go (on disk) for a > restore. Have you thought about using the 'move nodedata' command (http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmaixn.doc/anrarf53246.htm) to pre-stage your data to a disk storagepool - this might be of some help depending upon the amount of data/size of your disk storagepool etc. Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Jim Zajkowski <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 11/01/2006 23:11 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] Pre-fetching a restore? Hi folks, I'm working on our internal "late night admin guide," and one of the things I'm thinking of is how can I get TSM prepared to do a restore. Here's what I mean: let's say I know that there is going to be some filesystem maintenance on a client. Since we've been burned by that kind of operation in the past, I'd like TSM to prefetch the appropriate data from tape and have it ready to go (on disk) for a restore. Archives would do the trick except that uses the client, potentially during business hours. Backupsets look like they might work but they're kind of rigid... can a backupset be restored followed by restoring the latest incrementals? So I could create a backupset on Friday before the procedure on Saturday, and then be able to restore the incremental we took before beginning the disk operation after that? Am I out of my tree? Do people do this? Thanks, --Jim
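A minimal sketch of that pre-staging idea (node and pool names are placeholders, and the target disk pool needs enough free space for the node's data):

   query occupancy JIMNODE
   move nodedata JIMNODE fromstgpool=TAPEPOOL tostgpool=DISKPOOL

The occupancy query gives an idea of how much data the move will pull off tape before you commit to it.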
Re: tape encryption and TSM
Hi Jim, I believe that client-sided/initiated encryption is your only 'native' option here - prior to TSM 5.3, the 56bit DES encryption provided simply wasn't enough for some institutions, but with TSM 5.3,128bit AES encryption for both BA client *and* API backups (i.e. TDP's) has been brought in which has been useful for many sites. However, that doesn't quite answer your question. I believe you can buy devices which would sit *between* your TSM server and the tape drive to provide encryption - I've never used one, but have seen references to them on this list. Has/is anyone else using these? Experiences? Does it add an additional bottleneck to the tape throughput on higher end (e.g. LTO3) drives? Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Murray, Jim" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 13/01/2006 13:30 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] tape encryption and TSM I would be more interested in the answer not so much as recovery of data but in securing data. Being a financial institution we have regulatory requirements for data protection, new State laws say I must encrypt all data on tape that is moved off site. Jim Murray Senior Systems Engineer Liberty Bank 860.638.2919 [EMAIL PROTECTED] ~~ _/) ~~ -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Aaron Becar Sent: Thursday, January 12, 2006 8:00 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: tape encryption and TSM Unless you are willing to spen $500 an hour and send your tapes to Dallas, at a rate of I believe it was 8MB an hour they can rebuild your database. Then you can get data off your tape. So, yea it is pretty difficult. Just don't loose your encryption keys! Then you should be okay! Wish I had a better answer! >>> [EMAIL PROTECTED] 1/12/2006 2:24:58 PM >>> I know the topic of reading tapes written by TSM without having the DB has come up before, but I'm wondering if anything has changed from a couple of years ago with the implementation of 5.3 so here are a few questions. How hard is it to read tapes without the TSM database tape? Is there any tape encryption with TSM 5.3? Besides encrypting data from the client to the server is there anything else that can be done? What type of hit does encryption take on the client/server when in use? Thanks, Geoff Gill TSM Administrator SAIC M/S-G1b (858)826-4062 Email: [EMAIL PROTECTED] Unless you have received this email through the Liberty bank secure email system, before you respond, please consider that any unencrypted e-mail that is sent to us is not secure. If you send regular e-mail to Liberty Bank, please do not include any private or confidential information such as social security numbers, unlisted telephone numbers, bank account numbers, personal income information, user names, passwords, etc. If you need to provide us with such information, please telephone us at (888)570-0773 during business hours or write to us at 315 Main St. Middletown, CT 06457. The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. 
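(For reference, the client-side piece of the native encryption is just a few options in dsm.opt/dsm.sys - a minimal sketch, where the filespec is obviously only an example and you'd point it at the data that actually needs protecting:

encryptiontype aes128
encryptkey save
include.encrypt e:\sensitive\...\*

ENCRYPTIONTYPE AES128 needs the 5.3 client as discussed above; ENCRYPTKEY SAVE stores the key locally in the password store, which is convenient but worth thinking through for DR - lose the key and you lose the data, exactly as per Aaron's warning.)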
Re: Upgrade mystery
Hi Robert, You're running client 5.2.3.0 according to the below - I believe that 'DISKBUFFSIZE' only came in with TSM Client version 5.3. Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom Robert Ouzen <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 18/01/2006 13:00 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] Upgrade mystery Hi to all I try this on my dsm.opt in the TDP folder but got this: 01/18/2006 14:50:10 ANS1036S Invalid option 'DISKBUFFSIZE' found in options file 'C:\Program Files\Tivoli\TSM\TDPExchange\dsm.opt' at line number : 28 Invalid entry : 'DISKBUFFSIZE 32' My version of TDP for Exchange is 5.2.1.0 and client version is 5.2.3.0 Regards Robert Ouzen -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of LeBlanc, Patricia Sent: Wednesday, January 18, 2006 2:44 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: Upgrade mystery I applied the change to 2 of my windows exchange servers.here are the results... Without the DISKBUFFSIZE 32 Server 04 started at 20:00 and completed at 01:21 Server 05 started at 20:00 and completed at 00:22 WITH the DISKBUFFSIZE 32 Server 04 started at 20:00 and completed at 23:07 Server 05 started at 20:00 and completed at 22:35. I will make the change on the rest of the exchange servers today to see the result in the overall run time of the backups. Thanks!! -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Steven Harris Sent: Tuesday, January 17, 2006 8:12 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: Upgrade mystery FWIW Del I applied this change to my SAP nodes and lo-and-behold restore throughput just about quadrupled. But, I applied it to my exchange nodes and there was no discernable difference. YMMV Steve. > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf > Of Del Hoobler > Sent: Tuesday, 17 January 2006 11:47 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: [ADSM-L] Upgrade mystery > > > Patricia, > > See if this append I made last month helps at all: > > http://msgs.adsm.org/cgi-bin/get/adsm0512/204.html > > Thanks, > > Del > > > > "ADSM: Dist Stor Manager" wrote on > 01/17/2006 08:38:42 AM: > > > I just upgraded my windows2k and w2k3 servers from tsm > backup client > > 5.2 to 5.3.0.8. These servers also run the tdp for exchange > v5.2.1.0. > > > > Since then, my exchange backups take an hour longer to complete. > > > > Does anyone know of a conflict with the software? Or a > setting I need > > to set?? > > >
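(In other words, something like the below in the TDP's dsm.opt should only be expected to work once the underlying BA client/API on that box is at 5.3 or later:

DISKBUFFSIZE 32

If memory serves, the rough equivalent on 5.2 and earlier clients was LARGECOMMBUFFERS YES, so that may be worth a try in the meantime - though I'd treat that as a suggestion to test rather than a guarantee.)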
Re: 3584 installation instructions
Hi Geoff, Congratulations on the 3584 purchase - I've moved from 3494's to 3584's over the last few years and have been very impressed with their performance and reliability (although I confess that I still get all nostalgic seeing 3494's in customer machine rooms...). The key difference between the two classes of library is that communication between host and library manager is in-band over SCSI or fibre, and not via an ethernet or serial connection. Your 'Atape' driver will take care of this, and (depending upon how many control paths you have active) you'll see one or more /dev/smcx devices signifying your library arm/manager connection... Rgds, David McClelland Storage and Systems Management Specialist IBM Tivoli Certified Deployment Professional (ITSM 5.2) SSO UK Service Delivery – Storage Services IBM Global Services – IBM United Kingdom "Gill, Geoffrey L." <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/02/2006 16:54 Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject [ADSM-L] 3584 installation instructions I'll be receiving a 3584 this week and since I'm not real familiar with how that guy connects to the RS/6000 I was wondering if someone has setup instructions that define this library to AIX. Our current 3494 is direct serial attached and I'm curious if this is the same or not. If anyone has saved steps to attach the unit to the server and define within AIX I would truly appreciate any insight. Thanks for the help, Geoff Gill TSM Administrator SAIC M/S-G1b (858)826-4062 Email: [EMAIL PROTECTED]
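(The AIX side of it usually boils down to something like the following, assuming the Atape fileset has been downloaded from the IBM device driver FTP site into the current directory - the fileset name and paths here are illustrative:

installp -acXd . Atape.driver
cfgmgr -v
lsdev -Cc tape

After cfgmgr you should see rmtX devices for the drives and one smcX per active control path for the medium changer; 'lscfg -vl rmt0' will show serial numbers if you need to match devices to physical drives before defining the TSM paths.)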
Re: LAN FREE Backup
Rubie, >>> I have 3 clients to put on LAN FREE backup, but 2 of them cannot have more than 2 HBA's, what route should I go? I personally am not so keen on them, but you could consider dual-headed HBAs with two ports per card (e.g. Emulex LP1000DC - http://www.emulex.com/products/fc/1dc/ds.html). You'll be able to run both disk and tape from a single card as long as you've zoned the individual ports appropriately. Rgds, David McClelland Storage and Systems Management Specialist Shared Infrastructure Development Reuters Ltd -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Rubie Lim Sent: 07 March 2006 21:09 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] LAN FREE Backup Thanks. I have 3 clients to put on LAN FREE backup, but 2 of them cannot have more than 2 HBA's, what route should I go? And these 2 HBA's is needed for disk for redundancy. Rubie -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Nast, Jeff P. Sent: Tuesday, March 07, 2006 10:24 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] LAN FREE Backup Yes, you need to zone tape drives to the machine running Storage Manager. Multiple tape drives can use the same HBA. Tape and Disk cannot coincide on the same HBA. You need a separate adapters for disk and tape. This stems from the days of S360 (and earlier?) where there were seperate channels for streaming channels (tape) and block channels (disk). Guess this dates me... -Jeff "old timer" Nast -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of Rubie Lim Sent: Tuesday, March 07, 2006 10:01 To: ADSM-L@VM.MARIST.EDU Subject: LAN FREE Backup I am trying to implement a LAN FREE backup, and my question on the Storage Agent/Client, do I need to create a zone from one of my HBA to the Fiber Channel Tape drives? And if I can share the HBA with disk and tape? Any help is greatly appreciated. Thanks, Rubie This e-mail communication and any attachments may contain confidential and privileged information for the use of the designated recipients named above. If you are not the intended recipient, you are hereby notified that you have received this communication in error and that any review, disclosure, dissemination, distribution or copying of it or its contents is prohibited. As required by federal and state laws, you need to hold this information as privileged and confidential. If you have received this communication in error, please notify the sender and destroy all copies of this communication and any attachments. To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: Netview and/or TEC integration ???
Hi Michael, Any particular reason why you'd want to send TSM events to Netview and *then* to TEC, as opposed to sending them directly to TEC and cutting out the 'middle man'? Is Netview adding any extra value to these events for you or providing you with any extra visibility? Tivoli Storage Manager Server does include a TEC adapter built-in which requires minimal configuration (IP address of TEC server, TCP port that TEC is listening on, granularity of events etc), and also very comprehensive control over exactly which events TSM will forward on to TEC. Chapter 20 of the TSM Administrator's Guide gives you a lot of useful information on this: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.its maixn.doc/anragd53619.htm Additionally/alternatively, there is TSM Operational Reporting - this might be able to give you some alerting/eventing with a little more control/flexibility, including the ability to write custom reports/monitors. My esteemed colleague Steve Strutt from IBM/Tivoli here in the UK put together a couple of years ago Redpaper on integrating TSM Operational Reporting with the Tivoli Framework to send events to TEC (including some rules correllation of which you speak) - see this link: http://www.redbooks.ibm.com/abstracts/REDP3850.html?Open Hope that helps. Rgds, David McClelland Storage and Systems Management Specialist Shared Infrastructure Development Reuters Ltd -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Michael D Schleif Sent: 22 March 2006 13:14 To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Netview and/or TEC integration ??? To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd. -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 We have been asked to manage events originating from my client's Tivoli Storage Manager systems. We have found reference to a couple of MIB's; which we can process via Netview, and forward on to TEC, as required. Anybody here care to comment on this process? What best practices do you recommend? Also, we find TSM documentation references to an event adapter (receiver), including these BAROC files: ibmtsm.baroc itsmuniq.baroc itsmdpex.baroc Again, anybody here care to comment on this process? What best practices do you recommend? I have worked enough with TSM to know, and to respect, the voluminous stream of events that these processes can engender. I am especially interested in recommendations regarding managing events on the TSM side. Equally important is correlation. Are there any rulesets available for managing TSM events? What do you think? - -- Best Regards, mds mds resource 877.596.8237 - - Dare to fix things before they break . . . - - Our capacity for understanding is inversely proportional to how much we think we know. The more I know, the more I know I don't know . . . - -- -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.2 (GNU/Linux) iD8DBQFEIU17LUOEaCtUQpwRAsLHAJwOAdEFtIbylgYfafJNTV5YgNZRQwCfaaBr IYc+n9CjMaXsoldTp9VIScw= =qRy+ -END PGP SIGNATURE-
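(For anyone wanting to try the built-in receiver, the configuration really is minimal - a sketch, with the host name and port obviously being whatever your own TEC event server actually uses:

In dsmserv.opt:
TECHOST tecserver.example.com
TECPORT 5529

Then from an administrative session:
begin eventlogging tivoli
enable events tivoli severe,error

You can be far more granular than severity level with ENABLE/DISABLE EVENTS - individual message numbers and node names can be switched on and off - which is how you keep the 'voluminous stream' Michael mentions down to something TEC can usefully correlate.)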
Re: Netview and/or TEC integration ???
Hi again Michael, TSM Operational Reporting is a standalone application and will only run on a Windows platform (2000, 2003, XP etc) - usually one would host this application on a standalone server (it's not particularly resource intensive in common usage - I have it running on my laptop for testing purposes quite happily) or co-host it upon another storage management server (a web server is favourite as it makes shipping the TSM Operational Report HTML pages a little fussy). Rgds, David McClelland Storage and Systems Management Specialist Shared Infrastructure Development Reuters Ltd 30 South Colonade London E14 5EP -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Michael D Schleif Sent: 22 March 2006 15:27 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Netview and/or TEC integration ??? To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd. -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 * David McClelland <[EMAIL PROTECTED]> [2006:03:22:13:41:04+] scribed: > Additionally/alternatively, there is TSM Operational Reporting - this > might be able to give you some alerting/eventing with a little more > control/flexibility, including the ability to write custom > reports/monitors. My esteemed colleague Steve Strutt from IBM/Tivoli > here in the UK put together a couple of years ago Redpaper on > integrating TSM Operational Reporting with the Tivoli Framework to > send events to TEC (including some rules correllation of which you > speak) - see this link: > > http://www.redbooks.ibm.com/abstracts/REDP3850.html?Open Thank you, again, for your participation in this matter. I have begun reading this; and it appears to offer considerable value. However, it explicitly references TSM for Windows. We are using this: Storage Management Server for AIX-RS/6000 - Version 5, Release 3, Level 1.6 Is this Redpaper applicable to our TSM? What do you think? - -- Best Regards, mds mds resource 877.596.8237 - - Dare to fix things before they break . . . - - Our capacity for understanding is inversely proportional to how much we think we know. The more I know, the more I know I don't know . . . - -- -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.2 (GNU/Linux) iD8DBQFEIWzULUOEaCtUQpwRArulAKCW43ZKkGe49yiUoisldyF3occg6ACggPmF tkYt99UVmqCx4t6l/oeHkQo= =AMvK -END PGP SIGNATURE-
Re: TSM Journal Based Backup for AIX
Steve, Check again - Journal Backups are now available (as of 5.3.3) for AIX clients as well... Rgds, David McClelland Storage and Systems Management Specialist Shared Infrastructure Development Reuters Ltd 30 South Colonnade London E14 5EP REUTERS.KNOW.NOW. www.reuters.com www.reuters.com/customers -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Steven Harris Sent: 26 March 2006 13:55 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] TSM Journal Based Backup for AIX Hi Amos Journal backup only works on windows. There is generally no need on unix systems as they are fairly efficient. What is the problem you are trying to solve? There may be some tuning you can do. Steven Harris AIX and TSM Administrator On 26/03/2006, at 7:00 PM, Robert Ouzen Ouzen wrote: > Hi All > > Im trying to config Journal Based backup for AIX in version 5.3.3 > > And there is no notification or option in the tsmjbbd.ini file for the > NotifyFilter > > Does anyone manage to config it or to make it work? > > Best Regards > > Amos Hagay > > To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
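(For what it's worth, the AIX journal daemon is driven by a tsmjbbd.ini much like the Windows one. The sketch below is based on the Windows layout and the filesystem names are invented, so please treat it as a starting point to test rather than a known-good AIX config - and, as the original question suggests, the Windows-only NotifyFilter setting doesn't appear to apply here:

[JournalSettings]
Errorlog=/var/tsm/jbberror.log
Journaldir=/var/tsm/journal
[JournaledFileSystemSettings]
JournaledFileSystems=/home /data
PreserveDbOnExit=1

Start the daemon before the first journal-enabled incremental, and bear in mind that the first incremental after enabling it is still a full walk of the filesystem.)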
Re: [spam] Re: [ADSM-L] TSM Journal Based Backup for AIX
Hi Allen, Hmn, interesting - I was just going on information from the README file too: ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance /client/v5r3/AIX/v533/TSM533C_README_enu.htm "What's new in the IBM Tivoli Storage Manager Version 5.3.3 Clients:" "JBB is supported for non-HSM AIX clients." Now, I haven't tried it yet - is the README lying to us? Rgds, David McClelland Storage and Systems Management Specialist Shared Infrastructure Development Reuters Ltd 30 South Colonnade London E14 5EP REUTERS.KNOW.NOW. www.reuters.com www.reuters.com/customers -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Allen S. Rout Sent: 27 March 2006 02:39 To: ADSM-L@VM.MARIST.EDU Subject: [spam] Re: [ADSM-L] TSM Journal Based Backup for AIX >> On Sun, 26 Mar 2006 14:26:22 +0100, David McClelland <[EMAIL PROTECTED]> said: > Check again - Journal Backups are now available (as of 5.3.3) for AIX > clients as well... No, they aren't, though this is only acknowledged in a one-line mention in the README file for the 5.3.3 client. I was REALLY frustrated by this, because as of Oxford, they'd thought it was still going to be in the release. - Allen S. Rout To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Using TDP for SQL to reapply transaction logs against a BCV/Snapshot database recoverable image
Guys, Has anyone tried/done this before? If I were to use a tool such as EMC TimeFinder (in particular, using the EMC TimeFinder/SQL Server Integration Utility) to take a consistent database snapshot image (via the MS SQL VDI) of a SQL Server (2000) database (i.e. a 'recoverable database image' as opposed to a 'restartable' one), could I then use the Tivoli Data Protection for SQL Server agent to apply subsequent backed-up transaction logs to this copy in order to roll the database forward to a point in time? Or, would TDPS only be able to action a transaction log recovery to a database image that has been backed up/restored using TDPS? Put simply, particularly with larger SQL Server databases (hundreds of GB's), I'm looking at ways to reduce the recovery time in the event of failure. Restoring a database image from a daily BCV/Clone would do most of the work very quickly indeed, but would only give me a recovery point of a maximum of 24 hours. However, if I were able to reapply transaction logs to this I would have the best of both worlds. As we use the TDP to whisk away transaction logs on (usually) an hourly basis, I'd be looking to the TDP to action the restore of the transaction logs to the BCV copy, but appreciate that what I'm asking might not be 'native' functionality. I'm happy to expand if any of the above is unclear! Any thoughts much appreciated. Thanks and Rgds, David McClelland IBM Tivoli Certified Deployment Professional (TSM 5.2) Shared Infrastructure Architecture and Design Reuters, London Ltd To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
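(In case it helps frame the question, the mechanics on the TDP side would presumably be the ordinary log-restore sequence - the database name here is a placeholder:

tdpsqlc query tsm PRODDB log=*
tdpsqlc restore PRODDB log=* /recovery=no

i.e. restore the chain of log backups with /recovery=no and only bring the database online at the end. The open question is exactly the one above: whether SQL Server, via the TDP/VDI restore path, will accept those log backups against a database image that was laid down by a TimeFinder snapshot rather than by a TDPS full restore - the log sequence numbers would have to line up, and I haven't seen that confirmed anywhere.)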
Re: Using TDP for SQL to reapply transaction logs against a BCV/Snapshot database recoverable image
Hi Jeroen, >>> Don't know if you can apply the logs this way on MSSQL. It would work for SAP on Oracle databases, but there offline logs are used. Thanks for your response - yeah, working with Oracle is certainly a lot easier, and you can configure RMAN not to 'delete input' and to leave your archived redo logs on disk (as long as you housekeep!) to ease recovery. With SQL/TDP we don't have that option (that I know of). >>> You will get a problem with the expiration of the logs iif you don't make full backups to TSM. >>> The transactionlogs will never be marked inactive. So they will stay within TSM 'forever'. I'd still plan do full and differential backups to offline media using TSM in the normal manner, as well as my VDI Snapshot Backup to BCV, so I'm not so sure that would be a problem. I'm still just not sure if I'd be able to apply my TDPS-backed-up transaction logs to the BCV image. Rgds, David McClelland Reuters Ltd -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Hooft, Jeroen Sent: 14 July 2006 15:34 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Using TDP for SQL to reapply transaction logs against a BCV/Snapshot database recoverable image Don't know if you can apply the logs this way on MSSQL. It would work for SAP on Oracle databases, but there offline logs are used. You will get a problem with the expiration of the logs iif you don't make full backups to TSM. The transactionlogs will never be marked inactive. So they will stay within TSM 'forever'. -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of David McClelland Sent: vrijdag 14 juli 2006 13:07 To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Using TDP for SQL to reapply transaction logs against a BCV/Snapshot database recoverable image Guys, Has anyone tried/done this before? If I was to use a tool such as EMC TimeFinder (in particular, using the EMC TimeFinder/SQL Server Integration Utility) to take a consistent database snapshot image (via the MS SQL VDI) of a SQL Server (2000) database (i.e. a 'recoverable database image' as opposed to a 'restartable' one), could I then use the Tivoli Data Protection for SQL Server agent to apply subsequent backed up transaction logs to this copy in order to roll the database forward to a point in time? Or, would TDPS only be able to action a transaction log recovery to a database image that has been backed up/restored using TDPS? Put simply, particularly with larger SQL Server databases (hundreds of GB's), I'm looking at ways to reduce the recovery time in the event of failure. Restoring a database image from a daily BCV/Clone would do most of the work very quickly indeed, but would only give me a recovery point of a maximum of 24 hours. However, if I were able to reapply transaction logs to this I would have the best of both worlds. As we use the TDP to whisk away transaction logs on (usually) an hourly basis, I'd be looking the to TDP to action the restore of the transaction logs to the BCV copy, but appreciate that what I'm asking might not be 'native' functionality. I'm happy to expand if any of the above is unclear! Any thoughts much appreciated. Thanks and Rgds, David McClelland IBM Tivoli Certified Deployment Professional (TSM 5.2) Shared Infrastructure Architecture and Design Reuters, London Ltd To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: Log prune on non-standard log names?
Hi Matthew, Is your customer's TSM client scheduled using the TSM Central Scheduler (I guess so if it's growing to over 100MBs) for backup operations? This kind of log pruning only happens during a TSM scheduled backup operation (not when the backup operation is scheduled via TWS/cron/Windows Scheduler etc). There's certainly nothing in the docs that I've found to suggest that changing the schedlogname will prevent pruning (it would be a travesty if it did). As an off-the-wall suggestion, have you tried switching the order in which the two options appear in the dsm.opt? Rgds, David McClelland Data Protection Specialist IBM Tivoli Storage Manager Certified Consultant Shared Infrastructure Architecture and Design -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Large, M (Matthew) Sent: 12 September 2006 08:27 To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Log prune on non-standard log names? Hi All, One of my customers recently came to me to say that they had a 100MB log file from TSM sitting in their file system, which confused me since the setting clearly stated in the options file (Win2K) says SCHEDLOGRETENTION 7 schedlognamedsmsched_1yr.log Does anyone know if I should expect these settings to prune the dsmsched_1yr.log or must I manually trim this file when necessary? Clients - 5.2.4.4 Server - 5.2.7.1 Many Thanks, Matthew TSM Consultant ADMIN ITI Rabobank International 1 Queenhithe, London EC4V 3RL 0044 207 809 3665 _ This email (including any attachments to it) is confidential, legally privileged, subject to copyright and is sent for the personal attention of the intended recipient only. If you have received this email in error, please advise us immediately and delete it. You are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. Although we have taken reasonable precautions to ensure no viruses are present in this email, we cannot accept responsibility for any loss or damage arising from the viruses in this email or attachments. We exclude any liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided in this email or its attachments, unless that information is subsequently confirmed in writing. If this email contains an offer, that should be considered as an invitation to treat. _ To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
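(For completeness, the relevant pair of options in the dsm.opt would look something like this - the values below simply echo the customer's own settings:

SCHEDLOGNAME dsmsched_1yr.log
SCHEDLOGRETENTION 7 D

The 'D' discards pruned entries; 'S' would save them off to a .pru file instead, and ERRORLOGNAME/ERRORLOGRETENTION behave the same way for the error log. The key point is that the pruning is done by the scheduler process when it runs a scheduled event, so if the central scheduler never runs on that box the file will keep growing regardless of the retention setting.)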
Re: Newbie question 2!
Hey Angus, Are you talking about SQL statements appearing in a 'wide' format? If so, then 'set sqldisplaymode wide' will work for you >>> http://192.168.1.10:2297/help/topic/com.ibm.itsmmsmunn.doc/anrsrf53400.h tm Otherwise, certainly from my experience, as long as my terminal display is wide enough (for example, easy to resize in PuTTY) the output from queries will make use of the whole terminal width. Hope that helps, Rgds, David McClelland Data Protection Specialist IBM Tivoli Storage Manager Certified Consultant Reuters, London -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Angus Macdonald Sent: 22 September 2006 08:51 To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Newbie question 2! How do I convince my console output to display in wide format, like in the redbooks? My course tutor did show me but that was 18 months before I touched Tivoli for real and it seemed so trivial I didn't make a note of it. Angus This email was sent to you by Reuters, the global news and information company. To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
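(A couple of related dsmadmc switches I find handy, in case they're of use - the admin ID and password are placeholders:

dsmadmc -id=admin -password=secret -displaymode=table "query node"
dsmadmc -id=admin -password=secret -commadelimited "select node_name,platform_name from nodes" > nodes.csv

The first forces tabular output rather than letting a narrow terminal push everything into list format, and the second is a quick way of getting select output into a spreadsheet.)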
Re: Scheduling TDP Oracle agents
Hi Aravind, I'd say that using a separate/dedicated nodename for TDPO backups is a must whichever scheduling mechanism you'd choose to use (for clarity I usually suffix the nodename/cluster name with _TDPO where the node is defined in a separate policy domain on the TSM server with an appropriately policied default mgmt class). TSM Scheduler is one I commonly use, but external scheduling tools can be useful as well (particularly when dealing with more advanced/complex inter-activity dependencies) - OEM (Oracle Enterprise Manager) provides some useful 'build your own RMAN script' GUI tools, scheduling and monitoring capabilities (and the DBAs like as they feel in control). David McClelland Data Protection Specialist London -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Aravind Kurapati Sent: 11 January 2007 00:28 To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Scheduling TDP Oracle agents Hi all, Just wanted to know how people typically schedule TDP for Oracle jobs? One of the approaches is to define an alternate node name and use the TSM scheduler to launch the RMAN backup script via this alternate node name. Any other typical approaches out there? Thanks Aravind This email was sent to you by Reuters, the global news and information company. To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
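(A minimal TSM-scheduler version of this, with all the names and the script path invented purely for illustration, would be along the lines of:

define schedule ORACLE_DOM RMAN_DAILY action=command objects="/oracle/scripts/rman_daily.sh" starttime=01:00
define association ORACLE_DOM RMAN_DAILY PRODDB1_TDPO

with the scheduler/client acceptor on the database server running under the dedicated PRODDB1_TDPO nodename, so that the schedule fires the RMAN script, which in turn drives the backup through the TDPO-allocated RMAN channels.)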
ANR7837S Internal error TBUNDO012 and ANR9999D tbundo.c(256): Error1 on insert from table Backup.Objects for undo
Oops, Seems as though one of our TSM servers has not been looked after properly, has overrun its recovery log and fallen over on its face with the usual ANR7837S LOGSEQ errors. Our friendly second-level guy went ahead and created a new log vol and tried to dsmserv extend to get it in, but it failed with such errors as: ANR9999D tbundo.c(256): Error1 on insert from table Backup.Objects for undo. ANR7837S Internal error TBUNDO012 detected It's a Win2K AS server running at a lowly 4.1.3 - now, as I remember, the TSM recovery log hits a ceiling at around 5GB until after 4.2. I think, from looking at the logvols already assigned, that we're at that ceiling now, and aren't able to extend it any further. The logvol we attempted to add was only 20MB, so I'm guessing we're *right* up to that ceiling - not good I know... So team, any ideas for how to get the TSM server back up again? Rgds, David McClelland Global Management Systems Reuters Ltd --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: Slow Backup of Solaris Client
Bill, I take it you've tried an ftp session of a sizeable file from the client to the server or vice-versa - the summary at the end of a completed ftp transaction would indicate whether your TSM slow transfer symptoms are related to TSM or something more sinister to do with networky things or disk contention etc... Also, have you checked the CPU on this client - have you compression turned on at the client, which could be slowing things down on a heavily loaded box? Rgds, David McClelland Global Management Systems Reuters Ltd -Original Message- From: Bill Fitzgerald [mailto:[EMAIL PROTECTED] Sent: 18 September 2003 13:00 To: [EMAIL PROTECTED] Subject: Slow Backup of Solaris Client I have a client server a sun micro system running Solaris 5.6 with a TSM client of 5.1.1 TSM is running on a AIX 4.3.3 with TSM 5.1.6.5 network is 100 megabit this client is running extremely slow backups. over a 24 hour period it has only been able to backup 16 gig. This is the only server that is running slow. I have over 150 servers, of various types including one other Solaris, using TSM, Anyone have any ideas? Bill William Fitzgerald Software Programmer Munson Medical Center [EMAIL PROTECTED] --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
TSM Journalling Engine - experiences and MEMORYEFFICENTBACKUP needed?
Hi Guys, I've been tinkering with the TSM Journaling engine for a few weeks now, and wonder if anyone has come across this before: TSM Server - Win2K Advanced Server SP3 - 2xPIII 1GB RAM - TSM Server 5.1.6.2 TSM Client - Win 2K Advanced Server SP3 - 4xPIV 4GB RAM - TSM Client 5.1.5.0 My TSM client is a file server which, on its first full incremental backup (with journaling turned on), stowed away nearly 9 million files on the TSM server - a perfect candidate for the TSM journaling engine, I thought. However, the tsmjbbd.exe process bombed just before the end with a 'DB Access Critical Thread Return code 215' type error, although the backup continued. Anyway, I net started the `TSM Journal Service` (I have preserveDB on exit switched on and observed my journal files to be around 1.5GB) and kicked off another incremental backup. The TSM server now begins sending its inventory for this node to the TSM client dsmc.exe process. I started watching the dsmc.exe process grow in Task Manager on the client in line with how much data was being sent from the server. Now, 9 million files, at an average of maybe 500 bytes per TSM database entry, equals roughly 4.5GB. Was TSM trying to send the *whole* 4.5GB inventory for this node to the dsmc.exe process on the client? Needless to say, at 2GB (I believe the limit that Win2K places on a single process) the TSM client had had enough and ended with an 'ANS1030E System ran out of memory. Process ended'. So, what shall I do - is MEMORYEFFICIENTBACKUP YES my only get out of jail card here, and exactly what does this do differently? Is my understanding above what is actually happening? I'd be most grateful to hear of anyone else's positive or negative experiences of using the Journaling Engine, as it seems just so *ideal* for some of our file servers, yet my experiences so far suggest it might not be as easy and robust as I would ideally like it to be (i.e. cancelled backups forcing restart of journal, process bombing out midway through backup etc.), especially as a full or normal incremental backup can run into days to complete... Many thanks, David McClelland Management Systems Integrator Global Management Systems Reuters 85 Fleet Street London EC4P 4AJ E-mail [EMAIL PROTECTED] Reuters Messaging [EMAIL PROTECTED] --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
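(For reference, the option in question just goes into the client options file:

MEMORYEFFICIENTBACKUP YES

and, as I understand it, makes the client process the backup one directory at a time - querying the server for, and holding in memory, only the inventory of the directory currently being scanned rather than the entire filespace - at the cost of a longer elapsed time. That would sidestep the 2GB process limit described above, though it doesn't address the journal daemon problem itself.)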
Re: More TSM journaling stuff
Pete, Thanks for your responses... >> The error you are seeing in the journal daemon is probably caused because the journal db has exceeded the supported >> maximum of 2 gig. I was watching the journal files, and the big one never went above 1.6GB... This was during the initial full backup. The jbberror.log entries accompanying the termination of the service go something like these: 09/23/2003 20:06:37 jnlDbCntrl(): Error updating the journal for fs 'G:', dbUpdEntry() rc = -1, last error = 27 09/23/2003 20:06:38 JbbMonitorThread(): DB Access thread, tid 3100 ended with return code 215. 09/23/2003 20:07:39 NpOpen: Named pipe error connecting to server WaitOnPipe failed. NpOpen: call failed with return code:121 pipe name \\.\pipe\jnl 09/23/2003 20:07:39 NpListeningThreadCleanUp(): NpOpen(): Error -190 Still looks as though I'm seeing your error as described below though... >> That having been said, the real problem to look at is why the journal grew so large. Agreed! Although as I say, it didn't seem to go above the 2GB limit. >> Keep in mind that each journal entry represents a the most recent change for a file/directory, and that journal >> entries are unique, meaning the there can only be one entry for each object on the file system. Okay, well, this was the first full backup of a 9 million file filesystem, so would this cause a big journal file? If so, does it follow that in practice, we're best to do a normal 'unjournalled' initial backup of a filesystem so that we get all of the initial hit out of the way (an don't come a cropper (is that only an English term?) with a large journal file), and *then* do another incremental with journaling enabled so we get the journaling engine initialised? >> Are you running virus scan software and if so what type and version ? >> (example: Norton Anti-Virus Corporate Edition Version 8.00) >> Some virus protection software touches every file processed during virus scan processing, >> and this in turn floods the journal with change notifications and grows the journal. Okay, I'm running Sophos Antivirus 3.69. My include exclude list means I'm only backing up one filepath on a drive (e.g. g:\file_data\...\*), but I guess that the journal engine records all changes, regardless of include/exclude list specification. I'd be very interested in having a look at the journal proofing utility - please feel free to point me at it/mail it off-list if necessary. Pete - thanks for all your help so far... Rgds, David McClelland Management Systems Integrator Global Management Systems Reuters 85 Fleet Street London EC4P 4AJ E-mail - [EMAIL PROTECTED] Reuters Messaging - [EMAIL PROTECTED] -Original Message- From: Pete Tanenhaus [mailto:[EMAIL PROTECTED] Sent: 24 September 2003 14:26 To: [EMAIL PROTECTED] Subject: I'll try to answer/address your questions as best I can. >>> My TSM client is a file server, on its first full incremental backup >>> (with journaling turned on) stowed away nearly 9 million files on >>> the TSM server - a perfect candidate for the TSM journaling engine I >>> thought. However, the tsmjbbd.exe process bombed just before the >>> end>> >>> with a 'DB Access Critical Thread Return code 215' type error, although >>> the backup continued. The error you are seeing in the journal daemon is probably caused because the journal db has exceeded the supported maximum of 2 gig. 
If you look in your journal errorlog (jbberror.log) you'll probably see the following message: Error updating the journal for fs C:', dbUpdEntry() rc = 27 There is a bug the journal service which causes the process to shutdown when this error occurs and apar IC37040 has been opened and the fix will be included in an upcoming fixtest. That having been said, the real problem to look at is why the journal grew so large. Keep in mind that each journal entry represents a the most recent change for a file/directory, and that journal entries are unique, meaning the there can only be one entry for each object on the file system. Are you running virus scan software and if so what type and version ? (example: Norton Anti-Virus Corporate Edition Version 8.00) Some virus protection software touches every file processed during virus scan processing, and this in turn floods the journal with change notifications and grows the journal. There are circumventions from at least one of the virus protection vendors (Symantec) for this problem. >>>Now, 9 million files, at an average of maybe 500K per TSM database entry >>>equals roughly 4.5GB. Was TSM trying to send the *whole* 4.5GB inventory >>>for this node to the dsmc.exe process on the client? Needless to say, at >>>2GB (I believe the limit that Win2K places on a single pr
Journaling - not updating 'Last Backup Start/Completion Date/Time'
*SMers Hurrah! I now have 4 consecutive days' worth of successful journal backups! Instead of my 12,000,000 object filespace taking nearly a whole day to process, with journaling turned on I've managed to get this down to anything from 10 minutes to 1.5 hours depending upon how many files have changed - significantly better :o) Thanks guys! *However*, one thing strikes me - I look at the output of a `query filespace f=d` and refer to the 'Last Backup Start' and 'Last Backup Completed' date and time stamp, and these now don't ring true. In fact, they appear to point to the last successful full incremental backup that I performed prior to enabling journaling. Am I correct in assuming that TSM doesn't count a journaled-incremental as a full backup when it comes to looking at the `filespaces` table? I'm sure quite a few guys out there use the output of this for some reports... Rgds, David McClelland Management Systems Integrator Global Management Systems Reuters 85 Fleet Street London EC4P 4AJ Telephone +44 (0)207 542 4670 Mobile +44 (0)7711 120 931 E-mail [EMAIL PROTECTED] Reuters Messaging [EMAIL PROTECTED] --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Journaling - not updating 'Last Backup Start/Completion Date/Time'
> Guys, > > Sorry, please ignore me - the reason that it is not updating the 'last > backup start/completed date/time' is nothing to do with journaling, > but because the backup script seems to be backing up a particular > filespec/directory path of the filespace, rather than the whole drive, > therefore I will never get an updated date/time. Doh - sorry, I'll > make sure I check everything out before posting next time... > > Whilst I'm here though, I notice in the journaling files directory > that one of my files - tsmG__jdb.pag - is *huge*. Well, around 850MB. > I did an incremental backup with the journal engine engaged (only 10 > minutes - *so* much better), and it remains at 850MB. I would have > expected this to have been decreased somewhat... Am I right in > guessing that the size of this file will grow to the largest size > required, but won't shrink again when not needed? Is there anything I > can do to shrink it? > > Rgds, > > David McClelland > Global Management Systems > > -Original Message- > From: David McClelland > Sent: 29 September 2003 14:18 > To: [EMAIL PROTECTED] > Subject: Journaling - not updating 'Last Backup Start/Completion > Date/Time' > > *SMers > > Hurrah! I now have 4 consecutive days worth of successful journal > backups! > > Instead of my 12,000,000 million object filespace taking nearly a > whole day to process, with journaling turned on I've managed to get > this down to anything from 10 minutes to 1.5 hours depending upon how > many files have changed - significantly better :o) Thanks guys! > > *However*, one thing strikes me - I look at the output of a `query > filespace f=d` and refer to the 'Last Backup > Start' and 'Last Backup Completed' date and time stamp, and these now > don't ring true. In fact, they appear to point to the last successful > full incremental backup that I performed prior to enabling journaling. > > > Am I correct in assuming that TSM doesn't count a > journaled-incremental as a full backup when it comes to looking at the > `filespaces` table? I'm sure quite a few guys out there use the output > of this for some reports... > > Rgds, > > David McClelland > Management Systems Integrator > Global Management Systems > Reuters > 85 Fleet Street > London EC4P 4AJ > > Telephone +44 (0)207 542 4670 > Mobile+44 (0)7711 120 931 > E-mail[EMAIL PROTECTED] > Reuters Messaging [EMAIL PROTECTED] > > --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
TSM Journaling - Any Performance Degradation?
Hi list, Understandably, one of our customers is concerned about the impact of enabling TSM journaling on his customer-facing performance-critical production file-server - he asks: >> Do you know if there is any risk of performance degradation (i.e. slower read/write of files) >> resulting from the interaction between the journaling daemon and the Win32 API? Does anyone have any comments, experiences or statistics that I can get back to him with on this? I can understand that there might be a performance hit, what with the extra file I/O and writes to the journal file, but is this significant or quantifiable, and should he be concerned? Many thanks, David McClelland Management Systems Integrator Global Management Systems Reuters 85 Fleet Street London EC4P 4AJ E-mail [EMAIL PROTECTED] Reuters Messaging [EMAIL PROTECTED] -- -- Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: Hyper-threaded CPU's
I don't know the answer to this, but as a side-note, there are big differences between how Windows 2000 and Windows 2003 handle HyperThreading: as a result, Windows 2000 boxes with HyperThreading enabled (toggled in the BIOS usually) can suffer performance hits in some circumstances. It stems, so I understand, from the fact that Windows 2000 isn't able to discriminate between a logical and a physical CPU. Say in a twin physically processored box, it would see four different standalone CPUs. When it comes to allocating threads between processors, it isn't able to balance them between the physical processors, only logical ones - the result can be that a single physical processor can get over loaded whilst another physical processor is relatively scarcely loaded. Windows 2003, on the other hand, is supposedly aware of HyperThreading, and takes the physical processors into account when allocating jobs. I'm not sure how much of an issue this might be with a standalone TSM server, but I've heard of Oracle people moaning about this and turning HyperThreading off. Definitely worth a little research... Anyone any offers on the above? I'd be interested to learn of others' experiences or thoughts, as we too have some Win2K hyperthreaded servers (DL380 G3's) running TSM Server. Rgds, David McClelland Management Systems Integrator Global Management Systems Reuters 85 Fleet Street London EC4P 4AJ E-mail [EMAIL PROTECTED] Reuters Messaging [EMAIL PROTECTED] -Original Message- From: Brian L. Nick [mailto:[EMAIL PROTECTED] Sent: 09 October 2003 12:50 To: [EMAIL PROTECTED] Subject: Hyper-threaded CPU's Good morning, Running a TSM 5.x server with the new licensing scheme how does TSM report on hyper-threaded CPU's? We have several of them and are in the process of moving to 5.2 and want to make sure that we are only counting physical CPU's. Anyone have any insight on this. Thanks, Brian Brian L. Nick Systems Technician - Storage Solutions The Phoenix Companies Inc. 100 Bright Meadow Blvd Enfield CT. 06082-1900 E-MAIL: [EMAIL PROTECTED] PHONE: (860)403-2281 *** CONFIDENTIAL: This communication, including attachments, is intended only for the exclusive use of addressee and may contain proprietary, confidential and/or privileged information. If you are not the intended recipient, you are hereby notified that you have received this document in error, and any use, review, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, delete this communication and destroy any and all copies of this communication. *** -- -- Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: delete filespace takes >24hours
Gosh, That's the first four letter word I've seen on this list for a long time. Excuse me whilst I blush and faint from shock... ;o) David McClelland Global Management Systems Reuters Ltd, London -Original Message- From: Remco Post [mailto:[EMAIL PROTECTED] Sent: 29 October 2003 14:52 To: [EMAIL PROTECTED] Subject: Re: delete filespace takes >24hours > Joachim Total buffer requests means shit.. this is like an uptime counter... it wraps every so many days (1.5 in my case iirc) -- Met vriendelijke groeten, Remco Post SARA - Reken- en Netwerkdiensten http://www.sara.nl High Performance Computing Tel. +31 20 592 8008Fax. +31 20 668 3167 "I really didn't foresee the Internet. But then, neither did the computer industry. Not that that tells us very much of course - the computer industry didn't even foresee that the century was going to end." -- Douglas Adams - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
Re: Include/exclude not working
Hallo Eric, Hmn, my understanding of incl/excl lists tells me that we work from the bottom up as a rule, but with 'exclude.dir' and some others taking precedence. Personally, I try and avoid those as they only cause confusion. Looking at your dsm.opt include/exclude list below, I think that if I've understood you, you'll be wanting something like this: Exclude e:\* Exclude e:\...\* Include e:\inetpub\* Include e:\inetpub\...\* As a result, everything on the e: will be excluded, except e:\inetpub\ and its children. Is this what you were looking for? David McClelland Global Management Systems, Reuters Ltd, London -Original Message- From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED] Sent: 31 October 2003 13:15 To: [EMAIL PROTECTED] Subject: Include/exclude not working Hi *SM-ers! I must be doing something wrong. My PC (WinNT, 5.1.6 client) has a network drive E:. I want everything but E:\Inetpub (and all underlying files and directories) on E: excluded from the backup. My dsm.opt contains the following lines: INCLUDE "E:\Inetpub\...\*" EXCLUDE.DIR "E:\...\*" Exclude "E:\*" However, the GUI shows all files and directories excluded and the backup backs up nothing. I'm lost... Kindest regards, Eric van Loon KLM Royal Dutch Airlines ** For information, services and offers, please visit our web site: http://www.klm.com. This e-mail and any attachment may contain confidential and privileged material intended for the addressee only. If you are not the addressee, you are notified that no part of the e-mail or any attachment may be disclosed, copied or distributed, and that any other action related to this e-mail or attachment is strictly prohibited, and may be unlawful. If you have received this e-mail by error, please notify the sender immediately by return e-mail, and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for the incorrect or incomplete transmission of this e-mail or any attachments, nor responsible for any delay in receipt. ** --- - Visit our Internet site at http://www.reuters.com Get closer to the financial markets with Reuters Messaging - for more information and to register, visit http://www.reuters.com/messaging Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Ltd.
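(One quick way to sanity-check any of these lists is to ask the client itself how it has compiled them:

dsmc query inclexcl

which prints the include/exclude statements in the order they will actually be processed, including any client option sets pushed down from the server - handy for spotting an exclude.dir quietly trumping everything else.)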