Client backup with two different policy domains
Tivoli Server MVS 5.1.7.0
NetWare Client 5.2.0.0 on NetWare 5.10 server

On this server, I need to back up one directory (called admbrim) and all of its subdirectories with seven days' retention. The rest of the directories (across two volumes) need 30 days' retention. I set up two nodes:

BRIMSERVER2MAIL: for the admbrim directory
BRIMSERVER2: for everything else

I have two BA directories with separate options files (DSMMAIL.OPT and DSM.OPT, respectively). Here are the pertinent differences in the .OPT files:

DSMMAIL.OPT
* Setting Nodename *
NODENAME brimserver2mail
* Includes/Excludes *
INCLUDE brim_server2\data:admbrim/*
INCLUDE brim_Server2\data:admbrim
EXCLUDE brim_server2\data:*

DSM.OPT
* Setting Nodename *
NODENAME brimserver2
* Includes/Excludes *
EXCLUDE brim_server2\*:/.../vol$log.err
EXCLUDE brim_server2*:/.../tts$log.err
EXCLUDE brim_server2*:/.../sys$log.err
EXCLUDE brim_server2*:/.../events.log
EXCLUDE brim_server2*:/.../secaudit.log
EXCLUDE brim_server2*:/.../system.log
EXCLUDE brim_server2\sys:system/cmaster.dba
EXCLUDE brim_server2\sys:system/btrieve.trn
EXCLUDE brim_server2\sys:system/tsa/tsa$temp.*
EXCLUDE brim_server2\sys:_SWAP_.MEM
EXCLUDE.dir brim_server2\sys:\queues
EXCLUDE.DIR brim_server2\data:admbrim

Both nodes use the same schedule, but in different policy domains. I used the -optfile parameter to point each scheduler to its correct .OPT file.

This a.m., only one scheduler, for the BRIMSERVER2 node, was running. However, the schedlog for that node appeared to include entries for both nodes. For example:

Time remaining until execution: Querying server for next scheduled event.
07/28/2004 19:18:58 Node Name: BRIMSERVER2
07/28/2004 19:18:58 Session established with server SERVER1: MVS
07/28/2004 19:18:58 Server Version 5, Release 1, Level 7.0
07/28/2004 19:18:58 Data compression forced off by the server
07/28/2004 19:18:58 Server date/time: 07/28/2004 19:19:16  Last access: 07/28/2004 15:19:16
07/28/2004 19:18:58 --- SCHEDULEREC QUERY BEGIN
07/28/2004 19:18:58 --- SCHEDULEREC QUERY END
07/28/2004 19:18:58 Next operation scheduled:
07/28/2004 19:18:58
07/28/2004 19:18:58 Schedule Name:       NORMAL_DAILY_10PM_START
07/28/2004 19:18:58 Action:              Incremental
07/28/2004 19:18:58 Objects:
07/28/2004 19:18:58 Options:
07/28/2004 19:18:58 Server Window Start: 22:00:00 on 07/28/2004
07/28/2004 19:18:58
07/28/2004 19:18:58 Command will be executed in 3 hours and 48 minutes.
07/28/2004 19:18:58 Time remaining until execution: Querying server for next scheduled event.
07/28/2004 19:20:29 Node Name: BRIMSERVER2MAIL
07/28/2004 19:20:29 Session established with server SERVER1: MVS
07/28/2004 19:20:29 Server Version 5, Release 1, Level 7.0
07/28/2004 19:20:29 Server date/time: 07/28/2004 19:20:47  Last access: 07/28/2004 15:20:47
07/28/2004 19:20:29 --- SCHEDULEREC QUERY BEGIN
07/28/2004 19:20:29 --- SCHEDULEREC QUERY END
07/28/2004 19:20:29 Next operation scheduled:
07/28/2004 19:20:29
07/28/2004 19:20:29 Schedule Name:       NORMAL_DAILY_10PM_START
07/28/2004 19:20:29 Action:              Incremental
07/28/2004 19:20:29 Objects:
07/28/2004 19:20:29 Options:
07/28/2004 19:20:29 Server Window Start: 22:00:00 on 07/28/2004
07/28/2004 19:20:29
07/28/2004 19:20:29 Command will be executed in 2 hours and 53 minutes.
07/28/2004 19:20:29

Then, when the backup kicked off, I got this:

Executing scheduled command now.
07/28/2004 22:13:29 Node Name: BRIMSERVER2MAIL
07/28/2004 22:13:29 Session established with server SERVER1: MVS
07/28/2004 22:13:29 Server Version 5, Release 1, Level 7.0
07/28/2004 22:13:29 Server date/time: 07/28/2004 22:13:47  Last access: 07/28/2004 19:20:47
07/28/2004 22:13:29 --- SCHEDULEREC OBJECT BEGIN NORMAL_DAILY_10PM_START 07/28/2004 22:00:00
07/28/2004 22:13:29 Please enter NetWare user for "BRIM_SERVER2":
Executing scheduled command now.
07/28/2004 23:06:58 Node Name: BRIMSERVER2
07/28/2004 23:06:58 Session established with server SERVER1: MVS
07/28/2004 23:06:58 Server Version 5, Release 1, Level 7.0
07/28/2004 23:06:58 Data compression forced off by the server
07/28/2004 23:06:58 Server date/time: 07/28/2004 23:07:16  Last access: 07/28/2004 19:19:16
07/28/2004 23:06:58 --- SCHEDULEREC OBJECT BEGIN NORMAL_DAILY_10PM_START 07/28/2004 22:00:00
07/28/2004 23:06:58 Incremental backup of volume 'BRIM_SERVER2\SYS:' 07
Re: Client backup with two different policy domains
Well, first... the include/exclude statements are processed from the bottom up, stopping at the first match, not the most qualified match. So in your DSMMAIL.OPT the EXCLUDE statement is hit first, and the INCLUDEs never get processed.

If all you want to do is assign a different retention to one directory, then instead of two node names and two domains, just create a management class for your 7-day retention and assign it to the admbrim directory/files. Let the default management class of 30 days cover the rest of the server. Just add to your DSM.OPT:

INCLUDE brim_server2\data:admbrim/.../* 7DAYMGMTCLASS

and run a single backup with a single node name in a single domain. Reduces complexity.

Bill Boyer
DSS, Inc.
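For reference, this is roughly what the simplified, single-node DSM.OPT would contain under Bill's suggestion (a sketch only; 7DAYMGMTCLASS is his placeholder class name, and that class must already exist and be active in the node's policy domain):

   NODENAME  brimserver2
   * Bind the admbrim tree to the 7-day management class; everything else
   * falls to the domain's default (30-day) management class.
   INCLUDE   brim_server2\data:admbrim/.../*   7DAYMGMTCLASS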
Re: Client backup with two different policy domains
Build a second scheduler with the second node name, its password, and different dsmsched.log + dsmerror.log files.

Better still: why not build different management classes in one domain and bind the files and/or directories to those management classes in the dsm.opt?

Regards,
Joachim
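For completeness, a rough server-side sketch of the management-class approach Joachim and Bill describe, issued from an administrative client (the domain, policy set, and storage pool names are placeholders, and the version/retention values are just one way to express "7 days"):

   define mgmtclass brim_domain standard 7DAYMGMTCLASS
   define copygroup brim_domain standard 7DAYMGMTCLASS type=backup destination=backuppool verexists=nolimit verdeleted=1 retextra=7 retonly=7
   validate policyset brim_domain standard
   activate policyset brim_domain standard

After activation, the INCLUDE statement in the client's dsm.opt binds the admbrim files to 7DAYMGMTCLASS while everything else keeps the 30-day default.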
Re: Infrastructure design questions -- I need input please
As I said, there are two ways to look at it. For full DR we all back up so that if a site disaster were to happen we could keep the business running. If the hot site were to be the one destroyed, then you do need a plan, because the next backup will need to go somewhere. Remember, the primary copy of everything wasn't at the hotsite, so the business is still running.

If you truly have legal or other requirements that dictate you need not only the primary copy of data but also a backup copy at two locations, then Wanda's idea of having two libraries, one at each location, would be the best bet, I think.

Sometimes you also have database servers that need to dump their logs every hour. If the hot site were to go away, then you might not be able to wait until a new site or server is built. For this reason, part of your plan may be to have a second TSM server sitting at your primary site. You could be running EXPORT NODE FILEDATA=ALL to keep the node definitions and passwords in sync. Then, when a disaster hits, you just rename the instance to that of the one at the hot site (instance, not computer name), and change the DNS alias for the hot site to point to the local TSM server. Of course you have to create any schedules, but that can be scripted easily enough. The servers will just start backing up to the new location.

In the end, no matter what design you pick, you have to make sure to dot your "i's" and cross your "t's".

Roger Deschner <[EMAIL PROTECTED]> wrote:

Don't forget to consider the possibility that your disaster could happen the other way around - the swarms of locusts may consume your hotsite, leaving only your "primary site" functional. If the only copy of the data is over there, you're in the same boat, up the same creek, without the same paddle, as before you started all this DR planning.

OTOH, if you aren't doing archives, and if none of the systems being backed up to TSM are at the hotsite, the "offsite backup" could be considered to be the original client node machines back at the primary site.

Roger Deschner, University of Illinois at Chicago, [EMAIL PROTECTED]
==I have not lost my mind -- it is backed up on tape somewhere.==

On Wed, 28 Jul 2004, TSM_User wrote:

>Just a different thought: why not back everything up to a TSM server at the DR hotsite? You should easily be able to back up 1.5 TB of information in a night through a 1 Gb connection. If this is new Fibre then you may have a 2 Gb connection or more through DWDM (or whatever that acronym is).
>
>At the DR hotsite you don't need to make storage pool copies unless you want to protect yourself from media issues.
>
>IP slowing you down? Well, IP definitely has more overhead than SCSI, but today you should be able to get at least 250 GB/hr through a 1 Gb NIC, worst case. So if your backup window is from 8:00 PM to 6:00 AM you can send 2.5 TB of information, again assuming you are just running a 1 Gb Fibre connection.
>
>So no vaulting and no need for storage pool copies.
>
>I would also put a bunch of ATA disk at the other site as well and keep all small files on disk. This will also reduce the need for tapes and drives. Your local server could be used for failover in case the link goes down.
>
>Don't flame me, this is just another idea. I'm sure there are many people out there who can't believe I would suggest not running storage pool copies even if the primary copy is offsite, but we are looking at this approach ourselves.
>
>"Prather, Wanda" wrote:
>I would recommend using the second server only in the event of a disaster.
>
>Since you are connected by fibre, the primary server can send the data directly to the tape drives in the library at fibre speeds.
>
>You don't want to try and make the 2 servers talk to each other via server-to-server communications, 'cause that will just slow you down to TCP/IP speeds.
>
>-----Original Message-----
>From: Thach, Kevin G
>Sent: Wednesday, July 28, 2004 9:39 AM
>Subject: Infrastructure design questions -- I need input please
>
>My organization is developing a DR "hotsite" at one of our other facilities across town, and we are considering making some radical changes to our TSM environment. I know there are several folks on this list that are heavy into TSM "design" and I could use all the input I can get.
>
>Our current environment consists of the following:
>* TSM server running 5.1.7.3 on AIX. The server is a 6-processor 6H1 w/ 8GB RAM and four 2Gb HBAs.
>* Approximately 350 clients, and we back up 1.5 TB nightly.
>* We use a SAN-attached 3584 with 12 LTO-1 tape drives. 60-day retention policy for everything, so we are maintaining ~90 TB in our local and offsite (copypool) tape pools.
>* Disk storage pools, DB, and Log are all on SAN-attached IBM Shark disk.
>
>Our objective is to take advantage of the hotsite not only to improve our DR methods, but to improve TSM r
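As a concrete illustration of the export-node idea mentioned above (a sketch only; the server, node, and address names are made up, and this assumes both instances are at a level that supports server-to-server export): define the target server once, then run the export on whatever schedule suits you. FILEDATA=NONE moves just the node definitions and passwords; FILEDATA=ALL would also ship the stored backup/archive data.

   define server DRTSM serverpassword=secret hladdress=drtsm.example.com lladdress=1500
   export node NODE1,NODE2 filedata=none toserver=DRTSM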
ANS0101E on linux starting client scheduler
Not sure what changed between last week and today, but I can't seem to get the client scheduler started on a Linux box. I was able to start it last week, but now when I try I get:

ANS0101E Unable to open English message repository 'dsmclientV3.cat'

I've searched the archives, and about all I've found on this error is to set LANG to en_US. I've done that. dsmclientV3.cat does exist in /opt/tivoli/tsm/client/ba/bin/en_US.

What could be the problem?

Thanks for your help,
T.
Unable to move files?
Okay, so here is a new one for me on my AIX 5.2.0.0 ML3 system running TSM 5.2.3.0, while running a MOVE DATA command on an offsite volume:

ANR1171W Unable to move files associated with node USCASRV0060, filespace /sapmnt/TRN fsId 72 on volume AA1209 due to restore in progress. (SESSION: 11204, PROCESS: 390)

But there are no processes or sessions running a restore. The messages manual suggests a "Q RESTORE F=D", but that also reports no matches.

Background history: two days ago, during a restore of that filesystem, the client system crashed hard. The client system has since been repaired and successfully restored.

How do I re-enable the movement of these files? I guess I could halt the server and see what happens, but I am wondering if anyone has run into this.
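One avenue that may be worth checking (offered as a guess, not a confirmed fix for this exact situation): the server holds a restartable restore session for RESTOREINTERVAL minutes after a client restore is interrupted, and while one exists the affected file space stays locked against MOVE DATA. From an administrative client:

   query restore f=d
   help cancel restore        (to confirm the operand form at this server level)

and, from the client node itself, "dsmc cancel restore" lists and cancels any restartable restore that node still owns. If nothing shows up anywhere, waiting out the RESTOREINTERVAL window (default 1440 minutes) before retrying the MOVE DATA is another low-risk option.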
Re: ANS0101E on linux starting client scheduler
http://people.bu.edu/rbs/ADSM.QuickFacts has supplementary info...

Have you checked that the directory and file permissions allow access by the invoker, and that DSM_* environment variables are not pointing somewhere else? I assume there's no further error indication in dsmerror.log.

If all else fails, you might try using DSM_DIR in a site script which starts the client scheduler, identifying the directory where the client config files are.

Richard Sims
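As a concrete illustration of that last suggestion (the paths are the usual install defaults and may differ on a given system), a small wrapper script that pins the environment before starting the scheduler:

   #!/bin/sh
   # point the client at its message catalogs and options files
   export DSM_DIR=/opt/tivoli/tsm/client/ba/bin
   export DSM_CONFIG=/opt/tivoli/tsm/client/ba/bin/dsm.opt
   export LANG=en_US
   nohup dsmc schedule >/dev/null 2>&1 &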
Re: Linux tsmscsi 2.4.26 RedHat 9.0
Well, you could install a supported version of the kernel: http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg2628

Etienne Brodeur

Rick Willmore <[EMAIL PROTECTED]> wrote on 07/28/2004 01:31 PM:

Guys/Gals,

I am trying to get a Linux Red Hat install running kernel 2.4.26 to work with my TSM server. Apparently I need to use the TSM device driver supplied by IBM, tsmscsi:

[EMAIL PROTECTED] bin]# ./tsmscsi
TSM device driver not available for kernel release 2.4.26
For a list of supported kernel levels, go to the IBM Tivoli Linux support web page
[EMAIL PROTECTED] bin]#

The drive is a Seagate STD224000N DDS-3 DAT drive. Linux sees the drive just fine and I can tar to it at /dev/st0. TSM, on the other hand... Any ideas? How can I go about compiling the driver for this kernel version, or do I have another option?

R.
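Before rebuilding anything, it may be worth a trivial check of what kernel is actually running and comparing it against the supported-levels list on that page (a sketch; the exact output will vary by system):

   uname -r        # running kernel, e.g. 2.4.26
   rpm -q kernel   # kernel packages installed on a Red Hat system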
Re: TDP for Domino crashes Domino server
Thanks Eduardo,

I do have anti-virus running on the Domino server, but I stopped every Domino job manually, making sure only the server task was still running, and it still crashes. Do you mean to say I should uninstall the Domino anti-virus software?

Etienne

Eduardo Esteban <[EMAIL PROTECTED]> wrote on 07/28/2004 11:52 AM:

Do you have anti-virus software for Domino running? If so, disable it and see if this solves the problem. Either way you should contact Support.

Eduardo.

"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote on 07/27/2004 09:53:02 AM:

> Hi,
> I have a 6.5.2 Domino server running on Windows 2000 Server SP4. I am using TDP for Domino 5.1.5 and BA client 5.2.2.10. If I try to start the TDP GUI or command line client, the Domino server freezes completely and I have to restart the entire machine. I have nothing in the logs that points me to a particular process. This is what I get from the command line client:
>
> C:\Program Files\Tivoli\TSM\domino>domdsmc query domino
> IBM Tivoli Storage Manager for Mail:
> Data Protection for Lotus Domino
> Version 5, Release 1, Level 5.01
> (C) Copyright IBM Corporation 1999, 2002. All rights reserved.
> Thread=[06DC:0002-068C]
> Stack base=0x00122D68, Stack size = -3272 bytes
> PANIC: OSVBlockAddr: Bad VBlock handle (0\0)
>
> I have only one notes.ini on this server, and I have deleted all logs and files (actually deleted the Tivoli dir) and reinstalled the client and TDP, and same thing.
> Thanks for your help,
> Etienne Brodeur
Re: ANS0101E on linux starting client scheduler
As Homer says: D'OH! I had my DSM_DIR variable set incorrectly. Thanks for the pointer.

T.
Re: TDP for Domino crashes Domino server
Etienne,

No, you don't need to uninstall it. There are ways to exclude anti-virus software from "interfering" at startup. You might try going to: www.ibm.com
Search for: +tdp +domino +virus

If the suggestions there don't help, please call IBM support.

Thanks,
Del
Re: archiving up files with single-quotes in the filename
Dang. I was sure I had tried that. It worked. Thx.

-----Original Message-----
From: Andrew Raibeck
Sent: Wednesday, July 28, 2004 6:34 PM
Subject: Re: archiving up files with single-quotes in the filename

Not sure why it is behaving this way, though it is almost certainly due to the single quotes (somehow). In the filelist file, try putting the file names in double quotes, like this:

"/a/path/to/a/'file1'"
"/a/path/to/a/'file2'"

then retry the operation.

Regards,
Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]
The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote on 07/26/2004 08:07:15:

> Hello TSMers
>
> I am trying to perform an archive using the -filelist option on a unix system.
>
> The filelist contains a list of files of the format:
>
> /a/path/to/a/'file1'
> /a/path/to/a/'file2'
>
> Unfortunately (apparently) there is no way the filenames can be changed such that they do not contain single quotes...
>
> When issuing
>
> dsmc archive -filelist=/path/to/the/filelist
>
> TSM generates a line of errors similar to:
>
> ANS1228E Sending of object '/a/path/to/*' failed
> ANS4005E Error processing '/a/path/to/*': file not found
> ANS1228E Sending of object '/a/path/to/*' failed
> ANS4005E Error processing '/a/path/to/*': file not found
>
> It seems to truncate the given path by the last element and replace this with '*'.
>
> I have tried a few different combos of escaping & quoting etc., but no joy yet... has anyone else needed to do this before?
>
> Matt.
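For anyone hitting the same thing, a small sketch of Andy's workaround (the file names here are placeholders): wrap each entry in double quotes when the filelist is generated, then point the archive at it.

   # wrap every line of an existing raw list in double quotes
   sed 's/^/"/; s/$/"/' rawlist.txt > filelist.txt
   dsmc archive -filelist=filelist.txt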
Re: D2D vs. tape backups with TSM?
We have always used compression going to disk. We use exclude.compression for things like .zip, etc. You may want to use compressalways yes to avoid resending data that grows.

We are not collocating at all - why would you want to? From a restore perspective (using multi-session restore) it is better to have the data spread out across multiple volumes.

-----Original Message-----
From: TSM_User
Sent: July 28, 2004 8:29 PM
Subject: Re: D2D vs. tape backups with TSM?

We are using 25 GB volumes right now. We are also still collocating the storage pools that use the FILE device class by node. This has worked out fine for us. Sad to admit, but I wasn't aware of the Technical Exchange recommendation. Is there a white paper from that you could refer me to? We are contemplating turning on node compression everywhere to also help reduce disk space.

Also, I made mention in a previous post that we were reclaiming down to 50% and that was fine. Well, like always, when you make a comment like that it makes you think, and then you go look. I found that we were using around 16 TB of ATA space in all when you look at the "In Use" numbers. When I looked at the actual disk in use it was closer to 21 TB of data. I am currently reclaiming everything down to 40 and I plan to get down to 25 again. At that point I will compare the numbers and see how much I can reduce the 21 TB in use.

Also, somewhat interesting information: we have found that the I/O capabilities of the latest and greatest servers can really help push a lot more data to disk. We had always been told by our disk vendor that the bottleneck wasn't them. We ruled out many things except them. Finally we looked at a more detailed performance monitor of our systems and we found that we were killing the processor during times when we were pushing a lot of data to disk. With these new servers we see migrations from Fibre disk to ATA disk at over 150 GB/hr. We do have 60 TB of ATA space, though, so we have a lot of disks to write to.

"Rushforth, Tim" <[EMAIL PROTECTED]> wrote:

Just curious, what size of file volumes are you using? We were originally using 25 GB, and then I listened to the "Disk Only Backup Strategies" Technical Exchange where they recommended 2-4 GB volumes.

Thanks,
Tim Rushforth
City of Winnipeg

-----Original Message-----
From: TSM_User
Sent: July 27, 2004 6:41 PM
Subject: Re: D2D vs. tape backups with TSM?

Funny, we set ours down to 25% as well just to see what would happen. This worked, but we have since set all of the ATA pools to 50% and we just leave them there. Theoretically what could happen is we could be wasting twice as much space, but the fact is the volumes were going from 25% to 50% in a matter of days, and when we looked at how many volumes were between 25% and 50% in our environment we determined there was no need to reclaim down that far. From all outward signs there were no issues with reclaiming down to 25%; we just didn't think it was worth doing the extra work to get back such a small amount of disk. Disk is cheap, right! lol

"Rushforth, Tim" wrote:

We've set ours at 25%. We are just piloting an all-disk backup pool for some clients on one of our servers and for small files on another.
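For reference, the client options Tim mentions at the top of this note look roughly like this in a dsm.opt (a sketch; the exclude patterns are only examples and the file-spec syntax varies by platform):

   COMPRESSION        YES
   COMPRESSALWAYS     YES
   * skip recompressing data that is already compressed
   EXCLUDE.COMPRESSION *:\...\*.zip
   EXCLUDE.COMPRESSION *:\...\*.gz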
Re: backing up DB2
I have a TSM server 5.2.0.2 (on Win2000) and a DB2 version 8.x database (on Win2000). Is there a book for that combination? I only found a book for DB2 v7.x with TSM 4.x. The Control Center of IBM DB2 is not the same, and I don't really know DB2.

P.S. If somebody has the installation procedure on paper, please send me a copy (if you can).

Thanks,

Luc Beaudoin
TSM / Network Administrator
Hopital General Juif S.M.B.D.
Tel: (514) 340-8222 ext: 8254
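In case it helps while you hunt for the manual, the short version for DB2 8.x with the TSM API looks roughly like this (a sketch, not the full install procedure; the database name is an example): register a node for the DB2 server on the TSM side, set the DSMI_DIR / DSMI_CONFIG / DSMI_LOG environment variables for the instance so the API can find its options file, set the stored API password with the dsmapipw utility shipped under sqllib, restart the instance, and then back up with the USE TSM clause:

   db2 backup database SAMPLE use tsm

Restores use the matching "db2 restore database ... use tsm" form. If memory serves, the DB2 v8 Data Recovery and High Availability Guide and Reference covers the TSM setup in more detail than the old v7 book.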
3494 Scratch Categories
Is there a way to change the value of the 3494 scratchcategory and privatecategory without deleting the library, drive and path definitions and redefining them? David
Re: D2D vs. tape backups with TSM?
Re: collocation - Maybe I don't understand how the restores *should* be working. In our case we have 2 drives. When I do a big restore that's spread out across a lot of tapes, I don't see it using both drives. It mounts one, finds what it needs, mounts the next tape, etc. It doesn't seem to use the second drive. Collocation would help in that case, since it would cause fewer tape mounts.

So then the question becomes... I take it this isn't what should be happening? Is there something special you have to do to make it use multiple tape drives? I use the web client for initiating restores on NetWare/Windows clients. I've never seen any settings in the web client that appear to be for using all the tape drives instead of one. Is it something I can only do by using a command line restore with dsmc?

Troy Frank
Network Services
University of Wisconsin Medical Foundation
608.829.5384
Re: D2D vs. tape backups with TSM?
Well, we are talking about volumes on disk (but tape works the same way). To use multi-session restore you need a client that supports it (I don't think the API clients like TDP for Exchange support it), the maximum mount points for the node must be > 1, and the resourceutilization setting in the client must be > 1.
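Concretely, the two settings Tim refers to look like this (the node name and values are examples; two mount points matches the two drives Troy described):

   On the TSM server:      update node TROYNODE maxnummp=2
   In the client dsm.opt:  RESOURCEUTILIZATION 4

One further wrinkle, as I recall: the client only runs a multi-session restore when it can use the "no query restore" protocol, i.e. a straightforward restore of a file specification; the Technical Guide chapter Richard mentions below walks through this.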
Re: D2D vs. tape backups with TSM?
>... Is there something special you have to do to make it use multiple tape drives? ...

Try: see "Multi-session Restore" in the TSM 5.1 Technical Guide redbook, which provides an excellent intro.

Richard Sims
any way to delete a entry from volume history?
There is a tape that I want to reuse. If I look at my volhist, this is the output for the volume:

Date/Time: 12/30/03 15:07:09
Volume Type: REMOTE
Backup Series:
Backup Operation:
Volume Seq:
Device Class: LTO2DEV
Volume Name: 716MFS
Volume Location: TSM_ROBIN
Command:

I tried to LABEL LIBVOL but it would not let me, because there is an entry for this tape's barcode (label) in the volume history. I can't use the DELETE VOLHIST command for this because it's not a DB backup tape or an export/import volume. I can't use DELETE VOLUME because it's not a volume in any storage pool. I have tried to delete the entry in the volhist file (in my dsmserv.opt I point the volhist to a file called volhist.dsm), but it does not take the volhist info from the volhist file.

Is there a way to refresh the volume history from the volhist file? I guess the better question is: is there any way I can reuse this tape?

Thanks,
Tae
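One thing that might be worth trying before anything more drastic (hedged: check HELP DELETE VOLHISTORY at your level first, since I have not verified this against a REMOTE entry): DELETE VOLHISTORY accepts more TYPE values than just the DB backup and export types, and REMOTE appears among them, e.g.

   delete volhistory type=remote todate=today

Note that this removes every REMOTE-type history entry up to that date, not just the one volume, so make sure none of the other entries still matter before running it.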
HP-UX Web client not using client option sets?
TSM server 5.2.2.5 on HP-UX 11.11
TSM client 5.1.5.0 & 5.2.2.0 on HP-UX 11.11

I use client option sets to centralize & control our include/exclude list on different nodes, and although the scheduled incremental backups seem to work correctly, I have noticed that when I bring up the web client for my HP-UX nodes, it appears that none of my statements in the assigned client option set are being used. For example, I have an "exclude.fs '/crash'", but the web GUI will allow me to back it up. It doesn't look like my Windows clients have this issue. I haven't been able to find any reason for this in the user docs, this listserv, or IBM's web site.

Can anyone out there confirm or deny that this is true for them?

Steve Schaub
Storage Systems Engineer II
Haworth, Inc
616-393-1457 (desk)
616-886-8821 (cell)
[EMAIL PROTECTED]
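One quick diagnostic (a sketch, not a fix): compare what the server thinks is in the option set with what the client actually merged into its include/exclude list. On the server:

   query cloptset <optionset_name>

and on the HP-UX node:

   dsmc query inclexcl

The client command lists every include/exclude in effect along with its source, so it should show whether the option-set entries reached that client environment at all.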
Re: D2D vs. tape backups with TSM?
I agree about not using collocation, but this customer has always collocated everything, offsite and onsite. Moving to ATA was step one. Step two is getting them to turn off collocation on all the large-file data. Step three is going to be turning off collocation for everything on disk.
Re: TSM Client migration
Dear all,

Please let me know how I can migrate a client's (Win2k, Linux & Solaris) data from one server (TSM 5.1.8.1) to another.

Thanks & best regards,
Sanjoy
Please reply anyone !! -- ANR2020E QUERY CONTENT: Invalid parameter - NAMETYPE and Other one is CODETYPE
Dear All,

I have just installed TSM Extended Edition 5.2.2, taken a backup, and even restored it. While I am trying to query the contents of a storage pool volume, I am getting this error:

ANR2020E QUERY CONTENT: Invalid parameter - NAMETYPE
ANR2020E QUERY CONTENT: Invalid parameter - CODETYPE

whereas the same thing in TSM Standard Edition 5.2 was working. Yes, I understand there is some required combination, like Filespace, File Space Name Type, and File Space Code Page Type. In FileSpace I put "\\tivoli\d$", as seen in the client's file spaces, leaving the rest at the defaults:

FileSpace Name type = Server
FileSpace Code page type = BOTH

The error ANR2020E QUERY CONTENT: Invalid parameter - NAMETYPE appears when I don't give the filespace. The error ANR2020E QUERY CONTENT: Invalid parameter - CODETYPE appears when I give the filespace.

Please help!

Kind Regards,
Muhammad SaDaT Anwar
Product Specialist
Systems Management & Data Management Products
Info Tech (Pvt) Limited
108, Business Avenue, Main Shahrah-e-Faisal, Karachi, Pakistan
Ph: +92-21-111-427-427
Fax: +92-21-4310569
Cell: +92-21-300-8211943
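For what it's worth, the base form of the command does not need either of those parameters, so one sanity check could be a plain query from the administrative command line (the volume and node names below are placeholders):

   query content VOL001 node=TIVOLI filespace=\\tivoli\d$ count=10

If that plain form also fails on the Extended Edition server, the problem is not the NAMETYPE/CODETYPE handling; if it works, the error is specific to how those two parameters are being added to the command.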