Re: continuous restore from multiple cartridges
On Apr 6, 2010, at 2:34 AM, Mehdi Salehi wrote:

> Hi all,
> A client is to restore something from multiple cartridges (regardless of
> being collocated or not). The server mounts the first cartridge and the
> restore process starts. Does the TSM server prepare (mount) the next
> cartridge before it is actually needed if there are enough tape drives in
> the library?

No. One would think that would be an administrative option in the product, but it has not been. The apparent intention is to limit the number of drives that a client will have in use, regardless of the actual availability of drives.

Richard Sims
Re: continuous restore from multiple cartridges
>> On Tue, 6 Apr 2010 07:14:52 -0400, Richard Sims said:
> On Apr 6, 2010, at 2:34 AM, Mehdi Salehi wrote:
>> Does TSM server prepare (mount) the next cartridge before it is
>> actually needed if there are enough tape drives in the library?
> No. One would think that would be an administrative option in the
> product, but has not been.

If you're in the "not collocated" case, you can sometimes save time by running a 'MOVE NODEDATA'. Make a temporary DISK or FILE stgpool, and

   MOVE NODEdata [targetnode] FROMSTG=[your primary tape pool] TOSTG=[temp stg] MAXPROC=[whatever]

where 'whatever' is the number of drives you intend to devote to the process.

This will cost you startup time, in that you won't be able to start the restore until the move is done. But it will save you time in the long run, as you can mount lots of tapes simultaneously. You probably want at least three drives on the move data, but 'as many as you can spare' is a good idea.

- Allen S. Rout
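To make that recipe concrete, a minimal sketch of the staging setup (the device class, pool, directory, and node names below are placeholders, and the MAXCAPACITY/MAXSCRATCH/MAXPROCESS values are only examples):

   define devclass stagefile devtype=file directory=/tsmstage maxcapacity=50g
   define stgpool stagepool stagefile maxscratch=200
   move nodedata targetnode fromstgpool=tape_primary tostgpool=stagepool maxprocess=3

Once the move finishes, the restore reads from the FILE pool and needs no further tape mounts for that node's data.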
Archive completed but not big enough.
Hello all,

We did an archive and it only archived 18.15 GB; it should have archived 76.6 GB. I am going to delete the archive and try again.

Question - by deleting the filespace via the command line or GUI, that should get rid of the data that was archived, correct?

Thanks

99 DELETE FILESPACE  Deleting file space * (fsId=1) (backup/archive data)
                     for node ADOCGWC_10-16-09: 49344 objects deleted.

Thanks in advance!
Re: Archive completed but not big enough.
Yes.

On 06/04, Timothy Hughes wrote:
> Question - By deleting the filespace via command line or GUI that should
> get rid of the data that was archived correct?
Re: Archive completed but not big enough.
Am I correct in that below it looks like you've performed a `DEL FILESPACE ADOCGWC_10-16-09 *`? This will remove *all* of the data that you've either backed-up or archived belonging to this node.

/DMc
London, UK

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of km
Sent: 06 April 2010 16:51
Subject: Re: [ADSM-L] Archive completed but not big enough.

Yes.

On 06/04, Timothy Hughes wrote:
> Question - By deleting the filespace via command line or GUI that should
> get rid of the data that was archived correct?
Re: Archive completed but not big enough.
You can get finer-grain control by doing a "UPDATE NODE ARCHDEL=YES" on the server side and then using "DELETE ARCHIVE" on the client.

On 04/06/10 06:39, Timothy Hughes wrote:
> Question - By deleting the filespace via command line or GUI that should
> get rid of the data that was archived correct?

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine
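For instance, the sequence might look something like this (the node name is taken from this thread; the archive filespec on the client is only a placeholder):

   update node adocgwc_10-16-09 archdelete=yes          (on the TSM server)
   dsmc delete archive "/some/path/*" -subdir=yes       (on the client)

That deletes only the selected archive objects, leaving the rest of the node's data alone.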
Re: Archive completed but not big enough.
The product provides the 'dsmc delete archive' command to dispose of a single Archive object.
Re: Archive completed but not big enough.
Hi km,

Thanks. The first try didn't seem to get rid of everything; I had to redo it, and all the data has now been deleted.

Tim

km wrote:
> Yes.
Re: Archive completed but not big enough.
Thanks Skylar for the command.

Thanks again Richard, David and Km for your responses! I'm still learning about this archive part of TSM.

Tim

Skylar Thompson wrote:
> You can get finer-grain control by doing a "UPDATE NODE ARCHDEL=YES" on the
> server side and then using "DELETE ARCHIVE" on the client.
Re: Archive completed but not big enough.
F.Y.I. - We are re-doing the archive; I will let you know how it goes.

Timothy Hughes wrote:
> Thanks Skylar for the command.
> Thanks again Richard, David and Km for your responses!
Re: Archive completed but not big enough.
Hi David,

Thanks for your reply. This node, along with the others we are creating, is being used to hold archived data only; they are not being backed up. No backup data is involved.

Tim

David McClelland wrote:
> Am I correct in that below it looks like you've performed a
> `DEL FILESPACE ADOCGWC_10-16-09 *`? This will remove *all* of the data
> that you've either backed-up or archived belonging to this node.
Sql query help
TSM server 5.5.4 running on SUSE 9 Linux under z/VM 5.3.

Trying to create a query which will give me the count of volumes in a storage pool, and its maxscratch setting, on a single line. Nice to watch for filling pools which need a larger maxscratch value.

Query follows:

   select a.stgpool_name as "Storage Pool Name", -
   a.devclass as "Device Class Name", -
   count(b.volume_name) as " # VOLUMES", -
   a.maxscratch as "volumes available" -
   from stgpools a, volumes b -
   where a.devclass <> 'DISK' -
   and a.devclass = b.devclass_name -
   group by a.stgpool_name, a.devclass

query ends.

Thanks for any help.

Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310
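One thing worth noting about the query as posted: it joins VOLUMES to STGPOOLS on the device class, so every pool sharing a device class ends up counting the other pools' volumes as well. A sketch of the same idea joined on the pool name instead (this assumes the standard STGPOOL_NAME column in the VOLUMES table) would be:

   select a.stgpool_name, a.devclass, count(b.volume_name) as "# Volumes", a.maxscratch
   from stgpools a, volumes b
   where a.devclass <> 'DISK'
   and a.stgpool_name = b.stgpool_name
   group by a.stgpool_name, a.devclass, a.maxscratch

That said, as Shawn's reply further down points out, the NUMSCRATCHUSED column in STGPOOLS avoids the join entirely.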
Failed to backup DB on Linux TSM 6.2
Hi All,

For some reason I'm unable to back up my TSM DB on Linux (TSM Server 6.2.0.0, TSM Client 6.2.0.0). I get the infamous message:

ANR2968E Database backup terminated. DB2 sqlcode: -2033. DB2 sqlerrmc: 106.
ANR0985I Process 1 for DATABASE BACKUP running in the BACKGROUND completed with completion state FAILURE at 09:07:52 PM.

I've done everything I could find on the net to resolve the problem, but no luck!

[tsmin...@tsm62 ~]$ env | grep DSMI
DSMI_DIR=/opt/tivoli/tsm/client/api/bin64
DSMI_LOG=/home/tsminst1/tsminst1
DSMI_CONFIG=/home/tsminst1/tsminst1/tsmdbmgr.opt

[tsmin...@tsm62 ~]$ ls -l $DSMI_DIR
total 5856
-rw-r--r-- 1 root     root      17 Apr  6 20:30 dsm.opt
-r--r--r-- 1 root     bin      782 Mar  9 12:28 dsm.opt.smp
-rwxrwxr-x 1 tsminst1 tsm      292 Apr  3 20:49 dsm.sys
-r--r--r-- 1 root     bin      971 Mar  9 12:28 dsm.sys.smp
-rwsr-xr-x 1 root     bin  2670186 Mar  2 09:58 dsmtca
lrwxrwxrwx 1 root     root      16 Apr  3 20:40 en_US -> ../../lang/EN_US
lrwxrwxrwx 1 root     root      16 Apr  3 20:40 EN_US -> ../../lang/EN_US
-r-xr-xr-x 1 root     bin  3259048 Mar  2 09:58 libApiTSM64.so
drwxr-xr-x 2 root     bin     4096 Apr  3 17:36 sample

[tsmin...@tsm62 ~]$ ls -l $DSMI_LOG
total 28
drwxrwxr-x 4 tsminst1 tsm 4096 Apr  3 16:32 NODE
-rw-r--r-- 1 tsminst1 tsm    0 Apr  6 20:25 tsmdbmgr.log
-rw-r--r-- 1 tsminst1 tsm   29 Apr  3 16:39 tsmdbmgr.opt
-rw------- 1 tsminst1 tsm  151 Apr  3 20:42 TSM.PWD

[tsmin...@tsm62 ~]$ cat $DSMI_CONFIG
SERVERNAME TSMDBMGR_TSMINST1

[tsmin...@tsm62 ~]$ cat $DSMI_DIR/dsm.sys
Servername TSM62
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  localhost

servername TSMDBMGR_TSMINST1
   commmethod        tcpip
   tcpserveraddr     localhost
   tcpport           1500
   passwordaccess    generate
   passworddir       /home/tsminst1/tsminst1
   errorlogname      /home/tsminst1/tsminst1/tsmdbmgr.log
   nodename          $$_TSMDBMGR_$$

[tsmin...@tsm62 ~]$

Anyone got a clue as to where I probably missed the obvious?

Richard.
Re: Sql query help
There is a field in the STGPOOLS table that shows how many tapes are used, so I don't think you need a join.

   select MAXSCRATCH, NUMSCRATCHUSED from stgpools

The way I monitor maxscratch is with this:

   select STGPOOL_NAME, MAXSCRATCH, NUMSCRATCHUSED from stgpools
   where MAXSCRATCH is not null and (MAXSCRATCH - NUMSCRATCHUSED) < 10

If that command returns any rows, the listed storage pools can accept fewer than 10 more scratch tapes before hitting MAXSCRATCH. (The "not null" is to exclude random-access pools.)

Regards,
Shawn

Shawn Drew

g...@bsu.edu wrote on 04/06/2010 03:10 PM:
> Trying to create a query which will give me the count of volumes in a
> storage pool, and its maxscratch setting on a single line. Nice to watch
> for filling pools which need a larger maxscratch value.
SV: Failed to backup DB on Linux TSM 6.2
What does db2diag.log tell you?

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson
Supported Platform for CPU2TSM: http://www.cristie.se/cpu2tsm-supported-platforms

From: Richard van Denzel [rden...@sltn.nl]
Sent: 6 April 2010 21:19
Subject: Failed to backup DB on Linux TSM 6.2

> For some reason I'm unable to backup my TSM DB on Linux (TSM Server 6.2.0.0,
> TSM Client 6.2.0.0). I get the infamous message:
> ANR2968E Database backup terminated. DB2 sqlcode: -2033. DB2 sqlerrmc: 106.
> Anyone got a clue to where I probably missed the obvious?
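For anyone chasing the same error: the DB2 diagnostic log normally sits under the instance owner's home directory, and one frequently suggested check is to read it around the failure time and, if the API password looks suspect, re-set it with dsmapipw. The paths below assume the default tsminst1 layout, and whether dsmapipw is actually the fix in this case is an assumption, not a given:

   su - tsminst1
   less ~/sqllib/db2dump/db2diag.log      (look at the entries around the failed backup)
   cd /opt/tivoli/tsm/server/bin
   ./dsmapipw                             (re-sets the API password used by the DB backup node)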
Re: Archive completed but not big enough.
Tim,

Was there an error to indicate why the archive failed? Or did it seem to complete successfully? If so, is it possible that you have compression turned on at the client, through the client settings or the client options? If so, it is not too surprising that 76 GB of data could compress down to 18 GB.

Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721

-------- Original Message --------
Subject: [ADSM-L] Archive completed but not big enough.
From: Timothy Hughes
Date: Tue, April 06, 2010 8:39 am

> We did a Archive and it only Archived 18.15GB it should have Archived
> 76.6 GB, I am going to delete the Archive and try again.
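A quick way to check whether compression is in play (node name taken from this thread) is to look at the node definition on the server and at the client options file:

   query node adocgwc_10-16-09 format=detailed

and check the "Compression" field in that output, plus any COMPRESSION setting in the client's dsm.opt/dsm.sys.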
Re: Sql query help
Thanks Shawn. That did it. Don't know how I missed that field.

Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn Drew
Sent: Tuesday, April 06, 2010 3:18 PM
Subject: Re: [ADSM-L] Sql query help

> There is a field in the stgpools table that shows how many tapes are used,
> so I don't think you need a join.
> select MAXSCRATCH, NUMSCRATCHUSED from stgpools
Re: Archive completed but not big enough.
It probably doesn't help you in this case, but DELETE FILESPACE has a TYPE=ARCHIVE argument you can give it, so it will only remove archive data.

On 04/06/10 10:29, Timothy Hughes wrote:
> This node along with others we are creating are just being used to hold
> "Archived data only" they are not being backed up. No backup data involved.

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine
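For example (node name from this thread, and the wildcard mirrors the earlier command), something along the lines of:

   delete filespace adocgwc_10-16-09 * type=archive

would remove only the archive objects and leave any backup data for the node in place.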
Changing DNS name of node - made server mad
We discovered that a new client node was getting an incorrect DNS name resolution from the TSM server, due to someone reassigning the IP but not removing the DNS entry, and the TSM server doing reverse lookups. So we removed the DNS entry, thus making the TSM server mad (see errors below). It spewed out hundreds of the "Unable to resolve address" messages. This box does not have, nor need, a DNS entry.

4/6/2010 4:11:06 AM ANR8218W Unable to resolve address for team.ucc.vcu.edu.
4/6/2010 4:11:36 AM ANR8218W Unable to resolve address for team.ucc.vcu.edu.
4/6/2010 4:12:06 AM ANR8218W Unable to resolve address for team.ucc.vcu.edu.
4/6/2010 4:12:36 AM ANR8218W Unable to resolve address for team.ucc.vcu.edu.
4/6/2010 4:13:06 AM ANR8218W Unable to resolve address for team.ucc.vcu.edu.
4/6/2010 4:13:36 AM ANR8218W Unable to resolve address for team.ucc.vcu.edu.
4/6/2010 4:14:02 AM ANR2578W Schedule RO in domain RADONC for node RO-CVS has missed its scheduled start up window.

My question is, do I need to do anything about this or will the TSM server just "get over it"?

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807

Don't be a phishing victim - VCU and other reputable organizations will never use email to request that you reply with your password, social security number or confidential personal information. For more details visit http://infosecurity.vcu.edu/phishing.html
SV: Changing DNS name of node - made server mad
Hi,

Take a look in dsm.sys / dsm.opt and see if you are using TCPClientAddress.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson
Supported Platform for CPU2TSM: http://www.cristie.se/cpu2tsm-supported-platforms

From: Zoltan Forray/AC/VCU [zfor...@vcu.edu]
Sent: 6 April 2010 22:02
Subject: Changing DNS name of node - made server mad

> We discovered that a new client node was getting an incorrect DNS name
> resolution, from the TSM server, due to someone reassigning the IP but not
> removing the DNS entry and the TSM server doing reverse-lookups.
> My question is, do I need to do anything about this or will the TSM server
> just "get over it"?
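If the option isn't there and the node uses prompted scheduling, one way to take DNS out of the picture is to state the client's address explicitly in its stanza. The server and address names below are placeholders, and whether this actually silences the ANR8218W messages in this particular setup is an assumption:

   servername  TSMPROD
      commmethod         tcpip
      tcpserveraddress   tsmprod.example.edu
      schedmode          prompted
      tcpclientaddress   10.1.2.3     * address the server should use to contact this node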
Re: Archive completed but not big enough.
Skylar,

Thanks, that command is good to know; I may need to use it sometime. I appreciate it.

Thanks

Skylar Thompson wrote:
> It probably doesn't help you in this case, but DELETE FILESPACE has a
> TYPE=ARCHIVE argument you can give it, so it will only remove archive data.
Re: Archive completed but not big enough.
Hi John,

Thanks for your reply. Yes, checking through the server log we found this error:

ANR0530W Transaction failed for session 175494 for node ADOCGWC_10-16-09 (NetWare) - internal server error detected.
ANR0530W Transaction failed for session 175490 for node ADOCGWC_10-16-09 (NetWare) - internal server error detected.

The stats also showed a total of 255 objects failed.

Also, the client sent me a snapshot of the console where the command was entered, and before it ended it showed the following:

ANS1301E Server detected system error
ANS1999E Archive processing of 'Migrate:/GWC1016/*' stopped
ANS1301E Server detected system error

Tim

John D. Schneider wrote:
> Was there an error to indicate why the archive failed? Or did it seem to
> complete successfully? If so, is it possible that you have compression
> turned on at the client, through the client setting, or the client options?
Backupset/Export pain
Hi All

I have a customer who thinks he wants a monthly backup kept forever: he cites legislative requirements and will not be dissuaded.

The current implementation uses a series of backupsets. The issue with this is that backupset generation can only start when a drive is available - it will not wait for a drive. Further, if the set gets bigger than one tape and the server is busy, then tape 1 is dismounted, another waiting process grabs the drive, and tape 2 cannot be mounted, causing the backupset generation process to fail. The solution so far has been to break up the backupset generation of multiple nodes into tape-sized bites, but as data volumes get larger, one of these jobs seems to fail every month.

Also, I now need to run an export of a large database (2 TB), and export seems to suffer from the same failing. Is this new behaviour with 5.5? I don't remember coming across it before.

I have tried to address this by setting up two device classes, identical except that one has a mountlimit of (number of drives) - 1 and the other has a mountlimit of 1. I hoped that directing the export/backupset to the second device class would reserve one drive for it. Testing this afternoon has shown that this is ineffective, though I am at a loss to explain why.

Do others have this problem? What workarounds are there?

Long term, I hope to make the backupsets run server-to-server, which will improve reliability and tape utilization, but for the present I am stuck.

Thanks

Steve.

Steven Harris
TSM Admin
Paraparaumu, New Zealand
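To illustrate the two-device-class attempt described above, a minimal sketch (library, device class, node, and backupset names are placeholders, and the mount limits are examples; note that, per the testing above, MOUNTLIMIT only caps mounts for its own device class rather than reserving a drive away from the others):

   define devclass lto_main devtype=lto library=lib1 mountlimit=5
   define devclass lto_bset devtype=lto library=lib1 mountlimit=1

   generate backupset somenode monthly_201004 devclass=lto_bset retention=nolimit wait=no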