Re: ANS 1943E

2001-03-12 Thread Prather, Wanda

FYI, if you use the 4.1 client with a 3.1 server, you will not get a backup
of your registry.

I suggest using the 3.7.2 client instead.


-Original Message-
From: Fab System [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 12, 2001 10:37 AM
To: [EMAIL PROTECTED]
Subject: ANS 1943E


Hi

We are using the TSM 4.1 W2K client with an ADSM 3.1 server, and we get the
following errors in the dsmsched.log file, which show the backup as failed.
Any idea how I can disable this function so the backup shows completed?

Executing scheduled command now.
09.03.2001 23:03:39 --- SCHEDULEREC OBJECT BEGIN INFRA.SCHED1 09.03.2001
23:00:00
09.03.2001 23:03:39 Incremental backup of volume '\\gbhinfraser0040\c$'
09.03.2001 23:03:39 Incremental backup of volume 'SYSTEM OBJECT'
09.03.2001 23:03:39 ANS1943E The operation is not supported: Downlevel
server version.



Sean Dudding
EMAIL: [EMAIL PROTECTED]
Tel: 0207 268 6774
Fax: 0207 268 1836





Re: : IBM 3494 Automated Tape Library Information needed

2001-03-12 Thread Prather, Wanda

We have 2 3494's in house, attached to OS/390 for use with DFHSM (soon to be
used with a TSM server as well).
The oldest 3494 is 2 years old.  We also have STK 9710 robots in house
for use with TSM on AIX, so I've worked with both.

The 3494 isn't quite as fast as the STK robots, but it holds more
cartridges.  We have had one minor alignment problem with the 3494 that
caused some occasional strange behavior, but the problem was fixed within 2
days when we reported it to the CE.

We don't put any serious load on the 3494's; they are used all day long,
but we've never stressed them.  They were installed here primarily to reduce
the amount of operator coverage required.  DFHSM backups (which use the
tapes/drives very much like TSM does) now run unattended overnight.  We
don't have any of the high-availability features except the second hard disk;
no dual gripper or accessor.  Have never needed them.

We have no complaints at all about the 3494 hardware, and I don't think you
would ever be sorry you got one; it's rock solid.  If you have any
difficulties, it would probably be with the interface to VM, which I have no
idea how it works; the interface to OS/390 is a little difficult to get used
to, because the documentation is somewhat lacking.

The best documentation is the IBM redbook, SG24-4632.  I think it actually
has some information about performance & mount times you can expect if you
have a dual gripper vs. a single gripper, stuff like that.




-Original Message-
From: Reinhold Wagner [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 12, 2001 1:11 PM
To: [EMAIL PROTECTED]
Subject: : IBM 3494 Automated Tape Library Information needed


David,

our 3494 is about 2 years old and the robot is moving the whole day - no
problems. It's a
fantastic machine.

We did not purchase the box on the second-hand market - but during my
mainframe days most of
our IBM machines were purchased used. Maybe IBM fails at writing good
software
(TSM 4.1.2 ;-) )
but they know how to make good hardware!

Reinhold Wagner, Zeuna Staerker GmbH & Co. KG



Re: Why do backups have two sessions?

2001-03-12 Thread Prather, Wanda

The answer is, because they do.

I thought this started with the 3.7 client, not the 3.1.0.7 client, but I
can't swear to that.

The client now starts 2 sessions, the data flows on one, the control
information flows on the other.

The client may actually start MORE than 2 sessions, if it decides the system
has enough resources.
There is a parm that can be used to control it somewhat.  Look up the CLIENT
option
RESOURCEUTILIZATION.
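For reference, the option goes in the client options file. This is only an illustrative sketch; the value 2 is an example, not a recommendation (valid values on 3.7/4.1-level clients run from 1 to 10, as I recall, so verify against your client's option reference):

```
* dsm.opt (Windows) or the node's stanza in dsm.sys (Unix):
RESOURCEUTILIZATION 2
```

It can also be set per-run on the command line, e.g. `dsmc incremental -resourceutilization=2`.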



-Original Message-
From: Brazner, Bob [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 12, 2001 11:48 AM
To: [EMAIL PROTECTED]
Subject: Why do backups have two sessions?


Many of my nodes appear to always initiate two sessions whenever I do a
backup.  It appears all the client data is being transferred to the TSM
server via one of the sessions (as I would expect).  The other session just
sits in "IdleW" status and racks up a few hundred or thousand bytes
sent/received.  Occasionally, I'll see a situation where, let's say, 2K
bytes have been received, but 10 times that have been sent.  Can anyone
tell me what this second session is doing?  Does it have something to do
with "client owner"?  Note, we do not run the web backup-archive client.
System is recently TSM 4.1 (on AIX 4.3.3), but I noticed this happening on
ADSM 3.1.2.0. Clients I've seen this on range from 3.1.0.7, to 3.7, to
4.1.2.  Please cc me directly on any reply to the listserv.  Thanks.

Bob Brazner
Johnson Controls, Inc.
(414) 524-2570
[EMAIL PROTECTED]



Re: TAPE DEFINITION LOST

2001-03-14 Thread Prather, Wanda

Q libv only shows tapes that are in your library - once DRM spits the tape
out, it won't show up there.

Q vol only shows tapes that are in storage pools.

Your tape is some other type, probably a data base backup tape (since DRM
spits those out along with the copy pool tapes, by default.)

Non-storage pool tapes are defined in the volume history file.

Try this:  q volhist type=dbb

It will show up there if it's a data base backup tape.
But don't try to check it in or get it back; a DBB tape should go offsite
along with your copy pool tapes.  They are worthless without a data base
backup!
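Since the original poster asked about SELECT commands: the volume history can also be queried via SQL. The column names here are from memory for 3.7/4.1-level servers, so run a select * from volhistory first to verify them:

```
select date_time, volume_name, type from volhistory
   where type='BACKUPFULL' or type='BACKUPINCR'
```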


-Original Message-
From: Steve Hicks [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 14, 2001 2:23 PM
To: [EMAIL PROTECTED]
Subject: TAPE DEFINITION LOST


TSM spit out a tape this morning with its DRM offsite tapes. I cannot
find where it is defined to our server by doing a q libv or a q vol. When I
try to check it in or redefine it to the server, it says that it is already
defined or that it contains export data. What does this mean? How do I get
this tape back? Are there any SQL SELECT commands I can run to help me out?



Re: client notification

2001-03-15 Thread Prather, Wanda

Part of the problem is, who is the "user".

Under Windows NT, scheduled backups generally run under the System account.
Any USER account may or may not be logged on at the time backups are run.  On
UNIX, the backups generally run as root.

What we do is query the eventlog on the server end, and generate mail
messages to be sent to users whose backups fail or miss.  That works if you
have a way to map the TSM node names into mail ids.  But, it doesn't take
into account any individual files that fail to back up, we really don't have
a solution for that yet.
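For anyone wanting to try the same approach, here is a rough sketch of the server-end idea; it is NOT our production script, and everything in it (the report layout, the node.map file, the names and addresses) is an assumption to adapt for your site:

```shell
#!/bin/sh
# Rough sketch: scan a saved schedule-event report for Failed/Missed
# schedules and look up who to mail.  All names here are illustrative.
#
# In real use the report would come from the admin client, e.g.:
#   dsmadmc -id=admin -pa=xxx "q event * * begindate=today-1" > events.txt
# Sample data stands in for that output (schedule, time, node, status):
cat > events.txt <<'EOF'
DAILY 02:00 NODE1 Completed
DAILY 02:00 NODE2 Failed
DAILY 02:00 NODE3 Missed
EOF

# Site-maintained map of TSM node names to mail ids:
cat > node.map <<'EOF'
NODE2 admin2@example.com
NODE3 admin3@example.com
EOF

# Pick out nodes whose schedule Failed or was Missed, look up the mail
# id, and record who to notify; in production you would pipe a message
# into `mail -s ...` instead of writing a file.
awk '$4 == "Failed" || $4 == "Missed" { print $3 }' events.txt |
while read node
do
    addr=`awk -v n="$node" '$1 == n { print $2 }' node.map`
    [ -n "$addr" ] && echo "$node $addr"
done > notify.txt

cat notify.txt
```

The hard part, as noted above, is maintaining the node-to-mail-id map; the script does nothing for nodes it cannot map.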



-Original Message-
From: Rajesh Oak [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 15, 2001 12:37 AM
To: [EMAIL PROTECTED]
Subject: client notification


Everyone,
Can we notify a client that the backup was successful/not successful after
the backup schedule finishes? Say a screen pops up with the message
"Backup's done" or anything else which sends a signal to the user about
the status of his backup.

Regards.
Rajesh Oak





Re: Windows NT/2000 Daylight Savings Time Problem

2001-03-16 Thread Prather, Wanda

No such luck!

DST starts the FIRST Sunday in April.
(perhaps appropriately, this year, it's April 1)

-Original Message-
From: David Longo [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 16, 2001 10:17 AM
To: [EMAIL PROTECTED]
Subject: Re: Windows NT/2000 Daylight Savings Time Problem


I believe we go to DST the last Sunday in April.  I hope we are clear on
this change and don't get a notice 3 days beforehand telling us there is a
problem.  There is NO way I can update 100 clients on that notice.


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]


>>> [EMAIL PROTECTED] 03/16/01 04:38AM >>>
Hello

Do I have to install the fix IP22088_17 on the NT/2000 servers before next
weekend, when we switch to summer time?

Sincerely,
Bo Nielsen

FDB data               Phone: +45 4386 4671
Roskildevej 65         Fax: +45 4386 4990
DK-2620 Albertslund    E-mail: [EMAIL PROTECTED]
Denmark


> --
> From: Andy Raibeck [SMTP:[EMAIL PROTECTED]]
> Reply to: ADSM: Dist Stor Manager
> Sent: 28 October 2000 08:44
> To: [EMAIL PROTECTED]
> Subject: Re: ATTN ALL TSM USERS: Windows NT/2000 Daylight Savings
> Time Problem
>
> Hello,
>
> The fixtests for this problem are now available for download from the FTP
> site.
>
> Please note that the original note I sent out earlier erroneously stated
> that the version 4.1 fixtest was 4.1.1.17 and the file names to download
> begin with IP22088_17. In fact, the fixtest version is 4.1.1.16 and the
> file names begin with IP22088_16. The 3.7 fixtest was reported correctly
> as
> being version 3.7.2.17 and the file names beginning with IP21933_17. I
> have
> included the corrected version of the original note below for your
> reference.
>
> The files are located on our anonymous ftp site, ftp.software.ibm.com:
>
> Version 4.1.1.16:
>
> Directory
> /storage/tivoli-storage-management/patches/client/v4r1/Windows/v411/single
>
> Please review the IP22088_16_readme.ftp file for information on
> downloading
> and installing the fixtest.
>
>
>
> Version 3.7.2.17:
>
> Directory
> /storage/tivoli-storage-management/patches/client/v3r7/Windows/v372/i386/s
> ingle
>
> Please review the IP21933_17_readme.ftp file for information on
> downloading
> and installing the fixtest.
>
>
>
> Andy
>
> Andy Raibeck
> IBM/Tivoli
> Tivoli Storage Manager Client Development
> e-mail: [EMAIL PROTECTED]
> "The only dumb question is the one that goes unasked."
>
> IMPORTANT - PLEASE READ THE FOLLOWING:
>
> A problem with the switch between Daylight Savings Time (DST) and Standard
> Time (STD) has just been discovered for the Windows TSM clients.
>
>
>
> BACKGROUND
>
> When Windows NT and 2000 systems automatically switch between DST and STD,
> the time attributes for files stored on NTFS file systems will be shifted
> by one hour. This is because NTFS displays time information as an offset
> from Greenwich Mean Time (GMT). Thus when the DST change is made, the
> offset from GMT is changed, causing the timestamps on your NTFS files to
> also change. (Note: Time information for Event Viewer events is affected
> in
> the same manner, but that is not pertinent to this discussion.) Further
> information on this subject is available in the Microsoft Knowledge Base,
> item Q129574. If you point your web browser to Microsoft's MSDN site,
> http://msdn.microsoft.com, and search on "Q129574" (without the quotes),
> you will find the information.
>
>
>
> THE PROBLEM
>
> When the system automatically adjusts between DST and STD, the TSM 3.7.2
> (and higher) clients will see that the modification time has changed for
> all files on NTFS systems, and will proceed to back everything up
> accordingly, even if the file has not really changed. This will occur only
> once after the time change, and thereafter incremental backups will
> proceed
> as normal. However, this will almost certainly affect the amount of data
> backed up by each client, effectively causing a full backup on all NTFS
> file systems. This could have a large impact on network and TSM server
> resources.
>
> The following bullets summarize the conditions under which this problem
> can
> occur:
>
> - TSM client is running on Windows NT 4.0 or Windows 2000. TSM clients
> running on Windows 9x-based operating systems (Windows 95, 98) are not
> affected.
>
> - The TSM client level is 3.7.2.x or higher (including all 4.1.x levels).
> TSM client levels below 3.7.2.x are not affected, as the problem was
> introduced in the 3.7.2.x code.
>
> - The file systems are formatted for NTFS. FAT and FAT32 file systems are
> unaffected by this problem.
>
> - The operating system's time zone settings are configured to
> automatically
> adjust for DST. You can check this by right-clicking on the Windows task
> bar and selecting the "Adjust Date/Time" item in 

Disaster Recovery Procedures for Win2K and WInNT.

2001-03-16 Thread Prather, Wanda

Several people have mailed me directly to ask for copies of my disaster
recovery procedures, so I have posted copies of them to the TSM Scripts
Depot at www.coderelief.com.

*   Click the link to Scripts Discussion Forum, then
*   click the link to Tivoli Storage Manager scripts,
*   then look in the Disaster Recovery topic.

PLEASE REMEMBER to read the instructions carefully, and take note of whether
your system levels match those that were used for my procedures.  Different
client and server levels DO matter.

BTW:
If you have any recovery procedures or scripts of your own that you can
share, it's very easy to post them on www.coderelief.com.  Read the
FORMATTING topic (link on the left of the page), it gives instructions for
uploading attachments.  It's really slick!



Re: TSM 3.7 to 4.1 Upgrade

2001-03-19 Thread Prather, Wanda

It's true.  We just did a 3.7.2 to 3.7.4 upgrade on AIX 4.3.3.  Something has
changed in Tivoli's install script, and the device type changes from "tape"
to "ADSMtape", so that you have to delete the drives and add them back; but
rmdev -dl doesn't work, you have to use odmdelete.

I've also done 2.x to 3.x and 3.1.x to 3.7.x, and it's the first time I've
run into this.  But nevertheless, it's true.
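For anyone hitting the same thing, the ODM commands involved look roughly like this. This is from memory and heavily hedged; check the odmget output (and your device name, mt0 here is an example) before deleting anything:

```
# Inspect the device's ODM entries first (drive known to AIX as mt0):
odmget -q "name=mt0" CuDv
odmget -q "name=mt0" CuAt

# If rmdev -dl mt0 fails, remove the entries directly:
odmdelete -o CuDv -q "name=mt0"
odmdelete -o CuAt -q "name=mt0"

# Then run cfgmgr / smit tivoli and redefine the drives to TSM.
```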

-Original Message-
From: Mark S. [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 18, 2001 10:51 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM 3.7 to 4.1 Upgrade


Shekhar Dhotre wrote:
>I have experienced this before: the upgrade deletes all drive info, and if
>you try to reconfigure it using smit tivoli, you can't. You have to
>1) delete drives from ODM using odmdelete (#rmdev -dl rmtx does not work)

Not true. I've upgraded from 2.x to 3.x, 3.1.x to 3.7.x, and 3.x to
4.1.x, and I've never seen an instance where 'rmdev -dl rmtX' doesn't
work.

Trim your reply tree, Shekhar. It's much too long.

--
Mark Stapleton ([EMAIL PROTECTED])



Re: client notification

2001-03-19 Thread Prather, Wanda

John,
I posted our email notification script in the Scripts depot at
www.coderelief.com.

It's under the Client Monitoring and Administration category, topic Monitor
Client & Admin Schedules

-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 15, 2001 2:44 PM
To: [EMAIL PROTECTED]
Subject: Re: client notification


We have insisted that the client CONTACT information contain an e-mail
address for this very reason.

Wanda,
  Would you care to share your e-mail generation routine?

TIA,
John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com





Re: TSM 3.7 assigns new filespace names?

2001-03-19 Thread Prather, Wanda

And in times past when I got utterly frustrated and never could get the
filespec working, I have resorted to renaming the filespace on the server
end - to something like DOG, that isn't hard to identify or type!


-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 19, 2001 3:44 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM 3.7 assigns new filespace names?


>The trouble is, I can't find any way to access the old files with the same
>path name that were stored under "/" filespace. I have tried (as per the
>ADSM Concepts manual) using many combinations of curly brackets (e.g.
>"{/xx/xx}/yyy") and/or wildcards with no success. I cannot get anything to
>show up except archives performed since the upgrade.

Rik - When using the braces from the Unix command line, be sure to quote
  the filespec so that the Unix shell does not absorb those special shell
characters and process them itself, leaving the TSM client command line
program without them; or enter the brace-encoded filespec under the
interactive mode of the command line client.  If that doesn't work, then it
would seem like a client defect.

  Richard Sims, BU
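A concrete illustration of the quoting Richard describes (paths made up):

```
# Single quotes keep the shell from touching the braces:
dsmc query archive '{/home/rik}/data/*'

# Or use the interactive client, where the shell never sees the spec:
dsmc
tsm> query archive {/home/rik}/data/*
```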



Re: DB restore

2001-03-19 Thread Prather, Wanda

Well, I'll take a stab at this.

I know that when you do a DB restore, the same disk layout (i.e., number of
DBVOLS) is NOT required; been there, done that.
I believe the requirement is that the RESTORE TO database (and recovery
log) must have AT LEAST as much space available as the original, although I
cannot remember where that is documented.
cannot remember where that is documented.

If this is a one-time move, what I would do is use the REDUCE command to
pare the source DB down to 5 GB, then back it up, then restore it to the new
location.  If this is a DR drill, then I think you have a problem, but
congratulations on asking the right questions before it's too late!
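The one-time-move sequence I have in mind, sketched with made-up numbers (check q db f=d for your real allocation first, and treat the device class name as a placeholder):

```
reduce db 15360          /* give back 15 GB, leaving the 5 GB in use */
backup db devclass=yourclass type=full
/* then on the new server, from the halted-server command line: */
/* dsmserv restore db todate=today                              */
```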




-Original Message-
From: Joe Faracchio [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 19, 2001 5:26 PM
To: [EMAIL PROTECTED]
Subject: Re: DB restore


Good question.  I'd like to know from someone that has experienced a
restore.

Somewhere it implies that you have to have the same disk layout, because
it's a record backup and not logical.

Is this true?  Is it documented?  I haven't found it, yet!

... joe.f.

Joseph A Faracchio,  Systems Programmer, UC Berkeley


On Mon, 19 Mar 2001 [EMAIL PROTECTED] wrote:

> If I have a 20GB database that is spread across 10 dbvols but is only 10%
> utilized, can I restore that database onto a server with 5 dbvols totaling
> 10GB, since the 5 dbvols will hold more than the 10% utilized?
>



Re: What's wrong here

2001-03-21 Thread Prather, Wanda

The tape should be labelled, but not checked in.
Also check your DEVCLASS (q devclass class8mm f=d), make sure the MOUNTLIMIT
is set to 1 or DRIVES.
Then take the tape OUT of the drive so TSM will see the drive is free.
Then issue your BACKUP DB command.
TSM will put a message in the ACTLOG (or the console, if you have one open)
telling you when to mount a scratch tape.
THEN put the tape in the drive.


-Original Message-
From: Arshad Sheikh [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 21, 2001 2:31 PM
To: [EMAIL PROTECTED]
Subject: Re: What's wrong here


Thanks for the reply.

But since it's just an 8mm 5GB tape drive and not a library, I think I can't
have any scratch volumes. Here is what I got when I tried to label a tape:

tsm: TSM>label libvolume lib8mm tape01 checkin=scratch overwrite=yes
ANR8494E LABEL LIBVOLUME: An option specified is not valid for MANUAL
libraries.
ANS8001I Return code 3.



Okay, next I tried the same command without the checkin=scratch statement and
it worked.


tsm: TSM>label libvolume lib8mm tape01 overwrite=yes
ANS8003I Process number 8 started.

tsm: TSM>q proc

Process Process Description  Status
  Number
 

   8 LABEL LIBVOLUME  ANR8804I Labelling volume TAPE01 in library
   LIB8MM.


tsm: TSM>q actlog begint=now-0:01

Date/TimeMessage

-
03/21/01   11:29:16  ANR8372I 002: Remove 8MM volume TAPE01 from drive
DRV8MM
  (/dev/mt0) of library LIB8MM.
03/21/01   11:29:16  ANR8800I LABEL LIBVOLUME for volume TAPE01 in library
  LIB8MM completed successfully.
03/21/01   11:29:16  ANR0985I Process 8 for LABEL LIBVOLUME running in the
  BACKGROUND completed with completion state SUCCESS at
  11:29:16.


It's a single tape drive attached to the system. How can I define a scratch
volume for it? In the stgpool I put the maxscratch parameter as 10.

Arshad



>From: George Lesho <[EMAIL PROTECTED]>
>Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: Re: What's wrong here
>Date: Wed, 21 Mar 2001 13:06:11 -0600
>
>Arshad, it would seem that there are no scratch volumes available for use
>as a db backup tape. Try the following select at your dsmadmc prompt to
>identify how many scratch tapes are in your library:
>
>select count(*) as num_scratch_tapes from libvolumes where status='Scratch'
>
>George Lesho
>Storage/System Admin
>AFC Enterprises
>
>
>
>
>
>Arshad Sheikh <[EMAIL PROTECTED]> on 03/21/2001 11:32:14 AM
>
>Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>
>To:   [EMAIL PROTECTED]
>cc:(bcc: George Lesho/Partners/AFC)
>Fax to:
>Subject:  What's wrong here
>
>
>
>Hi there,
>
>I am a newbie so forgive my naivete. I want to set up TSM 3.7 on a box with
>an 8mm drive, so I created a library, drive, devclass, stgpool and labeled
>my volume. I also created the domain, policyset, mgmtclass and copypool for
>backing up files, and registered the node in it.
>Now when I tried to do a db backup
>
>backup db devclass=c8mm type=full
>
>I got the following from actlog
>
>03/21/01   09:10:51  ANR2017I Administrator ADMIN issued command: BACKUP DB
>   devclass=class8mm type=full
>03/21/01   09:10:51  ANR0984I Process 6 for DATABASE BACKUP started in the
>   BACKGROUND at 09:10:51.
>03/21/01   09:10:51  ANR2280I Full database backup started as process 6.
>03/21/01   09:10:51  ANR8447E No drives are currently available in library
>   LIB8MM.
>03/21/01   09:10:51  ANR1404W Scratch volume mount request denied - mount
>   failed.
>03/21/01   09:10:51  ANR4578E Database backup/restore terminated - required
>   volume was not mounted.
>03/21/01   09:10:51  ANR0985I Process 6 for DATABASE BACKUP running in the
>   BACKGROUND completed with completion state FAILURE
>at
>   09:10:51.
>
>
>What do I need to do to make it work?  I already labeled the volume; it's
>already inserted in the drive.  What is wrong here?  The checkin volume
>command only works with automated libraries, so I can't check in the volume.
>
>Any help is deeply appreciated.
>
>Arshad.




Re: Backing up directories with 600K files

2001-03-21 Thread Prather, Wanda

IC29444 is an APAR number; here is the text from IBMLINK, although it
doesn't say much.
(Best viewed in a fixed font:)

Item IC29444


  APAR Identifier .. IC29444   Last Changed..01/03/16
  PERFORMANCE PROBLEM DURING BACKUP WHEN ANTI VIRUS SOFTWARE IS
  ACTIVE

  Symptom .. IN INCOROUT  Status ... CLOSED  PER
  Severity ... 2  Date Closed . 01/03/16
  Component .. 5698TSMCL  Duplicate of 
  Reported Release . 41W  Fixed Release  999
  Component Name TIVOLI STR MGR   Special Notice
  Current Target Date ..01/05/26  Flags
  SCP ... UNIX
  Platform  UNIX

  Status Detail: Not Available

  PE PTF List:

  PTF List:
  Release 41W   : PTF not available yet


  Parent APAR:
  Child APAR list:


  ERROR DESCRIPTION:
  Customer reports a performance problem when backing up a
  filespace where AV software is active. A trace with
  INSTR_CLIENT_DETAIL for filesystem with AV inactive shows:
  Process Dirs0.485  242.5  2
  this value compares to the following reported when AV active:
  Process Dirs0.844  422.0  2
  (sample values from this customer only, you will see
  significantly higher Process Dirs value compared to AV off)


  LOCAL FIX:
  Development provided 4.1.2 client based fixtest modules
  available form ftp.de.ibm.com/fromibm/aix as
  01920.fixtest.412modules.zip
  .
  Please copy the original files before replacing them with
  the files from this package.


  PROBLEM SUMMARY:
  * USERS AFFECTED: Windows client machines running anti-virus
    programs.
  * PROBLEM DESCRIPTION: Backup performance may be noticeably
    slower if an anti-virus program is also running.
  * RECOMMENDATION: Apply the fixing code when available.


  PROBLEM CONCLUSION:
  The code was modified to avoid the performance degradation.


  TEMPORARY FIX:
  Windows client patch 4.1.2.12 also includes this fix.




-Original Message-
From: Short, Anne [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 21, 2001 1:41 PM
To: [EMAIL PROTECTED]
Subject: Re: Backing up directories with 600K files


I saw this statement also in the 4.1.2.12 readme, but have been unable to
find anything on what the problem was.  I would be interested in reading
about it.  Is IC29444 referring to an APAR or some other document?  I have
access to the Tivoli Knowledge Base, but can't find a hit doing a search on
that number.


Anne Short
Lockheed Martin Enterprise Information Systems
Gaithersburg, Maryland
301-240-6184
CODA/I Storage Management

-Original Message-
From: Tim Williams [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 21, 2001 11:40 AM
To: [EMAIL PROTECTED]
Subject: Re: Backing up directories with 600K files

Jerry,
I ran into something similar in the last couple of weeks. I was running an
NT4 server with 2.3 million files and it was taking 13 hours to back up. The
root of the problem was actually the TSM client code I had recently applied
on the machine (TSM 3.7.2.18).
In the 4.1.2.12 readme there is a comment:
* IC29444 PERFORMANCE PROBLEM DURING BACKUP WHEN ANTI VIRUS
*
* SOFTWARE IS ACTIVE.
We upgraded to 4.1.2.12 and it now backs up in 4 hours.
-Tim Williams
[EMAIL PROTECTED]
-Original Message-
From: [EMAIL PROTECTED] [ mailto:[EMAIL PROTECTED]
 ]
Sent: Wednesday, March 21, 2001 7:48 AM
To: [EMAIL PROTECTED]
Subject: Backing up directories with 600K files

Date:   March 21, 2001  Time: 8:23 AM
From:   Jerry Lawson
The Hartford Insurance Group
860 547-2960[EMAIL PROTECTED]


-
I know I have seen this question on the list a long time ago, but I can't
remember any specifics on potential solutions.  Perhaps someone who has lost
fewer brain cells than I have can be of help.
I have a customer who has a large number of small files on a server, and is
seeing long processing times.  He is running the SIEBEL Help desk
applications, and ultimately will have 1.5 million files on the server.  The
test he ran included about 600K files, totaling 5GB of space.  Obviously,
the individual files are not very big (he says 12K is typical).  They are
already compressed, so we have compression turned off for this client.
The backup was initiated from a command line, and took something like 8
hours to complete.  In the past he has been getting something like
2.5GB/hour throughput, so there is most likely not a network problem here.

Re: What's wrong here

2001-03-21 Thread Prather, Wanda

Well, the only other thing:
if you enter lsdev -Cc tape, does the device show up as AVAILABLE to the OS?

If it does, then I'm stumped.


-Original Message-
From: Arshad Sheikh [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 21, 2001 4:04 PM
To: [EMAIL PROTECTED]
Subject: Re: What's wrong here


Here is what I got.

tsm: TSM>q devclass class8mm f=d

Device Class Name: CLASS8MM
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: 8MM
Format: 8900
Est/Max Capacity (MB): 5,120.0
Mount Limit: DRIVES
Mount Wait (min): 60
Mount Retention (min): 10
Label Prefix: ADSM
Library: LIB8MM
Directory:
   Server Name:
  Retry Period:
Retry Interval:
Last Update by (administrator): ADMIN
Last Update Date/Time: 03/20/01   16:34:14

tsm: TSM>backup db devclass=class8mm type=full
ANR2280I Full database backup started as process 9.
ANS8003I Process number 9 started.

tsm: TSM>q proc
ANR0944E QUERY PROCESS: No active processes found.
ANS8001I Return code 11.

tsm: TSM>q actlog begint=now-0:01

Date/TimeMessage

--
03/21/01   12:58:46  ANR2017I Administrator ADMIN issued command: BACKUP DB
  devclass=class8mm type=full
03/21/01   12:58:46  ANR0984I Process 9 for DATABASE BACKUP started in the
  BACKGROUND at 12:58:46.
03/21/01   12:58:46  ANR2280I Full database backup started as process 9.
03/21/01   12:58:46  ANR8447E No drives are currently available in library
  LIB8MM.
03/21/01   12:58:46  ANR1404W Scratch volume mount request denied - mount
  failed.
03/21/01   12:58:46  ANR4578E Database backup/restore terminated - required
  volume was not mounted.
03/21/01   12:58:46  ANR0985I Process 9 for DATABASE BACKUP running in the
  BACKGROUND completed with completion state FAILURE at
  12:58:46.
03/21/01   12:58:48  ANR2017I Administrator ADMIN issued command: QUERY
PROCESS

03/21/01   12:58:48  ANR0944E QUERY PROCESS: No active processes found.
03/21/01   12:58:48  ANR2017I Administrator ADMIN issued command: ROLLBACK


It seems like it doesn't find the drive however if I ran


tsm: TSM>q drive

Library Name  Drive NameDevice Type  DeviceON LINE
    ---  
---
LIB8MMDRV8MM8MM  /dev/mt0  Yes

it shows that the drive is there. Am at loss again.

Arshad.




Re: What's wrong here

2001-03-21 Thread Prather, Wanda

OK ONE MORE THING,
Is this actually the same device, rmt0 and mt0?

The one time I tried to do this, I found that I couldn't use the 8mm drive
for both ADSM and AIX at the same time (this was a LONG time ago, so I don't
remember exactly).  I had to make the rmt0 device UNAVAILABLE to AIX
before ADSM was able to use the drive as mt0.  At least that's what I
remember.

I don't remember whether I went into SMIT and found a way to make it
UNAVAILABLE, or I just DELETED it.
(Don't worry, it will come back when you run cfgmgr).
This was a LONG time ago, so it may not be the problem at all.
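
For what it's worth, the AIX side of that would look something like this
(drive name rmt0 assumed; this is a sketch from memory, not a tested
procedure):

    rmdev -l rmt0      <- puts the rmt0 device in Defined (unavailable) state
    (use the drive through TSM as /dev/mt0)
    cfgmgr             <- rediscovers the hardware; rmt0 comes back Available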

Now that's the end, I'm giving up, not saying another thing.




-Original Message-
From: Salman Ghani [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 21, 2001 4:27 PM
To: [EMAIL PROTECTED]
Subject: Re: What's wrong here


Here is the output.

rmt0 Available 00-00-0S-6,0 5.0 GB 8mm Tape Drive
mt0  Available 00-00-0S-6,0 Tivoli Storage Manager Tape Drive

Yeah, I really don't have a clue too.

Salman


>From: "Prather, Wanda" <[EMAIL PROTECTED]>
>Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: Re: What's wrong here
>Date: Wed, 21 Mar 2001 16:15:59 -0500
>
>Well,  the only other thing,
>if you enter lsdev -Cctape, does the device show up as AVAILABLE to the OS?
>
>If it does, then I'm stumped.
>
>
>-Original Message-
>From: Arshad Sheikh [mailto:[EMAIL PROTECTED]]
>Sent: Wednesday, March 21, 2001 4:04 PM
>To: [EMAIL PROTECTED]
>Subject: Re: What's wrong here
>
>
>Here is what I got.
>
>tsm: TSM>q devclass class8mm f=d
>
>Device Class Name: CLASS8MM
>Device Access Strategy: Sequential
>Storage Pool Count: 1
>Device Type: 8MM
>Format: 8900
>Est/Max Capacity (MB): 5,120.0
>Mount Limit: DRIVES
>Mount Wait (min): 60
>Mount Retention (min): 10
>Label Prefix: ADSM
>Library: LIB8MM
>Directory:
>Server Name:
>   Retry Period:
> Retry Interval:
>Last Update by (administrator): ADMIN
>Last Update Date/Time: 03/20/01   16:34:14
>
>tsm: TSM>backup db devclass=class8mm type=full
>ANR2280I Full database backup started as process 9.
>ANS8003I Process number 9 started.
>
>tsm: TSM>q proc
>ANR0944E QUERY PROCESS: No active processes found.
>ANS8001I Return code 11.
>
>tsm: TSM>q actlog begint=now-0:01
>
>Date/TimeMessage
>
>--
>03/21/01   12:58:46  ANR2017I Administrator ADMIN issued command: BACKUP DB
>   devclass=class8mm type=full
>03/21/01   12:58:46  ANR0984I Process 9 for DATABASE BACKUP started in the
>   BACKGROUND at 12:58:46.
>03/21/01   12:58:46  ANR2280I Full database backup started as process 9.
>03/21/01   12:58:46  ANR8447E No drives are currently available in library
>   LIB8MM.
>03/21/01   12:58:46  ANR1404W Scratch volume mount request denied - mount
>   failed.
>03/21/01   12:58:46  ANR4578E Database backup/restore terminated - required
>   volume was not mounted.
>03/21/01   12:58:46  ANR0985I Process 9 for DATABASE BACKUP running in the
>   BACKGROUND completed with completion state FAILURE
>at
>   12:58:46.
>03/21/01   12:58:48  ANR2017I Administrator ADMIN issued command: QUERY
>PROCESS
>
>03/21/01   12:58:48  ANR0944E QUERY PROCESS: No active processes found.
>03/21/01   12:58:48  ANR2017I Administrator ADMIN issued command: ROLLBACK
>
>
>It seems like it doesn't find the drive however if I ran
>
>
>tsm: TSM>q drive
>
>Library Name  Drive Name  Device Type  Device    ON LINE
>------------  ----------  -----------  --------  -------
>LIB8MM        DRV8MM      8MM          /dev/mt0  Yes
>
>It shows that the drive is there. I am at a loss again.
>
>Arshad.
>
>
>
> >From: "Prather, Wanda" <[EMAIL PROTECTED]>
> >Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> >To: [EMAIL PROTECTED]
> >Subject: Re: What's wrong here
> >Date: Wed, 21 Mar 2001 14:54:19 -0500
> >
> >The tape should be labelled, but not checked in.
> >Also check your DEVCLASS (q devclass class8mm f=d), make sure the
> >MOUNTLIMIT
> >is set to 1 or DRIVES.
> >Then take the tape OUT of the drive so TSM will see the drive is free.
> >Then issue your BACKUP DB command.
> >TSM will put a message in the ACTLOG (or the console, if you have one
>open)
> >telling you when to mount a scratc

Re: Disaster Happened!

2001-03-22 Thread Prather, Wanda

Yeah, been there done that about 9 months ago, and on an SP node, I think at
AIX 4.3.2.
Nothing sneaky;
Just as you are doing, we reloaded the image from a mksysb tape, then
restored each filesystem from TSM to get current.

If you've got multiple tape drives, open another window and start the
restore of /home in parallel with /var -- and you can go /home faster!
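
With two command-line sessions, the parallel restores would be roughly like
this (filespecs assumed - check your own filespace names first with
dsmc query filespace):

    (window 1)  dsmc restore "/var/*" -subdir=yes
    (window 2)  dsmc restore "/home/*" -subdir=yes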

-Original Message-
From: David Longo [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 22, 2001 2:09 PM
To: [EMAIL PROTECTED]
Subject: Disaster Happened!


I'll try to make this brief.  Last night we were doing upgrades on some of
our systems, most notably an RS6000 SP node running AIX 4.3.3.  During the
OS maintenance level update we lost connectivity from the Control W/S with
CDROM to the client SP node - it went downhill from there!  We shortly
couldn't even get into the node, even from a TTY.

With great effort from IBM support we got in and got the node back in
config, but there was no way to get back where we were.  We had to reload a
new image on the node and then we loaded the TSM 4.1.0.0 client.  (The node
originally had the 3.1.0.6 client with the API interfacing to Oracle's EBU
for an Oracle 7.3.4 database - PeopleSoft app.)  DBs are on external IBM
ESS, so they should be o.k.

We have restored /usr, are now restoring /var, and will go to /home and
then /.  Looks good so far.  We have never restored a complete AIX system
from ADSM, and certainly not an SP node.  I just got back in from 3 hours'
sleep while another guy got us this far.

I plan to call Tivoli, but wanted to check with anybody that has done this
or similar with AIX.  And anybody that also has SP experience would be
helpful.

Are we going down the wrong path?  What are the gotchas?  What about SP
issues?

The default assumption is that after we restore these rootvg filesystems,
we can reboot and everybody will live happily ever after?!?!?!

We will not reboot until I get some good feeling that it will come up, as
we may not be able to get back in.  I know this may not make good sense, as
I'm not 100% today, but that's why we are checking with others.  I will
also try to get the "DR restore manual" online from Tivoli/IBM.

(We are also restoring a 30+ GB Novell volume - guess how long that will
take!!)

Thanks in advance for any helpful comments.
I will post our "Final Answer" when it's all back up!


David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]



"MMS " made the following
 annotations on 03/22/01 14:14:48

--
This message is for the named person's use only.  It may contain
confidential, proprietary, or legally privileged information.  No
confidentiality or privilege is waived or lost by any mistransmission.  If
you receive this message in error, please immediately delete it and all
copies of it from your system, destroy any hard copies of it, and notify the
sender.  You must not, directly or indirectly, use, disclose, distribute,
print, or copy any part of this message if you are not the intended
recipient.  Health First reserves the right to monitor all e-mail
communications through its networks.  Any views or opinions expressed in
this message are solely those of the individual sender, except (1) where the
message states such views or opinions are on behalf of a particular entity;
and (2) the sender is authorized by the entity to give such views or
opinions.

===



Re: backing up storage pools

2001-03-26 Thread Prather, Wanda

1) No, they are only backed up once (assuming, of course, that your BACKUP
STGPOOL command specified the same destination copy pool both times).
BACKUP STGPOOL is an incremental backup; TSM checks the DB to decide whether
any file in the primary pool does or does not already exist in the copy
pool, and only makes a copy if there isn't already one in the copy pool.

2) We back up the disk pool first, just because my primary pool is
collocated, so if I can backup the files while they are still on disk it
saves oodles of tape mounts to create the copy pool, which is not
collocated.  Whether you do disk or tape first doesn't really matter, if you
know for sure no migration will occur in between.

Whether or not you disable sessions depends on how precise you want to be
with your offsite copies.  If you want to guarantee that EVERYTHING is
represented in your copy pool, then yes: disable sessions, backup from disk
to tape, then tape to tape, then back up the DB.  In my case, with 500
machines backing up per day, I don't bother disabling sessions, 'cause I
don't have a large enough free window, and anything I miss on day one, will
surely make it to the offsite pool on day 2.
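
For the "precise" variant, the whole nightly offsite cycle boils down to
something like this (pool and devclass names are made up for illustration):

    disable sessions
    backup stgpool DISKPOOL COPYPOOL wait=yes
    backup stgpool TAPEPOOL COPYPOOL wait=yes
    backup db devclass=CLASS3590 type=full
    enable sessions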


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert







-Original Message-
From: Lee, Gary D. [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 26, 2001 10:35 AM
To: [EMAIL PROTECTED]
Subject: backing up storage pools


My hierarchy consists of two disk pools, both of which migrate to a single
3590 tape pool.  My questions are as follows:

1. If I backup the disk pools, then after some time they migrate and I
backup the tape pools, how are the migrated files handled? i.e. are they
backed up twice?

2.  Based on the answer to one, what is the order in which you folks are
doing storage pool backups, and should I disable sessions while this is
going on?

TIA for any insight.

Gary Lee
Senior Operating Systems Analyst
Ball State University
phone 765-285-1310



Re: Restore Question??

2001-03-26 Thread Prather, Wanda

You can get that result if you are not root.

I have also had similar problems if my AIX session was running low on
available colors - if you have Netscape open, try closing it to free more X
screen resources.


-Original Message-
From: Blaine Gilbreath [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 26, 2001 11:09 AM
To: [EMAIL PROTECTED]
Subject: Restore Question??


I am trying to do a restore on an AIX 4.3.3 server.  When I open a dsm
graphical session and try to view the files that are available for restore I
can only see the directory structure.

When I query my file space in ADSM I see that I have 275 GB worth of data on
the particular file space that I am looking at.  Am I just way off in left
field, or am I going crazy?

Regards,
Blaine



Re: Performance from an OS/390 Mainframe to an AIX box

2001-03-28 Thread Prather, Wanda

I don't know.
But here's a simple test.
Go to your AIX box, find a file that is at least 100 MB, preferably 500MB.

Send it to your OS/390 mainframe via a simple FTP, and time it.
Do it 3 or 4 times to make sure you are getting consistent timings.

FTP is about the simplest data transfer you can do via TCP/IP.
TSM won't ever go any faster than FTP.

That will take TSM issues out of the picture, and show whether you can push a
simple data transfer any faster to your OS/390 machine.
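
For example, from the AIX side (host name and file are placeholders; use
binary mode so EBCDIC translation doesn't skew the numbers):

    ftp os390host
    ftp> binary
    ftp> put /tmp/bigfile.500mb
    ftp> quit

The FTP client reports bytes sent and elapsed seconds; that ratio is the
ceiling TSM has to live under.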

(BTW, you didn't mention your release of OS/390.  I think at about OS/390
R4, there were significant changes that improved the performance of OS/390
TCP/IP.)



-Original Message-
From: Greg Roschel [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 28, 2001 11:47 AM
To: [EMAIL PROTECTED]
Subject: Performance from an OS/390 Mainframe to an AIX box


I'm new to this list, so excuse my ignorance in asking this question which
may have already been kicked around.  We are having performance problems
doing backups from several AIX boxes to an OS/390 mainframe.  We've looked
at the network and tons of other things.  To make a long question short,
we've determined that the problem is that an AIX box will only do 5 GByte
per second ESCON channel connected - - we believe this is the weak link in
the chain, so to speak, causing our "ADSM backups take too long" & "ADSM
backups not using all the available bandwidth" problems.  However, we are
getting conflicting and widely varying info on this 5 GByte/second number.
Is 5 GByte/second really the max an AIX box can do ESCON channel connected
to whatever? Can anyone point me to an ADSM tuning document or whatever
which addresses issues like this?



Re: Daylight Savings Fix for Windows NT Clients/fix 4.1.2.12

2001-03-29 Thread Prather, Wanda

The 4.1.2.12 code is a complete replacement install for the client.
And 4.1.2.12 includes all prior fixes.
So you just flop it on top of the previous install, since you are already at
a 4.x client.
(Remember to stop the scheduler service before you start the install
though.)
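
From a command prompt that is something like the following (the exact
service name depends on how the scheduler was installed - check yours with
NET START first):

    net stop "TSM Scheduler"
    (run the 4.1.2.12 client install)
    net start "TSM Scheduler"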


-Original Message-
From: MC Matt Cooper (2838) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 29, 2001 7:09 AM
To: [EMAIL PROTECTED]
Subject: Re: Daylight Savings Fix for Windows NT Clients/fix 4.1.2.12


Hello all,
I downloaded the doc on 4.1.2.12 because I just put TSM in and have
about 30 NT servers with 4.1.0 client code in them.  There is a warning
saying THIS FIXTEST SOFTWARE HAS NOT BEEN FULLY SYSTEM TESTED.  Is this an
old warning?  Is it O.K. to put this fix in?  I haven't tried putting
maintenance on clients yet.  Do all the fixes go on at once or is it set up
to pick and choose which fixes you want?
Matt

-Original Message-
From: Williams, Tim [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 28, 2001 3:10 PM
To: [EMAIL PROTECTED]
Subject: Re: Daylight Savings Fix for Windows NT Clients


Andy/all.
The original post, I believe, was referencing a "new" DST
problem, not a reminder of the old DST problem.
see ftp read1stc info on ic28969...that has (daylight savings
fixtest) noted in the
readme...maybe the readme is wrong because pulling up the
apar...yes, it
doesn't reference DST...

ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance
/client/v4r1/Windows/i386/LATEST/IP22151_12_read1stc.txt
* IC28969 TSM CLIENT 4.1.1.16 (DAYLIGHT SAVINGS FIXTEST) FOR
*
* NT/2000 SHOWS SIGNS OF A MEMORY LEAK DURING
*
* SCHEDULED INCREMENTAL BACKUPS.
Thanks for the quick response.
Tim




Andy Raibeck <[EMAIL PROTECTED]>
03/28/2001 01:36 PM
Please respond to "ADSM: Dist Stor Manager"
<[EMAIL PROTECTED]>@SMTP@Exchange
To: [EMAIL PROTECTED]@SMTP@Exchange
cc:

Subject:Re: Daylight Savings Fix for Windows NT Clients

Hi Tim,

Actually I was referring only to the DST problem (IC28544), per the
subject
and content of the original post to which I responded. IC28969 has
nothing
to do with the DST problem; it is just another APAR fix that was
included
in the 4.1.1.16 client. Although I was not involved in fixing that
problem,
it is my understanding that it has been in the code for quite some
time,
and in fact is not limited to just Windows. If you are fairly
current on
your maintenance and are not experiencing the problem, then I
wouldn't
worry about it; you're fine.

As far as DST goes, as long as you are one of the versions that
fixes it
(listed in my first post on this subject, text included below), you
do not
need to rush into installing new clients just for DST.

Best regards,

Andy

Andy Raibeck
IBM Tivoli Systems
Tivoli Storage Manager Client Development
e-mail: [EMAIL PROTECTED]
"The only dumb question is the one that goes unasked."


"Williams, Tim" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
03/28/2001
12:14:44 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: Daylight Savings Fix for Windows NT Clients



Andy/all, I read your update, I read apar ic28969 and
ic28544.
Are you saying that
ic28544: "AUTOMATICALLY ADJUST CLOCK FOR DAYLIGHT SAVINGS
CHANGES"
  CAUSES INCREMENTAL TO DO FULL BACKUP OF NTFS FILES
and
ic 28969  TSM CLIENT SHOWS SIGNS OF MEMORY LEAK DURING
SCHEDULED
  BACKUPS.
are referring to the *same* problem as was discovered last
October.
ic28969 (memory leak) has been seen or "This problem was
witnessed
on the 4.1.1.16 client for NT, but
  can affect all platforms.  It may also affect earlier
client
levels." -< quote from the apar..
This would contradict your update proposal (that...4.1.1.16 is
ok).
I opened up an etr/pmr asking to narrow the client platforms
and
levels. Shops with large TSM client
installations can't respond quickly to upgrading TSM client
code on a dime...
FYI Thanks Tim




Andy Raibeck <[EMAIL PROTECTED]>
03/28/2001 12:32 PM
Please respond to "ADSM: Dist Stor Manager"
<[EMAIL PROTECTED]>@SMTP@Exchange
To: [EMAIL PROTECTED]@SMTP@Exchange
cc:

Subject:Re: Daylight Savings Fix for Win

Re: Export Node Mount Waits

2001-04-02 Thread Prather, Wanda

You can set mount retention to 0.
I don't know of any reason not to, in a robotic library.
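
For example (device class name made up):

    update devclass CLASS3575 mountretention=0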

-Original Message-
From: Poehlman, James [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 02, 2001 12:52 PM
To: [EMAIL PROTECTED]
Subject: Export Node Mount Waits


TO  *SMers - All,
New to this so please bear with.

ENVIRONMENT:
TSM Server IBM RS6000 R50  2 Procs 1GB ram
AIX 4.3.3 MNT_lev 004
TSM Server Version 3.7.2
Library IBM3575 - L12 2 drives - NOT XL
No disk pools.
All pools sequential to 3575.

APPLICATION:
Export node every weekend - active files for client node.
(WHY? - Company Policy requires weekly full backups of this
client to go off site.)
This is in addition to daily incremental to the primary storage pool.
Exports:
  395,963 items
  160,980,891,020 bytes.
Copies active files from 71 sources tapes (2 storage pools) to 15 output
tapes.
Starts 0300 Saturday morning and runs until Sunday Afternoon.

PROBLEM:
Export node waits for retention timeout on source tape before
unmounting it and mounting the next source tape.  With a 5 minute
mount retention this wastes almost 3.5 hours. Even with 1 minute
retention wastes way over 1 hour.  I have other thing that need to
happen over the weekend that also need these drives.

QUESTION:
Is there a way around this?  Is there a fix for it?
Is there a better way to get a set of offsite tapes for
the active files for the node without doing a selective
backup to new storage pool?
Please don't mention backup sets.  Also, please don't tell
me I need more drives.  The upgrade request for them has been in the
works for more than 6 months, along with an upgrade to an L18.
These systems are orphans, so to speak.  Upper management does
not like to spend money on them. Another story.

Thanks to all,

James D. Poehlman (Denny)
AIX / Unix Systems Administrator
Senior Technical Specialist
Mid-Range Server Technology Team
Black and Decker
North American Power Tools - Information Systems
[EMAIL PROTECTED]
Voice 410-716-3039
Cell  410-375-5974



Re: Anyone know a debug way of deleting a disk volume...?

2001-04-04 Thread Prather, Wanda

What happens if you run an AUDIT vol?

-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 04, 2001 9:19 AM
To: [EMAIL PROTECTED]
Subject: Anyone know a debug way of deleting a disk volume...?


Ok, over the years I've had disk failures that have left disks defined in
storage pools that I can't get rid of!

Anyone know of a sure fire way to purge these volumes from TSM ? ? ?

The basic problem is that they are offline because they physically don't
exist anymore.  If I try to recreate them, they won't define back in because
TSM doesn't find something it is looking for; a MOVE DATA says there is no
data on the volume, yet a DEL VOL blah gives an RC 13.

Yes, I could open up a problem with Tivoli (we pay maint.) but if I could
just get a quick command...

Dwight



Re: 3590-B1A to 3590-E1A

2001-04-04 Thread Prather, Wanda

Ditto.  No problem with J's &K's together.
And the 3590E's read and write the J's just fine.

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 04, 2001 1:39 PM
To: [EMAIL PROTECTED]
Subject: Re: 3590-B1A to 3590-E1A


>Now the question, we have been told that the new drives will read
>and write to our current "J" tapes just fine. They will read the 128 tracks
>and write 256 tracks. As we cycle the older tapes in for reclaiming they all
>become
>256 track at some point. We generally have about 5-6 tapes a day go offsite
>and about that many returning each day as well. Is this correct??

I can't verify how many of your tapes go offsite each day, but can verify
that J tapes that are re-used get written as 256-track in 3590E drives and
thus double your capacity.

>I also understand the E1A drives will work the best if we go to the
>newer "K" tapes, however you can't have "J" and "K" tapes in the library at
>the same time for some reason.

Yes, you can, and many of us do.  Refer to the server README file for
particulars on upgrading your drives in a TSM environment.

  Richard Sims, BU



Re: Backup of Remote Sites

2001-04-04 Thread Prather, Wanda

We do it for some small machines.  It works.  The answer is, of course "it
depends on your situation."
These are small machines that aren't terribly critical - if we lost one, we
could take 24 hours to rebuild it and the users would still be happy (given
that their other choice is to have to recustomize).

If your clients are very large, or very time critical, it is less likely to
work for you.
Maybe this is a case where you want to back up just a few directories, not
the whole C: drive.
I would definitely turn on client compression.

This is also a good case for trying to use backupsets.
If they need a massive restore, you create a backupset on your end,
cut a CD, then fed-ex it overnight, and restore it there from the CD.

Just depends on what your requirements are for that system.
And giving it a try is harmless, the TSM backups won't interfere with their
current NT backups.
So you could also do both - use the ntbackup for local stuff, then use the
TSM backup as a failsafe/offsite backup in case the local is indeed
unreliable/not done.

Try one and see how it works out!
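
If you do try backupsets, the server-side command is along these lines (node
name, set prefix, and device class are made up; you'd want a FILE devclass
so the output lands on disk for burning to CD):

    generate backupset REMOTE1 WEEKLY * devclass=FILECLASS retention=30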


-Original Message-
From: David Nash [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 04, 2001 11:50 AM
To: [EMAIL PROTECTED]
Subject: Backup of Remote Sites


I have a question for all of the *SM/Network experts
out there.  We have a central office that we just
started using TSM at.  We also have several remote
offices that are connected to central office via
dedicated lines.  Theses sites currently are running
their own backups via NTBackup.  We are concerned that
these backups are unreliable/not offsite/not being done.
The dedicated lines are mostly 256Kbs lines but a few
are smaller.  Is it a good idea to try to back up these
sites across the WAN using *SM?  We realize that the first
backup would take a while, but after we suffer through that,
the amount of changed data would be small.  Is it a good
idea in this case to turn on client compression?  Any
suggestions would be appreciated.

Thanks,

--David Nash
  Systems Administrator
  The GSI Group



4.1.2 Installation MSI error?

2001-04-04 Thread Prather, Wanda

Yikes -
Installing 4.1.2.12  on a Win2K machine, when we click the INSTALL button in
the wizard, a box pops up with:

!Internal Error 2755,1632,\\pathname\Tivoli Storage Manager Client.msi


We have installed 4.1.2.12 on other WIn2K machines with no problem.

Not being a Windows Wizard myself, I have NO idea what is going on.
Can anybody tell me what is happening, or even where to look for the
problem?



Re: 3590-B1A to 3590-E1A

2001-04-04 Thread Prather, Wanda

Yes.
So if you upgrade your drives from B's to E's, you get double the data on
the tape (10 GB native at 128 tr, 20 GB native at 256 tr.)

If you upgrade your cartridges from J's to K's and write on them with the
3590E's at 256 tr, you get twice as much tape in the cartridges, so it
doubles again, to 40 GB native.

So with compression, you will get over 80 GB on the cartridge.

-Original Message-
From: Tyree, David [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 04, 2001 3:34 PM
To: [EMAIL PROTECTED]
Subject: Re: 3590-B1A to 3590-E1A


Will the 3590E write to the J's as 256 tracks?

-----Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 04, 2001 2:23 PM
To: [EMAIL PROTECTED]
Subject: Re: 3590-B1A to 3590-E1A


Ditto.  No problem with J's &K's together.
And the 3590E's read and write the J's just fine.

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 04, 2001 1:39 PM
To: [EMAIL PROTECTED]
Subject: Re: 3590-B1A to 3590-E1A


>Now the question, we have been told that the new drives will
>read and write to our current "J" tapes just fine. They will read the
>128 tracks and write 256 tracks. As we cycle the older tapes in for
>reclaiming they all become
>256 track at some point. We generally have about 5-6 tapes a day go
>offsite and about that many returning each day as well. Is this
>correct??

I can't verify how many of your tapes go offsite each day, but can verify
that J tapes that are re-used get written as 256-track in 3590E drives and
thus double your capacity.

>I also understand the E1A drives will work the best if we go to
>the newer "K" tapes, however you can't have "J" and "K" tapes in the
>library at the same time for some reason.

Yes, you can, and many of us do.  Refer to the server README file for
particulars on upgrading your drives in a TSM environment.

  Richard Sims, BU



Re: Long Term Archive for Databases

2001-04-05 Thread Prather, Wanda

Suppose you did restore an Oracle data base that was 7 years old.
How confident are you that your Oracle software could still read it?

-Original Message-
From: Jim Taylor [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 05, 2001 10:48 AM
To: [EMAIL PROTECTED]
Subject: Long Term Archive for Databases


I keep getting this pressure from clients to keep copies of their 500GB
oracle database for 7 years.  They don't seem to know why they want it kept
for seven years.  Like most others they don't think of what their restore
requirements are.

Has anyone had to restore/retrieve a large database that was, say more than
2 years old.  If so was it successful and was it as simple as restoring just
the DB.

ThanX

> Jim Taylor
> Senior Associate, Technical Services
> Enlogix
> *  E-mail: [EMAIL PROTECTED]
> *  Office: (416) 496-5264 ext. 286
> * Cell:  (416)458-6802
> *   Fax: (416) 496-5245
>
>



Re: Export Question

2001-04-09 Thread Prather, Wanda

Enter Q OCC

That shows the same information as auditocc, but broken down by storage
pools.

-Original Message-
From: Blaine Gilbreath [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 09, 2001 10:06 AM
To: [EMAIL PROTECTED]
Subject: Export Question


I am working on an Export process and I am running into some capacity
discrepancies.
Below is the process that I followed.

1.  Run AUDIT LICENSE
2.  QUERY AUDITOCC
3.  The node shows that ADSM has 845GB of data.
4.  I run EXPORT NODE FILEDATA=ALL PREVIEW=YES and that shows that only
440GB will be exported.

Is my database corrupt or do the numbers from the audit and the export
differ?

I tried calling ADSM support but, as of 4/1/01, all support, even 1st level,
has been dropped.

Regards,
Blaine



Re: DATABASE backup question FULL or INCREMENTAL?

2001-04-10 Thread Prather, Wanda

I agree -
I normally do a full nightly (DB is only 16 GB).
But I have the DB backuptrigger set to allow incrementals.

If TSM decides to fire a DB backup because the log is getting full, I want
that to complete ASAP!
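
Setting that up is one command (the values here are just examples):

    define dbbackuptrigger devclass=dboffsite logfullpct=75 numincremental=6

With NUMINCREMENTAL above 0, a triggered backup runs as an incremental until
that count of incrementals is reached, then TSM takes a full.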

-Original Message-
From: Garrison, Tony [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 09, 2001 6:22 PM
To: [EMAIL PROTECTED]
Subject: Re: DATABASE backup question FULL or INCREMENTAL?


It is kind of hard doing a full of a 43GB TSM database when your recovery
log is sitting at 80+%.  We perform a full daily and several other
incrementals on each of our 6 TSM servers.  This works best for our
environment.  This is something that you will have to decide based upon your
requirements and daily activity.  Good luck.

T

 -Original Message-
From:   Miles Purdy [mailto:[EMAIL PROTECTED]]
Sent:   Monday, April 09, 2001 11:49 AM
To: [EMAIL PROTECTED]
Subject:Re: DATABASE backup question FULL or INCREMENTAL?

IMHO:
I never do incrementals. Not of my 2 GB ADSM database nor my 30 GB Sybase
database nor my 10 MB database that hardly changes. When you are under the
gun to get a database restored, incrementals can be hard to restore if you
are the administrator, and even harder if someone else is trying to do the
restore. How hard is it just to type when your boss is standing behind you?
Remember what KISS stands for?

If you use incrementals, then any error on any tape along the way will sink
you. You will still only be able to restore to your last full backup and
good incremental. You can never go wrong always doing a full backup. Yes, it
uses more resources. But don't forget whose ass is on the line when things
go south.

Every day I do full backup of my ADSM DB and 50-60 Sybase databases. All the
Sybase databases go to disk, then offsite, then to tape on site. With this
scheme there is little chance of not being able to do a restore.

(Of course I do an incremental ADSM backup daily, but this still backs up
whole files; I never back up just the changes to an individual file.)

miles



---
Miles Purdy
System Manager
Farm Income Programs Directorate
Winnipeg, MB, CA
[EMAIL PROTECTED]
ph: (204) 984-1602 fax: (204) 983-7557

---

>>> [EMAIL PROTECTED] 09-Apr-01 9:22:31 AM >>>
I am doing a nightly database full backup with ADSM/TSM.  (TSM 3.7.4 on
OS390/2.10)
What is most commonly done a Full or Incremental?
What are the advantages/disadvantages of a full over an incremental backup
of the servers database?
The command we enter is as follows:
  backup db type=full devclass=dboffsite
 where dboffsite is a tape device class.



Re: Completing reclaims

2001-04-10 Thread Prather, Wanda

For each of your DESTROYED volumes, do RESTORE VOLUME PREVIEW=YES.
It will show you which of the offsite volumes is needed to rebuild the
onsite tape.

Bring back the volumes shown in PREVIEW.
Do RESTORE VOLUME on the onsite damaged cartridges, which will copy data
from the copy pool tapes back to a primary storage pool.

Once that's all done, your offsite volumes will go back to reclaiming
normally.
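
In other words, for each destroyed volume (volume name made up):

    restore volume VOL001 preview=yes
    (check the copy pool tapes the preview lists back into the library)
    restore volume VOL001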

-Original Message-
From: Lawrence Clark [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 10, 2001 10:40 AM
To: [EMAIL PROTECTED]
Subject: Completing reclaims


Hi:
Our problem with the drives on our newly fibre-attached 3590s was
apparently resolved by updating the microcode. However, in the interim we
had 10 cartridges that became unavailable and would not return to available
status after the upgrade. They were then marked destroyed.

However, we still have some 40 offsite copypool volumes that will not
reclaim and remain 99 % reclaimable. I assume the remaining files were on
the volumes marked destroyed.

Any suggestions as to how to complete reclamation on these copypool volumes?

Larry Clark
NYS Thruway Authority



Re: TSM/3494/9310

2001-04-10 Thread Prather, Wanda

Well, I can tell you a few things.
We run TSM on an AIX 4.3.3 box, using 9840 drives in a 9710 library driven
by ACSLS.
We are not sharing the robot with any other application, but I think the
setup will be the same.

The only thing that passes through ACSLS is robotic commands, not data.
The backup data still has to flow over a SCSI cable.  So the first thing you
will need to do is run SCSI cables from your AIX host to the 2 drives.

I don't know if a 9840 can be twin-tailed to more than one host; I doubt it,
but you could ask STK.  Certainly while the drive is online to TSM, it can
be used only by TSM, it can't be shared with any application running on the
Solaris box.

The next thing you will need to do is download the TSM ACSLS support module
for your level of TSM and install it with SMIT.
You will need statements in your /etc/inittab to bring up the two daemons
that provide the interface to ACSLS.  They must be started BEFORE TSM comes
up.

Use SMIT to create TSM device-special files for the two drives.
Then find out from your STK person (or whoever is working with ACSLS now)
what the ACSLS device address is for each drive - you will need the ACSLS
address, as well as the AIX address, for the DEFINE DRIVE command.

Then you will need to define the Powderhorn as a new Library to TSM, and a
new DEVCLASS for the drives, and a new sequential storage pool.  Then define
the drives.  There are special parms on DEFINE LIBRARY and DEFINE DRIVE that
you use when the library is ACSLS driven.

You will need to figure out how to segregate the TSM tapes in the Powderhorn
from the Solaris application that is using it.  When TSM uses a tape, it
puts an ACSLS lock on the tape so no other ACSLS application can access it.
But you will need to think about how you will set up tape pools in the
Powderhorn and do your CHECKINs so as NOT to stomp on the tapes used by the
Solaris application.  It isn't hard; you just need to make sure you don't
CHECKIN any Solaris tapes to TSM.

You will need to label your 9840 scratch tapes (to be used by TSM), and
check them in.  CHECKIN/CHECKOUT operations are a little bit different with
a TSM ACSLS library than with a SCSI library.  And DRM MOVE MEDIA will NOT
do an automatic checkout from an ACSLS library; if you are using DRM, you
have a problem.  So managing the library will take some getting used to.

Because you have a new library and devclass, you will need to figure out
WHAT DATA you want to send to this new library.  TSM will not let you use
different devclasses for the same storage pool, so any DATA you will send to
these drives must go to a different storage pool than your existing data.
This creates some management questions about what data you are going to
send to the Powderhorn.

Now if you still want to do this, I will be happy to send you my DEFINE
DRIVE/DEFINE LIBRARY statements and /etc/inittab statements you can use as
an example.

And maybe there is someone else out there with a shared ACSLS library who
can give you some more specifics.


-Original Message-
From: Henk ten Have [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 10, 2001 7:16 AM
To: [EMAIL PROTECTED]
Subject: TSM/3494/9310


We are running a TSM server on an AIX box (S7A, AIX 4.3.3.0) with a 3494
attached to it with 7 3590B drives. What I'd like to do is also use a couple
of 9840 drives in our PowderHorn 9310 from that TSM server. ACSLS is running
on Solaris.

- Is anyone dealing with this kind of situation? If so, could you please
  contact me?
- Does anyone know what I need to do to make this possible?

Btw, our PowderHorn is also used by two SGI machines (Origin 2000 and 3800).

Cheers,
Henk ten Have.



Re: Question on del volhist

2001-04-11 Thread Prather, Wanda

Removing the STGNEW and STGDELETE volhist entries does not affect any data
that may be on those tapes.
Those entries do not participate in any type of DB recovery, either.

The only thing I know of that those entries are used for is to provide a
good idea of what tapes need to be audited, if you ever have to do a DB restore
and back-level your system.

I believe it is safe to delete any of those entries that are older than your
oldest DB backup.


-Original Message-
From: Brazner, Bob [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 11, 2001 10:26 AM
To: [EMAIL PROTECTED]
Subject: Question on del volhist




My vol history for stgnew and stgdelete goes back to the beginning of time.
I'd like to use Del Volhist to get rid of records that aren't needed any
more, but I can't see how to do this without possibly corrupting the vol
history.  If I understand things correctly, if a tape is in stgdelete
status as of a given point in time (let's say 30 days ago), then I should
be able to delete all stgnew and stgdelete records prior to that time,
right?  However, if a tape is in stgnew status at that time, then I better
not do any deletes using that time, right?  So, how do I construct my Del
Volhist command(s) to make such dynamic decisions for every volume in the
volhistory?  Note, we have DRM, so dbbackup entries are not a problem.
System is TSM 4.1.2 on AIX 4.3.3.

Bob Brazner
Johnson Controls, Inc.
(414) 524-2570





Re: Multiple off-site/copy pools

2001-04-12 Thread Prather, Wanda

I was doing that for a while - a copy pool offsite and a copy pool onsite.

But the only reason I was doing that was because our drives/media were
unreliable, and I had to do RESTORE VOLUMES at least once a week.  So having
the onsite copy pool kept me from making a LOT of trips to the vault.

Now that we have better drives/media, I don't do that any more.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Walker, Mike [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 11, 2001 5:53 PM
To: [EMAIL PROTECTED]
Subject: Multiple off-site/copy pools


Is anyone using multiple copy pools to maintain two offsite backups ( or one
offsite and one out-of-library/on-site )??

I think my management is heading this direction and would like to get an
idea if this is real common ??

Thanks
Mike Walker



Re: Question on del volhist

2001-04-12 Thread Prather, Wanda

Yikes, I didn't know that!  Thanks for pointing it out.

If you think you have tapes disappearing and don't want to do a manual
audit, you can write an audit program/script.

1)Pull a list of tapes that are physically in the library:  q libv
2)Pull a list of tapes that are in DRM VAULT status:  q drmedia *
wherest=vault
3)Create a list of tapes that SHOULD be accounted for (either generate a
tape range, if it's contiguous, or build a static list in a file).

Have your script compare the volsers in list (3) to list (1) + (2).

Anything that is on list (3), but isn't in either list (1) or (2), is MIA.
It's much easier to go to the vault looking for a specific VOLSER than to
audit everything in there...

I also run a script that compares (1) to (2) at least once a month, to make
sure everything that was supposed to go to the vault actually did...
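The comparison in those steps can be sketched in a few lines. This is a
hypothetical illustration (the file names and volsers are made up); it assumes
each list has been captured to a text file with one volser per line, e.g. from
"q libv" and "q drmedia * wherest=vault":

```python
def read_volsers(path):
    """Read one volser per line from a captured command-output file."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def find_missing(expected, in_library, at_vault):
    """Volsers that should be accounted for but are in neither list."""
    return sorted(expected - (in_library | at_vault))

# Example with inline data; in practice the sets would come from
# read_volsers("libv.txt"), read_volsers("vault.txt"), etc.
missing = find_missing(
    expected={"000100", "000101", "000102"},   # list (3)
    in_library={"000100"},                     # list (1)
    at_vault={"000102"},                       # list (2)
)
for volser in missing:
    print(f"MIA: {volser}")
```

The same `find_missing` helper works for the monthly (1)-vs-(2) check: pass the
vault list as `expected` and the library list as the only other set.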






-Original Message-
From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 12, 2001 10:52 AM
To: [EMAIL PROTECTED]
Subject: FW: Question on del volhist


Okay .. since we're talking del volhist ...

Can someone clarify this point with the del volhist ?

  If I am using scratch tapes (no private tapes), and I am using the DR
Module.   It seems to me that if I have a DR tape, offsite, and empty (in
retrieve state) and I delete the volhist entry,   the tape seems to
disappear totally from view. (i.e. a  "q drm" command will no longer show
it).  I believe that scratch tapes will disappear if they are empty ... not
a big deal if you know which ones they are, but if they disappear from view
before you bring them back on site,  how would you find them (without
performing a manual offsite audit) ?  (It is my understanding that tapes
change from pending to empty "reuse" number of days (exactly) after they
have gone pending.  So you could have tapes changing from pending to empty
any time "3" days from the time the expiration freed it up.)

  How is anyone handling this (or are you doing what I'm doing and not
deleting the volhist) ?   Should I run my expiration weekly instead of daily
?  Then I would know when my tapes would be changing to empty.

Thanks,

Glenn MacIntosh
Manager of Technical Services
Sobeys Inc.
123 Foord St.
Stellarton, Nova Scotia
(902) 752-8371 Ext. 4017


-Original Message-
From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 11, 2001 11:51 AM
To: [EMAIL PROTECTED]
Subject: Re: Question on del volhist


Don't over think it.  Remember, information about volumes is in the database
too and really, the only volume history record you are concerned about is
the one indicating the most recent db backup tape.  I routinely run, like
daily, a delete volhist type=all todate=today-30.

Kelly J. Lipp
Storage Solutions Specialists, Inc.
PO Box 51313
Colorado Springs CO 80949-1313
(719) 531-5926
Fax: (240) 539-7175
Email: [EMAIL PROTECTED] or [EMAIL PROTECTED]
www.storsol.com
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Brazner, Bob
Sent: Wednesday, April 11, 2001 8:26 AM
To: [EMAIL PROTECTED]
Subject: Question on del volhist




My vol history for stgnew and stgdelete goes back to the beginning of time.
I'd like to use Del Volhist to get rid of records that aren't needed any
more, but I can't see how to do this without possibly corrupting the vol
history.  If I understand things correctly, if a tape is in stgdelete
status as of a given point in time (let's say 30 days ago), then I should
be able to delete all stgnew and stgdelete records prior to that time,
right?  However, if a tape is in stgnew status at that time, then I better
not do any deletes using that time, right?  So, how do I construct my Del
Volhist command(s) to make such dynamic decisions for every volume in the
volhistory?  Note, we have DRM, so dbbackup entries are not a problem.
System is TSM 4.1.2 on AIX 4.3.3.

Bob Brazner
Johnson Controls, Inc.
(414) 524-2570





Re: Question on del volhist

2001-04-15 Thread Prather, Wanda

OK,
I ran a DELETE VOLHIST for all record types all the way back to January.

I checked on the oldest volumes in my OFFSITE copypool, which are marked
VAULT by DRM.

They no longer have ANY entry in the volume history file; running
 select * from volhistory where volume_name='xx' gets no hits at all,
but
 Q DRMEDIA * still shows that volume xx is marked VAULT.

So I still believe that removing STGNEW/STGDELETE via DELETE VOLHIST has NO
effect on storage pool volumes.  But it will make volumes "disappear" that
are tracked ONLY via VOLUMEHISTORY, including DBBACKUP, DBSNAPSHOT,
and EXPORT entries.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 12, 2001 12:12 PM
To: [EMAIL PROTECTED]
Subject: Re: Question on del volhist


This was not my observation, but rather that of Glenn MacIntosh.  I don't
know if this is what happens or not.  Glenn had responded to me and not to
the list so I thought I'd post his reply to garner further input on the
assertion.  I wouldn't think deleting volhist records would cause this to
happen in DRM.

Kelly J. Lipp
Storage Solutions Specialists, Inc.
PO Box 51313
Colorado Springs CO 80949-1313
(719) 531-5926
Fax: (240) 539-7175
Email: [EMAIL PROTECTED] or [EMAIL PROTECTED]
www.storsol.com
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Prather, Wanda
Sent: Thursday, April 12, 2001 9:57 AM
To: [EMAIL PROTECTED]
Subject: Re: Question on del volhist


Yikes, I didn't know that!  Thanks for pointing it out.

If you think you have tapes disappearing and don't want to do a manual
audit, you can write an audit program/script.

1)Pull a list of tapes that are physically in the library:  q libv
2)Pull a list of tapes that are in DRM VAULT status:  q drmedia *
wherest=vault
3)Create a list of tapes that SHOULD be accounted for (either generate a
tape range, if it's contiguous, or build a static list in a file).

Have your script compare the volsers in list (3) to list (1) + (2).

Anything that is on list (3), but isn't in either list (1) or (2), is MIA.
It's much easier to go to the vault looking for a specific VOLSER than to
audit everything in there...

I also run a script that compares (1) to (2) at least once a month, to make
sure everything that was supposed to go to the vault actually did...






-Original Message-
From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 12, 2001 10:52 AM
To: [EMAIL PROTECTED]
Subject: FW: Question on del volhist


Okay .. since we're talking del volhist ...

Can someone clarify this point with the del volhist ?

  If I am using scratch tapes (no private tapes), and I am using the DR
Module.   It seems to me that if I have a DR tape, offsite, and empty (in
retrieve state) and I delete the volhist entry,   the tape seems to
disappear totally from view. (i.e. a  "q drm" command will no longer show
it).  I believe that scratch tapes will disappear if they are empty ... not
a big deal if you know which ones they are, but if they disappear from view
before you bring them back on site,  how would you find them (without
performing a manual offsite audit) ?  (It is my understanding that tapes
change from pending to empty "reuse" number of days (exactly) after they
have gone pending.  So you could have tapes changing from pending to empty
any time "3" days from the time the expiration freed it up.)

  How is anyone handling this (or are you doing what I'm doing and not
deleting the volhist) ?   Should I run my expiration weekly instead of daily
?  Then I would know when my tapes would be changing to empty.

Thanks,

Glenn MacIntosh
Manager of Technical Services
Sobeys Inc.
123 Foord St.
Stellarton, Nova Scotia
(902) 752-8371 Ext. 4017


-Original Message-
From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 11, 2001 11:51 AM
To: [EMAIL PROTECTED]
Subject: Re: Question on del volhist


Don't over think it.  Remember, information about volumes is in the database
too and really, the only volume history record you are concerned about is
the one indicating the most recent db backup tape.  I routinely run, like
daily, a delete volhist type=all todate=today-30.

Kelly J. Lipp
Storage Solutions Specialists, Inc.
PO Box 51313
Colorado Springs CO 80949-1313
(719) 531-5926
Fax: (240) 539-7175
Email: [EMAIL PROTECTED] or [EMAIL PROTECTED]
www.storsol.com
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]

Re: Can not checkout the tape through scheduled job

2001-04-16 Thread Prather, Wanda

Some admin commands are not allowed for scheduling as admin schedules.

However, you CAN schedule a RUN command that executes a TSM server script.
So if you put your CHECKOUT command in a named TSM server script, you should
be able to schedule the script as "run scriptname".

I schedule my CHECKINs that way.

-Original Message-
From: Ganu Sachin, IBM [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 13, 2001 12:26 AM
To: [EMAIL PROTECTED]
Subject: Can not checkout the tape through scheduled job


Hi,

A query about TSM Administrative scheduling.

We have TSM configured on AIX 4.3.3. I want to schedule a task which will
check out the volume. The volume is used for TSM database backup. When I
define a new administrative task for the checkout, I get the following
error. The command used in the scheduling task is "CHECKOUT LIBVOLUME
LIB0 TSK071 CHECKLABEL=NO FORCE=NO REMOVE=NO". I do not want to set the
FORCE option to YES.

But if I run the same command from the server command prompt, the command
executes successfully.

ERROR :-
"ANR2755E DEFINE SCHEDULE or UPDATE SCHEDULE parameter CMD='CHECKOUT
LIBVOLUME LIB0 TSK071 CHECKLABEL=NO  FORCE=NO REMOVE=NO' - not eligible for
scheduling. "

Please tell me whether it is possible to schedule an administrative task for
checking out the volume; if yes, please explain.


Thanks in advance

Sachin Ganu



Re: Question on del volhist

2001-04-16 Thread Prather, Wanda

Hi Glenn,

Yes, all my tapes, primary and copy pools, are managed as SCRATCH.

When a DR/copy pool tape becomes empty through reclamation, if it is a stgpool
tape and you have a REUSEDELAY set on the stgpool, it first becomes PENDING.
Its DRM status is still VAULT.

It stays in PENDING state for REUSEDELAY days, then it goes to EMPTY. At
that point, the DRM status changes from VAULT to VAULTRETRIEVE.

At that point, you should make arrangements to bring the tapes in
VAULTRETRIEVE status back onsite.

The next MOVE DRMEDIA you do, the DRM status changes to COURIERRETRIEVE.
The stgpool status is still EMPTY.  (You can skip the COURIERRETRIEVE
status if you want.)

Once the tapes are back onsite, you are supposed to do a MOVE DRMEDIA to
state ONSITERETRIEVE.  This is a little sneaky, because the tapes don't stay
in that state.  As soon as you move them to ONSITERETRIEVE, if you look
carefully at the log, you will see TSM immediately deletes them from the
storage pool.

If you have already checked them back into the library, they will go back
into library scratch status.  If you haven't already checked them back in,
then yes, they "disappear" and are shown nowhere in TSM.

But at that point, you should put them back in the library and check them in
as scratch.

DRM doesn't do that checkin for you; when you bring the tapes back, you have
to do the checkin yourself.

We don't have any problem with this working OK.  You just have to make sure
that you (or your operations staff) have explicit instructions to make sure
that the tapes get brought back onsite before somebody does that last MOVE
DRMEDIA that gets them changed back to ONSITERETRIEVE, and thereby deleted
from the storage pool listing.
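The lifecycle described above can be condensed into a small illustrative table
(not TSM code; the state names come from the explanation above, and the
annotations paraphrase it):

```python
# Sequence of (stgpool state / DRM state, meaning) for a scratch
# copy-pool tape, from reclamation through re-checkin.
DRM_FLOW = [
    ("PENDING / VAULT",         "reclaimed; waiting out REUSEDELAY days"),
    ("EMPTY / VAULTRETRIEVE",   "eligible to come back onsite"),
    ("EMPTY / COURIERRETRIEVE", "in transit back (optional state)"),
    ("ONSITERETRIEVE",          "immediately deleted from the stgpool listing"),
    ("SCRATCH (q libv)",        "visible again only after a manual CHECKIN"),
]

for state, note in DRM_FLOW:
    print(f"{state:26} - {note}")
```

The last two rows are the gap to watch: between ONSITERETRIEVE and the manual
CHECKIN, a scratch tape is shown nowhere in TSM.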

(As for your question about volhist, scratch tapes are not tracked there.
Tapes become "scratch" by being checked into the library in scratch status.
You can only see "scratch" tapes by looking at the library inventory:  q
libv)

Hope this helps..


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert








-Original Message-
From: Glenn MacIntosh [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 16, 2001 8:58 AM
To: [EMAIL PROTECTED]
Subject: Re: Question on del volhist


Wanda,

The issue that I THINK I have is that I am using scratch tapes for all
pools (copypools as well).  Are your copypools private or scratch?  What
I think happens is that when a DR tape becomes empty (through
expiration/reclamation), and changes from VAULT to VAULTRETRIEVE, since
it is a scratch tape, with no data on it, and not in my library (q libv),
it will simply cease to exist.

If you are using category private tapes, you will not see this problem.

If your tapes are scratch and still contain data, (status vault) you
will not see this problem.

Would an empty scratch tape only be tracked by volhistory entries ?
(Where else would you find an entry for an empty scratch ?)

P.S.  Sorry for inadvertently replying to Kelly instead of the listserv.

Thanks,

Glenn MacIntosh
Manager of Technical Services
Sobeys Inc.
123 Foord St.
Stellarton, Nova Scotia
(902) 752-8371 Ext. 4017


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 13, 2001 1:54 PM
To: [EMAIL PROTECTED]
Subject: Re: Question on del volhist


OK,
I ran a DELETE VOLHIST for all record types all the way back to January.

I checked on the oldest volumes in my OFFSITE copypool, which are marked
VAULT by DRM.

They no longer have ANY entry in the volume history file; running
 select * from volhistory where volume_name='xx' gets no hits at all,
but
 Q DRMEDIA * still shows that volume xx is marked VAULT.

So I still believe that removing STGNEW/STGDELETE via DELETE VOLHIST has NO
effect on storage pool volumes.  But it will make volumes "disappear" that
are tracked ONLY via VOLUMEHISTORY, including DBBACKUP, DBSNAPSHOT,
and EXPORT entries.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 12, 2001 12:12 PM
To: [EMAIL PROTECTED]
Subject: Re: Question on del volhist


This was not my observation, but rather that of Glenn MacIntosh.  I don't
know if this is what happens or not.  Glenn had responded to me and not to
the list so I thought I'd post his reply to 

Re: Should LOADDB take this long?

2001-04-16 Thread Prather, Wanda

There was a bug that caused TSM to hang coming up at "Recovery log mount in
progress", and it just never comes up.  I know it hit the NT and AIX servers
at 3.7.2, and was fixed in 3.7.4.  Not sure about OS/390, but check
into it.

I expect you need to upgrade to 3.7.4, or you will be subject to this
happening again.  Just putting on the fix was all we had
to do, and TSM came right back up, did not have to restore the DB.


-Original Message-
From: William Boyer [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 15, 2001 5:50 PM
To: [EMAIL PROTECTED]
Subject: Should LOADDB take this long?


TSM 3.7.3.0 running on OS/390 2.9

Last night our automated processes to shut the system down didn't take into
account that TSM would take a few extra minutes to halt due to reclamation
running. The automation script ended up taking TCPIP and CA-TLMS and
DFSMSRmm (in warn mode) down while TSM was still up and trying to close out
his tape processing. TSM ended up abending with a EC6. After our downtime,
TSM wouldn't come back up. It would sit there in a CPU loop with no I/O. The
last message in the joblog was "ANR0306I Recovery log volume mount in
progress." It would not come up any farther. I managed to get a DUMPDB to run
and it took only 1/2 hour and dumped over 78 million database entries for a
total of 6.9MB. I then did a FORMAT for all the db/log volumes and started the
LOADDB last night at 20:15. It is still running; it is now 22 hours later
and has only processed 70 million of those database entries.

I searched the archives, but there wasn't much on LOADDB. Should LOADDB take
this long when the DUMPDB only took 1/2hour? Good thing this is a holiday
weekend or the users and managers would be more upset than they
are. I tell them, Hey it wasn't my shutdown script that corrupted the
system!!!

Also, if anyone has any ideas on how I could have averted having to do the
DUMP/LOADDB processes I would be more than happy to hear them. I just
couldn't think of any way to bypass the recovery log processing during
startup, or to have the load cleared by itself.

TIA,
Bill Boyer
"Some days you are the bug, some days you are the windshield." - ??



Re: Expiration

2001-04-17 Thread Prather, Wanda

If this is a new problem, I also suggest you check the AIX errpt to see if
there is anything bad going on with your disk

-Original Message-
From: Chibois, Herve [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 17, 2001 9:51 AM
To: [EMAIL PROTECTED]
Subject: Re: Expiration


Hi Bert,

Your TSM server creeps!

What is your BUFPOOLSIZE (dsmserv.opt)?

You should not go below 99.5% for PCT CACHE.

Change BUFPOOLSIZE to 256 MB at least and restart your TSM server;
the expiration process should fill the DB cache, and it should speed up
your process.

What kind of disks are you using? SSA or SCSI? Are the DBVOL files
on the same disks as AIX?

If your DB is "old" you should do an unload/load db operation to
defragment your DB pages.

rv


> -Original Message-
> From: Bert Moonen [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 17, 2001 3:44 PM
> To: [EMAIL PROTECTED]
> Subject: Expiration
> 
> 
> Hello,
> 
> expiration runs very, very, very slow. Only 16 objects in 20 
> minutes time.
> We have ADSM 3.1.2.90 on an S70 machine with 4GB Memory AIX 4.3.3.
> Does anyone know what my problem is?
> adsm> q db f=d
> Available Space (MB): 19,300, Assigned Capacity (MB): 17,964, Maximum
> Extension (MB): 1,336, Maximum Reduction (MB): 3,740, Page 
> Size (bytes):
> 4,096, Total Usable Pages: 4,598,784, Used Pages: 3,315,954, 
> Pct Util: 72.1,
> Max. Pct Util: 72.1, Physical Volumes: 1, Buffer Pool Pages: 
> 32,768, Total
> Buffer Requests: 651,353,008, Cache Hit Pct.: 98.34, Cache 
> Wait Pct.: 0.00,
> Backup in Progress?: No, Type of Backup In Progress:, 
> Incrementals Since
> Last Full: 0, Changed Since Last Backup (MB): 1,759.66, 
> Percentage Changed:
> 13.58, Last Complete Backup Date/Time: 04/16/01 17:00:39.
> 



Re: Roll Forward Mode

2001-04-17 Thread Prather, Wanda

Yes, it does no harm to switch to ROLLFORWARD mode in mid-stream, it just
works.

It does fire an extra DB backup; or maybe that's when you switch BACK from
ROLLFORWARD to NORMAL, I forget.

The amount of log space you need depends on the amount of activity rather
than the DB size, so it's hard to say whether 2 GB is enough log space or
not.  You should be OK if you have a trigger set, just keep an eye on the
log utilization for a few days.


-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 17, 2001 11:07 AM
To: [EMAIL PROTECTED]
Subject: Roll Forward Mode


Here is a little of my environment that might help answer this question:
TSM 4.1.2
AIX 4.3.3
3494lib with 4 3590-E1A's

Database:

  Available Space (MB): 18,000
Assigned Capacity (MB): 18,000
Maximum Extension (MB): 0
Maximum Reduction (MB): 8,740
 Page Size (bytes): 4,096
Total Usable Pages: 4,608,000
Used Pages: 2,356,088
  Pct Util: 51.1
 Max. Pct Util: 51.5
  Physical Volumes: 6
 Buffer Pool Pages: 32,768
 Total Buffer Requests: 188,336,528
                Cache Hit Pct.: 98.12  (BufPoolSize 131072)
   Cache Wait Pct.: 0.00
   Backup in Progress?: No
Type of Backup In Progress:
  Incrementals Since Last Full: 0
Changed Since Last Backup (MB): 621.21
Percentage Changed: 6.75
Last Complete Backup Date/Time: 04/16/01 11:16:22

Log:
   Available Space (MB): 2,000
 Assigned Capacity (MB): 2,000
 Maximum Extension (MB): 0
 Maximum Reduction (MB): 1,996
  Page Size (bytes): 4,096
 Total Usable Pages: 511,488
 Used Pages: 279
   Pct Util: 0.1
  Max. Pct Util: 89.3
   Physical Volumes: 4
 Log Pool Pages: 2,048
 Log Pool Pct. Util: 0.18
 Log Pool Pct. Wait: 0.00
Cumulative Consumption (MB): 409,183.96
Consumption Reset Date/Time: 09/27/99 13:00:44

I want to switch to roll forward mode but am not sure if the log size is
sufficient. Does anyone have any experience with switching in mid stream? I
have about 120 clients, I do have a space trigger set for both the DB and
log.

I'd also like to know if anyone thinks my BufPoolSize (131072) is a
bit low. I discovered my cache hit was down below 97% last week, so I reset
the buffer pool and it crept up to just over 98%. Would increasing this
help? Do I need to consider my total server memory before I increase it?

Thanks for all the help,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (888) 997-9614



License counts for TSM 4.1 ?

2001-04-17 Thread Prather, Wanda

A while back someone posted a section of the 4.1 server README file (see
below).
Says that TSM 4.1 doesn't count a license as "in use" if the client has been
inactive for 30+ days.

Does that mean you only need to pay for enough licenses to cover the "in
use" count now?


$$1 License counting changes for "in use" *

With this service level the following changes to in use license counting are
introduced.
- License Expiration. A license feature that has not been used for more than
30 days will be expired from the in use license count. This will not change
the registered licenses, only the count of the in use licenses. Libraries in
use will not be expired, only client license features.
 - License actuals update. The number of licenses in use will now be updated
when the client session ends. An audit license is no longer required for the
number of in use licenses to get updated.



Re: 3.7 to 4.1 and the license issue

2001-04-17 Thread Prather, Wanda

Alas, "it depends".

What I was told is:

If your licenses were under a maintenance agreement at the time 4.1, came
out, you may be entitled to the upgrade for free - depends on the contract.

If not, the deadline for a price break on upgrading to 4.1 was Dec. 31,
2000.
So you will have to buy them like new.


-Original Message-
From: Tyree, David [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 17, 2001 2:19 PM
To: [EMAIL PROTECTED]
Subject: 3.7 to 4.1 and the license issue


Can someone explain just what the change in licensing is all about?
We need to do the 3.7 to 4.1 update soon. We have 75 licenses running right
now in version 3.7. Am I going to lose these and have to start from scratch?


David Tyree
Microcomputer Specialist
South Georgia Medical Center
229.333.1155

Confidential Notice:  This e-mail message, including any attachments, is for
the sole use of the intended recipient(s) and may contain confidential and
privileged information.  Any unauthorized review, use,  disclosure or
distribution is prohibited.  If you are not the intended recipient, please
contact the sender by reply e-mail and destroy all copies of the original
message.



Re: DEF ASSOC domain_name sched_name * - ARRGGHHH

2001-04-18 Thread Prather, Wanda

When I have to do something like that, I have TSM generate the commands I
need by putting fixed text into an SQL select statement.   For example:

select 'delete association', node_name, domain_name, schedule_name from
associations where CHG_TIME>'2001-04-17 00:00'

That SELECT generates output with the "delete association" text at the
beginning of each line.  Pipe the output into a file.  Edit the file to
delete the header lines, and voila, you have a list of commands ready to
execute as a macro.

You will have to adjust the WHERE statements to make sure it pulls out just
the stuff you want to undo.
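That header-stripping step can also be scripted instead of done by hand. A
hedged sketch (the sample output lines, node and schedule names, and the
`undo_assoc.mac` file name are all invented for illustration):

```python
# Captured SELECT output: header and separator lines from the admin
# client, followed by one generated command per row.
raw = [
    "Unnamed[1]          NODE_NAME   DOMAIN_NAME   SCHEDULE_NAME",
    "------------------  ---------   -----------   -------------",
    "delete association NODE1 STANDARD SCHED_A",
    "delete association NODE2 STANDARD SCHED_B",
]

# Keep only the lines that are actual commands; headers and
# separators fall away.
commands = [
    line for line in raw
    if line.lower().startswith("delete association")
]

# Write the surviving commands out as a macro, one per line.
with open("undo_assoc.mac", "w") as f:
    f.write("\n".join(commands) + "\n")
```

The resulting file can then be run with the admin client's macro facility,
after eyeballing it to confirm only the intended associations are listed.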

Hope that helps

Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert







-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 18, 2001 10:55 AM
To: [EMAIL PROTECTED]
Subject: DEF ASSOC domain_name sched_name * - ARRGGHHH


I had an administrator 'play' with the concept of running the subject
command against a domain with over 1000 nodes and 5 backup schedules. So,
this administrator ran this command against each schedule expecting it to
function like the QUERY command. (I know, some people shouldn't be allowed
close to computers. But..) Now, every node in the domain will backup on all
5 schedules.

Does anyone know of any way I can undo this mess? Tivoli, is this a feature?

John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com



Re: remaining DESTROYED volumes

2001-04-24 Thread Prather, Wanda

If these tapes are really gone, you know you can't get the data back any
way, and you just want to make TSM forget all about them and purge all the
DB entries for them (and their backed up files), then do:
 DELETE VOL blah DISCARDDATA=YES.



-Original Message-
From: Lawrence Clark [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 23, 2001 1:05 PM
To: [EMAIL PROTECTED]
Subject: remaining DESTROYED volumes


Hi:
We were able to recover most of our data on destroyed volumes from copypool
data, thus removing the destroyed volumes from the system. However, 5
destroyed volumes still retain a small set of files that TSM was not able to
recover from copypool volumes. How would you remove these volumes from the
system?:

001019 NTBACKUP 3494BTAPE40960.0 0.2
Filling   Destroyed   Yes44.8No  48
1 03/04/2001 05:35:06 1  0
03/03/2001 06:10:43 0
001083 NTBACKUP 3494BTAPE40960.0 0.3
Filling   Destroyed   Yes61.4No  18
1 03/04/2001 06:54:48 1  0
02/25/2001 20:02:05 0
001130 NTBACKUP 3494BTAPE40960.0 0.0
Filling   Destroyed   Yes3.2 No  1
1 03/04/2001 05:48:36 1  0
03/04/2001 05:41:51 0
001147 NTBACKUP 3494BTAPE40960.0 1.2
Filling   Destroyed   Yes86.2No  25
1 04/05/2001 04:40:11 1  0
04/05/2001 10:48:28 3
001209 NTBACKUP 3494BTAPE40960.0 3.9
Filling   Destroyed   Yes11.4No  72
1 04/04/2001 05:17:46 1  0
03/28/2001 19:17:55 0

Larry Clark
NYS Thruway Authority



Re: merging two servers (export server)

2001-04-24 Thread Prather, Wanda

No.
Each TSM server has its own database.
There is no way to merge two TSM databases.

What you would have to do is to EXPORT each of the clients from server A,
then IMPORT them individually into server B.

What a lot of people do is just point the clients from server A to server
B, and let them take new, full backups to server B, essentially starting
them over.  Then they only do the EXPORT/IMPORT for any clients where it's
important to retain the old data, or archived data, as the EXPORT/IMPORT
process is relatively slow.

You can, of course, disconnect the library from server A, and reconnect/use
it on server B.

And another thing you can do (at least on a UNIX server; I don't know about
Windows) is MOVE server A to the same host as server B, and run both
instances of the TSM server on the same host.  There is not a lot of
benefit to doing that, but it would give you some extra time to work on
merging the two systems via EXPORT/IMPORT, if you need to get rid of server
A quickly.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Karsten Hüttmann [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 24, 2001 9:09 AM
To: [EMAIL PROTECTED]
Subject: merging two servers (export server)


Hi !
One of our customers (Airbus Industries) wants to merge two TSM-Server.
Is it possible to import the second server via the following steps ?
- export server (on server two)
- move the library from the second server to the first
- create entries for the library/stgpools and so on
- import server (on server one)

Is this correct ? Any other suggestions ?
Thanks in advance.
--
Mit freundlichen Grüssen / with regards
Karsten Hüttmann



Re: Expiration of DbBackup, Volhist and Libvolume status.

2001-04-24 Thread Prather, Wanda

Hi John,



I would just pull the volsers of the libvolumes that are defined as
DBBACKUP:
select volume_name from libvolumes where last_use='DbBackup'

Then look and see if they still live ANYWHERE in volume history:
select * from volhistory where volume_name='XX'

Assuming these are physically in the 3494, if they don't exist in
volhistory, and they don't exist in a storage pool, I would change them all
back to scratch tapes:
update libv libname volumename status=scratch

Then just watch to see if they get reused.
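To make the cross-check mechanical, the two SELECT results can be compared with a few lines of Python. This is a minimal sketch, assuming you already have the two volser lists in hand; the example volsers and the LIBNAME placeholder are made up.

```python
def scratch_candidates(dbbackup_vols, volhist_vols):
    """DbBackup library volumes that no longer appear anywhere in the
    volume history are candidates for status=scratch."""
    return sorted(set(dbbackup_vols) - set(volhist_vols))

# Hypothetical volsers: A00003 is still in volhistory, so it stays private.
dbbackup = ["A00001", "A00002", "A00003"]   # select ... from libvolumes
volhist = ["A00003", "A00007"]              # select ... from volhistory

for vol in scratch_candidates(dbbackup, volhist):
    # LIBNAME is a placeholder for your library name.
    print(f"update libvolume LIBNAME {vol} status=scratch")
```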

 
Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 24, 2001 2:43 PM
To: [EMAIL PROTECTED]
Subject: Expiration of DbBackup, Volhist and Libvolume status.


I have been keeping two weeks' worth of FULL database backups. The Volhist
expiration command shows nothing unusual. The command I use is 'del volhist
todate=today-15 type=dbb'. However, when I query the 3494 library volumes I
find 31 cartridges used for DbBackup. My database is just below 40GB. Q DB
shows:
tsm: FSPHNSM1>q db f=d

  Available Space (MB): 39,760
Assigned Capacity (MB): 39,760
Maximum Extension (MB): 0
Maximum Reduction (MB): 16,812
 Page Size (bytes): 4,096
Total Usable Pages: 10,178,560
Used Pages: 5,865,312
  Pct Util: 57.6
 Max. Pct Util: 57.8
  Physical Volumes: 12
 Buffer Pool Pages: 16,384
 Total Buffer Requests: 27,301,611
Cache Hit Pct.: 98.47
   Cache Wait Pct.: 0.00
   Backup in Progress?: No
Type of Backup In Progress:
  Incrementals Since Last Full: 0
Changed Since Last Backup (MB): 781.95
Percentage Changed: 3.41
Last Complete Backup Date/Time: 04/24/2001 10:00:24

We have 3590-E1A cartridges and looking in the VOLHIST file I do not see the
full database backup taking two tapes at any time.

Now, the SQL statement 'select status, last_use, count(*) from libvolumes
group by last_use, status' shows:

STATUS    LAST_USE     Unnamed[3]
------    --------     ----------
Scratch                        41
Private   Data                528
Private   DbBackup             31

Why don't I see something like 15 tapes in the DbBackup category?

John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com



Re: Expiration of DbBackup, Volhist and Libvolume status.

2001-04-24 Thread Prather, Wanda

Good question.  Offhand, I can't think of anything that would cause it to
happen specifically to DBBackup tapes.

TSM will change the status of a tape from SCRATCH to PRIVATE if it has
trouble mounting the tape, so it can move on to the next scratch tape.

And I don't think the LastUse field is updated until the tape is used again
(unless the tapes are checked out).

But if it is a mounting problem, it should hit scratch pool tapes, too.

FYI, I have a script that once a month pulls a list of libv tapes and
stgpool tapes and compares them, to find tapes that are like this - i.e., no
valid data but in PRIVATE status caused by a mount failure.
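The core of such a script is just a set difference. Here is a minimal Python sketch of the idea (not the actual script; the volsers are hypothetical, and the comments note which SELECTs would feed each list):

```python
def orphan_private_tapes(private_libvols, stgpool_vols, volhist_vols):
    """Private library volumes that appear in neither a storage pool nor
    the volume history hold no valid data - typically mount-failure
    leftovers that can be returned to scratch after verification."""
    in_use = set(stgpool_vols) | set(volhist_vols)
    return sorted(set(private_libvols) - in_use)

# Hypothetical volsers: B00010 is in a storage pool and B00012 is in
# volhistory, so only B00011 is flagged as an orphan.
orphans = orphan_private_tapes(
    private_libvols=["B00010", "B00011", "B00012"],
    stgpool_vols=["B00010"],    # e.g. select volume_name from volumes
    volhist_vols=["B00012"],    # e.g. select volume_name from volhistory
)
print(orphans)  # prints ['B00011']
```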




-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 24, 2001 4:46 PM
To: [EMAIL PROTECTED]
Subject: Re: Expiration of DbBackup, Volhist and Libvolume status.


Thanks, Wanda. That's what I did. I found the libvolumes that showed as
DbBackup and compared them with the VOLHIST information. I repeated this
step again five days later just to make sure they were in fact the same
volumes. They were.  So, I updated the libvolumes and changed the status to
scratch.

Now, I guess I just watch and make sure they get used.

I wonder what made this happen... hmmm.

jt

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 24, 2001 4:17 PM
To: [EMAIL PROTECTED]
Subject: Re: Expiration of DbBackup, Volhist and Libvolume status.


Hi John,



I would just pull the volsers of the libvolumes that are defined as
DBBACKUP:
select volume_name from libvolumes where last_use='DbBackup'

Then look and see if they still live ANYWHERE in volume history:
select * from volhistory where volume_name='XX'

Assuming these are physically in the 3494, if they don't exist in
volhistory, and they don't exist in a storage pool, I would change them all
back to scratch tapes:
update libv libname volumename status=scratch

Then just watch to see if they get reused.

 
Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 24, 2001 2:43 PM
To: [EMAIL PROTECTED]
Subject: Expiration of DbBackup, Volhist and Libvolume status.


I have been keeping two weeks' worth of FULL database backups. The Volhist
expiration command shows nothing unusual. The command I use is 'del volhist
todate=today-15 type=dbb'. However, when I query the 3494 library volumes I
find 31 cartridges used for DbBackup. My database is just below 40GB. Q DB
shows:
tsm: FSPHNSM1>q db f=d

  Available Space (MB): 39,760
Assigned Capacity (MB): 39,760
Maximum Extension (MB): 0
Maximum Reduction (MB): 16,812
 Page Size (bytes): 4,096
Total Usable Pages: 10,178,560
Used Pages: 5,865,312
  Pct Util: 57.6
 Max. Pct Util: 57.8
  Physical Volumes: 12
 Buffer Pool Pages: 16,384
 Total Buffer Requests: 27,301,611
Cache Hit Pct.: 98.47
   Cache Wait Pct.: 0.00
   Backup in Progress?: No
Type of Backup In Progress:
  Incrementals Since Last Full: 0
Changed Since Last Backup (MB): 781.95
Percentage Changed: 3.41
Last Complete Backup Date/Time: 04/24/2001 10:00:24

We have 3590-E1A cartridges and looking in the VOLHIST file I do not see the
full database backup taking two tapes at any time.

Now, the SQL statement 'select status, last_use, count(*) from libvolumes
group by last_use, status' shows:

STATUS    LAST_USE     Unnamed[3]
------    --------     ----------
Scratch                        41
Private   Data                528
Private   DbBackup             31

Why don't I see something like 15 tapes in the DbBackup category?

John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com



Re: Write Protect

2001-04-25 Thread Prather, Wanda

What type of tape?
The "write protected" error is because someone has physically flipped the
write-protect tab on the tape cartridge.
But the tab is in a different place depending on the tape type...


-Original Message-
From: Larry Way [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 25, 2001 3:07 PM
To: [EMAIL PROTECTED]
Subject: Write Protect


What determines if a tape is write protected?  I had discovered that tapes
were in a status of Private but not being used.  I updated the volumes and
placed them in scratch, but when TSM attempts to use one as scratch it says
the volume is write protected.  Can't explain why this is happening...

Larry Way

408-743-4242  Desk
408-655-3512  Cell
408-743-4201  Fax



Re: Incremental DBbackups

2001-04-25 Thread Prather, Wanda

No.
Each DB backup, full or incremental, starts a new tape.

-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 25, 2001 4:15 PM
To: [EMAIL PROTECTED]
Subject: Incremental DBbackups


Hello all,

A question about incremental DBbackups. If I run 10 incremental backups on
the database per day will they all be written to the same tape, provided
there is room, as long as the tape remains in the library?

Thanks for the info,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (888) 997-9614



Re: Ramifications of Delete Volume as pertains to next backup cyc le.

2001-04-27 Thread Prather, Wanda

Yes, if the file still exists on the client machine, and TSM determines that
it no longer has a backup of the file, it will back up that file again
during the next cycle.


-Original Message-
From: Alan Davenport [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 27, 2001 10:21 AM
To: [EMAIL PROTECTED]
Subject: Ramifications of Delete Volume as pertains to next backup
cycle.


I have a question regarding what happens after a "DELETE VOL X
DISCARD=YES" is issued. It is not about what happens to the data on the
tape. I realize that the data is then gone with no chance of recovery.
My question is what does *SM do during the next backup cycle. Will *SM
see the files that were previously backed up on the tape as new files on
the client and then back them up again?

  Thank you,
  Al



Re: Migrating to new server.

2001-04-27 Thread Prather, Wanda

Hi Geoff,

The one hard and fast rule here, is NEVER CREATE A SITUATION WHERE A
HARDWARE FAILURE COULD DESTROY YOUR RECOVERY LOG.   Beyond that, the answer
to everything else is pretty much "it depends".

If you search the archives of this list, you will find that IBM/Tivoli's
position has always been that it is better to let TSM do the DB and log
mirroring, instead of letting the OS do the mirroring.  They believe there
are cases where TSM can detect a logical/software error in the DB or log and
NOT propagate it into the mirror copy.  If you let the OS or the hardware do
the mirroring you are protected from hardware failure, but any type of
logical error in the DB or log will immediately be copied to the mirror and
you will have two copies of junk.  I have never heard from anyone who has
proved this case, but anyway TSM does a very good job of managing its own
mirrors.

For full recovery, you should be running the recovery log in rollforward
mode.  I believe it is absolutely necessary to have a mirror copy of the
recovery log that is physically isolated from the primary.  That way I know
I can always recover the DB from a DB restore/rollforward operation, even if
there is no DB mirror.

Whether you need the DB mirrored or not depends on your situation.
Mirroring the DB is obviously a good thing and always recommended.  But in
systems where you don't have much disk available, mirror the log instead of
the DB, and do daily DB backups.  You can always recover that way; it just
takes more time.

I believe putting the DB on RAID 5 with mirrored recovery logs provides
about as good a recovery situation as most sites require, as the probability
of losing a DB on RAID-5 is very low, assuming you trust your RAID-5 vendor.
(A mistake I think some people make is putting both the DB and log on RAID-5
and thinking they are protected, when actually there is one RAID-5
controller that could be a single point of failure...)

You have to consider what YOUR installation recovery requirements really
are.  From testing I know that if we lose a TSM DB here, it would take at
most a 4-hour outage to recover from the DB backup.  That is acceptable
here, considering the extremely low probability of losing the DB on a RAID-5
disk, so I consider not mirroring the DB to be acceptable if the logs are
mirrored.  On the other hand, if you are using TSM space management/HSM,
you can't afford ANY outage at all, and you have to mirror everything, and
probably should be running your TSM server in an HA cluster!
However, RAID-5 is slower than mirrored non-RAID disk.  In the busiest
system we have here, I started with one copy of the DB on external SCSI
RAID-5.  As the load increased, we grew and moved the DB to SSA RAID-5,
which was faster.  Then as the load increased more, we grew and had to take
the DB off RAID-5 and use more disk for mirroring to pick up more speed for
that system.  Whatever works, works.  As long as you mirror those logs with
no single point of failure and do good DB back ups, you can recover.
Everything else is just an issue of time.

I have never had a problem putting one copy of the recovery log on the OS
disk, unless there is a lot of paging activity.  However, if you have a lot
of paging, THAT is your performance problem, and you should worry about
fixing that!

Putting storage pool volumes on the same disk as your log WILL cause some
performance hit.  The question is, do you care?  The biggest hit I see is
during migration - does migration occur at a time when performance matters
to you?  If there is performance degradation at 2am and no one is there to
see it (and you still meet your time windows), does it really exist? So on
my systems where throughput is an issue, I isolate the logs on their own
disks.  On the systems where throughput isn't an issue, I use the extra
space for a storage pool volume if I need it.  (It may be a good place for
an archive pool volume, if archiving occurs infrequently.)

Hope that helps some

Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert






-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 27, 2001 10:14 AM
To: [EMAIL PROTECTED]
Subject: Migrating to new server.


I sent this yesterday and haven't seen a peep on it. In case it didn't make
it here it is again.

Hi all,

After learning something about AIX I did some investigating on the system
that IBM had set up for us. I discovered that the TSM database and log are
on the same physical disk and mirrored through TSM on a separate physical
disk. Everything I've read tells me to separate these so I wonder why it was
done that way.

The question now is what to do about it. I don't seem to be having a major
pe

Re: 500GB Backup

2001-04-27 Thread Prather, Wanda

And it depends on the type of DB.  TSM only supports a couple in LAN-free
mode.

-Original Message-
From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 27, 2001 1:18 PM
To: [EMAIL PROTECTED]
Subject: Re: 500GB Backup


You need a SAN in place to use the LAN free backup.

-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 27, 2001 12:01 PM
To: [EMAIL PROTECTED]
Subject: Re: 500GB Backup
Importance: High


Hi
If I am correct IBM offers LAN free backup which is faster .But need to
purchase additional s/w.
pinni



-Original Message-
From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 27, 2001 11:48 AM
To: [EMAIL PROTECTED]
Subject: 500GB Backup


I need to back up a 500GB database, and my current setup is: my TSM server is
on the same IP subnet as the database server.  TSM is connected to 500GB of
SSA storage with 5 RAID5 sets, with two 50GB TSM volumes on each raid set.
My throughput to the disk only seems to be 9000 kB/s, which doesn't seem
very high to me.  My backup time is about 6 hours.  Does anyone have any
better scenarios for backing up this amount of data in the smallest amount of
time?  I am going to be backing up 1TB per day and I need to get my backup
times as low as possible.  Does anyone have any suggestions on how to do
that?

Thanks
***EMAIL  DISCLAIMER**
This e-mail and any files transmitted with it may be confidential and are
intended solely for the use of the individual or entity to whom they are
addressed.   If you are not the intended recipient or the individual
responsible for delivering the e-mail to the intended recipient, any
disclosure, copying, distribution or any action taken or omitted to be taken
in reliance on it, is strictly prohibited.  If you have received this e-mail
in error, please delete it and notify the sender or contact Health
Information  Management (312) 996-3941.



User Group for Baltimore, Washington DC, Northern VA meets May 17

2001-05-01 Thread Prather, Wanda

TSMUG (formerly DCAF) is the Tivoli Storage Manager User Group for
Baltimore, Washington DC, and Northern Virginia.
Please join us at our next meeting on Thursday, May 17, 2001, at the
beautiful T. Rowe Price campus in Owings Mills, Maryland.
Come and hear our speaker, Chris Dedham of EC Solutions, talk about his
experience with TSM 4.1. Participate in our round-table TSM discussions and
meet other TSM users in the area.
Meeting details are below. Attendance is free and everyone is welcome!
However, YOU MUST REGISTER in advance in order to attend. To register, just
send an email (subject REGISTRATION) to [EMAIL PROTECTED]
The cutoff date for registration is 5pm May 15. Directions will be mailed to
everyone who registers.
PLEASE FORWARD this message to anyone you know who is interested in Tivoli
Storage Manager.
+++
When & Where:
May 17, 2001
08:30am - 12:30pm
T. Rowe Price
Owings Mills, Maryland
+++
Registration:
Registration is free and everyone is welcome.
To register, all that is necessary is to send email (subject:  REGISTRATION)
to [EMAIL PROTECTED]
+++
Program:

Experiences with TSM 4.1 - Chris Dedham, ECSolutions
Win2K Backup Client Issues and Other News you can Use -
Wanda Prather, Jacob & Sundstrom
TSM Round Table: What's up at YOUR TSM site? -
Share your TSM experiences, good and bad!
Bring your questions for discussion with other TSM users and Tivoli reps.

+++
Contacts:
For further information about TSMUG meetings, please visit the TSMUG web
site: 
If you have questions please send email to [EMAIL PROTECTED] or leave voice
mail for Wanda Prather at 410-539-1135.



Re: restore errors

2001-05-02 Thread Prather, Wanda

Are you sure you are trying to restore the file with the same level of
client code that backed it up?

-Original Message-
From: Steven P Roder [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 02, 2001 5:34 PM
To: [EMAIL PROTECTED]
Subject: restore errors


Anyone know what causes this?  and if anything can be done to restore the
client data?


 05/02/01   13:31:59 ANS4032E Error processing
 '/temp/joeogden/mainmatch.f': file is not compressed.
 05/02/01   13:31:59 ANS4032E Error processing
 '/temp/joeogden/mainmatch.f': file is not compressed.

Thanks,

Steve Roder, University at Buffalo
HOD Service Coordinator
VM Systems Programmer
UNIX Systems Administrator (Solaris and AIX)
TSM/ADSM Administrator
([EMAIL PROTECTED] | (716)645-3564 |
http://ubvm.cc.buffalo.edu/~tkssteve)



Re: restore errors

2001-05-03 Thread Prather, Wanda

Yep, that will cause the error you saw.
You can restore old data with a newer client.
But you usually can't restore data backed up with a higher level client
using lower level code.
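The rule of thumb can be written as a simple version comparison. This is just a sketch of the heuristic stated above, not an authoritative compatibility check (real client/server compatibility has exceptions documented per release):

```python
def restore_likely_ok(backup_client, restore_client):
    """Rule of thumb from this thread: restoring with a client at the same
    or a higher level than the one that made the backup is generally fine;
    the reverse usually is not."""
    return restore_client >= backup_client  # lexicographic tuple compare

# Version tuples: (major, minor, fix, ...), e.g. 3.1.0.6 vs 3.7.2.
print(restore_likely_ok((3, 1, 0, 6), (3, 7, 2)))  # newer client, old data: True
print(restore_likely_ok((3, 7, 2), (3, 1, 0, 6)))  # old client, newer data: False
```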

-Original Message-
From: Steven P Roder [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 03, 2001 8:46 AM
To: [EMAIL PROTECTED]
Subject: Re: restore errors


> Hi Steve!
> What's the client code level? It's probably rather outdated! Upgrade to
the
> most current level, this will solve your problem.
> Kindest regards,
> Eric van Loon
> KLM Royal Dutch Airlines

Actually, I think the issue is that the client is running 3.7.2, but the
person that was doing the restore was using the same OS and platform, but
hitting the server for the 3.7.2 client with 3.1.06

yes, very old

>
>
> -Original Message-
> From: Steven P Roder [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 02, 2001 23:34
> To: [EMAIL PROTECTED]
> Subject: restore errors
>
>
> Anyone know what causes this?  and if anything can be done to restore the
> client data?
>
>
>  05/02/01   13:31:59 ANS4032E Error processing
>  '/temp/joeogden/mainmatch.f': file is not compressed.
>  05/02/01   13:31:59 ANS4032E Error processing
>  '/temp/joeogden/mainmatch.f': file is not compressed.
>
> Thanks,
>
> Steve Roder, University at Buffalo
> HOD Service Coordinator
> VM Systems Programmer
> UNIX Systems Administrator (Solaris and AIX)
> TSM/ADSM Administrator
> ([EMAIL PROTECTED] | (716)645-3564 |
> http://ubvm.cc.buffalo.edu/~tkssteve)
>
>
> **
> This e-mail and any attachment may contain confidential and privileged
> material intended for the addressee only. If you are not the addressee,
you
> are notified that no part of the e-mail or any attachment may be
disclosed,
> copied or distributed, and that any other action related to this e-mail or
> attachment is strictly prohibited, and may be unlawful. If you have
received
> this e-mail by error, please notify the sender immediately by return
e-mail,
> and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), its
> subsidiaries and/or its employees shall not be liable for the incorrect or
> incomplete transmission of this e-mail or any attachments, nor responsible
> for any delay in receipt.
> **
>
>

Steve Roder, University at Buffalo
HOD Service Coordinator
VM Systems Programmer
UNIX Systems Administrator (Solaris and AIX)
TSM/ADSM Administrator
([EMAIL PROTECTED] | (716)645-3564 |
http://ubvm.cc.buffalo.edu/~tkssteve)



TSM and Win2K file encryption

2001-05-08 Thread Prather, Wanda

Anybody successfully backed up files that were encrypted by the Win2K file
encryption mechanism?
We are getting an access violation from the scheduler; the SYSTEM account
doesn't appear to have access to encrypted files.
But I'm clueless as to what privilege is required.



RE: How are you guys doing bare metal restores on TSM 4.x clients???!

2001-05-09 Thread Prather, Wanda

I posted our procedures to the scripts depot at www.coderelief.com.

Read carefully, though - it matters whether your server level is 3.7.2 or
above.



-Original Message-
From: Keith Kwiatek [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 09, 2001 5:16 PM
To: [EMAIL PROTECTED]
Subject: How are you guys doing bare metal restores on TSM 4.x
clients???!


How are you guys doing bare metal restores on TSM 4.x clients???!

Any detailed procedures?

Keith



Re: AIX Help

2001-05-10 Thread Prather, Wanda

I believe that starting at AIX 4.2.something, nohup actually changed the
rules; you also have to redirect stdin with < /dev/null.  The example below
is in the README for the AIX client, and it works for me on AIX 4.3.3:

nohup dsmc sched >/dev/null 2>&1 < /dev/null &


-Original Message-
From: Dearman, Richard
To: [EMAIL PROTECTED]
Sent: 5/9/01 6:13 PM
Subject: AIX Help

I am trying to run the "nohup dsmc sched 2> /dev/null &" command on my AIX
4.3.3 machine, and it works, but every time I exit it says "There are jobs
running".  So I exit again, and when I check, the dsmc sched process is not
running.  I thought the nohup command was supposed to let a process run even
after you logout.  What am I doing wrong?

Thanks



FW: reclaim offsite stgpool volumes

2001-05-10 Thread Prather, Wanda

Geoff, you may have hit a bug that comes up occasionally on this list.
I sometimes (very infrequently) have tapes that show 100% reclaimable,
no files left on them.  But TSM will not send them back to scratch, or
reclaim them, or even let you do a DELETE with DISCARD=YES.

If that is the case, bring the tape back on site anyway and run AUDIT.
That usually fixes it.


-Original Message-
From: Gill, Geoffrey L.
To: [EMAIL PROTECTED]
Sent: 5/10/01 10:38 AM
Subject: Re: reclaim offsite stgpool volumes

>-Original Message-
>From: Bill Colwell [mailto:[EMAIL PROTECTED]]
>Sent: Thursday, May 10, 2001 7:02 AM
>To: [EMAIL PROTECTED]
>Subject: Re: reclaim offsite stgpool volumes
>
>
>Geoff,
>
>You ask if you are letting it run long enough.  Do you actually cancel
>the reclaim process?  Just raising the reclaim threshhold at 7
>pm should
>not stop the process.

Bill,

I am not cancelling the process. A script runs to update the threshold
at 7
PM.

See my other post on last nights reclamation.

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (888) 997-9614



Re: TSM 4.1 bug???

2001-05-15 Thread Prather, Wanda

It is not uncommon for the 3.7.2 client running on Win2K to misreport
events.

Most people have reported their problem resolved by installing the 4.1.2.12
client code.

Go to www.adsm.org, search the ADSM 2001 bucket for this:  "which client
version is best".
On Feb. 6 I posted a list of client bugs.

-Original Message-
From: Joseph Dawes [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 14, 2001 10:41 AM
To: [EMAIL PROTECTED]
Subject: TSM 4.1 bug???


Does anyone have a problem with TSM reporting events inaccurately, i.e.
saying it failed when it completed, and the reverse?




please advise



Joe



Re: Backing up Windows Terminal Server

2001-05-15 Thread Prather, Wanda

We have had several Windows Terminal servers at various times.
Treat them just as any other NT machine, as far as TSM is concerned.
No special handling required.



-Original Message-
From: Gibb, Malcolm [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 15, 2001 5:57 AM
To: [EMAIL PROTECTED]
Subject: Backing up Windows Terminal Server


This is more a research question than anything else.

I'm looking to install the Tivoli Backup Archive Client software on Windows
Terminal Server 4.0 SP6 and just wondered if there is anything I should
know.  As in "DON'T, it doesn't work", I don't think I'll get an answer back
like that.  I'm just trying to be proactive and find out about potential
problems.

Cheers All

Malcolm Gibb



Win2k Scheduler & SYSTEM Account?

2001-05-15 Thread Prather, Wanda

We normally install the TSM Scheduler on all our machines using the default
SYSTEM account.

For a Win2K Pro machine, is there any disadvantage/fallout to using the
person's regular network logon account, instead of the SYSTEM account?
(assuming it is a member of the local ADMINISTRATORs group).

It looks like this is what we will HAVE to do to back up files encrypted
with the Windows 2000 File Encryption.

If you use a network logon account to run the scheduler, what happens if the
password expires?  It is toast?



Re: Win2k Scheduler & SYSTEM Account?

2001-05-15 Thread Prather, Wanda

Thanks Tim.
That's weird; I have a user whose scheduler gets ACCESS DENIED trying to
back up encrypted files, but they can back up OK using the GUI.
I don't get it.  But we're using 3.7.2.
I'll try it again with 4.1.2.12.

Many thanks for the info, that helps a lot!

-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 15, 2001 2:20 PM
To: [EMAIL PROTECTED]
Subject: Re: Win2k Scheduler & SYSTEM Account?


Hi Wanda:

You should be able to use the System Account to backup encrypted files.  I
just did a test here:
Create encrypted file with usera
Try to READ with userb (access denied)
Backup and restore with userb (userb has rights to the file) works fine
File is still encrypted after restore (userb cannot read, usera can)
Backed up another encrypted file via the scheduler
Restored file
File is still encrypted

I am running W2K SP1, TSM 4.1.2.12.

Basically, an encrypted file prevents another user from reading the file,
but that user can still back up, copy, or delete the file, etc., as long as
they have rights.

Tim Rushforth
City of Winnipeg

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 15, 2001 12:39 PM
To: [EMAIL PROTECTED]
Subject: Win2k Scheduler & SYSTEM Account?


We normally install the TSM Scheduler on all our machines using the default
SYSTEM account.

For a Win2K Pro machine, is there any disadvantage/fallout to using the
person's regular network logon account, instead of the SYSTEM account?
(assuming it is a member of the local ADMINISTRATORs group).

It looks like this is what we will HAVE to do to back up files encrypted
with the Windows 2000 File Encryption.

If you use a network logon account to run the scheduler, what happens if the
password expires?  Is it toast?



Re: Windows 2000 Bare Metal Recovery and System Object restore pr oblem

2001-05-16 Thread Prather, Wanda

I haven't tried it myself, but I don't think so.  Tivoli states that the
Windows MACHINE ID must be the same when restoring "system objects", not
just the TSM nodename.

-Original Message-
From: Hrouda Tomáš [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 16, 2001 2:15 AM
To: [EMAIL PROTECTED]
Subject: Re: Windows 2000 Bare Metal Recovery and System Object restore pr
oblem


Wouldn't it help to use the same TSM client nodename as the original server
(or to use virtualnodename) just for the moment of restoring the system
objects, and then change the client nodename back after the restore?

Tom

> -Original Message-
> From:   Jeff Connor [SMTP:[EMAIL PROTECTED]]
> Sent:   15 May 2001 22:44
> To: [EMAIL PROTECTED]
> Subject:  Windows 2000 Bare Metal Recovery and System Object restore
> problem
> 
> Our NT admins requested that we build one server (HCB1) from another
> server's backups (SYR1).  The servers are Windows 2000 Advanced Server running
> service pack one with identical hardware.   Our TSM server is V4.1.2 on
> AIX
> 4.3 and the Windows Client is 4.1.2.14.   We used the TSM B/A CLI with the
> virtualnodename option to succesfully restore the SYR1 servers C: drive to
> the HCB1 server.  We then attempted to restore the system objects but
> could
> not access them from the CLI or the GUI.
> 
> We got into the TSM GUI on the original server, SYR1, and were able to see
> the subfolders under system objects.  Under HCB1 using virtualnodename SYR1
> we cannot see the system object subfolders.  We checked the gray box and
> attempted the restore but received zero objects inspected or backed up.
> 
> 
> We contacted TSM support who informed me that Bare Metal Restore and
> associated Redbooks are not supported.  However, after the level one
> person
> spoke to level two, they informed me that you could recover a Windows 2000
> server from the ground up if the target system for the restore is the same
> local machine name as the original server.  This is due to the fact that
> the Windows 2000 System Objects are stored using the local machine name in
> TSM and can't be restored to a new location like a drive file space can.
> 
> Has anyone else attempted what we tried or does anyone have any comments?
> We have been able to restore a server and its registry to another server
> with a new name under Windows NT 4.0, using a temporary copy of NT
> installed to a folder called Wintemp, then coming up under the original
> name after restore/re-boot using the normal WINNT folder.  It looks like
> this is not possible with Windows 2000 and TSM due to the way TSM stores
> the system object backups.
> 
> Comments?
> 
> Jeff Connor
> Niagara Mohawk Power Corp.
> ---
> Incoming message contains no viruses.
> Checked by AVG anti-virus system (http://www.grisoft.cz).
> Version: 6.0.225 / Virus database: 107 - release date: 22.12.2000
> 



Re: possible update Path from ADSM 3.1.2 to ????

2001-05-16 Thread Prather, Wanda

To upgrade to any 3.7 or 4.1 server from 3.1, you need the CD, which means
you need $.

I don't believe there is any 3.1 server that is still supported by Tivoli.


-Original Message-
From: Block, Clemens [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 16, 2001 1:37 PM
To: [EMAIL PROTECTED]
Subject: possible update Path from ADSM 3.1.2 to ????


Hi Folk!

We are still running ADSM server version 3.1.2.16.
As you know, this is out of support now, so there is a necessity to migrate
the system to a higher level.
If we want to upgrade to the current version 4.1.3, we would have to pay a
lot for the license updates. Unfortunately there is no budget to do this at
the moment.

Can anybody tell me up to which server release I can upgrade without the
need for new license files? I tried registering licenses for an eval copy of
3.7.4.5 by using the old adsm-sl package on a test machine, but it didn't
work.

We will probably install the new server on a new machine, so there is
no need to be careful during the installation process.

Maybe I did something wrong or it is impossible to use this release.
I'm grateful for any tip on how to do this or where I can find detailed
information on the IBM sites.

Thanks in advance
Clemens



Re: Error Log

2001-05-16 Thread Prather, Wanda

There is a messages manual.
Go to the TSM web site, look for the link on the right that says "product
manuals".

http://www.tivoli.com/products/index/storage_mgr/

However, the quickest way to find the message text for a client is to ask
the client.
*   Start the COMMAND LINE version of the client (DSMC).
*   Follow the onscreen instructions EXACTLY (type D to scroll down).
*   All the messages are there, and they will be the correct version for
what you have installed.


-Original Message-
From: Tim Delaney [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 16, 2001 4:22 PM
To: [EMAIL PROTECTED]
Subject: Error Log


Can anyone help me with this TSM NT client error log?  I have not been able
to locate descriptions for these errors in the Admin guide and the Client
guide.

The error is:
ANS1228E

Is there a central location for the error descriptions?

#2  How can I configure my backup.cmd file to receive failed backup
codes?  I need these codes to update my ESP scheduler.

backup.cmd

@echo off
set >output.txt
c:
cd\program files\tivoli\tsm\baclient
dsmc incremental
exit
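One hedged sketch of surfacing the dsmc return code to an external scheduler (the status-file name is made up for illustration; the exact nonzero codes vary by client level, so treat this as an assumption to verify against your ESP setup):

```bat
@echo off
c:
cd \program files\tivoli\tsm\baclient
dsmc incremental
rem dsmc sets a nonzero return code when the backup fails;
rem capture it before any other command overwrites ERRORLEVEL
set RC=%ERRORLEVEL%
if not %RC%==0 echo Backup failed with code %RC% >> backup_status.txt
rem pass the code back so the scheduler sees the failure
exit %RC%
```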


Thanks



5 GB limit on Recovery Log

2001-05-16 Thread Prather, Wanda

I've lost track - has the 5 GB limit on the size of the recovery log been
lifted in the V4.1 server?

The 5 GB limit is killing me here - it would be a good reason for me to
upgrade



Re: Data on destroyed tapes backed up automatically?

2001-05-16 Thread Prather, Wanda

No.

Just marking the tapes destroyed will NOT change the data base entries for
the files on the tape.
(You can even mark the tape back to READWRITE, if it is still available.
All that DESTROYED does is make it not-mountable)

Case 1:
If you have another copy of the data in a COPY pool, the appropriate thing
to do is a RESTORE VOLUME.  TSM will copy the data from your copy pool back
to the primary pool, then release the damaged primary volume.  (If the copy
pool volume is offsite, run RESTORE VOLUME  PREVIEW=YES, you can see
which tapes you need to bring back.)

Case 2:
If you don't have a copy pool, and you think any of the tapes might be
partially readable, run AUDIT VOLUME and specify FIX=YES.  It will delete
the DB entries for any files it can't read. (It can also put a LOT of stuff
in the activity log.)

Case 3:
If you know for certain that the tapes are all trash (from a TSM point of
view), you can just DELETE the volume and check the box that says
DISCARDDATA=YES.  That causes TSM to purge all the file version entries from
the data base, and release the primary tape.

In Case 1, the data doesn't need to be backed up again and nothing is lost.
In Case 2 & 3, the next time a client backup runs, it will re-back up any
files that still exist on the client machine.  (When a backup starts, the
first thing the client and server do is exchange information about what is
already backed up.)  Any extra versions, or backups of deleted files that
were on the tape, are toast.
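The three cases map to admin commands roughly like this (VOL001 is a placeholder volume name; check the Administrator's Reference for your server level):

```
restore volume VOL001 preview=yes     /* Case 1: list the copy pool tapes needed      */
restore volume VOL001                 /* Case 1: rebuild the files from the copy pool */
audit volume VOL001 fix=yes           /* Case 2: delete DB entries for unreadable files */
delete volume VOL001 discarddata=yes  /* Case 3: purge all entries, release the tape  */
```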

Hope that helps.

Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert







-Original Message-
From: Kevin Kinder [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 16, 2001 3:52 PM
To: [EMAIL PROTECTED]
Subject: Data on destroyed tapes backed up automatically?


We are using TSM 4.1.0.

We have several tapes in our sequential access storage pool that have been
accidentally overwritten with data from other applications.  If we mark
those tapes as "Destroyed" will the next incremental job backup the files
that were on those tapes?

Our guess is that it would, because the database would know what data was on
those tapes.

Thanks for any help anyone can provide.

-
Kevin Kinder



Re: 5 GB limit on Recovery Log

2001-05-16 Thread Prather, Wanda

I was afraid that was the answer!
I wish they would bump it up.
I appreciate your reply, thanks!

-Original Message-
From: Kevin Kinder [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 16, 2001 5:08 PM
To: [EMAIL PROTECTED]
Subject: Re: 5 GB limit on Recovery Log


I am in a TSM class this week, and our instructor says the limit in 4.1 is
5.4 GB.

<<< "Prather, Wanda" <[EMAIL PROTECTED]>  5/16  4:40p >>>
I've lost track - has the 5 GB limit on the size of the recovery log been
lifted in the V4.1 server?

The 5 GB limit is killing me here - it would be a good reason for me to
upgrade



Re: 5 GB limit on Recovery Log

2001-05-17 Thread Prather, Wanda

Daily.  

-Original Message-
From: Hrouda Tomáš [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 17, 2001 4:28 AM
To: [EMAIL PROTECTED]
Subject: Re: 5 GB limit on Recovery Log


How often do you provide full DB backup? Maybe increase of full DB backup
frequency can lower your recovery log needs.

Tom

> -----Original Message-----
> From:   Prather, Wanda [SMTP:[EMAIL PROTECTED]]
> Sent:   16 May 2001 22:38
> To: [EMAIL PROTECTED]
> Subject:  5 GB limit on Recovery Log
> 
> I've lost track - has the 5 GB limit on the size of the recovery log been
> lifted in the V4.1 server?
> 
> The 5 GB limit is killing me here - it would be a good reason for me to
> upgrade



Re: 5 GB limit on Recovery Log

2001-05-17 Thread Prather, Wanda

That's great, at least maybe there is light at the end of the tunnel
somewhere -

-Original Message-
From: Lisa Cabanas [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 17, 2001 9:19 AM
To: [EMAIL PROTECTED]
Subject: Re: 5 GB limit on Recovery Log


Well, I heard from a Tivoli technical sales rep yesterday that development
is working on getting around that limitation.  No time frame, though.

-lisa




"Prather, Wanda" <[EMAIL PROTECTED]>
05/16/2001 04:22 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc: (bcc: Lisa Cabanas/SC/MODOT)
Subject:Re: 5 GB limit on Recovery Log



I was afraid that was the answer!
I wish they would bump it up.
I appreciate your reply, thanks!

-Original Message-
From: Kevin Kinder [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 16, 2001 5:08 PM
To: [EMAIL PROTECTED]
Subject: Re: 5 GB limit on Recovery Log


I am in a TSM class this week, and our instructor says the limit in 4.1 is
5.4 GB.

<<< "Prather, Wanda" <[EMAIL PROTECTED]>  5/16  4:40p >>>
I've lost track - has the 5 GB limit on the size of the recovery log been
lifted in the V4.1 server?

The 5 GB limit is killing me here - it would be a good reason for me to
upgrade



Re: Schedlog

2001-05-17 Thread Prather, Wanda

For windows, there is an option called QUIET that you can put in dsm.opt.  I
don't use it, so I'm not sure how much difference it makes.  Description
below.

If you try it, be sure to STOP and restart the scheduler so it will pick up
the change to dsm.opt.

===
QUIET

The quiet option prevents messages from displaying on your screen during
processing.
For example, when you run the incremental, selective, or restore backupset
commands, information displays about each file that is backed up. Use the
quiet option
if you do not want TSM to display this information.
When you use the quiet option, some error information still displays on your
screen,
and messages are written to log files. If you do not specify quiet, the
default option,
verbose is used.
This option also affects the amount of information reported in the NT
eventlog and
schedule log.
Note: Quiet can also be defined on the server and overrides the client
setting.
===
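For the schedlog-size part of the question, a dsm.opt sketch combining QUIET with a short schedule-log retention (SCHEDLOGRETENTION is a standard client option; the 2-day/Discard values mirror the original question and are illustrative only):

```
QUIET
SCHEDLOGRETENTION 2 D
```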

-Original Message-
From: Rajesh Oak [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 17, 2001 12:11 PM
To: [EMAIL PROTECTED]
Subject: Schedlog


What do I do so that TSM does not record all the details of files that are
backed up? Is there a way to record just the major details like Backup
Started, Backup Completed successfully etc ?
The reason is that the schedlog becomes too big even if I keep the records just
for 2 days.
This is for MAC and Windows 95/98 Client Schedlog file.

Rajesh Oak


Get 250 color business cards for FREE!
http://businesscards.lycos.com/vp/fastpath/



Re: wrong stgpool

2001-05-18 Thread Prather, Wanda

Or create a third management class, with an even longer retention than
notesclass.

-Original Message-
From: Short, Anne [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 18, 2001 9:31 AM
To: [EMAIL PROTECTED]
Subject: Re: wrong stgpool


Sounds like the new management class, notesclass, has a longer retention
period than the original, ntclass.  Directories will automatically be bound
to whatever management class has the longest retention period so that you
don't accidentally have data hanging around longer than directories.  If you
want to keep the directories bound to the shorter management class, you will
now need to use the DIRMC option to bind them that way.
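A one-line dsm.opt sketch of that workaround (ntclass is the class name from this thread; substitute whichever shorter-retention class you want directories bound to):

```
DIRMC ntclass
```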


Anne Short
Lockheed Martin Enterprise Information Systems
Gaithersburg, Maryland
301-240-6184
CODA/I Storage Management

-Original Message-
From: David DeCuir [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 18, 2001 9:12 AM
To: [EMAIL PROTECTED]
Subject: wrong stgpool

I have 20 NT servers with a dsm.opt that has an include * ntclass line.
These run a nightly backup that has worked fine for months, backing
up to the backup copygroup of mgmtclass ntclass with a dest. of
ntdiskpool. This is all normal.

Here is my problem.  I created a new mgmtclass called notesclass,
which has a backup copygroup with a dest. of notespool
(this is a tape pool). Now, keep in mind the 20 NT servers have
include * ntclass as the only mgmtclass. I'm seeing that q content of
some of the volumes in notespool show data paths from most of the
20 NT servers.

Notesclass is not a default mgmtclass and the notespool volumes
were acquired empty from scratch. How could this data get on the wrong
stgpool volumes?

Also, the NT servers data paths on the volumes in notespool point to
directories only, no files.
I'm also seeing this with my UNIX mgmtclass (having data paths
on notespool).

I am new to this and have never run any audits or unloads or
anything on the database or volumes. Could that help?
Still too new and dumb to know why this is happening. Thanks.



Re: HELP! Logged off, BUT ntuser.dat still "in use by another process" ???!

2001-05-18 Thread Prather, Wanda

Services.
Each service runs under a logon id.
The default for most NT services is to use the SYSTEM id.

But sometimes services are set up with another id, especially if the service
needs network or domain authority to access files.  Go into Services, and
look at the PROPERTIES of each.  You will see the logon id.

If any of the services runs under the user's id, then the user IS logged on,
and NTUSER.dat will still be in use!

BTW, while the ntuser.dat message is annoying, it isn't really a problem.
The logical contents of the NTUSER.DAT are backed up anyway as part of the
registry backup.  You can restore the user customization if you have EITHER
a backup copy of NTUSER.dat, or a good copy of the registry.  And you never
need to restore NTUSER.log.



-Original Message-
From: Keith Kwiatek [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 18, 2001 7:19 PM
To: [EMAIL PROTECTED]
Subject: HELP! Logged off, BUT ntuser.dat still "in use by another
process" ???!


Hello,

With respect to the ntuser.dat and ntuser.dat.log files, we have noticed
that our clients are getting: "the object is in use by another process"
entries in their log files. BUT they have logged off, and there are no other
users logged into the machine

Any ideas what else could be locking ntuser.dat and ntuser.dat.log files?

thanks!
Keith



Re: HELP! Logged off, BUT ntuser.dat still "in use by another process" ???!

2001-05-18 Thread Prather, Wanda
t files were missing from the profile directories.
When we copied the ntuser.dat from the original box, all worked fine.

Tim Rushforth
City of Winnipeg

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 18, 2001 3:29 PM
To: [EMAIL PROTECTED]
Subject: Re: HELP! Logged off, BUT ntuser.dat still "in use by another
process" ???!


Services.
Each service runs under a logon id.
The default for most NT services is to use the SYSTEM id.

But sometimes services are set up with another id, especially if the service
needs network or domain authority to access files.  Go into Services, and
look at the PROPERTIES of each.  You will see the logon id.

If any of the services runs under the user's id, then the user IS logged on,
and NTUSER.dat will still be in use!

BTW, while the ntuser.dat message is annoying, it isn't really a problem.
The logical contents of the NTUSER.DAT are backed up anyway as part of the
registry backup.  You can restore the user customization if you have EITHER
a backup copy of NTUSER.dat, or a good copy of the registry.  And you never
need to restore NTUSER.log.



-Original Message-
From: Keith Kwiatek [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 18, 2001 7:19 PM
To: [EMAIL PROTECTED]
Subject: HELP! Logged off, BUT ntuser.dat still "in use by another
process" ???!


Hello,

With respect to the ntuser.dat and ntuser.dat.log files, we have noticed
that our clients are getting: "the object is in use by another process"
entries in their log files. BUT they have logged off, and there are no other
users logged into the machine

Any ideas what else could be locking ntuser.dat and ntuser.dat.log files?

thanks!
Keith



Re: Audit Volume not Helping

2001-05-24 Thread Prather, Wanda

You must have a copy pool?

If you have a copy pool, when TSM runs AUDIT, it just marks the files as
damaged and doesn't delete them from the data base (that would invalidate
the copies in the copy pool as well.)

You need to repair the problem by running the RESTORE STGPOOL poolname.

The first time, add PREVIEW=YES.  Check the activity log and it will give
you a list of the tapes required, in case you need to bring them back from
offsite storage.
The second time leave off the PREVIEW=YES.  TSM will mount the copy pool
tape(s), and recreate the damaged files from the copies on the copy pool
tape.  That will fix it.
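As a sketch (POOLNAME stands in for the primary pool name):

```
restore stgpool POOLNAME preview=yes   /* lists the copy pool tapes required */
restore stgpool POOLNAME               /* recreates the damaged files        */
```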

-Original Message-
From: Ghanekar, Prasanna [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 24, 2001 11:16 AM
To: [EMAIL PROTECTED]
Subject: Audit Volume not Helping


Hi Everyone,

I'm running ADSM V3.1.2.40 on Windows NT server with SP5.
Every night when the Migration process begins it gives me errors about two
nodes pointing to the damaged files on the disk volumes such as "Data1.dsm".

Message:
ANR1168W Migration skipping damaged file on volume D:\DATA1.DSM: Node
PEASGAI1_CLIENT, Type Backup, File space \\peasgai1\c$,
File name \ADSM.SYS\REGISTRY\PEASGAI1\MACHINE\SAM.

When I run Audit on this volume with the FIX=YES repair option, it goes
through the process, finds damaged files, marks those files as damaged, but
doesn't delete them. I get the following message at the end:

ANR2314I Audit volume process ended for volume D:\DATA1.DSM; 21127 files
inspected, 0 damaged files deleted, 569 damaged files marked as damaged.

How do I avoid getting the same messages during the migration process?
Thanks in advance,
Prasanna


Prasanna Ghanekar
EDS Pontiac East
2100 S Opdyke Rd
MI 48341
Tel: (248) 972-4547



Re: Keeping COPYPOOL intact

2001-05-24 Thread Prather, Wanda

You can't destroy dltpool without destroying any copies of the data that are
in your copypool.
Essentially, if you have a backup of a file in a primary pool, and you cause
the db entry for that file to be deleted, it also deletes the record of that
file that is in the copy pool.

However, you DON'T NEED to destroy your dltpool to colocate it.
Just turn on colocation.
All output tapes from then on, in that storage pool, will be colocated.
You can let it happen gradually, via reclaim, or force the turnover quickly
by using MOVE DATA on the existing volumes.



-Original Message-
From: Berning, Tom [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 21, 2001 1:17 PM
To: [EMAIL PROTECTED]
Subject: Keeping COPYPOOL intact


I am in the process of converting from ADSM 3.1.2.50 to TSM 4.1

I have four storage pools in my server (Diskpool, dltpool, copypool, and
tapepool).

I would like to destroy the dltpool and recreate it so that I can do
co-location on these tapes.

What I do not want to do is lose what is currently on the tapepool or
copypool.

Is this possible?

Thomas R. Berning
UNIX Systems Administrator
8485 Broadwell Road
Cincinnati, OH 45244
Phone:   513-388-2857
Fax:   513-388-
Email:[EMAIL PROTECTED]



Re: Include/Exclude

2001-05-24 Thread Prather, Wanda

I don't know why you are having the overall problem, but there is also a
problem in your specification.

Periods are treated just like any other character, not a wild card.
So when you specify:
INCLUDE DATA:\USERS\...\*.*

you are telling TSM to match filenames that have at least one . in the name.

If you want to include everything, you need:
INCLUDE DATA:\USERS\...\*
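Putting that together with the exclude-everything line, the dsm.opt would read as follows, with the INCLUDE as the bottom statement since the list is documented as being processed bottom-up:

```
EXCLUDE *:\...\*
INCLUDE DATA:\USERS\...\*
```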

-Original Message-
From: Mahesh Tailor [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 22, 2001 10:30 AM
To: [EMAIL PROTECTED]
Subject: Include/Exclude


I had posted this a few days ago, but I am not sure if anyone got it (my
Internet email gateway has not been behaving) . . .


Hello, all!

I have a clarification question about include/exclude statements.  One of
our NetWare gurus pointed this out to me.  Here's what I have (NetWare 5)
in my dsm.opt file:

EXCLUDE *:\...\*.*
INCLUDE DATA:\USERS\...\*.*

When I perform an incremental backup of the node, the system backs up
everything on the system.  If on the other hand I do the following:

INCLUDE DATA:\USERS\...\*.*
EXCLUDE *:\...\*.*

The system does not backup anything except the files in the
data:\users\...\*.* directories.

It seems the system is doing just the opposite of what it's supposed to do,
since these statements are supposed to be processed from the bottom
statement up. [See excerpt from Tivoli Storage Manager for NetWare Using
the Backup-Archive Client manual.]

Excerpt:
These options are checked from the bottom of the include-exclude list up,
until a match is found. If a match is found, the processing stops and
checks whether the option is include or exclude. If the option is include,
the file is backed up. If the option is exclude, the file is not backed
up.

Has something changed?  Can someone explain this?

TIA

Mahesh



No Subject

2001-05-24 Thread Prather, Wanda

I've had about every combination you can think of -
two single disks, one copy on a RAID array and one copy on an internal
drive, etc.
It's just a matter of speed - whatever you can get from your hardware,
without (and this is critical) introducing a single point of failure.
My best performance has been on two SSA disks on different controllers.



-Original Message-
From: Jeff Bach [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 22, 2001 2:26 PM
To: [EMAIL PROTECTED]
Subject:


Has anyone tested to see what the best setup for the ADSM recovery log is?
Most of the time I have seen it on a single disk drive and mirrored to
another.  Has anyone tested other setups?

Jeff Bach
Home Office Open Systems Engineering
Wal-Mart Stores, Inc.

WAL-MART CONFIDENTIAL



**
This email and any files transmitted with it are confidential
and intended solely for the individual or entity to
whom they are addressed.  If you have received this email
in error destroy it immediately.
**



Re: RMAN instead of AIX Scripts ???.

2001-05-24 Thread Prather, Wanda

RMAN is the Oracle backup utility.  You can set it up and use it without any
interaction with TSM.  Think of it is a backup product that can back up to
local tape or disk.

When you add the TSM TDP, it just sort of makes TSM look like a funky tape
driver to RMAN.  You still run your backups and restores with RMAN; all your
functionality comes from there.  But instead of going to a local tape drive,
the data goes over the network to TSM.

What you need is (1) a license for the Oracle TDP, and (2) You also have to
install at least the API piece of the normal backup client, as both are
used.  You also need a separate TSM management class; they tell you in the
TDP install guide what the requirements are.  The data can go into your
normal disk storage pool, or direct to tape, just like any other TSM client
backup.

If you have never worked with an API-based TSM client, it is a bit different
than the backup client.  The api sends data to the TSM server, and TSM
treats it as a package and doesn't expect to know the contents.  TSM doesn't
do the normal versioning on Oracle backups; instead there is a utility you
run on the RMAN end when you want to expire old backups.  RMAN deletes the
versions from its catalog, then calls the TSM API to tell TSM what pieces to
delete from server storage.
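A minimal RMAN sketch of that division of labor, assuming the TDP's media-management library is already configured (the channel name is arbitrary; the exact parms/environment settings for the channel come from the TDP install guide for your level):

```
RMAN> run {
  allocate channel t1 type 'sbt_tape';
  backup database;
  release channel t1;
}
```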

The TDP install guide is pretty complete; the install is a bit messy, and
you have to follow the instructions EXACTLY.  But overall, it works very
well.

Hope that helps.


-Original Message-
From: Roy Lake [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 21, 2001 6:56 AM
To: [EMAIL PROTECTED]
Subject: RMAN instead of AIX Scripts ???.


Hi Chaps,

We will soon be moving away from using AIX scripts to backup Oracle
databases, and will be using RMAN.

Could anyone point me in the right direction as to where I can find out more
about how RMAN interacts with TSM, what is needed, etc and how it works?.

Kind Regards,

Roy Lake
RS/6000, SP & Tivoli Storage Manager Administrator
Axial (UK) Ltd
Tel: 0208 526 8883
E-Mail: [EMAIL PROTECTED]



** IMPORTANT INFORMATION **
This message is intended only for the use of the person(s) ("the Intended
Recipient")
to whom it is addressed. It may contain information which is privileged and
confidential
within the meaning of applicable law. Accordingly any dissemination,
distribution, copying
or other use of this message or any of its content by any person other than
the Intended
Recipient may constitute a breach of civil or criminal law and is strictly
prohibited.

The views in this message or it's attachments are that of the sender.

If you are not the Intended Recipient please contact the sender and dispose
of this email
as soon as possible. If in doubt contact the Tibbett & Britten European IT
Helpdesk
on 0870 607 6777 (UK) or +0044 870 607 6777 (Non UK).



Re: Primary vs Backup Stg

2001-05-24 Thread Prather, Wanda

I would not expect them to be the same.  physical_mb (and I always get this
confused and have to look it up...) is the space occupied, including any
"dead" space inside aggregates.  logical_mb is the space occupied, not
including any "dead space" inside aggregates.

When a file is expired, if it is a small file living inside an aggregate on
a tape, the logical_mb count will drop, but the physical_mb count will not.
The physical_mb count will drop later when the tape is reclaimed; reclaim
squishes out the dead space inside aggregates as it goes.

Since your onsite and offsite pools have tapes that are not exact image
copies of each other, and the tapes reclaim at different rates/different
times, they should be somewhat different.  (Now if they're off by
terabytes, I would worry...)
See if adding sum(logical_mb) to the query doesn't give you closer numbers.
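That is, something along these lines (pool names from the original query):

```sql
select sum(physical_mb), sum(logical_mb), sum(num_files)
  from occupancy where stgpool_name = 'ONSITE'

select sum(physical_mb), sum(logical_mb), sum(num_files)
  from occupancy where stgpool_name = 'OFFSITE'
```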



Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert






-Original Message-
From: Ronnie Crowder [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 24, 2001 3:56 PM
To: [EMAIL PROTECTED]
Subject: Primary vs Backup Stg


Hopefully someone out there can help me.  What I was trying to do was check
to
see if my primary stg pool and copy storage pool contained the same
information.  The way I was checking was using a select statement in the
form:

select sum(physical_mb), sum(num_files) from occupancy where stgpool_name =
'ONSITE'

select sum(physical_mb), sum(num_files) from occupancy where stgpool_name =
'OFFSITE'

The number of files that come back are the same for both but the total size
is
not the same.  I can issue the backup stg pri copy preview=yes but it comes
back basically saying that the pools are the same.  Just wondering why the
offsite pool has a larger total than the onsite.  Could it be because of
data
compression??  The reason I ask is that I have another smaller TSM server at
another location and when I run the script on it, both numbers come back the
same.  Any comments would be appreciated.

Thanks



Re: TDP for Oracle

2001-05-24 Thread Prather, Wanda

Eric,

Remember the discussion a few months ago about "orphan" backups?
The 2.2 version has a utility called TDPOSYNC.
From the install guide:

"This utility checks for items on the TSM server that are not in the
RMAN catalog and allows you to repair such discrepancies. By thus
removing unwanted objects in TSM storage, you can reclaim space
on the server."

-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 18, 2001 7:48 AM
To: [EMAIL PROTECTED]
Subject: TDP for Oracle


Hi *SM-ers!
We currently use TDP for Oracle 2.1.10 and I saw that version 2.2 is
available.
I would like to know the difference between the two versions so I can advise
my Oracle customers to upgrade or not.
Can anybody point me to the right direction for this information?
Thanks in advance!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
This e-mail and any attachment may contain confidential and privileged
material intended for the addressee only. If you are not the addressee, you
are notified that no part of the e-mail or any attachment may be disclosed,
copied or distributed, and that any other action related to this e-mail or
attachment is strictly prohibited, and may be unlawful. If you have received
this e-mail by error, please notify the sender immediately by return e-mail,
and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), its
subsidiaries and/or its employees shall not be liable for the incorrect or
incomplete transmission of this e-mail or any attachments, nor responsible
for any delay in receipt.
**



Re: TSM configuration questions

2001-05-25 Thread Prather, Wanda

However, if you have high-capacity tape, you don't have to spend a whole
tape for each node.  You can control the tape use by setting a MAXSCRATCH
value for the tape pool.  For example, if you have 200 clients to back up
and you set MAXSCRATCH to 100, TSM will put two clients on each tape.  You
get all the benefits of colocation, without having to spend too much extra
tape.

Regarding your configuration:  we back up every form of Windows, plus AIX,
SUN, IRIX, Mac, OS/2 - all into the same 60 GB disk pool and same tape pools.
Over 500 clients, and there has NEVER been any type of problem from mixing
this data in the pools or on the tapes.

If you give up your disk pool and go direct to tape, you will lose a lot of
the flexibility that TSM provides and have to do a lot of extra scheduling
for your client backups that TSM would normally handle automatically for
you.

If you split your clients into separate disk and tape pools, you will also
be losing a lot of flexibility and creating yourself a lot of extra
management work, all for no real benefits.

Don't try to make TSM work like another product.  It was designed this way
for a reason.
My opinion, take it or leave it...


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Alan Davenport [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 25, 2001 8:10 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM configuration questions


You need to do colocation. Colocation will place files from each client on a
tape by itself, providing the client segregation you desire. You can break
this down even further and colocate by filespace. The downside is that
you will significantly increase the amount of tapes you use. For further
information look in your TSM documentation for "UPDATE STGPOOL".

   Al

-=>-Original Message-
-=>From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
-=>Sent: Friday, May 25, 2001 5:21 AM
-=>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
-=>Subject: TSM configuration questions
-=>
-=>
-=>From: [EMAIL PROTECTED]
-=>To: [EMAIL PROTECTED]
-=>Date: Fri, 25 May 2001 02:21:23 -0700
-=>Subject: TSM configuration questions
-=>
-=>I am setting up TSM in our shop and I am learning TSM
-=>as I go. Right now, I have set up a 100GB Disk Pool
-=>and one Tape Pool and one Tapecopy pool for making
-=>offsite storage copies.  My questions are:
-=>
-=>If I am doing backups using this configuration, my
-=>daily backups on NT, Netware, and Unix will all first
-=>go to the disk pool and then migrate to the tape pool.
-=>Data on the tapes will consist of information from
-=>all platforms, mixed together, right?  I was
-=>uncomfortable with this data-mixing idea.  I called
-=>TSM support and they assured me everything would be
-=>all right.  However, if I still want to have tapes
-=>consisting of only one platform, are there ways to
-=>configure the system to do that?  My purpose in doing
-=>this is to have the NT, Novell, and Unix system
-=>administrators handle their own tapes.
-=>
-=>Any ideas or suggestions would be greatly appreciated.
-=>
-=>
-=>



Re: Win2k Scheduler & SYSTEM Account?

2001-05-25 Thread Prather, Wanda

Tim, thanks for the info.

I tried it using 4.1.2.12, as you suggested, and the scheduler can back up
fine under the SYSTEM account.
Don't know why it fails at 3.7.2.19.

But this solves my problem, thanks!


-Original Message-
From: Rushforth, Tim [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 15, 2001 2:20 PM
To: [EMAIL PROTECTED]
Subject: Re: Win2k Scheduler & SYSTEM Account?


Hi Wanda:

You should be able to use the System Account to backup encrypted files.  I
just did a test here:
Create encrypted file with usera
Try to READ with userb (access denied)
Backup and restore with userb (userb has rights to the file) works fine
File is still encrypted after restore (userb cannot read, usera can)
Backed up another encrypted file via the scheduler
Restored file
File is still encrypted

I am running W2K SP1, TSM 4.1.2.12.

Basically, an encrypted file prevents another user from reading the file,
but that user can still back up, copy, or delete the file, etc., as long as
they have rights.

Tim Rushforth
City of Winnipeg

-Original Message-----
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 15, 2001 12:39 PM
To: [EMAIL PROTECTED]
Subject: Win2k Scheduler & SYSTEM Account?


We normally install the TSM Scheduler on all our machines using the default
SYSTEM account.

For a Win2K Pro machine, is there any disadvantage/fallout to using the
person's regular network logon account, instead of the SYSTEM account?
(assuming it is a member of the local ADMINISTRATORs group).

It looks like this is what we will HAVE to do to back up files encrypted
with the Windows 2000 File Encryption.

If you use a network logon account to run the scheduler, what happens if the
password expires?  Is it toast?



Re: Search for file

2001-05-29 Thread Prather, Wanda

If the file is from a Wintel machine, then ll_name should be specified
"SOMETHING.BLAH", because the file names are in all upper case.

If it is from a unix-flavored machine, then use mixed case.

Signed,

Been there, Been burned.




-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 29, 2001 2:25 PM
To: [EMAIL PROTECTED]
Subject: Re: Search for file


select * from adsm.backups where ll_name='something.blah'
FILESPACE_NAME is the filespace name under unix, or the drive id under
Windows.
HL_NAME is the directory path (the part left over after the filespace_name,
but not including the actual file name),
and LL_NAME is the end file name...

hope this helps
later,
Dwight

ps if you are looking for an archive, just switch to table adsm.archives
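Putting Dwight's columns together, a hedged example from the administrative command line (the file name and selected columns are illustrative, not from the original question; per Wanda's note above, Windows file names are stored in upper case):

```
/* Find a file by name across all nodes without knowing its folder. */
select node_name, filespace_name, hl_name, ll_name, backup_date -
  from backups where ll_name='SOMETHING.BLAH'
```

The trailing dash is the TSM command-line continuation character; drop it if you paste the query on one line.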


-Original Message-
From: Jane Doe [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 29, 2001 1:08 PM
To: [EMAIL PROTECTED]
Subject: Search for file


Does anybody have a sql script written to search for a specific file through
the ODBC interface or the admin command line?  I need to be able to search
for a file on  a node without knowing the folder it resides in.

Thanks
Jane








Re: db backup volume in offsite and del volhistory.

2001-05-30 Thread Prather, Wanda

Yes.  A good DB backup volume is usable even if it is no longer in the
VOLHISTORY file.
You just have to specify the volser on your DSMSERV RESTORE DB command.
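A hedged sketch of that restore (the device class name and volume serial are placeholders):

```
/* Point the restore at the offsite DB backup volume explicitly,  */
/* since it is no longer listed in the volume history.            */
dsmserv restore db devclass=3590class volumenames=DB0042 commit=yes
```

You would run this against the devconfig file you sent offsite, which is why keeping devconfig.out with the tape matters.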


-Original Message-
From: Jon Milliren [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 10:52 AM
To: [EMAIL PROTECTED]
Subject: db backup volume in offsite and del volhistory.


Hi all,

I was wondering: if you performed a del volhist -1 daily, would the offsite
db backup volume be usable (in a DR situation), even if it were deleted
from the volhistory?

For offsite, I send my copy volumes, along with a db backup volume, and a
floppy with that day's volhistory and devconfig.  The db backup,
volhist.out, and devconfig.out are created on the same day.  I'm thinking
that this is sufficient; however, I don't want to be bitten in the behind
should DR day ever come.

Thanks,

Jon

--
Jon Milliren
Systems Administrator
University of Pittsburgh
Office of Institutional Advancement

[EMAIL PROTECTED]
(412) 624-2727 office
(412) 292-2070 mobile



Re: Recovery log utilization does not drop after DB backup

2001-05-30 Thread Prather, Wanda

If the log utilization gets to 100%, it will not only shut down sessions, it
will crash the server.


-Original Message-
From: Suad Musovich [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 2:29 AM
To: [EMAIL PROTECTED]
Subject: Re: Recovery log utilization does not drop after DB backup


> some time ago I complained in a mail to this list that the recovery log
> utilization is not reset after a database backup. APAR IC30181 was
generated
> for this problem. Its status is "open".

I complained about the problem too.  The response I got was that when the
system was too busy, it took a while for it to happen.

I have done this when no sessions/processes were running on the system and
it still
took over an hour to reset itself.

Our situation is worse, as sometimes the log jumps to over 90% before we
start the backup (triggered by an ANR0314W).  It has continued to increase
after the backup, and reached 98% in one observed instance.  Will it stop
sessions if it gets to 100%?

The reason we don't have an auto-triggered backup is that we use a manual
tape drive (LTO is a bit of a waste for incrementals).

Suad
--



Re: ADSM/TSM memory leak?

2001-05-30 Thread Prather, Wanda

We run TSM on AIX 4.3.3; we don't reboot unless there is a problem, or we
have software maintenance to apply, or some other reason.  Not uncommon for
us to run 2-3 weeks without a restart.



-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 7:54 AM
To: [EMAIL PROTECTED]
Subject: Re: ADSM/TSM memory leak?


>But I did have another inquiry. Does anyone else regularly reboot
>their TSM/ADSM server app? Regularly, meaning every 3 days or even
>more often.
>
>We have 1.1 GB of RAM on our TSM box. Under ADSM, this apparently
>wasn't enough, as we always got "Paging space low!" if the server app
>ran for more than ~72 hours without restarting the app (is this
>what's called a "memory leak"?). TSM apparently does the same thing.
>It respawns itself, but it's still bad cuz if it dies during an
>operation involving tapes, it always screws us up.
>
>So I plan to reboot the server app every day, but just wondering if
>we're the only ones with this problem.

Then...why not add paging space to that system?  Disk is quite cheap
these days.  Until you get the additional disks, you can tailor
server options like BUFPoolsize to minimize TSM memory utilization.
Also have your AIX people look into what else is running on that
server that might not have to, or might be problematic, as the
other processes are also using memory.

  Richard Sims, BU



EXPIRE INVENTORY shuts down

2001-05-30 Thread Prather, Wanda

TSM 3.7.4 on AIX 4.3.3

I have EXPIRE INVENTORY scheduled to run for 1.5 hours in the wee hours:
EXPIRE INVENTORY QUIET=YES DURATION=90


Sometimes EXPIRE INVENTORY runs OK.
Other times it starts up, examines/expires 200-300 objects, then shuts
itself down again in 2-3 minutes with the normal message of SUCCESS.

Now I know it should be expiring 100,000+ objects per day.
If I start it again manually, it takes off and runs as you would expect.

Does anyone else see this happen?
Any idea what causes the premature shutdown?



Re: EXPIRE INVENTORY shuts down

2001-05-30 Thread Prather, Wanda

Oh bimps.
Something else to script, that should happen automatically.
But thanks, I'm glad to know it's not just me.


-Original Message-
From: Ford, Phillip [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 3:34 PM
To: [EMAIL PROTECTED]
Subject: Re: EXPIRE INVENTORY shuts down


I have.  We are on 3.7.0 (I know, bad idea, but that is what I am stuck with
for now).  We have to run expire three times.  The first two stop
prematurely; the third time it takes off.  We do this programmatically: the
first scheduled program starts expiration and creates a scheduled program to
run in 5 minutes.  That scheduled program checks whether expiration is
running; if not, it starts expiration and reschedules itself for another 5
minutes.  If expiration is still running, it is assumed that it will now run
to conclusion, and the program deletes itself (actually it runs a process
that keeps watching for expiration to finish, so that we can run the next
phase of our daily chores).  This has been going on for almost a year.


--
Phillip Ford
Senior Software Specialist
Corporate Computer Center
Schering-Plough Corp.
(901) 320-4462
(901) 320-4856 FAX
[EMAIL PROTECTED]



-Original Message-----
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 12:16 PM
To: [EMAIL PROTECTED]
Subject: EXPIRE INVENTORY shuts down


TSM 3.7.4 on AIX 4.3.3

I have EXPIRE INVENTORY scheduled to run for 1.5 hours in the wee hours:
EXPIRE INVENTORY QUIET=YES DURATION=90


Sometimes EXPIRE INVENTORY runs OK.
Other times it starts up, examines/expires 200-300 objects, then shuts
itself down again in 2-3 minutes with the normal message of SUCCESS.

Now I know it should be expiring 100,000+ objects per day.
If I start it again manually, it takes off and runs as you would expect.

Does anyone else see this happen?
Any idea what causes the premature shutdown?

***
 This electronic message, including its attachments, is confidential and
proprietary and is solely for the intended recipient.  If you are not the
intended recipient, this message was sent to you in error and you are hereby
advised that any review, disclosure, copying, distribution or use of this
message or any of the information included in this message by you is
unauthorized and strictly prohibited.  If you have received this electronic
transmission in error, please immediately notify the sender by reply to this
message and permanently delete all copies of this message and its
attachments in your possession.  Thank you.



Re: EXPIRE INVENTORY shuts down

2001-05-30 Thread Prather, Wanda

Thanks, but in my case it is shutting down when there is still a LOT of
things to expire.
When I restart it again, it starts up again and runs for hours.




-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 3:32 PM
To: [EMAIL PROTECTED]
Subject: Re: EXPIRE INVENTORY shuts down
Importance: High


Wanda

If all of the objects to be inspected are completed within the time
duration, expiration shuts itself off.

In my case, I ran it with duration=10 and then issued QUERY PROCESS; I could
see that once all objects had been inspected within the time limit, it shut
itself off because it had nothing else to inspect.  Please try that again
and see how much time it takes to inspect and expire.  This is what I saw in
my case: it need not run idle for the rest of the duration.




-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 12:16 PM
To: [EMAIL PROTECTED]
Subject: EXPIRE INVENTORY shuts down


TSM 3.7.4 on AIX 4.3.3

I have EXPIRE INVENTORY scheduled to run for 1.5 hours in the wee hours:
EXPIRE INVENTORY QUIET=YES DURATION=90


Sometimes EXPIRE INVENTORY runs OK.
Other times it starts up, examines/expires 200-300 objects, then shuts
itself down again in 2-3 minutes with the normal message of SUCCESS.

Now I know it should be expiring 100,000+ objects per day.
If I start it again manually, it takes off and runs as you would expect.

Does anyone else see this happen?
Any idea what causes the premature shutdown?



Re: EXPIRE INVENTORY shuts down

2001-05-30 Thread Prather, Wanda

Thanks, I will try switching to CANCEL EXPIRATION instead of using DURATION,
and see if that works.



-Original Message-
From: Bill Colwell [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 5:12 PM
To: [EMAIL PROTECTED]
Subject: Re: EXPIRE INVENTORY shuts down


Wanda, I have seen this, sort of, and it is expected behavior, working as
designed.

When you run expire with duration, or use the 'cancel expiration' command as
I do,
the point where expire stops is recorded in the db.  When you start it again
it starts at that point and continues either for the duration or until the
logical
end of the database is reached.  If it reaches the end of the db, then
expire
considers that it has completed 1 pass over the database and quits.

When expire was improved, the code of course had a bug in it; hitting the
duration
did not store the stop point.  The 'cancel exp' command did store the stop
point
which is why I use the command and not the duration parameter.  At some
point the bug was fixed and that is probably when you started noticing the
change in behavior.

I run expire weekly, starting at 5 am, and a cancel expire command is issued
at
11 pm.  The next week expire starts at 5 am but usually finishes around
noon.
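Bill's weekly window could be set up with administrative schedules something like this (schedule names are placeholders; the times follow his description):

```
/* Start expiration Saturday at 5 am...                           */
define schedule start_expire type=administrative active=yes -
   cmd="expire inventory quiet=yes" starttime=05:00 dayofweek=saturday

/* ...and stop it at 11 pm.  CANCEL EXPIRATION records the stop   */
/* point in the db, so next week's run resumes where it left off. */
define schedule stop_expire type=administrative active=yes -
   cmd="cancel expiration" starttime=23:00 dayofweek=saturday
```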

Hope this helps,


--
--
Bill Colwell
C. S. Draper Lab
Cambridge, Ma.
[EMAIL PROTECTED]
--



In <[EMAIL PROTECTED]>, on 05/30/01
   at 05:11 PM, "Prather, Wanda" <[EMAIL PROTECTED]> said:

>TSM 3.7.4 on AIX 4.3.3

>I have EXPIRE INVENTORY scheduled to run for 1.5 hours in the wee hours:
>EXPIRE INVENTORY QUIET=YES DURATION=90


>Sometimes EXPIRE INVENTORY runs OK.
>Other times it starts up, examines/expires 200-300 objects, then shuts
>itself down again in 2-3 minutes with the normal message of SUCCESS.

>Now I know it should be expiring 100,000+ objects per day.
>If I start it again manually, it takes off and runs as you would expect.

>Does anyone else see this happen?
>Any idea what causes the premature shutdown?



Re: Slow NT Restore

2001-05-31 Thread Prather, Wanda

I have done restores on NT5 from a similar config, and had NO problems.

I agree that autonegotiate is a likely culprit when the throughput is that
slow.  When autonegotiate is the problem, our network guys can see errors
being recorded on the switch; ask for help there.

Another possibility is that the bottleneck is on the NT machine itself,
trying to rebuild the file system.
If you have virus checking software installed, disable it for the time
being.
If this rebuild is being done due to a failure of some sort, did anyone run
CHECKDISK on the drive before starting the restore?

Run from the TSM server command line:  q db f=d
Your cache hit % should ideally be 98% or better; 96% or better works pretty
well, too.
If it's lower than 95%, that may be hurting you some (but not to the tune of
20 hours...).

As a last resort, check your TSM server's AIX errpt log, to check for disk
errors, and run IOSTAT to see if you are getting any bottlenecks on your TSM
DB I/O.



-Original Message-
From: Herfried Abel [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 31, 2001 7:46 AM
To: [EMAIL PROTECTED]
Subject: Re: Slow NT Restore


Phil,
Did you check the general network speed between the client and the server
(e.g. ftp put and get a large file)?  If you are in a switched 100 Mb
network, sometimes (I saw it on our Compaq and RS6000 servers) the
autonegotiated port speed/mode does not work correctly.  We set all
components (server/client/switch ports) manually to 100 Mb full duplex and
this solved the problem.

just a hint but maybe it  helps

herfried




Phil Stockton <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
31.05.2001 12:23:06

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:

Subject:  Slow NT Restore


Hi

We are trying to restore the C drive on an NT Server 4.0 Service Pack 5
machine.  We have TCP/IP at 100 Mb full duplex.  The client is 3.1.0.8 and
the server is 3.1.2.90 on AIX 4.3.2.

We are observing very slow restore times.  There is no waiting for tape
mounts but when I query sessions wait times of some seconds are observed,
almost as if it restores one file then goes off to do something else.

So far it has taken 20 hours to restore 850mb.  All directories are held on
disk and they have been restored.

Anyone got any ideas on how to speed this up by a factor of 10 or so?

Regards

Phil Stockton

RS Components Ltd
Corby
Northants



***
The contents of this Email and any files transmitted with it
are confidential and intended solely for the use of the
individual or entity to whom it is addressed. The views stated
herein do not necessarily represent the view of the company.
If you are not the intended recipient of this Email you may not
copy, forward, disclose or otherwise use it or any part of it
in any form whatsoever. If you have received this mail in
error please Email the sender.
***

RS Components Ltd.








The information contained in this transmission, which may be
confidential and proprietary, is only for the intended recipients.
Unauthorized use is strictly prohibited. If you receive this
transmission in error, please notify me immediately by telephone
or electronic mail and confirm that you deleted this transmission
and the reply from your electronic mail system.




Re: LTO and 3590

2001-05-31 Thread Prather, Wanda

There is a presentation on the Tivoli website comparing 3590, 9840, LTO, and
DLT.
http://www.tivoli.com/news/press/analyst/tsm.pdf

I agree with Jeff & Dwight;
if you are used to 3590, consider that LTO is a competitor to DLT, not a
competitor to the 3590.

I would base the decision on load:

If your load is less than 20 GB per night, 3590 may be overkill and LTO
might provide a less expensive alternative.

But the larger your load gets, the more abuse your media gets, and the more
you need the big iron.

(If you do decide to migrate, it's not a big deal.  You can do it in real time.
Hook up the new library to your TSM server. Define new tape storage pools,
and point your management classes or disk migration to the new tape storage
pools.  Then just start running MOVE DATA from your old tape volumes to your
new tape pools.  Backups will continue while it's going on.  If you need a
restore in the meantime, TSM will find the data and mount whichever tapes it
needs.  When the old tapes are all empty, remove the old robot.)
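A hedged sketch of that migration sequence (the device class, pool, and volume names are placeholders, not from this thread):

```
/* New tape pool in the new library...                            */
define stgpool ltopool ltoclass maxscratch=200

/* ...point disk migration at it...                               */
update stgpool diskpool nextstgpool=ltopool

/* ...then drain each old volume into the new pool (repeat per    */
/* volume; backups and restores keep running meanwhile).          */
move data 000123 stgpool=ltopool
```

Management classes that point directly at the old tape pool would also need their destinations updated, then the policy set re-activated.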


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert






-Original Message-
From: Caffey, Jeff L. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 31, 2001 12:01 PM
To: [EMAIL PROTECTED]
Subject: Re: LTO and 3590


David,

Lisa is exactly right!  We decided to save some money and go with LTO, but
research shows that the 3590 would have been MUCH better.  Don't get me
wrong - the LTO technology outperforms even the new Super DLT - and blows
the old DLT's away.  But if you already have 3590e's in house, you'll be
disappointed if you go with anything less.

Thank you,

Jeff Caffey
Enterprise Systems Programmer
(AIX & Storage Administrator)
Pier 1 imports, Inc.  -  Information Services
[EMAIL PROTECTED]
Voice: (817) 252-6222
Fax:   (817) 252-7299

 -Original Message-
From:   Lisa Cabanas [mailto:[EMAIL PROTECTED]]
Sent:   Thursday, May 31, 2001 10:54 AM
To: [EMAIL PROTECTED]
Subject:Re: LTO and 3590

David-

For what it's worth, we just went through an eval of which to expand to
and decided to go with the Cadillac-- the 3590.  For thruput, it will beat
LTO hands down, due to the stopping and starting technology on the drives.
 We archive Oracle data thru an SP switch, and we get rates of 40MB/sec.

Go with the addt'nl frame on the 3494 & the 3590E drives.

Another thing to think about, is that 1Q 2002, half-height 3590 drives are
supposed to be available, if you can wait that long.

lisa




David DeCuir <[EMAIL PROTECTED]>
05/31/2001 10:16 AM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc: (bcc: Lisa Cabanas/SC/MODOT)
Subject:LTO and 3590



Hello,
I have read some archives about LTO vs 3590. I get  mixed feelings about
which is better depending on the situation. I would like to ask for your
opinions on my specific case. Here are the facts:

I am a TSM newbie < 6 months.
Current hardware is:
3494 w/2 3590E1A drives
3466-C00 w/H50 server
288gb SSA 7133-D40
We need to approximately double our current hardware (rough estimate)
The additional backups will be 70/30 small file/large file (all from NT)

The are 3 options on the table
1) add 3590EA1 drives (and additional frame)
2) add LTO drives (and whatever frame)
3) sell 3590 system and replace with LTO system

My first thoughts are to just add 3590's. Been working fine for a year.
I'm still gathering price info. but if cost was basically the same what
would you do?
I don't think cost will be significantly less for any of the three. Maybe
I'm wrong.
Advantages/disadvantages to these options?
If cost were much lower for LTO would you go with it?
If complete LTO replacement, would data migration off 3590 be a nightmare?
Also, I don't really need fast restore times

Thanks for any advice
David



Re: Server Script

2001-05-31 Thread Prather, Wanda

select node_name as "Nodes with no filespaces:", -
date(reg_time) as "Registered:", -
date(lastacc_time) as "Last Access:" -
from nodes where node_name not in -
(select node_name from filespaces)



-Original Message-
From: Rajesh Oak [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 31, 2001 1:53 PM
To: [EMAIL PROTECTED]
Subject: Server Script


Does anyone have a script that can report all the clients that have never
backed up, i.e., no filespaces are present for the node?
I am running TSM 4.1 on Win2000 SP1.

Thanks in advance.

Rajesh Oak





Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup

2001-06-01 Thread Prather, Wanda

I have run into some ugly issues with the Win2K backup of the "SYSTEM
OBJECT".

I'm throwing the information out here to warn other people what to expect,
and hopefully to get the developers to reconsider the current
implementation.

The SYSTEM FILES component of the "SYSTEM OBJECT"  on Win2K consists of over
1500 .dll and .exe and .obj files from (mostly) the WinNT/system32
directory.  These files are backed up EVERY TIME an incremental is run, even
though THE DATA HAS NOT CHANGED.

We have converted over 200 NT desktops to WIn2K PRO.  For each of our Win2K
PRO systems, this adds 1586 files to the backup every night.

This has had an enormous impact on the TSM server.  The additional data is
only about 20 GB per night, and that's not a big problem.  But each of the
SYSTEM FILES still has it's own entry in the TSM data base.

You do the math:  That's over 300,000 additional objects that get added AND
deactivated each day, which for me means an additional 2.5 HOURS of EXPIRE
INVENTORY time is needed DAILY.  And all for data THAT HAS NOT CHANGED.

TSM's strength has always been that it DOESN'T back up unchanged data.
Well, at least it didn't used to...

My problem here is we have another 250 machines to convert from NT to WIn2K.
They aren't about to buy me a second TSM server to handle the load, when the
current one worked fine for backing up the same number of NT systems with
the same amount of user data.  Instead they are looking at some Windows-only
software to back up the WIndows side of the house.

It appears to me the current TSM implementation is flawed, and will inhibit
other people's ability to support large Windows environments as well as
ours.

I put this information into the Requirements for the Oxford Symposium,
hopefully it will give some additional visibility to the issue.

Any suggestions welcome but don't suggest we give up our ability to do
full bare-metal restores.
Management will change the backup software first.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert




Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup

2001-06-01 Thread Prather, Wanda

No.  By definition, the way TSM has implemented SYSTEM OBJECT backup, it
backs up those files whether they are changed or not.  Always.

And we can't exclude them and retain the ability to do bare-metal restores.

-Original Message-
From: George Lesho [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 01, 2001 4:43 PM
To: [EMAIL PROTECTED]
Subject: Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup


Ms. Prather,
Have you made any attempt to make an explicit exclude of the system files
during
backup? There must be something about these files that TSM recognizes as
changed
for them to be picked up on your incremental. Any sense of what that might
be?
Hope they look at this quickly, as our local Windoze shop is talking about
converting their NT4 environment into Win2K.  Thanks -

George Lesho
System/Storage Admin
AFC Enterprises






"Prather, Wanda" <[EMAIL PROTECTED]> on 06/01/2001 03:23:32 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: George Lesho/Partners/AFC)
Fax to:
Subject:  Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup



I have run into some ugly issues with the Win2K backup of the "SYSTEM
OBJECT".

I'm throwing the information out here to warn other people what to expect,
and hopefully to get the developers to reconsider the current
implementation.

The SYSTEM FILES component of the "SYSTEM OBJECT"  on Win2K consists of over
1500 .dll and .exe and .obj files from (mostly) the WinNT/system32
directory.  These files are backed up EVERY TIME an incremental is run, even
though THE DATA HAS NOT CHANGED.

We have converted over 200 NT desktops to WIn2K PRO.  For each of our Win2K
PRO systems, this adds 1586 files to the backup every night.

This has had an enormous impact on the TSM server.  The additional data is
only about 20 GB per night, and that's not a big problem.  But each of the
SYSTEM FILES still has its own entry in the TSM data base.

You do the math:  That's over 300,000 additional objects that get added AND
deactivated each day, which for me means an additional 2.5 HOURS of EXPIRE
INVENTORY time is needed DAILY.  And all for data THAT HAS NOT CHANGED.

TSM's strength has always been that it DOESN'T back up unchanged data.
Well, at least it didn't used to...

My problem here is we have another 250 machines to convert from NT to WIn2K.
They aren't about to buy me a second TSM server to handle the load, when the
current one worked fine for backing up the same number of NT systems with
the same amount of user data.  Instead they are looking at some Windows-only
software to back up the WIndows side of the house.

It appears to me the current TSM implementation is flawed, and will inhibit
other people's ability to support large Windows environments as well as
ours.

I put this information into the Requirements for the Oxford Symposium,
hopefully it will give some additional visibility to the issue.

Any suggestions welcome but don't suggest we give up our ability to do
full bare-metal restores.
Management will change the backup software first.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert




Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup

2001-06-01 Thread Prather, Wanda

I haven't figured out any way to do that.  Those files are cataloged in a
Win2K "catalog" file in the system32 directory as being "system protected"
files.  And whenever you install a software upgrade (like an upgrade to IE,
for example), the "catalog" gets changed.  And since a software upgrade like
that can also change the registry, I shudder to think what happens if you
copy one person's catalog to another machine with a different registry

So they are "almost" the same, on "most" nodes.  But my experience is that
you don't dare mix and match software parts on Windows systems.

I'm hoping I'm wrong about that, and somebody can tell me how to make it
work.


-Original Message-
From: Robin Sharpe [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 01, 2001 4:51 PM
To: [EMAIL PROTECTED]
Subject: Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup


Question: Are these files unique on each node?
If no, then why not make just one ( or a couple) backups from a "reference"
machine, and exclude them from all other nodes.
If yes, then include them to a retain-forever management class the first
time only, then exclude them in the future.  The first time copies will
become inactive, but will be retained forever due to the special management
class.

Just some off-the-cuff suggestions... we're still on NT4, and we only back
up servers, not desktops.  But I agree this seems like a serious problem
that you should not have to "work around".  Hope it gets fixed before we go
to Win2K next year...

Robin Sharpe
Berlex Laboratories




"Prather, Wanda"
   To:[EMAIL PROTECTED]
 cc:(bcc: Robin Sharpe/WA/USR/SHG)
06/01/01 04:23   Subject:
PM  Issues with Win2K SYSTEM
OBJECT/SYSTEM FILES backup
Please respond
to "ADSM: Dist
Stor Manager"







I have run into some ugly issues with the Win2K backup of the "SYSTEM
OBJECT".

I'm throwing the information out here to warn other people what to expect,
and hopefully to get the developers to reconsider the current
implementation.

The SYSTEM FILES component of the "SYSTEM OBJECT"  on Win2K consists of
over
1500 .dll and .exe and .obj files from (mostly) the WinNT/system32
directory.  These files are backed up EVERY TIME an incremental is run,
even
though THE DATA HAS NOT CHANGED.

We have converted over 200 NT desktops to WIn2K PRO.  For each of our Win2K
PRO systems, this adds 1586 files to the backup every night.

This has had an enormous impact on the TSM server.  The additional data is
only about 20 GB per night, and that's not a big problem.  But each of the
SYSTEM FILES still has its own entry in the TSM data base.

You do the math:  That's over 300,000 additional objects that get added AND
deactivated each day, which for me means an additional 2.5 HOURS of EXPIRE
INVENTORY time is needed DAILY.  And all for data THAT HAS NOT CHANGED.

TSM's strength has always been that it DOESN'T back up unchanged data.
Well, at least it didn't used to...

My problem here is we have another 250 machines to convert from NT to
WIn2K.
They aren't about to buy me a second TSM server to handle the load, when
the
current one worked fine for backing up the same number of NT systems with
the same amount of user data.  Instead they are looking at some
Windows-only
software to back up the WIndows side of the house.

It appears to me the current TSM implementation is flawed, and will inhibit
other people's ability to support large Windows environments as well as
ours.

I put this information into the Requirements for the Oxford Symposium,
hopefully it will give some additional visibility to the issue.

Any suggestions welcome but don't suggest we give up our ability to do
full bare-metal restores.
Management will change the backup software first.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert




Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup

2001-06-01 Thread Prather, Wanda

Yes, that works, as far as it goes.
You get a bootable system.


But, if you do not restore your registry, you lose all your customization.
And the doc says you should not restore registry without restoring the rest
of the "system objects".

There is a LOT of user customization for these Win2K Pro systems - different
people have different software installed (programmer development kits,
specialized software, etc.)

The reason the management here has always supported TSM, is because we CAN
do bare-metal restores and get ALL the customization back, down to the last
shortcut and icon.

I hope someone WILL figure out how to rebuild a Win2K system from scratch
and restore the registry, while using a "common" set of boot files...



-Original Message-
From: George Lesho [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 01, 2001 5:04 PM
To: [EMAIL PROTECTED]
Subject: Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup


Ah! The education continues... I am not a Win2K person, but would it not be
possible to rebuild a system from scratch using a CD and then restoring
non-system files? Granted, there would be some configuration for local
environment but... that is why we do a mksysb tape and exclude the root
volume groups when backing up our AIX boxes...

George Lesho
System/Storage Admin
AFC Enterprises





"Prather, Wanda" <[EMAIL PROTECTED]> on 06/01/2001 03:46:01 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: George Lesho/Partners/AFC)
Fax to:
Subject:  Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup



No.  By definition, the way TSM has implemented SYSTEM OBJECT backup, it
backs up those files whether they are changed or not.  Always.

And we can't exclude them and retain the ability to do bare-metal restores.

-Original Message-
From: George Lesho [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 01, 2001 4:43 PM
To: [EMAIL PROTECTED]
Subject: Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup


Ms. Prather,
Have you made any attempt to make an explicit exclude of the system files
during backup?  There must be something about these files that TSM
recognizes as changed for them to be picked up on your incremental.  Any
sense of what that might be?  Hope they look at this quickly, as our local
Windoze shop is talking about converting their NT4 environment into Win2K.
Thanks -

George Lesho
System/Storage Admin
AFC Enterprises






"Prather, Wanda" <[EMAIL PROTECTED]> on 06/01/2001 03:23:32 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: George Lesho/Partners/AFC)
Fax to:
Subject:  Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup



I have run into some ugly issues with the Win2K backup of the "SYSTEM
OBJECT".

I'm throwing the information out here to warn other people what to expect,
and hopefully to get the developers to reconsider the current
implementation.

The SYSTEM FILES component of the "SYSTEM OBJECT"  on Win2K consists of over
1500 .dll and .exe and .obj files from (mostly) the WinNT/system32
directory.  These files are backed up EVERY TIME an incremental is run, even
though THE DATA HAS NOT CHANGED.

We have converted over 200 NT desktops to Win2K Pro.  For each of our Win2K
Pro systems, this adds 1586 files to the backup every night.

This has had an enormous impact on the TSM server.  The additional data is
only about 20 GB per night, and that's not a big problem.  But each of the
SYSTEM FILES still has its own entry in the TSM database.

You do the math:  That's over 300,000 additional objects that get added AND
deactivated each day, which for me means an additional 2.5 HOURS of EXPIRE
INVENTORY time is needed DAILY.  And all for data THAT HAS NOT CHANGED.

TSM's strength has always been that it DOESN'T back up unchanged data.
Well, at least it didn't used to...

My problem here is we have another 250 machines to convert from NT to Win2K.
They aren't about to buy me a second TSM server to handle the load, when the
current one worked fine for backing up the same number of NT systems with
the same amount of user data.  Instead they are looking at some Windows-only
software to back up the Windows side of the house.

It appears to me the current TSM implementation is flawed, and will inhibit
other people's ability to support large Windows environments as well as
ours.

I put this information into the Requirements for the Oxford Symposium;
hopefully it will give some additional visibility to the issue.

Any suggestions welcome but don't suggest we give up our ability to do
full bare-metal restores.
Management will change the backup software first.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert




Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup

2001-06-01 Thread Prather, Wanda

Hi Bill,

Here's what I know so far:

These files are defined to be part of "Win2K system protected files".  (This
is actually explained pretty well in the 3.7/4.1 Tech Guide Redbook;
additional info in the 4.1.2 Win client Redbook.)  They are cataloged as
such by Win2K (I don't know for sure but I think it has to do with the
CATROOT directories in system32/config) and have some sort of hash code or
signature that identifies them as being "good".  These files get copied into
the dllcache directory.

When you boot, Win2K somehow checks the signature of all the .dll, .exe, and
.obj files to see if they are still "good".  If one of them has been
tampered with, Win2K refreshes your copy from the dllcache copy.  This is
supposed to provide protection against the classic Win95 problem where
downloading and installing a software upgrade changes the wrong .dll and
stuff doesn't match anymore, resulting in the Blue Screen of Death.
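
The check-and-refresh cycle described above can be pictured with a small
sketch.  This is NOT the actual Windows File Protection mechanism (the real
catalog format and signature algorithm are internal to Win2K); it is a
hypothetical illustration of comparing file digests against a dllcache-style
directory of known-good copies:

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Return a hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_and_repair(system_dir: Path, cache_dir: Path) -> list:
    """Compare each protected file against its cached copy; 'repair'
    (overwrite from the cache) any file whose digest no longer matches.
    Returns the names of the files that were refreshed."""
    repaired = []
    for cached in sorted(cache_dir.iterdir()):
        live = system_dir / cached.name
        if not live.exists() or digest(live) != digest(cached):
            live.write_bytes(cached.read_bytes())  # refresh from cache copy
            repaired.append(cached.name)
    return repaired
```

The point of the sketch: the *verification* is cheap (a digest compare),
which is why it can run at every boot without re-reading the whole catalog.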

All well and good, works for me.

Now Microsoft says that these "system protected files" (including the boot
files), the Registry, the Active Directory (for servers), and the other
components of "SYSTEM OBJECT" should all be backed up and restored as a set.
I.E., you shouldn't restore the registry without restoring the system files,
and vice versa.  And you aren't supposed to restore individual .dll files;
it's all or nothing.

So I can see that TSM needs to support that, and I can support that.  I just
question the current TSM implementation.

I have thought about limiting the SYSTEM OBJECT backup, as you have done,
but MANY of the restores we do here include the registry, and I assume you
can get into trouble if your copy of the registry is not taken at the same
time as the SYSTEM FILES, which include that "catalog" of "good" SYSTEM
FILES.  (In fact, the 4.1.2 Win client redbook says you MUST run QUERY
SYSTEMOBJECT and check to make sure your SYSTEM FILES and REGISTRY were
backed up at the same time before trying to restore - how vital that is, I'm
not sure.)

The practical differences in backing these files up as part of the C: drive
backup vs. backing them up as part of "SYSTEM OBJECT", are that 1) backing
them up as part of SYSTEM OBJECT backs them up whether they have changed or
not, and 2) Microsoft says you aren't supposed to restore these files
individually anymore.

Again, my problem is with the WAY TSM has implemented this.  If you back up
the SYSTEM OBJECT, you can't restore the files individually.  But if you run:

SELECT * FROM BACKUPS WHERE FILESPACE_NAME='SYSTEM OBJECT'

you can see all the cazillion entries for all those individual objects.  You
can also see each file processed individually in dsmsched.log.  So it
APPEARS we are taking the overhead of backing them up individually, without
getting any of the benefits of restoring them individually.

Seems to me if you can only restore them as a set, the proper implementation
would be to have ONE data base entry describing the set, and any other
necessary descriptions of individual objects that are part of the set should
be stored INSIDE the set itself, not in the database.
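
The alternative implementation suggested here (one database entry per set,
with the per-file metadata traveling inside the stored object) could look
roughly like this.  The class and field names are invented for illustration
and do not correspond to any actual TSM structure:

```python
from dataclasses import dataclass, field

@dataclass
class SystemObjectSet:
    """One catalog entry describing an entire SYSTEM OBJECT backup.
    The per-file details live inside the stored object itself, not as
    individual rows in the server database."""
    node: str
    backup_date: str
    member_count: int
    # Member metadata (name, size, signature) is packed with the data,
    # so expiration touches ONE entry instead of ~1,586 per machine.
    members: list = field(default_factory=list)

entry = SystemObjectSet(node="GBHINFRASER0040",
                        backup_date="2001-06-01",
                        member_count=1586)
```

Under this scheme a nightly backup adds one catalog row per machine, and
EXPIRE INVENTORY deactivates one object per machine rather than thousands.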

But much of this is speculation on my part.  All I have to go by is the
information in the Tech Guide redbook, what I see, and the fact that I have
a problem since we implemented SYSTEM OBJECT backups.

Thanks for the input...
Wanda

-Original Message-
From: Bill Colwell [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 01, 2001 4:57 PM
To: [EMAIL PROTECTED]
Subject: Re: Issues with Win2K SYSTEM OBJECT/SYSTEM FILES backup


Wanda, thanks for the info.

I was aware of this feature so I am staying on 4.1.1.16, where 'system
objects' is not in the default domain.  For each w2k machine I make an
additional schedule to back up 'system objects' every 4 weeks.

I wonder what is the difference between backing up these files thru the
normal backup of the C drive and backing them up as part of 'system
objects'.  Put another way, what necessary data or meta-data is added by
using the system objects path.  This is probably more a question about the
win2k system than about tsm.

--
------
Bill Colwell
C. S. Draper Lab
Cambridge, Ma.
[EMAIL PROTECTED]
--



In <[EMAIL PROTECTED]>, on 06/01/01
   at 04:56 PM, "Prather, Wanda" <[EMAIL PROTECTED]> said:

>I have run into some ugly issues with the Win2K backup of the "SYSTEM
>OBJECT".

>I'm throwing the information out here to warn other people what to expect,
>and hopefully to get the developers to reconsider the current
>implementation.

>The SYSTEM FILES component of the "SYSTEM OBJECT" on Win2K consists of over
>1500 .dll and .exe and .obj files from (mostly) the WinNT/system32
>directory.  These files are backed up EVERY TIME an inc
