Re: What table is the "q drive" WWN and Serial number stored in?

2006-03-13 Thread Large, M (Matthew)
You could always look here:

http://www.tsmwiki.com/tsmwiki/show?action=fullsearch&context=180&value=show

Regards,
Matthew 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Wojtek Piecek
Sent: 11 March 2006 00:56
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] What table is the "q drive" WWN and Serial number
stored in?

Nice. Is there a list of the undocumented 'show something' commands anywhere?

On 3/11/06, Josh-Daniel Davis <[EMAIL PROTECTED]> wrote:
> These are pulled by the server during startup and stored in temporary 
> tables that are inaccessible by SQL commands.
>
> You can get the WWN from SHOW LIBR.
>
> -Josh
>
>
>
> On 06.03.10 at 07:56 [EMAIL PROTECTED] wrote:
>
> > Date: Fri, 10 Mar 2006 07:56:52 -0800
> > From: T. Lists <[EMAIL PROTECTED]>
> > Reply-To: "ADSM: Dist Stor Manager" 
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: What table is the "q drive"  WWN and Serial number stored
in?
> >
> > Q drive gives a WWN and serial number, however if you just select 
> > from the DRIVES table you don't get that.
> > What table is that information stored in?
> >
> > tsm: TSM02>q drive * drive01 f=d
> >
> >Library Name: 3584LIB
> >Drive Name: DRIVE01
> >Device Type: LTO
> >On-Line: Yes
> >Read Formats:
> > ULTRIUM2C,ULTRIUM2,ULTRIUMC,ULTRIUM
> >Write Formats:
> > ULTRIUM2C,ULTRIUM2,ULTRIUMC,ULTRIUM
> >Element: 270
> >Drive State: EMPTY
> >Allocated to:
> > *  WWN: 500507630001F012
> > *  Serial Number: 9110108472
> >Last Update by (administrator): STACY
> >Last Update Date/Time: 03/09/06   16:16:17
> >Cleaning Frequency
> > (Gigabytes/ASNEEDED/NONE): NONE
> >
> >
> > tsm: TSM02>select * from drives where drive_name='DRIVE01'
> >
> >  LIBRARY_NAME: 3584LIB
> >DRIVE_NAME: DRIVE01
> >   DEVICE_TYPE: LTO
> >ONLINE: YES
> >  READ_FORMATS: ULTRIUM2C,ULTRIU
> > WRITE_FORMATS: ULTRIUM2C,ULTRIU
> >   ELEMENT: 270
> >  ACS_DRIVE_ID:
> >   DRIVE_STATE: EMPTY
> >  ALLOCATED_TO:
> > LAST_UPDATE_BY: STACY
> >   LAST_UPDATE: 2006-03-09 16:16:17.00
> >CLEAN_FREQ:
> >  DRIVE_SERIAL:
> >
> >
> >
> > __
> > Do You Yahoo!?
> > Tired of spam?  Yahoo! Mail has the best spam protection around 
> > http://mail.yahoo.com
> >
>


--
--w


Netware 6.5 SP2 or greater issue

2006-03-13 Thread Jim Bollard
All,

I have a Unicode issue: NetWare cannot handle certain special Unicode
characters. This is only seen on large file-and-print servers where files
are saved with unusual strings containing, for example, [2006]; this
causes the client to hang and the backup halts. I have applied all the
NetWare patches recommended in the README on the FTP site where I
downloaded the 5.3.2.0 client from IBM, and I patched it to Fix Pack 2
for 5.3, but it has had no effect. Essentially it's a Novell issue, but
they are blaming IBM and vice versa; I am sure many of you have seen
this before.

Is there a workaround?
Would the 5.2.4 client for NetWare be better?
My TSM server is W2K3 running 5.3.2.

Thanks for any help on this one..

Regards,

Jim Bollard.


Re: linux dsmcad problem

2006-03-13 Thread Dirk Kastens

Andrew Raibeck schrieb:

Nevertheless, the section in the client manual regarding dsmcad is not
well described, IMO.



What is wrong or missing? What would you like to see?


I think the section "Configuring Tivoli Storage Manager client/server
communication across a firewall" should be enhanced to describe the
consequences of using dsmcad. I was confused by the statement "Method 2:
For the client scheduler in prompted mode, it is unnecessary to open any
ports on the firewall." I had been using prompted mode for years with
"dsmc sched" running on the client nodes. It didn't occur to me that I
now have to use the WEBPORTS option, because I thought it was related to
the web client, which I don't want to use.
The "ADSM Quick Facts" gets to the point: "The WEBports option controls
the scheduler side's port number (oddly enough)."
I'm missing a statement like: "If you use dsmcad to start the client
scheduler, you have to define the WEBPORTS option on the client."
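
As a concrete illustration of that missing statement, here is a minimal
dsm.sys stanza sketch for a dsmcad-managed scheduler behind a firewall.
The server name, address, and port numbers are illustrative assumptions,
not values from this thread:

```
* dsm.sys sketch -- illustrative values only
SErvername        TSM02
   COMMMethod       TCPip
   TCPServeraddress tsm02.example.com
   SCHEDMODe        PROMPTED
   MANAGEDServices  schedule
   * With dsmcad managing the scheduler, WEBPORTS pins the two client
   * listening ports, so the firewall can be opened for exactly these:
   WEBPorts         1581 1582
```

Without WEBPORTS, the client-side listening ports are assigned
dynamically, which is what makes a firewall rule impossible to write.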

--
Regards,

Dirk Kastens
Universitaet Osnabrueck, Rechenzentrum (Computer Center)
Albrechtstr. 28, 49069 Osnabrueck, Germany
Tel.: +49-541-969-2347, FAX: -2470


MAXSIZE

2006-03-13 Thread Remco Post
Hi all,

I'm a bit confused, so I was hoping maybe the list could help.

When I read the help on def stg/upd stg for the Maxsize parameter, it
mentions two things:

1- It's the size of the physical file (aggregate).
2- It's the size of the file before compression, if compression is used.

Now during backup I can imagine that 2 is used, but during
migration/backup stg/whatever I can only imagine TSM using either the
size of the aggregate or the size of the file on the filesystem, not
both. So which is it? Is using the size of the file before compression
the old (pre-aggregate) way of doing things? Is the manual wrong? Is the
help wrong?

(And yes, I've read the Quick Facts, and no, they don't make things any
clearer.)

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams


Create archives from primary storagepool

2006-03-13 Thread Erik Björndell
Hi!
 
I've got a 5.3 TSM server backing up ~50 nodes to disk.
 
Can I create archives on tape from the primary storage pool instead of
running archiving from the nodes?
 
 
 
Erik Björndell 
IT Consultant 
Semcon IT Solutions 
Kardanvägen 37 
SE-461 38 Trollhättan 
Phone +46(0)520-40 08 00 
Mobile +46(0)736-84 04 06 
SMS [EMAIL PROTECTED]   
E-mail [EMAIL PROTECTED]   
Internet http://www.semcon.se   
 


Re: Create archives from primary storagepool

2006-03-13 Thread Richard Sims

On Mar 13, 2006, at 5:41 AM, Erik Björndell wrote:


Hi!

I've got a 5.3 TSM server backing up ~50 nodes to disk.

Can I create archives on tape from the primary storage pool instead
of running archiving from the nodes?


Erik - TSM Archive storage objects must be created from the client.
   Only client operations provide the ability to operate upon
individual files.

The closest thing you could do from the server is create a Backup Set,
which is not what you want.
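
For completeness, the server-side operation Richard refers to looks
roughly like this; the node name, backup set prefix, and device class
below are hypothetical:

```
* Admin command sketch: package NODE1's active backup versions onto
* tape media of device class LTOCLASS, kept for 365 days. This yields
* a backup set (backup data), not archive objects.
GENerate BACKUPSET NODE1 MONTHLYSET * DEVclass=LTOCLASS RETention=365
```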

   Richard Sims


Re: TSM client API upgrade and TDP for oracle

2006-03-13 Thread Loon, E.J. van - SPLXM
Hi Tae!
No, this is not necessary. The API client is only used during a backup,
so if no backup is running, you can replace all files during
installation and no restart is required.
However, I would wait a few days before starting the upgrade: the
5.3.3.0 client will be released this month.
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Tae Kim
Sent: vrijdag 10 maart 2006 17:39
To: ADSM-L@VM.MARIST.EDU
Subject: TSM client API upgrade and TDP for oracle

Hi guys and gals,

Currently I am trying to upgrade various AIX TSM clients, which are
v5.2.2 or v5.1.5, to TSM client version 5.3.2. I thought that clients
could be upgraded without having to upgrade the TSM API, but I was
wrong: I do need to upgrade both the API and the clients. The issue is
that these TSM clients also have TDP for Oracle running. Will the
upgrade of the TSM API require a restart of the Oracle DB (as when
upgrading TDP for Oracle, where you need to restart Oracle)?
Thanks for your input.


Tae





"T. Lists" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" 
03/10/2006 10:56 AM
Please respond to
"ADSM: Dist Stor Manager" 


To
ADSM-L@VM.MARIST.EDU
cc

Subject
[ADSM-L] What table is the "q drive"  WWN and Serial number stored in?






Q drive gives a WWN and serial number, however if you just select from
the DRIVES table you don't get that.
What table is that information stored in?

tsm: TSM02>q drive * drive01 f=d

Library Name: 3584LIB
Drive Name: DRIVE01
Device Type: LTO
On-Line: Yes
Read Formats:
ULTRIUM2C,ULTRIUM2,ULTRIUMC,ULTRIUM
Write Formats:
ULTRIUM2C,ULTRIUM2,ULTRIUMC,ULTRIUM
Element: 270
Drive State: EMPTY
Allocated to:
 *  WWN: 500507630001F012
 *  Serial Number: 9110108472
Last Update by (administrator): STACY
Last Update Date/Time: 03/09/06   16:16:17
Cleaning Frequency
(Gigabytes/ASNEEDED/NONE): NONE


tsm: TSM02>select * from drives where
drive_name='DRIVE01'

  LIBRARY_NAME: 3584LIB
DRIVE_NAME: DRIVE01
   DEVICE_TYPE: LTO
ONLINE: YES
  READ_FORMATS: ULTRIUM2C,ULTRIU
 WRITE_FORMATS: ULTRIUM2C,ULTRIU
   ELEMENT: 270
  ACS_DRIVE_ID:
   DRIVE_STATE: EMPTY
  ALLOCATED_TO:
LAST_UPDATE_BY: STACY
   LAST_UPDATE: 2006-03-09 16:16:17.00
CLEAN_FREQ:
  DRIVE_SERIAL:



__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com




+=+
This message may contain confidential and/or privileged information.  If
you are not the addressee or authorized to receive this for the
addressee, you must not use, copy, disclose or take any action based on
this message or any information herein.  If you have received this
message in error, please advise the sender immediately by reply e-mail
and delete this message.  Thank you for your cooperation.
+=+


**
For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), 
its subsidiaries and/or its employees shall not be liable for the incorrect or 
incomplete transmission of this e-mail or any attachments, nor responsible for 
any delay in receipt.
**


Re: Create archives from primary storagepool

2006-03-13 Thread Erik Björndell
Hi!

Yeah, I realized that myself when looking through the manual.

Thanks for confirming :) 


Erik Björndell 
IT Consultant 
Semcon IT Solutions 
Kardanvägen 37 
SE-461 38 Trollhättan 
Phone +46(0)520-40 08 00 
Mobile +46(0)736-84 04 06 
SMS [EMAIL PROTECTED] 
E-mail [EMAIL PROTECTED] 
Internet http://www.semcon.se 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard 
Sims
Sent: den 13 mars 2006 13:44
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Create archives from primary storagepool

On Mar 13, 2006, at 5:41 AM, Erik Björndell wrote:

> Hi!
>
> Ive got a 5.3 TSM server backing up ~50 nodes to disk.
>
> Can i create archives on tape from the primary storagepool instead of 
> running archiving from the nodes.

Erik - TSM Archive storage objects must be created from the client.
Only client operations provide the ability to operate upon individual 
files.

The closest thing you could do from the server is create a Backup Set, which is 
not what you want.

Richard Sims




Re: MAXSIZE

2006-03-13 Thread Richard Sims

On Mar 13, 2006, at 6:00 AM, Remco Post wrote:


Hi all,

I'm a bit confused, so I was hoping maybe the list could help.

When I read the help on def stg/upd stg for the Maxsize parameter, it
mentions two things:

1- It's the size of the physical file (aggregate).
2- It's the size of the file before compression, if compression is used.

Now during backup I can imagine that 2 is used, but during
migration/backup stg/whatever I can only imagine TSM using either the
size of the aggregate or the size of the file on the filesystem, not
both. So which is it? Is using the size of the file before compression
the old (pre-aggregate) way of doing things? Is the manual wrong? Is the
help wrong?

(And yes, I've read the Quick Facts, and no, they don't make things any
clearer.)



Hi, Remco -

I think you picked up the item about compression being a participant
in storage pool operations, from the TSM Concepts redbook.
Unfortunately, that part of the redbook is poorly written, failing to
explain the context of its discussion, leading to confusion.

Compression is a factor only when a file is being backed up, and at
that point the TSM server is evaluating the size reported by the
client in deciding which storage pool the new object (actually, an
Aggregate for B/A; an individual file for HSM and perhaps TDPs) will
land in. Once in TSM server storage, the object is just a clump of
bits: no considerations for compression prevail. Where it will fit
thereafter is a function of Aggregate size (which can shrink during
reclamation operations, most visibly via MOVe Data RECONStruct=Yes).
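
That reclamation-time shrinkage can also be forced explicitly; a sketch
with a hypothetical volume name:

```
* Rewrite the volume's aggregates, dropping expired files from them,
* so each aggregate shrinks to its live contents:
MOVe Data VOL0042 RECONStruct=Yes
```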

Richard Sims


Disaster recovery of Windows 2003 Server

2006-03-13 Thread Doyle, Patrick
Has there been any updates to the disaster recovery documents for 2003
server?

The following refer to Windows 2000 only,

Disaster Recovery Strategies with Tivoli Storage Management
http://www.redbooks.ibm.com/redbooks/pdfs/sg246844.pdf

Summary BMR Procedures for Windows NT and Windows 2000 with ITSM
http://www.redbooks.ibm.com/abstracts/tips0102.html?Open


In particular, references to "dsmc restore systemobject" seem to be
obsolete. TSM Client 5.3.2.0 now sees "system services" and "system
state" as replacements for "systemobject".

Is anyone aware of an update?

Regards,
Pat.


Zurich Insurance Ireland Limited t/a Eagle Star is regulated by the Financial 
Regulator
**
The contents of this e-mail and possible associated attachments
are for the addressee only. The integrity of this non-encrypted
message cannot be guaranteed on the internet.
Zurich Insurance Ireland Limited t/a Eagle Star is therefore
not responsible for the contents. If you are not the intended recipient,
please delete this e-mail and notify the sender
**


Re: Netware 6.5 SP2 or greater issue

2006-03-13 Thread Troy Frank
Assuming NetWare has all the latest patches (NW65SP5, TSAUP18), I would
be inclined to say try one of the TSM 5.2.x clients. Anecdotal evidence
around here seems to point to weird things going on with the newer 5.3.x
clients on NetWare.


>>> [EMAIL PROTECTED] 3/13/2006 3:38:16 AM >>>
All,

I have a Unicode issue: NetWare cannot handle certain special Unicode
characters. This is only seen on large file-and-print servers where
files are saved with unusual strings containing, for example, [2006];
this causes the client to hang and the backup halts. I have applied all
the NetWare patches recommended in the README on the FTP site where I
downloaded the 5.3.2.0 client from IBM, and I patched it to Fix Pack 2
for 5.3, but it has had no effect. Essentially it's a Novell issue, but
they are blaming IBM and vice versa; I am sure many of you have seen
this before.

Is there a workaround?
Would the 5.2.4 client for NetWare be better?
My TSM server is W2K3 running 5.3.2.

Thanks for any help on this one..

Regards,

Jim Bollard.




Re: Disaster recovery of Windows 2003 Server

2006-03-13 Thread Henrik Wahlstedt
There is a Redpaper called "IBM Tivoli Storage Manager: Bare Machine
Recovery for Microsoft Windows 2003 and XP". Search for it on the
Redbooks site.

//Henrik

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Doyle, Patrick
Sent: den 13 mars 2006 15:17
To: ADSM-L@VM.MARIST.EDU
Subject: Disaster recovery of Windows 2003 Server

Has there been any updates to the disaster recovery documents for 2003
server?

The following refer to Windows 2000 only,

Disaster Recovery Strategies with Tivoli Storage Management
http://www.redbooks.ibm.com/redbooks/pdfs/sg246844.pdf

Summary BMR Procedures for Windows NT and Windows 2000 with ITSM
http://www.redbooks.ibm.com/abstracts/tips0102.html?Open


In particular, references to "dsmc restore systemobject" seem to be
obsolete. TSM Client 5.3.2.0 now sees "system services" and "system
state" as replacements for "systemobject".

Is anyone aware of an update?

Regards,
Pat.






restoring backup sets from one node to another

2006-03-13 Thread Timothy Hughes
I am having problems restoring a backup set from one node to
another. I issued "set access archive * *" on node DOQCRPS and tried to
restore the same backup set to DOQCW2, but I get the error
ANS1934E Backup set 'monthlydocqrppoi.410125866' not found.
I checked the backup set name; it is still active and the name is
correct.

If I try a set access on DOQCRPS that specifies a path (e.g., set
access archive poi:/* *) I always get ANS1083E No files have previously
been archived for 'poi:/*'


The command used that got ANS1934E on CW2 was:

restore backupset monthlydoqcrppoi.410125866 poi:swsp/ poi:restore/
-subdir=yes

This backupset was created on DOQCRPS and I issued "set access archive
* *" on DOQCRPS before attempting the restore on CW2. But I'm not sure
the command is working right because when I try to set access to a
specified path, e.g. "set access archive poi:/* *" I get the other error

ANS1083E.


We should be able to restore a backup set created on one server to a
different server, correct? Is there something I am missing?

P.S. - Is it just me or is information regarding backups extremely
limited?

Thanks for any help in advance!


TSM 5.3.2.1
Novell client 5.3.0.12


Thanks


Re: restoring backup sets from one node to another

2006-03-13 Thread Andrew Raibeck
The SET ACCESS and -FROMNODE options do not pertain to backup sets. You
cannot use -FROMNODE to restore data from another node's backup sets. You
must connect with the node name for which the backup set was created.

Also, backup sets contain backup data only; they do not contain archive
data.
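
A sketch of connecting as the owning node from another machine, reusing
the set name from this thread. Whether this works for backup sets at
this client level should be verified against the client manual;
VIRTUALNODENAME is the general client option for acting as another node
and prompts for that node's password:

```
* Run on DOQCW2, authenticating as the node that owns the backup set:
dsmc restore backupset monthlydoqcrppoi.410125866 poi:swsp/ poi:restore/
     -subdir=yes -virtualnodename=DOQCRPS
```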

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

IBM Tivoli Storage Manager support web page:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager"  wrote on 2006-03-13
09:08:34:

> I am having problems restoring a backup set from one node to
> another. I set access archive * * on node DOQCRPS and tried to
> restore the same backupset to DOQCW2, but I get the error
> ANS1934E Backup set 'monthlydocqrppoi.410125866' not found.
> I checked the backupset name and it is still active and the name is
> correct.
>
> If I try a set access on DOQCRPS that specifies a path (e.g., set
> access archive poi:/* *) I always get ANS1083E No files have previously
> been archived for 'poi:/*'
>
>
> The command used that got ANS1934E on CW2 was:
>
> restore backupset monthlydoqcrppoi.410125866 poi:swsp/ poi:restore/
> -subdir=yes
>
> This backupset was created on DOQCRPS and I issued "set access archive
> * *" on DOQCRPS before attempting the restore on CW2. But I'm not sure
> the command is working right because when I try to set access to a
> specified path, e.g. "set access archive poi:/* *" I get the other error
>
> ANS1083E.
>
>
> We should be able to restore a backupset created on one server to a
> different server correct? Is there something I am missing?
>
> P.S. - Is it just me or is information regarding backups extremely
> limited?
>
> Thanks for any help in advance!
>
>
> TSM 5.3.2.1
> Novell client 5.3.0.12
>
>
> Thanks


Re: Limits to TSM Reporting Tool?

2006-03-13 Thread Lloyd Dieter
There's a limit of 62 reports (or was, for the 5.2.X versions).  The
service apparently can't handle more than that.

I work around it by creating batch files that execute the reports/monitors
from the command line, and run them from the windows scheduler.  Works
like a champ.

-Lloyd


On Fri, 10 Mar 2006 10:22:55 -0500
Dennis Melburn W IT743 <[EMAIL PROTECTED]> wrote thusly:

> Is there a limit to the number of reports (aka containers) that the
> reporting tool can handle?  I've been seeing problems with the reporting
> tool not working at all as soon as pass the 60 mark.  Anyone else seeing
> this?  Any way to increase this threshold or is this a limitation in the
> software?
>
>
> Mel Dennis


Re: restoring backup sets from one node to another

2006-03-13 Thread Timothy Hughes
Andrew,

Thanks! Does this mean that we cannot restore a backup set to a
different node?


Andrew Raibeck wrote:

> The SET ACCESS and -FROMNODE options do not pertain to backup sets. You
> cannot use -FROMNODE to restore data from another node's backup sets. You
> must connect with the node name for which the backup set was created.
>
> Also, backup sets contain backup data only; they do not contain archive
> data.
>
> Regards,
>
> Andy
>
> Andy Raibeck
> IBM Software Group
> Tivoli Storage Manager Client Development
> Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
> Internet e-mail: [EMAIL PROTECTED]
>
> IBM Tivoli Storage Manager support web page:
> http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
>
> The only dumb question is the one that goes unasked.
> The command line is your friend.
> "Good enough" is the enemy of excellence.
>
> "ADSM: Dist Stor Manager"  wrote on 2006-03-13
> 09:08:34:
>
> > I am having problems restoring a backup set from one node to
> > another. I set access archive * * on node DOQCRPS and tried to
> > restore the same backupset to DOQCW2, but I get the error
> > ANS1934E Backup set 'monthlydocqrppoi.410125866' not found.
> > I checked the backupset name and it is still active and the name is
> > correct.
> >
> > If I try a set access on DOQCRPS that specifies a path (e.g., set
> > access archive poi:/* *) I always get ANS1083E No files have previously
> > been archived for 'poi:/*'
> >
> >
> > The command used that got ANS1934E on CW2 was:
> >
> > restore backupset monthlydoqcrppoi.410125866 poi:swsp/ poi:restore/
> > -subdir=yes
> >
> > This backupset was created on DOQCRPS and I issued "set access archive
> > * *" on DOQCRPS before attempting the restore on CW2. But I'm not sure
> > the command is working right because when I try to set access to a
> > specified path, e.g. "set access archive poi:/* *" I get the other error
> >
> > ANS1083E.
> >
> >
> > We should be able to restore a backupset created on one server to a
> > different server correct? Is there something I am missing?
> >
> > P.S. - Is it just me or is information regarding backups extremely
> > limited?
> >
> > Thanks for any help in advance!
> >
> >
> > TSM 5.3.2.1
> > Novell client 5.3.0.12
> >
> >
> > Thanks


[no subject]

2006-03-13 Thread Robin Sharpe
Dear colleagues,

It's time for us to split our TSM server into several new instances
because our database is now just too large -- 509GB -- and still
growing.  My initial plan is to create five TSM instances - four plus a
library manager - on the existing server (an 8-way, 12GB HP rp7410 with
15 PCI slots).  This is cost-effective, since no additional hardware or
license is needed - just lots of SAN disk for the databases, which we
have available.  But I've been thinking: what do you think about the
following?

A more "creative" approach is to place the "new" TSM servers on existing
large clients.  This has several advantages:
- eliminates need to acquire new servers, saving physical room, power
and cooling requirements, additional maintenance.
- client benefits by sending its backup to local disk using shared
memory protocol. Eliminates potential network bottleneck.
- Client sends data to tapes using library sharing; no need for storage
agent.
- Use of local disk eliminates the need for SANergy
- heavy clients "pay" for their usage by providing backup services for
smaller clients.

There are also some concerns (not necessarily disadvantages):
- May require CPU, memory, and/or I/O upgrades (still cheaper than
buying a server)
- TSM operation may impact client's primary app.  Can be controlled by
PRM on HP-UX.
- Incurs licensing cost.

Thanks for any insights
Robin Sharpe
Berlex Labs


Re: MAXSIZE

2006-03-13 Thread Remco Post
Richard Sims wrote:
> On Mar 13, 2006, at 6:00 AM, Remco Post wrote:
>
>> Hi all,
>>
>> I'm a bit confused, so I was hoping maybe the list could help.
>>
>> When I read the help on def stg/upd stg for the Maxsize parameter, it
>> mentions two things:
>>
>> 1- It's the size of the physical file (aggregate).
>> 2- It's the size of the file before compression, if compression is
>> used.
>>
>> Now during backup I can imagine that 2 is used, but during
>> migration/backup stg/whatever I can only imagine TSM using either the
>> size of the aggregate or the size of the file on the filesystem, not
>> both. So which is it? Is using the size of the file before compression
>> the old (pre-aggregate) way of doing things? Is the manual wrong? Is
>> the help wrong?
>>
>> (And yes, I've read the Quick Facts, and no, they don't make things
>> any clearer.)
>>
>
> Hi, Remco -
>
> I think you picked up the item about compression being a participant
> in storage pool operations, from the TSM Concepts redbook.
> Unfortunately, that part of the redbook is poorly written, failing to
> explain the context of its discussion, leading to confusion.
>
> Compression is a factor only when a file is being backed up, and at
> that point the TSM server is evaluating the size reported by the
> client in deciding which storage pool the new object (actually, an
> Aggregate for B/A; an individual file for HSM and perhaps TDPs)

Well, actually, I can imagine the TSM server allocating the destination
resource on a per-file basis even for B/A client backups. This I get, not
from any redbook, but from both the Quick Facts and 'help def stg'. Or is
there a verb in the TSM protocol that says something like: 'hey server!
here's a bunch of files, the grand total is x bytes, make sure you're
ready to store it', where a bunch is defined by TXNGROUPMAX and
MOVEBATCHSIZE?

The reason I'm asking:

I've done a query on the contents table that tells me:

1- the number of files in an aggregate
2- the size of the aggregate

This is about as much info as the server has during reclamation/migration
etc. I'm trying to determine what percentage of the total number of
files, and what percentage of the total bytes stored, a given MAXSIZE
setting would cover (so for e.g. a maxsize of 10MB I have 73% of the
total number of files and they take up about 10% of the total data
volume). I could then determine the size of a 'FILE' pool that keeps all
'small' files on-line for my environment at this point in time.

Now if the maxsize is _always_ the size of the aggregate, this is a
correct figure (in my environment), but if in one case it is the size of
an individual file (B/A client) to be aggregated, and in another it is
the size of the aggregate... I'm, uhh, in trouble, because I'll need a
larger file pool for that setting (or I'll end up migrating files to tape
that I don't want to store on tape).
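
The sizing exercise Remco describes can be sketched numerically. Given
(file_count, aggregate_bytes) pairs such as those pulled from the
contents table, this hypothetical helper reports what fraction of stored
objects and bytes would fall under a candidate MAXSIZE; the sample data
is invented for illustration:

```python
# Sketch: fraction of files and of bytes covered by a candidate MAXSIZE,
# assuming MAXSIZE is compared against whole-aggregate sizes.

def maxsize_coverage(aggregates, threshold_bytes):
    """aggregates: iterable of (file_count, size_bytes) per aggregate."""
    aggregates = list(aggregates)
    total_files = sum(n for n, _ in aggregates)
    total_bytes = sum(s for _, s in aggregates)
    small_files = sum(n for n, s in aggregates if s <= threshold_bytes)
    small_bytes = sum(s for _, s in aggregates if s <= threshold_bytes)
    return small_files / total_files, small_bytes / total_bytes

# Invented sample: many small aggregates, a few large ones.
sample = [(100, 5 * 2**20), (10, 50 * 2**20), (1, 400 * 2**20)]
file_frac, byte_frac = maxsize_coverage(sample, 10 * 2**20)
```

If MAXSIZE were instead compared against individual pre-aggregation file
sizes, the same calculation would need per-file sizes, which is exactly
why the ambiguity matters for sizing the FILE pool.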

> will
> land in. Once in TSM server storage, the object is just a clump of
> bits: no considerations for compression prevail. Where it will fit
> thereafter is a function of Aggregate size (which can shrink during
> reclamation operations, most visibly via MOVe Data RECONStruct=Yes).
>

So for reclamation, migration and MOVE DATA, TSM working on aggregates
makes sense. But for B/A client activity both readings make sense, so
which is it?

> Richard Sims

We could of course just test to see what happens, but maybe somebody
already knows (developers? anyone?)

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams


Re: linux dsmcad problem

2006-03-13 Thread Remco Post
Dirk Kastens wrote:
> Andrew Raibeck schrieb:
>
>>> Nevertheless, the section in the client manual regarding dsmcad is not
>>> well described, IMO.
>>
>>
>>
>> What is wrong or missing? What would you like to see?
>
>
> I think, the section "Configuring Tivoli Storage Manager client/server
> communication across a firewall" should be enhanced to describe the
> consequences of using dsmcad. I was confused by the statement "Method 2:
> For the client scheduler in prompted mode, it is unnecessary to open any
> ports on the firewall."

Which is true if the firewall protects your server (rather than the
client). If your client is protected by a firewall, running schedmode
prompted does require a firewall change, and schedmode polling doesn't
(at least not on the client side).

It seems that a lot of configurations have the server sitting behind a
firewall while (apparently, judging from the manual) the client is
reachable by the server on any port. That is at least what the manual
assumes, while in my environment the reverse is true.

> I was using prompted mode for years with the
> "dsmc sched" running on the client nodes. It doesn't come to my mind
> that I now have to use the WEBPORTS option, because I thought this was
> related to the web client, that I don't want to use.
> The "ADSM Quick Facts" comes to the point: "The WEBports option controls
> the scheduler side's port number (oddly enough)."
> I'm missing a statement like: "If you use dsmcad to start the client
> scheduler, you have to define the webports option on the client".
>

Running either dsmcad or dsmc sched doesn't change this; both (should) behave
the same in this respect.
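For reference, a minimal client-side sketch of what Dirk is asking for (port numbers are arbitrary examples; pick ones your firewall admin will open, and check the option reference for the exact argument order):

```
* dsm.sys stanza fragment (Unix client; dsm.opt on Windows)
SCHEDMODE   prompted
* WEBPORTS pins the CAD/agent listening ports; without it the client
* picks ephemeral ports, which a client-side firewall will block when
* the server tries to prompt the scheduler
WEBPORTS    1501 1502
```

With SCHEDMODE polling the client initiates every connection, so only the server's TCPPORT needs to be open on the client-side firewall.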

> --
> Regards,
>
> Dirk Kastens
> Universitaet Osnabrueck, Rechenzentrum (Computer Center)
> Albrechtstr. 28, 49069 Osnabrueck, Germany
> Tel.: +49-541-969-2347, FAX: -2470


--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams


Re: Weekly/Monthly -Backupsets running long

2006-03-13 Thread Timothy Hughes
Josh,

Thanks for reply!

I am using group Collocation for the six nodes that I am running
Backupsets on and I also performed a Move Nodedata when
they were created.



Josh-Daniel Davis wrote:

> I would recommend creating a second copygroup and a private storage pool
> for that node.  You'd probably want to give it a DB snapshot of its own
> also.
>
> If you NEED backupsets, then you definitely need the source pool to be
> collocated.  If you have to, it could be by group with all of your other
> nodes in one group and this one in its own.  That would at least save
> media mount times on recreates.
>
> Your other option is EXPORT NODE using FROMDATE and FSID.  You can't
> restore it directly from the client, but it can still be imported to a
> stranger tsm server.  Also, trying to do incrementals this way would not
> clear out files which had been deleted between exports, so in a recovery,
> you'd be left with stray files.
>
> -Josh
>
> On 06.03.10 at 10:34 [EMAIL PROTECTED] wrote:
>
> > Date: Fri, 10 Mar 2006 10:34:39 -0500
> > From: Timothy Hughes <[EMAIL PROTECTED]>
> > Reply-To: "ADSM: Dist Stor Manager" 
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: Weekly/Monthly -Backupsets running long
> >
> > Mark,
> >
> > Thanks. These backup sets are for a Novell client; they are still
> > currently using ARCserve for their full weekly backup of a POI
> > volume on a Novell OS server until we get ours working
> > correctly.  I believe they do this because this volume is very
> > important and holds many, many GroupWise user files, and they
> > use the backup sets for archival/restores.
> >
> > So I assume this means there is no way to shorten these? They are
> > causing tape problems with Oracle clients on the weekend.
> >
> > Thanks again!
> >
> > Mark Stapleton wrote:
> >
> >> "ADSM: Dist Stor Manager"  wrote on 03/10/2006
> >> 09:05:13 AM:
> >>> I have Backup Sets that run extremely long these Backup Sets
> >>> backup only 1 file space. It seems the Backup Sets are backing
> >>> up the same Data plus the new Data it's like doing a Full backup
> >>> every week. Is there a command or setting that I can implement
> >>> to ensure that the Backup Set only backs up ONLY the NEW
> >>> Data? Or is there another way lessen the time on these
> >>> Backup Sets.
> >>
> >> By definition, a backupset creates a copy of the most recent version of
> >> *every* active file in a given server (or a filespace within that server).
> >>
> >> Sorry.
> >>
> >> As a matter of curiosity, why do you create regular backupsets? Do you
> >> ever use them?
> >>
> >> --
> >> Mark Stapleton ([EMAIL PROTECTED])
> >> MR Backup and Recovery Management
> >> 262.790.3190
> >>
> >


Re: restoring backup sets from one node to another

2006-03-13 Thread William
Yes you can. As Andy said, you must use the original node name.
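Concretely, on the target machine that means something like the following (using the names from this thread; the -nodename usage is hedged, see the client manual's backup set restore section for your level):

```
dsmc restore backupset monthlydoqcrppoi.410125866 poi:swsp/ poi:restore/ -subdir=yes -nodename=DOQCRPS
```

The client then prompts for DOQCRPS's password; SET ACCESS and -fromnode play no role in backup set restores.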

On 3/13/06, Timothy Hughes <[EMAIL PROTECTED]> wrote:
>
> Andrew,
>
> Thanks, Does this mean that we cannot restore a backupset to a
> different server?
>
>
> Andrew Raibeck wrote:
>
> > The SET ACCESS and -FROMNODE options do not pertain to backup sets. You
> > cannot use -FROMNODE to restore data from another node's backup sets.
> You
> > must connect with the node name for which the backup set was created.
> >
> > Also, backup sets contain backup data only; they do not contain archive
> > data.
> >
> > Regards,
> >
> > Andy
> >
> > Andy Raibeck
> > IBM Software Group
> > Tivoli Storage Manager Client Development
> > Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
> > Internet e-mail: [EMAIL PROTECTED]
> >
> > IBM Tivoli Storage Manager support web page:
> >
> http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
> >
> > The only dumb question is the one that goes unasked.
> > The command line is your friend.
> > "Good enough" is the enemy of excellence.
> >
> > "ADSM: Dist Stor Manager"  wrote on 2006-03-13
> > 09:08:34:
> >
> > > I am having problems restoring a backup set from one node to
> > > another. I set access archive * * on node DOQCRPS and tried to
> > > restore the same backupset to DOQCW2, but I get the error
> > > ANS1934E Backup set 'monthlydocqrppoi.410125866' not found.
> > > I checked the backupset name and it is still active and the name is
> > > correct.
> > >
> > > If I try a set access on DOQCRPS that specifies a path (e.g., set
> > > access archive poi:/* *) I always get ANS1083E No files have
> previously
> > > been archived for 'poi:/*'
> > >
> > >
> > > The command used that got ANS1934E on CW2 was:
> > >
> > > restore backupset monthlydoqcrppoi.410125866 poi:swsp/ poi:restore/
> > > -subdir=yes
> > >
> > > This backupset was created on DOQCRPS and I issued "set access archive
> > > * *" on DOQCRPS before attempting the restore on CW2. But I'm not sure
> > > the command is working right because when I try to set access to a
> > > specified path, e.g. "set access archive poi:/* *" I get the other
> error
> > >
> > > ANS1083E.
> > >
> > >
> > > We should be able to restore a backupset created on one server to a
> > > different server correct? Is there something I am missing?
> > >
> > > P.S. - Is it just me or is information regarding backups extremely
> > > limited?
> > >
> > > Thanks for any help in advance!
> > >
> > >
> > > TSM 5.3.2.1
> > > Novell client 5.3.0.12
> > >
> > >
> > > Thanks
>


Re: restoring backup sets from one node to another

2006-03-13 Thread Timothy Hughes
William,

Thanks..

I believe we are using the original node name and entering the
commands from that server. Do we just need to insert the
target server minus the Set Access command?



William wrote:

> Yes you can. As Andy said, you must use the original node name.
>
> On 3/13/06, Timothy Hughes <[EMAIL PROTECTED]> wrote:
> >
> > Andrew,
> >
> > Thanks, Does this mean that we cannot restore a backupset to a
> > different server?
> >
> >
> > Andrew Raibeck wrote:
> >
> > > The SET ACCESS and -FROMNODE options do not pertain to backup sets. You
> > > cannot use -FROMNODE to restore data from another node's backup sets.
> > You
> > > must connect with the node name for which the backup set was created.
> > >
> > > Also, backup sets contain backup data only; they do not contain archive
> > > data.
> > >
> > > Regards,
> > >
> > > Andy
> > >
> > > Andy Raibeck
> > > IBM Software Group
> > > Tivoli Storage Manager Client Development
> > > Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
> > > Internet e-mail: [EMAIL PROTECTED]
> > >
> > > IBM Tivoli Storage Manager support web page:
> > >
> > http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
> > >
> > > The only dumb question is the one that goes unasked.
> > > The command line is your friend.
> > > "Good enough" is the enemy of excellence.
> > >
> > > "ADSM: Dist Stor Manager"  wrote on 2006-03-13
> > > 09:08:34:
> > >
> > > > I am having problems restoring a backup set from one node to
> > > > another. I set access archive * * on node DOQCRPS and tried to
> > > > restore the same backupset to DOQCW2, but I get the error
> > > > ANS1934E Backup set 'monthlydocqrppoi.410125866' not found.
> > > > I checked the backupset name and it is still active and the name is
> > > > correct.
> > > >
> > > > If I try a set access on DOQCRPS that specifies a path (e.g., set
> > > > access archive poi:/* *) I always get ANS1083E No files have
> > previously
> > > > been archived for 'poi:/*'
> > > >
> > > >
> > > > The command used that got ANS1934E on CW2 was:
> > > >
> > > > restore backupset monthlydoqcrppoi.410125866 poi:swsp/ poi:restore/
> > > > -subdir=yes
> > > >
> > > > This backupset was created on DOQCRPS and I issued "set access archive
> > > > * *" on DOQCRPS before attempting the restore on CW2. But I'm not sure
> > > > the command is working right because when I try to set access to a
> > > > specified path, e.g. "set access archive poi:/* *" I get the other
> > error
> > > >
> > > > ANS1083E.
> > > >
> > > >
> > > > We should be able to restore a backupset created on one server to a
> > > > different server correct? Is there something I am missing?
> > > >
> > > > P.S. - Is it just me or is information regarding backups extremely
> > > > limited?
> > > >
> > > > Thanks for any help in advance!
> > > >
> > > >
> > > > TSM 5.3.2.1
> > > > Novell client 5.3.0.12
> > > >
> > > >
> > > > Thanks
> >


Re: TSM Server Hosting - dedicated vs. shared

2006-03-13 Thread Orville Lantto
TSM licensing (and pricing) has been based on the environment for some years.
Check with your IBM Business Partner to get the details.
 
Orville L. Lantto
Glasshouse Technologies, Inc.
Cell:  952-738-1933
 



From: ADSM: Dist Stor Manager on behalf of Robin Sharpe
Sent: Mon 3/13/2006 2:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM Server Hosting - dedicated vs. shared



Orville,
Thanks for your thoughts.  We do use Control-M for all of our scheduling in
the Unix environment, and are moving towards Windows deployment too.
I am surprised, though, about your comment on licensing.  I thought each
TSM server instance on a separate physical server needed a license (per
processor).  Is this not true? Is it a new policy?

Robin Sharpe
Berlex Labs


From: Orville Lantto (sent by "ADSM: Dist Stor Manager")
Date: 03/13/2006 01:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re:



The approach is valid and can reap significant backup/restore time benefits
for the clients.
Two points:

 1) No new licensing costs are involved.  TSM is
licensed by the environment, not the number of TSM servers.

 2) Consider the complexity of resource scheduling
between many servers.  Most sites have a limited number of tape drives, and
contention can be a bear to schedule out with so many independent servers
and their separate schedulers.  An external admin scheduling utility may be
needed.


Orville L. Lantto
Glasshouse Technologies, Inc.





From: ADSM: Dist Stor Manager on behalf of Robin Sharpe
Sent: Mon 3/13/2006 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L]



Dear colleagues,

It's time for us to split our TSM into several new instances because our
database is now just too large -- 509GB -- and still growing.  My initial
plan is to create five TSMs - four plus a library manager - on the existing
server (an 8-way, 12GB HP rp7410 with 15 PCI slots).  This is cost
effective since no additional hardware or license is needed - just lots of
SAN disk for the databases, which we have available.  But I've been
thinking: what do you think about the following?

A more "creative" approach is to place the "new" TSM servers on existing
large clients.  This has several advantages:
- eliminates need to acquire new servers, saving physical room, power
and cooling requirements, additional maintenance.
- client benefits by sending its backup to local disk using shared
memory protocol. Eliminates potential network bottleneck.
- Client sends data to tapes using library sharing; no need for storage
agent.
- Use of local disk eliminates the need for SANergy
- heavy clients "pay" for their usage by providing backup services for
smaller clients.

There are also some concerns (not necessarily disadvantages):
- May require CPU, memory, and/or I/O upgrades (still cheaper than
buying a server)
- TSM operation may impact client's primary app.  Can be controlled by
PRM on HP-UX.
- Incurs licensing cost.

Thanks for any insights
Robin Sharpe
Berlex Labs


TSM Server Hosting - dedicated vs. shared

2006-03-13 Thread Robin Sharpe
Orville,
Thanks for your thoughts.  We do use Control-M for all of our scheduling in
the Unix environment, and are moving towards Windows deployment too.
I am surprised, though, about your comment on licensing.  I thought each
TSM server instance on a separate physical server needed a license (per
processor).  Is this not true? Is it a new policy?

Robin Sharpe
Berlex Labs


From: Orville Lantto (sent by "ADSM: Dist Stor Manager")
Date: 03/13/2006 01:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re:



The approach is valid and can reap significant backup/restore time benefits
for the clients.
Two points:

 1) No new licensing costs are involved.  TSM is
licensed by the environment, not the number of TSM servers.

 2) Consider the complexity of resource scheduling
between many servers.  Most sites have a limited number of tape drives, and
contention can be a bear to schedule out with so many independent servers
and their separate schedulers.  An external admin scheduling utility may be
needed.


Orville L. Lantto
Glasshouse Technologies, Inc.





From: ADSM: Dist Stor Manager on behalf of Robin Sharpe
Sent: Mon 3/13/2006 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L]



Dear colleagues,

It's time for us to split our TSM into several new instances because our
database is now just too large -- 509GB -- and still growing.  My initial
plan is to create five TSMs - four plus a library manager - on the existing
server (an 8-way, 12GB HP rp7410 with 15 PCI slots).  This is cost
effective since no additional hardware or license is needed - just lots of
SAN disk for the databases, which we have available.  But I've been
thinking: what do you think about the following?

A more "creative" approach is to place the "new" TSM servers on existing
large clients.  This has several advantages:
- eliminates need to acquire new servers, saving physical room, power
and cooling requirements, additional maintenance.
- client benefits by sending its backup to local disk using shared
memory protocol. Eliminates potential network bottleneck.
- Client sends data to tapes using library sharing; no need for storage
agent.
- Use of local disk eliminates the need for SANergy
- heavy clients "pay" for their usage by providing backup services for
smaller clients.

There are also some concerns (not necessarily disadvantages):
- May require CPU, memory, and/or I/O upgrades (still cheaper than
buying a server)
- TSM operation may impact client's primary app.  Can be controlled by
PRM on HP-UX.
- Incurs licensing cost.

Thanks for any insights
Robin Sharpe
Berlex Labs


Re: MAXSIZE

2006-03-13 Thread Richard Sims

On Mar 13, 2006, at 12:24 PM, Remco Post wrote:


Well, actually, I can imagine the TSM server allocating the destination
resource on a per-file basis even for B/A client backups. This I get, not
from any redbook, but from both the QuickFacts and 'help def stg'. Or is
there a verb in the TSM protocol that says something like: 'hey server!
here's a bunch of files, the grand total is x bytes, make sure you're
ready to store it', where bunch is defined by txngroupmax and movebatchsize?


You can refer to the description of transaction processing in the
Admin Guide manual, and the original description of Small File
Aggregation in the ADSMv3 Technical Guide, to appreciate the
mechanism and its controls. The combination of TXNGroupmax and
TXNBytelimit govern the transaction and Aggregate size. In modern TSM
servers, TXNGroupmax may be defined individually for each node.
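For example, a sketch of the two knobs involved (values are the 5.3-era defaults as I recall them; verify against your server's and client's help text):

```
/* server side: per-node cap on the number of files per transaction */
UPDATE NODE somenode TXNGROUPMAX=256
* client side (dsm.sys): cap on kilobytes per transaction
TXNBYTELIMIT 25600
```

Together these bound how large a transaction, and therefore an Aggregate, can grow.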



The reason I'm asking:

I've done a query on the contents table that tells me:

1- the number of files in an aggregate
2- the size of the aggregate

This is about as much info as the server has during reclamation/migration,
etc. I'm trying to determine what percentage of the total number of files,
and what percentage of the total bytes stored, a given MAXSIZE setting
would cover (so e.g. with a maxsize of 10MB I have 73% of the total number
of files and they take up about 10% of the total data volume). I could
then determine the size of a 'FILE' pool to keep all 'small' files on-line
for my environment at this point in time.
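The estimate being described here is simple to sketch; the sizes and the 10MB cutoff below are hypothetical stand-ins for real contents-table output, and the function name is mine:

```python
def coverage(file_sizes, maxsize):
    """Percent of files, and of total bytes, at or below a candidate MAXSIZE."""
    small = [s for s in file_sizes if s <= maxsize]
    file_pct = 100.0 * len(small) / len(file_sizes)
    byte_pct = 100.0 * sum(small) / sum(file_sizes)
    return file_pct, byte_pct

# toy example: nine 1 MB files plus one 91 MB file
sizes = [1_000_000] * 9 + [91_000_000]
f_pct, b_pct = coverage(sizes, maxsize=10_000_000)
print(f"{f_pct:.0f}% of files, {b_pct:.0f}% of bytes")  # prints "90% of files, 9% of bytes"
```

The same arithmetic then sizes the FILE pool: the byte percentage times total stored data is the disk you need to keep all "small" files on-line.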

Now if the maxsize is _always_ compared against the size of the aggregate,
this is a correct figure (in my environment), but if in one case it is the
size of an individual file (B/A client) to be aggregated, and in another it
is the size of the aggregate... I'm, uhhh, in trouble, because I'll need a
larger file pool for that setting (or I'll end up migrating files to tape
that I don't want to store on tape).


Aggregation distances logical file size from the MAXSize value: you
have coarse, rather than fine control. You could conceptually reduce
your client TXNBytelimit, but that then applies to all transmissions,
and could impair performance. (Note the huge jump in the default
TXNBytelimit value in TSM 5.3, for performance.) MAXSize is thus a
reasonableness value for storage pool apportionment rather than a way
of having only files in a certain size range stored in a given
storage pool.

In considering all this, appreciate that TSM is an Enterprise
product, intended to deal with large volumes of data, en masse. Its
controls are by classes of data rather than by specific object
attributes, such as size. The objective is overall throughput speed,
to keep today's high volume of data moving. Keeping small files
separate from large files is not a product objective, where relative
age is a more common differentiator for restorals, and thus migration.

   Richard Sims


Re:

2006-03-13 Thread Orville Lantto
The approach is valid and can reap significant backup/restore time benefits for 
the clients.
Two points:

1) No new licensing costs are involved.  TSM is licensed by the
environment, not the number of TSM servers.

2) Consider the complexity of resource scheduling between many
servers.  Most sites have a limited number of tape drives, and contention can be
a bear to schedule out with so many independent servers and their separate
schedulers.  An external admin scheduling utility may be needed.

 
Orville L. Lantto
Glasshouse Technologies, Inc.
 
 



From: ADSM: Dist Stor Manager on behalf of Robin Sharpe
Sent: Mon 3/13/2006 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L]



Dear colleagues,

It's time for us to split our TSM into several new instances because our
database is now just too large -- 509GB -- and still growing.  My initial
plan is to create five TSMs - four plus a library manager - on the existing
server (an 8-way, 12GB HP rp7410 with 15 PCI slots).  This is cost
effective since no additional hardware or license is needed - just lots of
SAN disk for the databases, which we have available.  But I've been
thinking: what do you think about the following?

A more "creative" approach is to place the "new" TSM servers on existing
large clients.  This has several advantages:
- eliminates need to acquire new servers, saving physical room, power
and cooling requirements, additional maintenance.
- client benefits by sending its backup to local disk using shared
memory protocol. Eliminates potential network bottleneck.
- Client sends data to tapes using library sharing; no need for storage
agent.
- Use of local disk eliminates the need for SANergy
- heavy clients "pay" for their usage by providing backup services for
smaller clients.

There are also some concerns (not necessarily disadvantages):
- May require CPU, memory, and/or I/O upgrades (still cheaper than
buying a server)
- TSM operation may impact client's primary app.  Can be controlled by
PRM on HP-UX.
- Incurs licensing cost.

Thanks for any insights
Robin Sharpe
Berlex Labs


ANS1950E -Backup using Microsoft Volume Shadow Copy Failed Error in Client 5.3.0

2006-03-13 Thread Nancy L Backhaus
Hello,

We are still seeing the shadow copy issue in 5.3.2.2.  Anyone else still
seeing the same issue or do I need to open a call with IBM?


TSM Server - 5.3.2.2
Op System AIX 5.3
TSM Windows 2003 Client - 5.3.2.2

snipit from dsmerror.log


03/13/2006 00:11:31 CreateSnapshotSet(): pAsync->QueryStatus() returns
hr=VSS_E_PROVIDER_VETO
03/13/2006 00:11:33 ANS1999E Incremental processing of '\\tsmclient\d$'
stopped.
03/13/2006 00:11:33 ANS1950E Backup using Microsoft volume shadow copy
failed.



Nancy Backhaus
Enterprise Systems
HealthNow, NY
716-887-7979



[no subject]

2006-03-13 Thread Allen S. Rout
>> On Mon, 13 Mar 2006 12:03:04 -0500, Robin Sharpe <[EMAIL PROTECTED]> said:



[...]

I've got 12 servers on 2 boxes at the moment, 6 of which are more or
less application-specific.

The more TSM server instances, the more complexity, and the more you
must depend on automation and automatic validation and monitoring.
Because of this, I would recommend against calving off a TSM server
only because there is a powerful box running app X.

I split one app off to its own TSM server because of database
overhead: WebCT 'Campus Edition', which has some unsavory habits with
its filesystem, and caused lockups.

I split one app off because it was excessively sensitive to outages:
Content Manager.

I split off my Cyrus back ends because each single node of them had
"big" databases (though they're teeny compared to yours).


So, if you have application Q which you would tend to split off even
if the split instance were to stay on your main TSM server, then I'd
consider that a candidate for motion to the other box.


---


Another thing to consider is that this opens up application Q to the
union of the upgrade / outage / maintenance issues of Q and TSM.  This
might be a non-issue, but could possibly be a big deal.  (What, we
need new tape drivers, to match the microcode applied yesterday
morning on the drives? Well, you can schedule a production outage for
next tuesday...)


---


Once you're over the automation/planning/monitoring hump for multiple
servers, and the additional hump associated with servers on multiple
hosts, you'll find that lots of concepts become much easier:  DR for
the TSM infrastructure gets more straightforward because there's much
less implicit state, you've made much more of it explicit.  This is a
lot of work, but gives good value.



- Allen S. Rout


Re: restoring backup sets from one node to another

2006-03-13 Thread Chris Pasztor




There is a way around this issue, and I use it on occasion: I define a new
node I call ADHOC, grant the ADHOC node proxy authority for the source node,
do a selective backup of the files, and then create the backupset. Then I
define the new node on the target server, define the backupset there, and
restore it.
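Roughly, that sequence looks like the following (node, password, device-class, and volume names are invented; the exact syntax is hedged against a 5.x server, so check HELP for each command):

```
/* source server: create the helper node and let it act for the source node */
REGISTER NODE adhoc secretpw
GRANT PROXYNODE TARGET=srcnode AGENT=adhoc

/* client: selectively back up the wanted files under the source node's name */
dsmc selective "/data/*" -subdir=yes -asnodename=srcnode

/* source server: cut the backup set */
GENERATE BACKUPSET srcnode adhocset DEVCLASS=ltoclass

/* target server: make the generated media known, then restore as usual */
DEFINE BACKUPSET srcnode adhocset DEVCLASS=ltoclass VOLUMES=vol1
```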

Chris Pasztor
S/390 Systems Programmer/DB2 Database Administrator
Mantrack
Payroll, HR Service Solutions and Facilities Management

Phone:  +61 1800 060 171
DDL:    +61 (3) 9541 2605
Fax:    +61 (3) 9562 98221
Cell:   0411 229 151
Email:  [EMAIL PROTECTED]
URL:    www.mantrack.com


Re: MAXSIZE

2006-03-13 Thread Remco Post
Richard Sims wrote:
> On Mar 13, 2006, at 12:24 PM, Remco Post wrote:
>
> You can refer to the description of transaction processing in the
> Admin Guide manual, and the original description of Small File
> Aggregation in the ADSMv3 Technical Guide, to appreciate the
> mechanism and its controls. The combination of TXNGroupmax and
> TXNBytelimit govern the transaction and Aggregate size. In modern TSM
> servers, TXNGroupmax may be defined individually for each node.
>

Thanks, I will look into that again. I wasn't really impressed by the
clarity of those chapters when I read them, just yesterday ;-)

> Aggregation distances logical file size from the MAXSize value: you
> have coarse, rather than fine control. You could conceptually reduce
> your client TXNBytelimit, but that then applies to all transmissions,
> and could impair performance. (Note the huge jump in the default
> TXNBytelimit value in TSM 5.3, for performance.) MAXSize is thus a
> reasonableness value for storage pool apportionment rather than a way
> of having only files in a certain size range stored in a given
> storage pool.
>

ok, let's make this clear beyond any doubt:

I don't care either way; I like aggregates and I do appreciate their
benefits, but I do want to know every detail about how aggregation and
the maxsize parameter interact.

I don't want to reduce txnbytelimit or txngroupmax; large transactions
are a must to keep performance up.

> In considering all this, appreciate that TSM is an Enterprise
> product, intended to deal with large volumes of data, en masse. Its
> controls are by classes of data rather than by specific object
> attributes, such as size. The objective is overall throughput speed,
> to keep today's high volume of data moving. Keeping small files
> separate from large files is not a product objective, where relative
> age is a more common differentiator for restorals, and thus migration.
>

:-)

Do realize that disk pools currently cannot handle database backups of over
64GB, since such a single object doesn't fit in a diskpool volume (due to
restrictions on the size of diskpool volumes).

I would of course love to have my active files (save for the huge ones)
on-line and inactive versions off-line. Unfortunately, that is in the
future.

Overall throughput is nice, but being able to restore small files in
seconds rather than minutes is a big seller. Having over 70% of all
single-file restores, irrespective of file age, put no load on
tape drives could greatly increase customer satisfaction as well as
reduce the load on my tape drives. I currently have only 8 9940B drives
and 4 9840C drives for TSM, and investing 20-40 k$ in disk rather than in
tape could be justified. I do need correct info on these details to be
able to both justify the investment and predict its impact (and market
it to my customers and convince my boss).

With the file sizes I'm currently considering for the maxsize, tape
mount/dismount times are still substantial compared to the data transfer
time for files larger than the maxsize. But not having a tape drive busy
for 2-3 minutes for a single 1k (or even 10M) restore, which happens a lot
more often than the 90-gig restores, would be worth considering, if only
I could get all the correct details about TSM's workings ;-)

About TSM being enterprise: tell that to those who don't store
14000+ files or 80TB (times 2 for the copy stgpool) in a single TSM
server ;-) I do appreciate TSM; we have a p630 move over 2TB per 12
hours without any problems. We're just looking for ways of improving the
customer experience without any disproportionate investment and thus
impact on the cost of backups.

>Richard Sims

So now for the question at hand, how do aggregation and the maxsize
parameter interact? Both during B/A and migration/reclamation?

(yes I didn't notice a direct answer to my question in your posting ;-))

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167
PGP Key fingerprint = 6367 DFE9 5CBC 0737 7D16  B3F6 048A 02BF DC93 94EC

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams


Re: TSM Server Hosting - dedicated vs. shared

2006-03-13 Thread Remco Post
Robin Sharpe wrote:

> I thought each
> TSM server instance on a separate physical server needed a license (per
> processor).  Is this not true? Is it a new policy?

Yes, you do need to buy TSM licenses for each server; they are counted the
same as any other server(-type client) CPU in your environment.

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167
PGP Key fingerprint = 6367 DFE9 5CBC 0737 7D16  B3F6 048A 02BF DC93 94EC

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams


Re: TSM Server Hosting - dedicated vs. shared

2006-03-13 Thread Remco Post
Robin Sharpe wrote:

> Dear colleagues,
>
> It's time for us to split our TSM into several new instances because
> our database is now just too large -- 509GB -- and still growing.  My

wow... you do know that 560 GB is the limit (iirc?).

> initial plan is to create five TSMs - four plus a library manager -
> on the existing server (an 8-way, 12GB HP rp7410 with 15 PCI slots).
> This is cost effective since no additional hardware or license is
> needed - just lots of SAN disk for the databases, which we have
> available.  But, I've been thinking what do you think about the
> following:
>
> A more "creative" approach is to place the "new" TSM servers on
> existing large clients.  This has several advantages: -
> eliminates need to acquire new servers, saving physical room, power
> and cooling requirements, additional maintenance.

These could be accomplished by running multiple instances on one
dedicated server as well. The latter also saves the cost of upgrading
several TSM instances on multiple systems.

> - client benefits by sending its backup to local disk using
> shared memory protocol. Eliminates potential network bottleneck.

good point. But you need to get the exact same amount of bits out of
your server either way, via LAN or SAN. LAN adapters are cheaper, and most
vendors recommend against mixing tape and disk access on one FC HBA. So
either way you'll probably want to increase the number of I/O adapters;
I'd go for the cheaper option.

> - Client sends data to tapes using library sharing; no need for
> storage agent.

> - Use of local disk eliminates the need for SANergy

or IBM SANfs or whatever. SANergy is dead anyway ;-)

> - heavy clients "pay" for their usage by providing backup
> services for smaller clients.

... why not just charge them directly?

>
> There are also some concerns (not necessarily disadvantages):
> - May require CPU, memory, and/or I/O upgrades (still cheaper than
>   buying a server)
> - TSM operation may impact client's primary app.  Can be controlled
>   by PRM on HP-UX.

Still, I wouldn't want a big application running on my TSM server, so
probably your colleagues don't want TSM on their machines either.

> - Incurs licensing cost.
>

That is a major point. If I were running e.g. an Oracle server, I
wouldn't want to pay the Oracle license for the extra CPUs required for
TSM, and vice versa. So having dedicated servers for each environment
seems to be a very good idea; mixing two (or more) applications on one
server isn't (IMNSHO).

> Thanks for any insights
> Robin Sharpe
> Berlex Labs

I think that in the end you'll find the TCO of a big TSM server with
multiple instances cheaper than that of multiple instances all over the
place.


--
Kind regards,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167
PGP Key fingerprint = 6367 DFE9 5CBC 0737 7D16  B3F6 048A 02BF DC93 94EC



Re: MAXSIZE

2006-03-13 Thread Richard Sims

On Mar 13, 2006, at 5:55 PM, Remco Post wrote:


> So now for the question at hand, how do aggregation and the maxsize
> parameter interact? Both during B/A and migration/reclamation?
Remco -

There's no mystery in this - it's all laid out in the IBM
documentation, as previously indicated. During backup, client data is
formed into a transaction according to the limits imposed by the TXN*
parameters, and if the determined size fits in the primary landing
area, it goes there, else further down in the stgpool hierarchy.
Migration can happen if the current size of an Aggregate fits in the
next lower stgpool.
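The selection logic Richard describes can be sketched as a walk down the storage hierarchy: the estimated transaction size is compared against each pool's MAXSIZE until a pool admits it. This is an illustrative model only, not TSM code; the pool names and sizes are made up.

```python
# Sketch of landing-area selection during backup, as described above:
# the server walks the stgpool hierarchy and stores the transaction in
# the first pool whose MAXSIZE admits its estimated size.

def landing_pool(txn_size_mb, hierarchy):
    """Return the first pool whose MAXSIZE admits the transaction.

    hierarchy is a list of (pool_name, maxsize_mb) pairs, top first;
    a maxsize of None stands for 'no limit'.
    """
    for name, maxsize_mb in hierarchy:
        if maxsize_mb is None or txn_size_mb <= maxsize_mb:
            return name
    return None  # nothing admits it

# Hypothetical hierarchy: disk pool capped at 500 MB, then tape (no cap)
hierarchy = [("DISKPOOL", 500), ("TAPEPOOL", None)]

print(landing_pool(100, hierarchy))   # small transaction lands on disk
print(landing_pool(2048, hierarchy))  # big one skips straight to tape
```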

Your desire to service restorals without tape argues for an
investment in some kind of disk array, with a Migdelay value such
that older data migrates to tape.

   a weary Richard Sims


Re: Disaster recovery of Windows 2003 Server

2006-03-13 Thread Prather, Wanda
That information is also included now in the current Windows client
Installation and User's Guide, and is a little more current, I think -
search on ASR.



From: ADSM: Dist Stor Manager on behalf of Henrik Wahlstedt
Sent: Mon 3/13/2006 9:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Disaster recovery of Windows 2003 Server



There is a Redpaper called "IBM Tivoli Storage Manager: Bare Machine
Recovery for Microsoft Windows 2003 and XP". Search for it on the
Redbooks site.

//Henrik

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Doyle, Patrick
Sent: 13 March 2006 15:17
To: ADSM-L@VM.MARIST.EDU
Subject: Disaster recovery of Windows 2003 Server

Has there been any updates to the disaster recovery documents for 2003
server?

The following refer to Windows 2000 only,

Disaster Recovery Strategies with Tivoli Storage Management
http://www.redbooks.ibm.com/redbooks/pdfs/sg246844.pdf

Summary BMR Procedures for Windows NT and Windows 2000 with ITSM
http://www.redbooks.ibm.com/abstracts/tips0102.html?Open


In particular, references to "dsmc restore systemobject" seem to be
obsolete. TSM Client 5.3.2.0 now sees "system services" and "system
state" as replacements for "systemobject".

Is anyone aware of an update?

Regards,
Pat.




Re: ANS1950E -Backup using Microsoft Volume Shadow Copy Failed Error in Client 5.3.0

2006-03-13 Thread Kurt Beyers
Nancy,
 
The latest news I've got from IBM is that it will be solved in the TSM
5.3.3 BA client. The latter will become available sometime this month.

For the time being, I've separated the file system and Windows system
object backups into two different schedules. The issue (a VSS backup of
Windows system objects in combination with a VSS file system backup) is
not encountered with this workaround.
 
Best regards,
Kurt
 



From: ADSM: Dist Stor Manager on behalf of Nancy L Backhaus
Sent: Mon 3/13/2006 20:55
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] ANS1950E -Backup using Microsoft Volume Shadow Copy Failed
Error in Client 5.3.0



Hello,

We are still seeing the shadow copy issue in 5.3.2.2.  Anyone else still
seeing the same issue or do I need to open a call with IBM?


TSM Server - 5.3.2.2
Op System AIX 5.3
TSM Windows 2003 Client - 5.3.2.2

snippet from dsmerror.log


03/13/2006 00:11:31 CreateSnapshotSet(): pAsync->QueryStatus() returns
hr=VSS_E_PROVIDER_VETO
03/13/2006 00:11:33 ANS1999E Incremental processing of '\\tsmclient\d$'
stopped.
03/13/2006 00:11:33 ANS1950E Backup using Microsoft volume shadow copy
failed.



Nancy Backhaus
Enterprise Systems
HealthNow, NY
716-887-7979



Re: MAXSIZE

2006-03-13 Thread Remco Post
Richard Sims wrote:
> On Mar 13, 2006, at 5:55 PM, Remco Post wrote:
>
>> So now for the question at hand, how do aggregation and the maxsize
>> parameter interact? Both during B/A and migration/reclamation?
>
> Remco -
>
> There's no mystery in this - it's all laid out in the IBM
> documentation, as previously indicated. During backup, client data is
> formed into a transaction according to the limits imposed by the TXN*
> parameters, and if the determined size fits in the primary landing
> area, it goes there, else further down in the stgpool hierarchy.
> Migration can happen if the current size of an Aggregate fits in the
> next lower stgpool.
>

Thanks,

and of course, since the landing area is determined before the actual
start of the data transfer, file sizes in the filesystem are used to
estimate the size of the transaction during B/A activity. The actual
size of the aggregate is used during migration/reclamation/whatever,
since at that time that value is known.

> Your desire to service restorals without tape argues for an
> investment in some kind of disk array, with a Migdelay value such
> that older data migrates to tape.
>

Well, actually, I've been thinking about that. I would probably set a
migdelay, but make it huge (years?) for the smaller file pool and maybe
set it to a few days (or a week or so) for (disk-/file-)pools that hold
the larger files.
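The combined effect of MIGDELAY and the next pool's MAXSIZE can be sketched as a simple eligibility check: an aggregate migrates only once it is old enough and its actual (known) size fits the pool below. This is a hedged illustration of the behavior discussed here, not TSM code; the day-granularity age check and parameter names are assumptions.

```python
# Sketch of migration eligibility: the aggregate must have aged past
# the source pool's MIGDELAY, and its actual size must fit within the
# next pool's MAXSIZE (None = no limit). Illustrative model only.

def eligible_for_migration(aggregate_size_mb, age_days,
                           migdelay_days, next_pool_maxsize_mb):
    if age_days < migdelay_days:
        return False        # MIGDELAY not yet satisfied; stays put
    if (next_pool_maxsize_mb is not None
            and aggregate_size_mb > next_pool_maxsize_mb):
        return False        # too big for the next pool down
    return True

# A small-file pool with a huge migdelay keeps data on disk for years;
# a large-file pool with migdelay=7 drains to tape after a week.
print(eligible_for_migration(50, 365, 3650, None))   # held on disk
print(eligible_for_migration(4096, 8, 7, None))      # off to tape
```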

Anyway, I think I can now present a clear picture of both costs and
benefits to management.

>a weary Richard Sims

Thanks for your clarification - your help and a good night's rest did
wonders ;-)

--
Kind regards,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167
PGP Key fingerprint = 6367 DFE9 5CBC 0737 7D16  B3F6 048A 02BF DC93 94EC
