Re: Management Classes

2004-04-16 Thread Sam Rudland
Thanks for your information/ideas guys. The reason behind this
requirement is a bit messy. My company has a 3584 library on site with a
3583 at our DR site. Obviously we couldn't fit all the DR media into the
3583 at one time so the plan is to put data from key nodes into a
separate tape pool so that in a DR scenario those tapes can be loaded
into the 3583 and we can do restores without having to check tapes in
and out repeatedly.

I was hoping it would be a simple option I could use on the server side
of things. I have a little TSM knowledge, but not a lot, so I think we are
going to get an expert in for a couple of days to re-examine our
configuration and help design a new standard to ensure we are following
best practice.

Thanks for your help again,

Sam 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Andrew Raibeck
Sent: 15 April 2004 15:54
To: [EMAIL PROTECTED]
Subject: Re: Management Classes

Kent, your questions are very good ones and you make legitimate points.
My intent was to provide an answer to the question that was asked, which
was how to change the MC. Even then, your points notwithstanding, that
answer does not work if he uses more than one MC at a time. But I then
followed on with an invitation to be more specific about what he wanted
to do, and an admonition against modifying MCs in this fashion. If that
wasn't clear, then I should probably have emphasized that point first
and more strongly.
  :-)

As you point out, there are multiple ways to deal with this sort of
thing, but rather than speculate or write at length on the topic, I
think it better to understand the real need first.
Best regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.



Kent Monthei <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/15/2004 06:43
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Re: Management Classes






Andy, if the clients/filesystems overlap with the other schedules, won't
this lead to excessive/unintended management class rebinding?
If they don't overlap, it would make more sense to just define a new
domain.  If they do overlap, it might be safer to configure an alternate
node name for each client, for use with the special schedules - but this
can lead to timeline continuity issues that will complicate future
point-in-time restores.   Would it be safer to follow your plan, but just
toggle the existing MC's COPYGROUP settings and do the ACTIVATE
POLICYSET, instead of toggling between two MCs?

Kent Monthei
GlaxoSmithKline




"Andrew Raibeck" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
15-Apr-2004 09:24
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To
[EMAIL PROTECTED]
cc

Subject
Re: Management Classes






Why not define admin schedules that change the default management class
prior to the scheduled backup (create server macros that run ASSIGN
DEFMGMTCLASS followed by ACTIVATE POLICYSET, then schedule the macros)?
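
Concretely, that could look something like the following macro-style
sketch (all names here are hypothetical - domain STANDARD, policy set
STANDARD, management class SPECIAL_MC - so adjust to your environment
and verify against the Administrator's Reference):

   /* swap the default MC, then re-activate the policy set */
   DEFINE SCRIPT SWAP_MC "ASSIGN DEFMGMTCLASS STANDARD STANDARD SPECIAL_MC" LINE=5
   UPDATE SCRIPT SWAP_MC "ACTIVATE POLICYSET STANDARD STANDARD" LINE=10
   /* run it shortly before the special client schedules start */
   DEFINE SCHEDULE SWAP_MC_PRE TYPE=ADMINISTRATIVE CMD="RUN SWAP_MC" ACTIVE=YES STARTTIME=20:00 PERIOD=1 PERUNITS=DAYS

A second script/schedule pair would swap the default MC back afterward.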

If that does not provide sufficient granularity, then it would help to
have more specific information on what you wish to do, and, just as
important, why. Normally I would recommend against flip-flopping
management classes in this fashion, at least not without knowing a lot
more about your scenario.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.



Sam Rudland <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor
Manager" <[EMAIL PROTECTED]>
04/15/2004 06:06
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Management Classes






I have looked everywhere but have been unable to find a solution to my
needs. I am running TSM server 5.1.8 and I have one policy domain with
several management classes. There is a default class that data goes to,
but I want the data from selected schedules to go to a separate
management class, and in turn a separate tape pool. Both this and the
default MC are used for incremental backups.

Are there any options I should be using?

Many thanks in advance,

Sam Rudland




Re: TSM, Solaris and Fibre connected drives ?

2004-04-16 Thread Farren Minns
Hi Luke

Many thanks for the information. We are still awaiting a decent time to
perform the upgrade and hope to have a go in the next couple of weeks on a
test machine first.

We will indeed only be using one switch and single-channel HBA cards, so
with luck it should be a more straightforward process, but we shall see :)

Thanks again for keeping me updated. It's very much appreciated.

Farren
Luke Dahl <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/15/2004 06:44 PM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
Subject: Re: TSM, Solaris and Fibre connected drives ?

Hi Farren,
A quick update on our implementation.  We successfully attached five
3590H1A drives to a Sun 420R in our 3494 library.  You must update all
volumes to readonly, delete your existing paths, possibly delete the drive
entries (not sure on that one), and set up a zone for your system HBA and
all of the ports that contain the drives.
Our configuration is as follows:
One Emulex dual channel fiber card, with port 0 connected to switch 1 and
port
1 connected to switch 2
Each drive is connected from port 0 to switch 1, and port 1 to switch 2

So, when we rebooted the system to create the new /dev/rmt entries we saw
20 *st entries.  Here's an example of what we see; you can see that Drive 1
has four st entries (stepped by a value of five for each one):
DRIVE 1
Drive Name: DRIVE1
WWN: 5005076372A8
Serial Number: 000CB518
7st -> ../../devices/[EMAIL PROTECTED],4000/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0:st

Drive Name: DRIVE6
WWN: 5005076372A8
Serial Number: 000CB518
12st -> ../../devices/[EMAIL PROTECTED],4000/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0:st

Drive Name: DRIVE11
WWN: 5005076372A8
Serial Number: 000CB518
24st -> ../../devices/[EMAIL PROTECTED],4000/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0:st

Drive Name: DRIVE16
WWN: 5005076372A8
Serial Number: 000CB518
29st -> ../../devices/[EMAIL PROTECTED],4000/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0:st

Essentially there were four *st entries for each drive: two going through
ports on switch 1, and two through ports on switch 2.  I called tech
support to find out how to set up failover, but automatic failover for the
drives isn't supported in Solaris.  So, we defined a drive for each st
entry (drive1 - 20) and defined a path for each entry.  We were then able
to see which drives were using the same ports by correlating the drive
serial number with each drive entry.  At that point, we offlined all but
five entries, keeping two over fibre channel path 0 and three over fibre
channel path 1 to utilize both switches (probably not necessary, but why
not?).  If you're only using one switch with a single-channel HBA I don't
believe you'll have any problems.  Let me know if you need any assistance
or if this is unclear at all.  Good luck!

Luke
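
P.S. For reference, the server-side definitions for this are roughly as
follows (a sketch only: the server name TSMSRV1 and library name 3494LIB
are made up, and the /dev/rmt special files will differ per system):

   /* one DEFINE DRIVE + DEFINE PATH per st entry; repeat for all 20 */
   DEFINE DRIVE 3494LIB DRIVE1
   DEFINE PATH TSMSRV1 DRIVE1 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=3494LIB DEVICE=/dev/rmt/7st
   /* after correlating serial numbers, take the redundant entries offline */
   UPDATE DRIVE 3494LIB DRIVE6 ONLINE=NO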

Farren Minns wrote:

> Hi TSMers
>
> Running TSM 5.1.6.2 on a Sun E250 presently connected to 3494 lib with 2
> SCSI 3590 drives (H1A).
>
> I have a question for you.
>
> We will soon be changing the SCSI attached drives for fibre attached
> models (still going to be 3590 H1A in the 3494 lib) which will be
> connected to a Brocade switch. Now, I have very little experience with
> fibre drives and was wondering what the steps were, both with Solaris
> (if anybody can help) and TSM. I am used to adding SCSI devices to
> Solaris and then using the /dev/rmt/??? format to define the drives to
> TSM, but I'm assuming this is totally different with fibre drives. I am
> assuming that the World Wide Name convention will be used somewhere.
>
> Can anyone give me any advice or good documentation pointers etc.
>
> Many thanks in advance
>
> Farren Minns - John Wiley & Sons Ltd


Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Christo Heuer
It depends on the actual type of data - take a 2TB Oracle DB that is mostly
empty: you will get a 90%+ compression ratio, so your LTO-2 cartridge
will show the capacity used as 2000GB.
The typical ratio IBM used to quote was 3-1; in recent years they changed
this to 2-1, hence the 200-400 figure. The algorithm used for the
compression is a modified ZL algorithm - similar to the algorithm used for
pkzip etc.
If you send already-compressed data your tape usage will show 200GB or
less - you can even get negative compression ratios, since data that is
already compressed can grow if compressed again.
So - there is no clear-cut answer: work on the native capacity (200GB), and
everything else you get is a bonus.

Cheers
Christo
- Original Message -
From: "Chandrashekar, C. R. (Chandrasekhar)" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 16, 2004 7:50 AM
Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.


> This is not a puzzle for me. Actually I want to know how much data it can
> compress. Is anyone using the same tape library? That would help to
> estimate the total storage capacity.
>
> I just want to know what percentage of compression to expect in a 3583L23
> library using a 3580 Ultrium Gen2 drive.
>
> Thanks,
> c.r.chandrasekhar
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
> Christo Heuer
> Sent: Friday, April 16, 2004 10:51 AM
> To: [EMAIL PROTECTED]
> Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.
>
>
> Hi,
>
> I see you are a Tivoli certified consultant (TSM) - maybe they missed
> that part in the certification exam. Why don't you read up on
> compression - if this puzzles you, I'm sure plenty of other things will
> confuse the hell out of you.
>
> Christo
> - Original Message -
> From: "Chandrashekar, C. R. (Chandrasekhar)" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, April 16, 2004 6:51 AM
> Subject: LTO tape cartridge(200GB/400GB) stores data 500GB+.
>
>
> > Hi,
> >
> > Just for clarification: I'm using LTO Ultrium tape cartridges with a
> > capacity of 200GB/400GB, tape library 3582L23 with two 3580-LTOG2
> > drives with firmware 38D0, and the devclass was defined with
> > DEVTYPE=LTO and FORMAT=ULTRIUM2C. Now a tape is storing more than
> > 500GB of data. Is this normal behavior?
> >
> > Thanks,
> > CRC,
> >
> > C.R.Chandrasekhar.
> > Systems Analyst.
> > Tivoli Certified Consultant (TSM).
> > TIMKEN Engineering & Research - INDIA (P) Ltd., Bangalore.
> > Phone No: 91-80-5136.
> > Email:[EMAIL PROTECTED]


Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Willem Roos
The whole compression issue has always confused the hell out of me (no
certification here :-). Does 200-400 mean
"200-native-,-400-if-we-hardware-compress-as-we-stream-to-tape"?
Sometimes the client may also compress? I think salespeople over the
years have greatly abused this x-2x tape cartridge capacity thing to
their advantage - you can always double up because nobody knows what
you're talking about anyway.

And you mean LZ (Lempel-Ziv) algorithm, don't you :-?

---
  Willem Roos - (+27) 21 980 4941
  Per sercas vi malkovri 


Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Christo Heuer
Yes - if you use client-side compression you will see most of your
cartridges showing close to or just over 200GB - I have a mix of clients
doing compression and others not.
What I've noticed is that on average I'm getting between 30% and 50%
compression.
In earlier years IBM would quote a 3-1 compression ratio - 10/30, 5/15 etc.
I think they lowered this to a more conservative number of 2-1 - and my
tape numbers (MB used per volume) reflect this:
LTOATS  282,437.4
LTOATS  344,472.0
LTOATS  570,294.9
LTOATS  383,550.0
LTOATS  387,271.4
LTOATS  457,437.0
LTOATS  359,432.9
LTOATS  329,021.7
LTOATS  333,663.8
LTOATS  456,539.2

As can be seen - somewhere between 280 and 570GB, giving an average
capacity of close to 400GB.
On the other hand - if all my data were compressed before arriving at the
server, it would have been very close to 200GB.
Cheers
Christo
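
P.S. To pull the same per-volume numbers from your own server, something
like this macro should work (a sketch: LTOATS is assumed to be the storage
pool name behind the listing above, and the SELECT column names are from
memory, so verify them with a plain 'select * from volumes' first):

   /* summary, one stanza per volume */
   QUERY VOLUME STGPOOL=LTOATS FORMAT=DETAILED
   /* or just the estimated capacity (MB) and utilization */
   SELECT VOLUME_NAME, EST_CAPACITY_MB, PCT_UTILIZED FROM VOLUMES WHERE STGPOOL_NAME='LTOATS'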

Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Chandrashekar, C. R. (Chandrasekhar)
I have enabled compression at the tape library, not at the client side.

Thanks for the valuable information.

CRC,



SV: Novell NetWare box not backing up SSI

2004-04-16 Thread Hougaard.Flemming FHG
Hi Rich

Client version? TSA versions? SP level?

Regards
Flemming

-Original Message-
From: Richard Taylor [mailto:[EMAIL PROTECTED]
Sent: 15 April 2004 19:44
To: [EMAIL PROTECTED]
Subject: Novell NetWare box not backing up SSI


I have a NW5 box that backs up data fine, but even after repeated checks that it's 
communicating properly with the TSM server, I get the following error every time:

04/13/2004 22:32:16 ANS1512E Scheduled event 'ENT-MAIN-NW001' failed.  Return code = 
12.
04/14/2004 22:00:22 (TSA500.NLM 5.5 262) This program cannot create a file.
04/14/2004 22:00:23 (TSA500.NLM 5.5 262) This program cannot create a file.
04/14/2004 22:00:23 (TSA500.NLM 5.5 262) This program cannot create a file.
04/14/2004 22:00:24 (TSA500.NLM 5.5 262) This program cannot create a file.
04/14/2004 22:00:24 (TSA500.NLM 5.5 262) This program cannot create a file.
04/14/2004 22:00:25 ANS1228E Sending of object 'Server Specific Info/Server Specific 
Info' failed
04/14/2004 22:00:25 ANS4024E Error processing 'Server Specific Info/Server Specific 
Info': file write error
04/14/2004 22:00:25 ANS1802E Incremental backup of 'Server Specific Info/Server 
Specific Info' finished with 1 failure

Any ideas?

Rich

Rich Taylor
CEIT Server Ops
Clark County Data Center
1670 Pinto Ln
Las Vegas, NV 89106
455-2384
[EMAIL PROTECTED]

"When the work is done,
And the paycheck has been spent,
What is left but pride?"






Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Gianluca Mariani1
Just a small addendum to Christo's explanation (to which I'd subscribe,
signing in blood):
this issue of compression generates a lot of confusion because there's
no standard, commonly accepted viewpoint. Looking at it client side is one
thing; looking at it at the tape drive is a totally different thing.
I would always do my math with the native cartridge capacities and not the
"potential" capacities.
The oft-quoted compression ratios are, well, just indications, not to
be taken as written in stone.
The 3-1 ratio comes from the mainframe, the only place where you're ever
going to see that kind of ratio; the 2-1 is generally quoted for Open
Systems.
Real-world ratios are not predictable and will depend on your specific
data, exactly as Christo has explained.
This applies to tape drive throughputs as well, obviously.
Just to give a small example: on 3592 drives in a customer production
environment, I've gone from 40MB/s on rubbish data (basically
non-compressible) to around 70MB/s on decent data. Yeah, OK, this was in a
windoze environment and windoze has some serious issues with block sizes
towards tape, but you get the idea.
The catch is that if you listen carefully to the sales rep, he should
always say the magical phrase: "up to 2-1, 3-1 compression"...



Cordiali saluti
Gianluca Mariani
Tivoli TSM Global Response Team, Roma
Via Sciangai 53, Roma
 phones : +39(0)659664598
   +393351270554 (mobile)
[EMAIL PROTECTED]

The Hitch Hiker's Guide to the Galaxy says of the Sirius Cybernetics
Corporation product that "it is very easy to be blinded to the essential
uselessness of them by the sense of achievement you get from getting them
to work at all. In other words - and this is the rock solid principle on
which the whole of the Corporation's Galaxy-wide success is founded -
their fundamental design flaws are completely hidden by their
superficial design flaws"...



Re: SV: Novell NetWare box not backing up SSI

2004-04-16 Thread Richard Taylor
Flemming,

Client is 5.01f, TSANDS is 10511.02e, TSA500 is 5.05f, NetWare 5.1, SP6

Rich Taylor
CEIT Server Ops
Clark County Data Center
1670 Pinto Ln
Las Vegas, NV 89106
455-2384
[EMAIL PROTECTED]

"When the work is done,
And the paycheck has been spent,
What is left but pride?"


Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Tom Kauffman
That depends -- on the data and on the client.

I've got LTO-1 (100/200) GB tapes and drives. MS-Exchange Infostore backups
are compressed on disk and not decompressed by the client for backup - and I
get full tapes at 101 GB. On the other hand, my SAP/R3 (Oracle) database is
not compressed on the client -- and I get between 485 GB and 521 GB per
tape.

So - you should get at least 190 to 200 GB per tape on the LTO-2; you MAY
get considerably more than 400 GB. We'll be going LTO-2 in five months, and
I'm looking forward to seeing how much SAP database I can fit on one tape.

Tom Kauffman
NIBCO, Inc

-Original Message-
From: Chandrashekar, C. R. (Chandrasekhar)
[mailto:[EMAIL PROTECTED]
Sent: Thursday, April 15, 2004 11:51 PM
To: [EMAIL PROTECTED]
Subject: LTO tape cartridge(200GB/400GB) stores data 500GB+.

Hi,

Just for clarification: I'm using LTO Ultrium tape cartridges with a
capacity of 200GB/400GB, tape library 3582L23 with two 3580-LTOG2 drives
with firmware 38D0, and the devclass was defined with DEVTYPE=LTO and
FORMAT=ULTRIUM2C. Now a tape is storing more than 500GB of data. Is this
normal behavior?

Thanks,
CRC,

C.R.Chandrasekhar.
Systems Analyst.
Tivoli Certified Consultant (TSM).
TIMKEN Engineering & Research - INDIA (P) Ltd., Bangalore.
Phone No: 91-80-5136.
Email:[EMAIL PROTECTED]






Any experience with Sepaton VTL

2004-04-16 Thread Johnson, Milton
I got a call from a rep asking if I was interested in a Sepaton S2100
VTL (Virtual Tape Library) (www.sepaton.com). It's billed as:
* a fiber connected SATA RAID Virtual Tape Library Appliance
* 3-200 TB Capacity / 1.6 TB/hour throughput
* configure up to 200 virtual tape drives
* Emulates various tape libraries
* serviced by IBM
* works with TSM

You would:
* define the VTL as a primary storage pool, called say SEPPOOL, and
point all your backups to SEPPOOL
* define SEPPOOL's "next stg pool" to be your traditional TAPEPOOL

Your present tape library would be used to cut, read, and reclaim
off-site tapes, and as a backup in case you unexpectedly fill up SEPPOOL.
There would be no need for collocation because of the speed of the VTL.
There would be no need for a DISKPOOL or migrations.  You could
effectively reclaim off-site tapes with as few as one drive in your real
tape library.  It will also wash your car, mow your lawn and cure the
common cold (OK, I'm exaggerating a little).  I'm not sure on the pricing;
somewhere around US$30K per 3.6TB.
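
If I understand the pitch, the TSM side of it would presumably be
something like this sketch (names hypothetical; SEPVTL would be a device
class for whatever library type the S2100 emulates):

   /* VTL as primary pool, overflowing/migrating to the real tape pool */
   DEFINE STGPOOL SEPPOOL SEPVTL POOLTYPE=PRIMARY NEXTSTGPOOL=TAPEPOOL MAXSCRATCH=200
   /* off-site copies would still be cut from the real library via BACKUP STGPOOL */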

My questions include where's the down side?  What's the catch?  If your
choice is between expanding by purchasing a second 3494 frame or a S2100
VTL, why choose a 3494 frame?

Thanks,
H. Milton Johnson
UNIX Systems Administrator - USCC San Antonio, TX
Email: [EMAIL PROTECTED] 


Re: Any experience with Sepaton VTL

2004-04-16 Thread Dan Foster
Hot Diggety! Johnson, Milton was rumored to have written:
> I got a call from a rep asking if I was interested in a Sepaton S2100
> VTL (Virtual Tape Library) (www.sepaton.com). It's billed as:
>
> My questions include where's the down side?  What's the catch?  If your
> choice is between expanding by purchasing a second 3494 frame or a S2100
> VTL, why choose a 3494 frame?

Keep this rep honest -- ask him/her what the down sides (negatives) are,
what the typical failure modes are, and so forth.

You'll soon know if you're about to drink sugary Kool-Aid or not :-)

Well, simply put, I haven't heard of VTLs as a single-source replacement
for tape drives per se. They are pretty good when you can't wait a
single second longer (or 2-3 minutes) for a restore to begin -- compare
disk vs tape restore load-to-ready time.

Lots of places have this kind of rapid-restore requirement -- financial
firms (banks, Wall St, etc), hospitals, nuclear power plants, utilities,
and other places where any sort of downtime is extraordinarily bad.

However, I don't think disks yet have the long-term reliability that
tape drives do... well, server-class SCSI drives *can* usually last 5
years in brutal 24x7 operation, but drives in general aren't too
tolerant of being underutilized, or, if they're cheaply engineered
(e.g. typical IDE drives), of being overly utilized.

So the way I see it, VTLs are most useful if you can't wait the 2-3
minutes it takes to load+spool a tape to 'ready to peel data off'; they're
still no replacement for any serious archiving past perhaps 12 months or
so, and still need another backup source to restore data from in case a
drive goes south and loses the data.

Modern tape drives are pretty zippy, too, at 70 MB/sec, don't forget.
Tapes (not tape drives) also contain far fewer moving parts that can
fail than hard drives, which have a motor, a PSU, a dependence on the
sub-1mm air gap (Bernoulli effect), etc.

VTLs have a place, in my honest opinion, but only if you've got the need
and only if they aren't the sole source for backup data. I don't think
most folks have this need, so it just seems like a big push for companies
to make money at your expense, unnecessarily (unless you actually do have
a need and a well-engineered overall setup). After coming off rough
economic times, you can expect lots of these pitches. :-) I've already
gotten two of these so far. :)

-Dan


Re: SV: Novell NetWare box not backing up SSI

2004-04-16 Thread Aaron Durkee
I have been through this document with our NetWare engineer but have not
solved the same problem. I found it on the web, and as it turns out,
Tivoli support sent me the same doc.

You may find it helpful:

1083455 - IBM Tivoli Storage Manager: Backing up Server Specific 
Information fails with "(TSA500.NLM 5.5 262) This program cannot create a 
file."
Problem Desc: 
Backing up Server Specific Info (SSI) will error with the following 
messages: (TSA500.NLM 5.5 262) This program cannot create a file. ANS1228E 
Sending of object 'Server Specific Info/Server Specific Info' failed 
ANS4024E Error processing 'Server Specific Info/Server Specific Info': 
file write error ANS1802E Incremental backup of 'Server Specific 
Info/Server Specific Info' finished with 1 failure
 

Solution: 
The TSM Errors (ANS1228E, ANS4024E and ANS1802E) are reacting to the error 
returned from the Novell SMS APIs (TSA500.NLM 5.5 262). Per Novell, 
http://www.novell.com/documentation/lg/nw51/index.html?sysmsenu/data/hpgnz5gr.html, 
this TSA500 error means the following:

TSA500-X-262: This program cannot create a file.
Source : tsa500.nlm
Explanation : The program could not create the specified file.
Action : Make sure the user has specified a valid directory path for the 
name space.
Action : Make sure the user has appropriate user access rights.

Check on the following:

o The login used for backups has the appropriate rights to the file
system.
o Check to make sure that there are no temp files from the backup of
Server Specific Info in the sys:system\tsa directory.
One of these temporary files may be corrupt and preventing the backup.
o Check the SYS volume to make sure there is no corruption, and purge all
erased files.
o Check that the SYS volume has enough free space.
o Check the files of SSI. Some of them are auto-generated and may be
locked or corrupt.

The five files of SSI are:

o SERVDATA.NDS
o DSMISC.LOG
o VOLSINFO.TXT
o STARTUP.NCF
o AUTOEXEC.NCF

o Was DSREPAIR running? If yes, then it may have a lock on the NDS, which
may cause this error. If not, check and make sure there are no other NLMs
locking the NDS.
o Check the date and version of DSBACKER.NLM and make sure that it is
the correct version for your SP and NDS level.




Aaron Durkee
[EMAIL PROTECTED]
phone: (716) 862-1713
fax: (716) 862-1717
Networking and Telecomm Services
Western New York Catholic Health System



64-bit support on 390?

2004-04-16 Thread Joe Howell
We're going to a disaster-recovery exercise in a couple of months and one of the 
things that we want to try is running our mainframe environment on "z" hardware in 
64-bit mode, Just To See What Happens.  Is anyone running TSM 5.2.2 in 64-bit mode on 
a mainframe?  Any excitement waiting for me?


Joe Howell
Shelter Insurance Companies
Columbia, MO



Re: Management Classes

2004-04-16 Thread Tom Kauffman
You may have confused us by using 'schedules' and not 'nodes' in the
original question.

My approach, for what it's worth, would be to set up a new management class
for the critical nodes and point them to a different disk and tape storage
pool. I currently have management classes for SAP/R3 production,
MS-Exchange, Other NT, AIX, and general archiving (usually Oracle
databases). And management classes for Oracle off-line redo logs (two
classes for two storage pools) and SAP archiving (seven classes with 1 thru
7 year archive retention, all aimed at the same storage pool). I've
optimized my copy pools for D/R -- we can recover SAP, MS-Exchange, and our
payroll system all at the same time with no tape contention -- because
they're on different tape copy pools.

Which, of course, required different primary storage pools, both tape and
disk. I've found it more expedient in defining storage pools to start with
the D/R requirement and work my way backward.

Tom Kauffman
NIBCO, Inc


Delete obsolete directories only?

2004-04-16 Thread Tab Trepagnier
TSM Server 5.1.8.0 on AIX; TSM Client 5.1.6.0 on Windows 2000

I have a situation where over time, the location of data on our network
has moved from server to server.  In many cases we moved the identity of
the first server to the second server, but the data paths were not
duplicated exactly.  For example,

\\server_name\d$\current_root_path\...
\\*\*\old_root_path\...

where "current_root_path" and "old_root_path" are peers under the same
"d$" parent.

Because the "old_root_path" became invalid on the first backup of the new
server, all the data under it was marked inactive by TSM.  No problem
there.
Once the RetOnly duration elapsed, all the FILES were purged from that
path.  Again, no problem there.

But the directories were retained, probably because they were bound to "no
limit" permanent management classes prior to our implementing DIRMC
controls.  That means those directories will live for the duration of the
server's identity or of our TSM system, whichever ends first.
Those duplicate paths confuse our Help Desk.  I would like to delete just
the contents under "old_root_path" since there are no files under that
path.  But because both root paths are under the same filespace, I can't
delete the filespace.  I turned on the permission "node can delete
backups" but that still didn't let me kill that directory tree.

So, is there a way to kill the directory tree under "old_root_path" other
than killing the entire filespace?

TIA

Tab Trepagnier
TSM Administrator
Laitram, L.L.C.


Re: TSM Schedule

2004-04-16 Thread jianyu he
Thanks.
I have gotten this working, but sometimes the schedule is missed. I don't
know why, and there is no information in dsmsched.log.




nghiatd <[EMAIL PROTECTED]> wrote:
You should open and read the dsmsched.log file (typically
"\tivoli\tsm\baclient\dsmsched.log"). This file contains information about
active schedules.

Best regards,

Nghiatd

- Original Message -
From: Andrew Raibeck
To: [EMAIL PROTECTED]
Sent: Thursday, April 15, 2004 9:00 PM
Subject: Re: TSM Schedule


If you have done the basic steps that I have outlined, then there is
something wrong in your setup. I am unable to answer your question with a
simple "perform step x to achieve the backup."

To back up only the C: drive, you would put

DOMAIN C:

in your client options file (dsm.opt). You do not need the -SUBDIR=YES in
the schedule definition, although its presence is not the problem.
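
For example, a minimal dsm.opt for this case might look like the following
sketch (the node name and server address are placeholders):

   NODENAME          CLIENT3
   TCPSERVERADDRESS  tsmserver.example.com
   * limit incremental backups to the C: drive
   DOMAIN            C: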

So now the issue becomes trying to identify where the problem lies.

It would help to know the following:

- Content of client options file (dsm.opt)

- Version of TSM client you are using.

- Output from the following client command:

dsmc query schedule

- Output from the following admin commands:

query node client3 format=detailed
query event standard daily_incr
query association standard daily_incr

(note: the last two commands assume that the domain name is STANDARD;
specify the correct domain name if my assumption is wrong.)

If the "dsmc query schedule" command returns a schedule, then it is likely
that the scheduler service is not running on the client. If "dsmc query
schedule" does not return a schedule, then you probably need to use the
DEFINE ASSOCIATION command to associate node CLIENT3 with the DAILY_INCR
schedule.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.



jianyu he
Sent by: "ADSM: Dist Stor Manager"
04/15/2004 06:35
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Re: TSM Schedule






Hi Andrew,

I am studying TSM now, and I want to get automatic backups working. I
have finished manual backup, archive, restore, and retrieve, but when
I try to use the schedule function, the client doesn't back up
when the schedule runs.


tsm> query session
TSM Server Connection Information
Server Name.: HJY3
Server Type.: Windows
Server Version..: Ver. 5, Rel. 2, Lev. 2.0
Last Access Date: 04/15/2004 09:16:14
Delete Backup Files.: "Yes"
Delete Archive Files: "Yes"
Node Name...: CLIENT3
User Name...:

I have done the steps that you mentioned. Is there another step I should
take if I want to back up the data on C$ of CLIENT3?

Normally, what should I put in the OBJECTS setting?

thanks

Andy


Andrew Raibeck wrote:
Hello Jianyu,

In general, in order for the scheduler to work, you must do the following:

- Define a schedule on the TSM server.

- Associate the client nodes with that schedule (using the DEFINE
ASSOCIATION command). For example:

DEF ASSOC STANDARD DAILY_INCR MYNODENAME

- Install and configure the TSM client software on the machine you wish to
back up, then start the scheduler on that client machine.

If the above outline does not help, then please provide more detail on
your problem. The detail in the solution can only be proportional to the
detail in the problem description. For instance, some questions that occur
to me:

1) Your given schedule definition is not very typical. You have
-subdir=yes in the OPTIONS setting, but no OBJECTS specified. You also
have the backup running hourly, with a duration of 10 minutes per
instance (a sketch of a more typical definition follows this list). It
would help if you could indicate what exactly it is you are trying to
accomplish, including information on what exactly you want to back up.
Knowing *why* you want to do this might also provide some useful insight.

2) When you say "my client doesn't work", what exactly does that mean?
Have you tried performing manual (nonscheduled) backups? If so, what are
the results? If not, then try testing a manual backup. One useful test
after installing and configuring the client is to run the command line
client like this:

dsmc query session

This will demonstrate basic connectivity to the TSM server (and it is
often good to make sure you can walk before you try to run).

3) Have you configured and started the scheduler? Examine dsmsched.log and
see whether the scheduler is successfully obtaining information about the
next scheduled event (a quick way to test this is shown below).
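
As a quick check for item 3, you can run the scheduler in the foreground
from a command prompt and watch whether it picks up the next scheduled
event (stop it with Ctrl+C once you have seen the schedule information):

dsmc schedule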

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"G

Re: Management Classes

2004-04-16 Thread Andrew Raibeck
Sam,

Obviously there are two different sets of headaches: one involves how you
minimize the data that gets into your DR pool to begin with, and the other
involves the logistics of moving tapes in and out of the 3583. My vote
(not that I get one) says that the latter headache is the lesser of the
two!   :-)

Changing the management classes around wouldn't be of any use. In this
situation, rather than trying to change the default management class, I
would just update the existing copy group DESTINATION setting (for each
management class) to point to the 3583 pool, thus avoiding the management
class rebinding issue that Kent mentioned.
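
For example, assuming a domain and policy set both named STANDARD, a
management class MCLASS1, and a 3583 tape pool named DRPOOL (all of these
names are placeholders for your own), the change would be along these
lines:

UPDATE COPYGROUP STANDARD STANDARD MCLASS1 STANDARD TYPE=BACKUP DESTINATION=DRPOOL
VALIDATE POLICYSET STANDARD STANDARD
ACTIVATE POLICYSET STANDARD STANDARD

(Repeat the UPDATE COPYGROUP for each management class before activating.)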

But that, too, is problematic. For example, suppose a file is changed daily
and thus is backed up daily. The backups from Monday - Friday get put into
the local pool, and then you switch to the DR pool for Saturday's backup,
after which the tapes are shipped offsite. Now come Monday, your user
needs to restore the file from the version that was created on Saturday.
You've either got to retrieve the tape, or else your user is out of luck.

Another problem with this scenario is that it assumes a current backup
copy will exist in the 3583 pool, which may very well not be the case. A
file that changes little, if at all, may have its only backup version in
the local pool; thus in a DR scenario, no version exists in the 3583 pool.
Uh oh.

One suggestion that was made is to register a different node name for each
client and have those nodes back up on a weekly basis to the 3583 pool.
You can use a copy group MODE setting of ABSOLUTE to do "full" backups.
This would work, at the expense of creating redundant backups; perhaps a
compromise would be to do this for mission-critical systems only. For
non-mission-critical systems, back up your storage pools (via BACKUP
STGPOOL) to the 3583 pool, accepting that during a DR restore those
systems will need tapes checked in and out of the library.
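
As a sketch of that approach (node, domain, policy, and pool names here
are placeholders):

REGISTER NODE CLIENT3_DR password DOMAIN=DRDOMAIN
UPDATE COPYGROUP DRDOMAIN DRPOLICY MCLASS1 STANDARD TYPE=BACKUP MODE=ABSOLUTE DESTINATION=DRPOOL
ACTIVATE POLICYSET DRDOMAIN DRPOLICY

and, for the storage pool backups of the non-mission-critical data, a copy
storage pool on the 3583 (again a placeholder name):

BACKUP STGPOOL TAPEPOOL COPYPOOL3583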

I'm sure that you are not the only hardware-constrained TSM user. Others
might be able to lend their own insight as to how they deal with this
issue.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.




Re: TSM Schedule

2004-04-16 Thread Richard Sims
>I have achieved this function, but sometimes the schedule will be missed.
>I don't know why, and there is no information in the dsmsched.log.

Then look at the log at the other end of the connection: the TSM server Activity Log.
The cause may be inadequate resources.  Also be sure to query your
schedules and confirm that they are really defined the way you believe they are.
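
For example, assuming the STANDARD domain and DAILY_INCR schedule
discussed in this thread:

QUERY ACTLOG BEGINDATE=TODAY-1 SEARCH=CLIENT3
QUERY SCHEDULE STANDARD DAILY_INCR FORMAT=DETAILED
QUERY EVENT STANDARD DAILY_INCR BEGINDATE=TODAY-7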

   Richard Sims   http://people.bu.edu/rbs


actlog output ahh

2004-04-16 Thread Justin Bleistein
I'm running an admin script which does an:

"issue message XXX"

which sends a message to the actlog. Unfortunately it also records the
"ADMIN ISSUED command: ISSUE MESSAGE" entry in the actlog, in addition to
the actual message. BRUTAL. I see both messages and it's annoying. Does
anyone know of a way to stop admin/user "admin issued command" messages
from being directed to the actlog?
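
For reference, the command takes a severity code and a message text, so
the script line is something like this (the message text is just an
example):

ISSUE MESSAGE I "nightly copy job finished"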

Thanks in advance.

--Justin Richard Bleistein
Unix/TSM Systems Administrator (Sungard Availability Services)


internal tsm commands / not show commands

2004-04-16 Thread Justin Bleistein
Does anyone know of any internal TSM server commands which aren't
documented, besides the SHOW commands? I'm trying to put a TSM design
workshop together on some test systems, and it would be nice to be able
to tap into the db on the test systems and poke around to see how the
server works.
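
For what it's worth, I know I can poke at the db through the documented
SQL interface, e.g.:

SELECT TABNAME FROM SYSCAT.TABLES

but I'm hoping for the internal commands beyond that.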
Thanks!

--Justin


Re: TSM Schedule

2004-04-16 Thread Andrew Raibeck
Hello Jianyu,

Take a look at the ANS1076E message; that is a clue as to what the trouble
is: there is a problem with whatever file(s) you are specifying in the
schedule definition.

Since the scheduled action is SELECTIVE rather than INCREMENTAL (which
contradicts the schedule name DAILY_INCR), obviously we are looking at a
different problem than before. If you've read my prior posts on this
subject, a recurring theme is a request for DETAIL about the problem -
this means options files, schedule definitions, node settings, complete
log data, and so on. Also, when diagnosing any kind of problem, it is
helpful to know WHAT you want to do, and HOW you are trying to accomplish
it, as there is very likely a conflict between the "what" and the "how".
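
If the intent really is a daily incremental, one fix (sketched here,
assuming the domain is STANDARD) is to make the action match the schedule
name:

UPDATE SCHEDULE STANDARD DAILY_INCR ACTION=INCREMENTAL

With ACTION=INCREMENTAL and no OBJECTS, the client backs up whatever its
DOMAIN option specifies.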

I would strongly recommend that you consult the TSM Problem Determination
Guide at
http://publib.boulder.ibm.com/tividd/td/IBMStorageManagerMessages5.2.2.html,
and in particular the "Problem Determination" and "Diagnostic Tips" links.
Also consult the messages manual for error and warning messages such as
ANS1076E.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.



jianyu he <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/15/2004 09:30
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Re: TSM Schedule






Hi Andy,

I got some information from dsmsched.log, but I don't know why 'DAILY_INCR'
failed.

Executing scheduled command now.
04/15/2004 12:14:58 Node Name: CLIENT3
04/15/2004 12:14:58 Session established with server HJY3: Windows
04/15/2004 12:14:58   Server Version 5, Release 2, Level 2.0
04/15/2004 12:14:58   Server date/time: 04/15/2004 12:16:09  Last access:
04/15/2004 12:14:17
04/15/2004 12:14:58 --- SCHEDULEREC OBJECT BEGIN DAILY_INCR 04/15/2004
12:15:00
04/15/2004 12:14:58 Selective Backup function invoked.
04/15/2004 12:14:58 ANS1076E *** Directory path not found ***
04/15/2004 12:15:01 --- SCHEDULEREC STATUS BEGIN
04/15/2004 12:15:01 --- SCHEDULEREC OBJECT END DAILY_INCR 04/15/2004
12:15:00
04/15/2004 12:15:01 ANS1512E Scheduled event 'DAILY_INCR' failed.  Return
code = 12.
04/15/2004 12:15:01 Sending results for scheduled event 'DAILY_INCR'.
04/15/2004 12:15:01 Results sent to server for scheduled event
'DAILY_INCR'.


Occupancy of a backupset

2004-04-16 Thread Tab Trepagnier
TSM 5.1.8.0 on AIX

Is there a way to easily determine the size of a backupset?

I track tape occupancy weekly to chart growth of our TSM system.  Each
Excel spreadsheet has an embedded "select from occupancy..." query.  That
allows me to determine how much data I have and where it is in about 5
minutes.
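
The embedded query is along these lines - the exact columns and grouping
shown here are just an illustration:

SELECT STGPOOL_NAME, SUM(PHYSICAL_MB) FROM OCCUPANCY GROUP BY STGPOOL_NAME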

But data in a backupset does not show up under any "occupancy" measure
that I'm aware of.  As I shift more archives to backupsets, I'd like some
way to show that the reduction in archive space isn't real - that the data
is just changing form into backupsets.  That way I could count the
backupset data along with all the other data I'm tracking.

TIA

Tab Trepagnier
TSM Administrator
Laitram, L.L.C.