Restoring DB2, SQL and Exchange

2002-09-03 Thread Hussein Abdirahman

Hi TSMers;

Does anyone out there have a procedure for restoring these databases to
different systems?

My DB2 is on AIX 4.3.3. The DB is currently on an RS/6000 7026-B80, and I
want to restore it on an RS/6000 7046-B50. Does the hardware difference
matter?

SQL and Exchange are on Win2K.

Your help is appreciated in advance.

Hussein
UNIX Administrator
IrwinToy LTD
Toronto-Canada



Re: backup performance with db and log on a SAN

2002-09-03 Thread Adolph Kahan

You are correct that at 3:1 compression you will not do better than
42MB/sec. Compression does vary quite a bit. I have one customer who
gets 5:1 compression with some of his Oracle databases; the others only
get a bit more than 2:1. To stream at 100MB/s you would have to get
compression better than 6:1. The tape drive always writes at the same
speed at the head, 14MB/sec. If the data does not compress, your throughput
will be 14MB/sec; at 2:1 compression you will get 28MB/sec, and so on.
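As a sanity check, the arithmetic quoted here works out as follows (a toy
sketch, using only the 14 MB/s native head speed figure from this thread):

```python
# Effective host-side throughput of a tape drive as a function of the
# compression ratio, given the 14 MB/s native head speed quoted above.
NATIVE_MB_S = 14.0

def effective_throughput(compression_ratio: float) -> float:
    """Head speed times compression ratio gives host-side MB/s."""
    return NATIVE_MB_S * compression_ratio

for ratio in (1.0, 2.0, 3.0, 5.0):
    print(f"{ratio:.0f}:1 -> {effective_throughput(ratio):.0f} MB/s")
# 3:1 gives 42 MB/s; streaming at 100 MB/s would need better than 7:1.
```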

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Eliza Lau
Sent: Monday, September 02, 2002 1:23 PM
To: [EMAIL PROTECTED]
Subject: Re: backup performance with db and log on a SAN

Daniel,


> 
> Hi Eliza
> 
> As I understand it, each "tape-HBA" has 3 3590E FC drives connected to it.
> The two 21G database disks are each connected to their own HBA?

Yes, you are correct.

> 
> According to spec sheets, the 3590E FC could handle speeds up to 100MB/s
> with 3:1 compression. This means that with 3 drives, you could have up to
> 300MB/s. However, the HBA will only theoretically handle 125MB/s, or
> 250MB/s with 2Gb FC.

We have 1Gbit FC adapters.  Are you sure that the 3590E FC drives can
stream at 100MB/s?  We were told that the tape drives are the bottleneck
and not the adapters.

> 
> This means that the tape drives will not stream the data, because the data
> flow will be queued (some data to drive1, some data to drive2, some data
> to drive3, and so on). There should be some wait % on the drives.
>
> What kind of machine are you running? The smallest machine I know of that
> is sold today and could handle this number of adapters is the P-Series
> 660, which has 3 or 6 PCI busses. Normally, you don't want multiple FC
> HBAs and Gb Ethernet adapters mixed on the same PCI bus, as these are
> considered high-performance adapters that can utilize the whole bandwidth
> of the bus.

We have a P-Series 660 7026-6H1 with 12 PCI slots. The other adapters in
use besides the 4 HBAs are a SCSI RAID adapter and a graphics adapter.
There are also 4 SCSI adapters left over from before the tape drives were
converted to FC. They should be taken out.

> 
> How many other adapters are installed in the machine? Gigabit Ethernet,
> 10/100 Ethernet, and so on.

The 10/100 ethernet has its own adapter, not in the PCI slots.

> 
> Best Regards
> 
> Daniel Sparrman
> ---
> Daniel Sparrman
> Exist i Stockholm AB
> Propellervägen 6B
> 183 62 HÄGERNÄS
> Växel: 08 - 754 98 00
> Mobil: 070 - 399 27 51
> 
> 
> 
> 
> Eliza Lau <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 2002-09-02 12:19
> Please respond to "ADSM: Dist Stor Manager"
> 
>  
> To: [EMAIL PROTECTED]
> cc: 
> Subject:Re: backup performance with db and log on a
SAN
> 
> 
> Thanks Daniel,
> 
> The host has 4 HBAs, two attached to each of the two switches.  One HBA is
> zoned for tape traffic and the other for disk traffic.  Only the db and
> log are on the Shark F20.  The disk stgpools are on attached SCSI disk
> drives.  The disks are 36G.  There are 8 LSSes.  Each LSS is striped
> across 8 drives and sliced into 21G partitions.  The db is on two of these
> 21G slices.  The tape library is a 3494 with 6 3590E FC drives.  All
> drives are connected to the two switches, three each.
> 
> Come to think of it, the Shark only has 2 HBAs.  I have to verify this,
> since I am typing this from home.  But it only has to read from the Shark
> and write to tape through another port in the switch.
> 
> Eliza
> 
> > Hi
> > 
> > The large disks you are talking about: do you mean large as in 36GB,
> > 72GB and so on, or are you talking about LUN sizes?
> >
> > In a Shark, you can have very large LUNs, but they will consist of a
> > large number of smaller SSA-based hard drives. This means that you will
> > not have a performance impact on the disks.
> >
> > Normally, performance issues on TSM / SAN have to do with having disks
> > and tapes on the same HBA. DB transactions are written very randomly,
> > so if you for example are doing migration, TSM will write to both disks
> > and tape (DB transactions to disk, migration from disk, migration to
> > tape). This will have a huge impact, as the HBA is arbitrated (which
> > means only one write can be done at a time). Also, doing backups and
> > migration directly to tape assumes the ability to write continuous,
> > sequential data. If you have the DB on the same card, your clients, or
> > the TSM server, won't be able to stream data to the tapes, leading to
> > poor performance.
> >
> > Eliza, I'd suggest you use some kind of monitoring tool, like the
> > StorWatch specialist, to see throughput from/to disks and tape. I'm
> > sure that if you separate the disks from the tape, you will see a
> > performance improvement.
> >
> > How many HBAs do you have in your Shark?

Error 10 when installing Web Client

2002-09-03 Thread Bill Dourado

Hi,

I come up against "Error 10 updating the registry password" and
"The registry node key couldn't be located"

WHEN

attempting to install the TSM Web Client via the GUI wizard on an NT 4
(SP6) backup client.

Could somebody please help me?

I have already tried removing the TSM services, uninstalling TSM, and
reinstalling.

Server and backup client are both 4.1.


T.I.A


Bill Dourado

*** Disclaimer ***
This email and any files transmitted with it contains privileged and
confidential information and is intended for the addressee only. If you
are not the intended recipient you must not disseminate, distribute or
copy this email. Please notify the sender at once that you have received
it by mistake and then delete it from your system.

Any views expressed in this email are those of the sender and not
necessarily those of ALSTEC or its associated companies.

Whereas ALSTEC takes reasonable precautions to ensure this email is virus
free, no responsibility will be accepted by it for any loss or damage
whether direct or indirect which results from the contents of this email
or its attachments.

ALSTEC Group Limited - Registered in England No.3939840 - www.alstec.com
Registered office: Cambridge Road, Whetstone, Leicester. LE8 6LH. England.
Telephone: +44 (0)116 275 0750



Re: backup performance with db and log on a SAN

2002-09-03 Thread Seay, Paul

Actually, the H drive is listed at 70 MB/sec.  What IBM had done was limit
the number to 42 MB/sec because of the 14 MB/sec at the head at 3:1.  But
at 5:1 you can get up to 70 MB/sec.  I have seen as high as 50 MB/sec even
on the E drive with fibre channel.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


Re: LAN FREE BACKUPS

2002-09-03 Thread Seay, Paul

What is MAXSCR set to?  If it is higher than your current volume count,
then call support.  If you have run out of volumes, raise the number and
try again.  I am betting this is the case, and that there is a logic error
where the SAN client looks for volumes in the scratch pool only.  What I do
not understand is how you could have volumes in your storage pool as "EMPTY"
unless you have performed a DEFINE VOLUME to put them in the pool
permanently.  Even if you had a reuse delay on, they would show a status of
"PENDING".
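If it helps, the scratch ceiling can be checked and raised with server
admin commands along these lines (the pool name is taken from the error
message in the original note below; the value 200 is only an example):

```
/* show "Maximum Scratch Volumes Allowed" vs. scratch volumes in use */
query stgpool SAN_AIXSAP_INCR format=detailed
/* raise the ceiling if the pool has hit it */
update stgpool SAN_AIXSAP_INCR maxscratch=200
```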


Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: FRANCISCO ROBLEDO [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 02, 2002 6:05 PM
To: [EMAIL PROTECTED]
Subject: LAN FREE BACKUPS


I am having some problems with the LAN-free backups.
Server is 5.1.1 (AIX 4.3.3), client is 5.1.1, and Storage Agent is 5.1.1.

I have been making node backups for 3 weeks.

This Weekend, the backups report this:

ANR0522W Transaction failed for session 6331 for node
AIXSAP (AIX) - no space available in storage pool SAN_AIXSAP_INCR and all
successor pools.
08/29/02 12:12:39 ANR0403I Session 6331 ended for
node AIXSAP (AIX).
08/29/02 12:12:43 ANE4952I (Session: 6330, Node:
AIXSAP)  Total number of objects inspected:   83,795
08/29/02 12:12:43 ANE4954I (Session: 6330, Node:
AIXSAP)  Total number of objects backed up:420

But the storage pool has empty volumes and the library has scratch volumes.

If I disable the LAN-free option, the backups are successful.

Thanks for any help

Francisco.

_
Do You Yahoo!?
Informacisn de Estados Unidos y Amirica Latina, en Yahoo! Noticias.
Vismtanos en http://noticias.espanol.yahoo.com



Data in offsite pool that is not in onsite pool

2002-09-03 Thread Steve Hicks

Our offsite pool reclamation process is calling for tapes that are listed
as being in the vault (and they actually are). Obviously reclamation cannot
take place for these volumes, as they are not onsite. First, what is the
best way to go about fixing this? Second, what could have gotten our pools
out of sync? Any help is much appreciated.
Thanks,
Steve Hicks
CIS System Infrastructure Lead /
AIX Administrator
Knoxville Utilities Board



finding old filespaces from unix clients...

2002-09-03 Thread Cook, Dwight E

I've noticed that if an entire filesystem is removed from a unix TSM client,
the environment (as a whole) will keep that data until manually purged.
I think this is OK/FINE/WHAT I WANT, because you never know why an entire
filesystem has gone away (it might just be varied offline, etc...)

but I still see obsolete data hanging around out there from 1998 and 1999,
so what I've done is

select node_name,filespace_name as "filespace_name ",cast(backup_start as
varchar(10)) as bkupst, cast(backup_end as varchar(10)) as bkupend from
adsm.filespaces where cast(backup_end as varchar(7))<>'2002-09' > old_FS_out


or use the same query with: where cast(backup_end as varchar(4))<>'2002'

this gives me a list of filesystems that I might manually purge after
further investigation.

just thought I'd pass this along.

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



multiple reclaim processes

2002-09-03 Thread Levinson, Donald A.

Is there a way to have multiple reclaim processes for the same storage pool?
It seems silly to have an 8 spindle library and only use two of them to try
to keep up with my reclaim processing.

TSM 5.1.1
AIX 4.3.3



This transmittal may contain confidential information intended solely for
the addressee. If you are not the intended recipient, you are hereby
notified that you have received this transmittal in error; any review,
dissemination, distribution or copying of this transmittal is strictly
prohibited. If you have received this communication in error, please notify
us immediately by reply or by telephone (collect at 907-564-1000) and ask to
speak with the message sender. In addition, please immediately delete this
message and all attachments. Thank you.



Re: finding old filespaces from unix clients...

2002-09-03 Thread bbullock

As with most TSM commands, there are a million ways to skin this cat.
Here is a SQL query I use:

select node_name, Filespace_name, (current_timestamp-backup_end)days as
"days_since_backup" from filespaces where
cast((current_timestamp-backup_end) days as decimal) >=180 order by
"days_since_backup"

In my case, I only start to clean up the orphaned filesystems after they
have not had a backup in the last 180 days (thus the 180 in the command).
It gives me a listing of the node, the filespace, and the number of days
since a backup on that filespace completed.

Thanks,
Ben





Re: multiple reclaim processes

2002-09-03 Thread Malbrough, Demetrius

Donald,

As far as I know, only one reclamation process per stgpool!

Regards,

Demetrius Malbrough
AIX Storage Specialist




Re: finding old filespaces from unix clients...

2002-09-03 Thread Williams, Tim P {PBSG}

Good stuff...thanks!




Re: Restoring DB2, SQL and Exchange

2002-09-03 Thread Osip Mikunis

-Original Message-
From: Hussein Abdirahman [mailto:[EMAIL PROTECTED]]
Sent: Tue 9/3/2002 4:18 PM
To: [EMAIL PROTECTED]
Cc:
Subject: Restoring DB2, SQL and Exchange



Hi TSMers;

>Anyone out there has a procedure on restoring those databases to a
>different systems?

>SQL and Exchange are on Win2K.

Works fine with versions 2.x. Just follow the instructions in the online
help :-). Tip: change the node name in dsm.opt, etc.



Re: Temporary Backup

2002-09-03 Thread Coats, Jack

I guess the term "Temporary Backup" was misleading.  I need this one set of
data to be kept short-term compared to what we do for most backups.  This
is data that needs to be backed up until it gets spun off to DVD for
archival purposes.  The DVDs get sent off once a week, so if we keep the
data for 10 days, management has considered that sufficient.

Thanks for the thought.

-Original Message-
From: Greg Garrison [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 18, 2002 4:38 PM
To: [EMAIL PROTECTED]
Subject: Re: Temporary Backup


Hey Jack-

Just a thought: why not just do an archive, then delete the data when you
are finished?

Greg A. Garrison
Account Storage Manager
IBM Global Services, Service Delivery Center - West
9022 S. Rita Road, Rm 2785, Tucson, AZ 85744
Phone (520) 799-5429  Internal 321-5429
Pager 800-946-4646 PIN 6034139
e-mail: [EMAIL PROTECTED]



"Coats, Jack"
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
07/17/2002 12:51 PM
Please respond to "ADSM: Dist Stor Manager"
Subject: Temporary Backup

I have one server where I need to back up everything except one file
system, like I do all my others.

Then I need to back up this ONE file system with a different retention
than everything else (like 10 days, where everything else is kept for 60
days after it is deleted).

How should I set this up?

... TIA ... Jack
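For what it's worth, the usual TSM shape for this is a separate management
class bound to that one file system with an include statement. A rough
sketch, with every name and value invented for illustration (check the
exact syntax against your server level's documentation):

```
/* server side: a management class with short retention (names made up) */
define mgmtclass STANDARD STANDARD SHORTRET
define copygroup STANDARD STANDARD SHORTRET type=backup retextra=10 retonly=10
activate policyset STANDARD STANDARD

* client side (dsm.opt / include-exclude list): bind only that file system
include /the/one/filesystem/.../* SHORTRET
```

The rest of the node's data keeps using the default management class; only
files matching the include pattern get the 10-day retention.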



Re: Data in offsite pool that is not in onsite pool

2002-09-03 Thread Coats, Jack

If it is in a copy pool, make sure the tapes are set up properly in the
database as being offsite.  Then, during reclamation of the copy pool, it
should use the onsite copies to build the new data image for the offsite
copy pool.

-Original Message-
From: Steve Hicks [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 8:58 AM
To: [EMAIL PROTECTED]
Subject: Data in offsite pool that is not in onsite pool


Our offsite pool reclamation process is calling for tapes that are listed
as being in the vault (and they actually are). Obviously reclamation can
not take place for these volumes, as they are not onsite. First, what is
the best way to go about fixing this? Second, what could have gotten our
pools out of sync? Any help is much appreciated.
Thanks,
Steve Hicks
CIS System Infrastructure Lead /
AIX Administrator
Knoxville Utilities Board



Re: test for DRM

2002-09-03 Thread Chetan H. Ravnikar

Paul, I appreciate your time. Thanks!

On Sat, 31 Aug 2002, Seay, Paul wrote:
> What SUN probably did not tell you was they were probably 32K or 64K blocks
> to get that data rate.  TSM Database uses 4K and most other databases are 4K
> or 8K.  Are you using raw volumes for the disk pools?  If so, talk to SUN to
> find out what the optimal block sizes are.  If you are using a file system,
> same thing.
T3s cannot be used as a JBOD; at a minimum there should be a volume created
from a few disks, and they support RAID 0, 1 and 5.

What we have on some configurations is simply RAID 1, so that during
writes data is just striped across..

Any idea what storage types are predominantly used out there (RAID or
just JBOD)?

>
> Be careful how you talk to Tivoli about RAID-5 versus JBOD.  They think of
> RAID-5 as being a bunch of disks you have raided, not a hardware raid
> solution with a high end controller cache.
Good point! I will keep this in mind. I do read that TSM writes data in
(variable) blocks of 4 to 64K. With T3 storage this has to be set to a
static value, and I have it set to 64K. Did we buy the wrong storage for
TSM storage pools?


> 20MB to 30MB/sec sounds about right for a T3.  The T3 is a midrange device.
> It may not be a good choice for a high write activity workload versus a
> JBOD/Raid-1.  I do not know how smart the T3 is.  The 280 is a pretty good
> little box based on my review of it, so I do not know that is the problem.

The Sun 280 had an issue when we had the recovery logs on the root
(system) disk. The disk got busy as there was contention and a lot of I/O
getting queued, thereby increasing the queue depth! Later we moved the
recovery logs onto an external D2 array on the on-board SCSI-3. Looks
better now.


> You are preaching to the choir on a protected disk pool.  If you lose it you
> have lost the backups for that night which may be unacceptable.
yes :)

Driving large critical backups direct to tape is what we have chosen, and
the rest goes to disk. But tapes are slower than disks, so we are still on
a quest for faster, safer (insured) backups!


> One thing to consider is large files you may want to send directly to tape.
> You can do this by setting a maximum file size in the primary disk pool.
> TSM will mount up a tape and any time a file from a client exceeds the
> limit, it writes it to tape instead of disk.  But, unless you have very
> reliable tape and create your backup storage pool copies in time to
> recapture the backup if the tape is bad, I do not know if this is an option
> for you.
Where can I read about this maximum file size parameter
setting/configuration? I will look into this.

The reason: my large (file, Oracle, SAP) backups run off the same client,
and hence we have the same node registered as 2 different nodes, talking to
2 different domains via 2 different *opt* files. Managing this within our
internal customer base, and having them not touch and change things, has
been an increasing pain. I am looking for a simple, straightforward
configuration, hence the search.
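If it helps, the knob Paul is describing is the MAXSIZE parameter on the
primary disk storage pool; roughly like this (pool name and size are
invented examples):

```
/* files larger than 2 GB bypass this disk pool and go to the next
   storage pool in the hierarchy, i.e. straight to tape */
update stgpool DISKPOOL maxsize=2G
```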


> Backup Storage Pool commands start where they left off.
Thanks. But what is the guarantee here? There is no way TSM can miss
files, leading to integrity errors!?

We have actually cancelled *copy stg on-site off-site maxpr=2* because it
ran past days, and the next day's backups needed the tape-drive resources.
But we have had processes complete successfully.

> You say your environment is huge.  How many tapes do you have.  How much are
> you trying to backup to these servers.  They may not be the right fit.
we backup between 100 to 150 GB every day and during weekends we backup
approx 200 GB. All data is changed data. (Oracle) This particular server
has a STKL700 lib with 320 slots and 5 DLT7000 drives SCSI (daisy
chained)installed  on to 2 33Mhz SCSI diff cards on the SUN280r. fast
wide SCSI diff do 40MB/sec and the drives do 5MB/sec



> As far as the data integrity issue.  Did you not back something up?  Were
> you missing some data?  What do you mean?  This is one place TSM really
> shines over the other backup products.

Here is what we did, and *probably should not have done*, under pressure to
check the integrity of off-site tapes. We picked a node (BTW, we have
collocation on), and with a select command (below) got a list of all the
on-site tapes and marked them destroyed. With the same select command we
got a list of all the off-site tapes (all this on the production server),
had them checked in as private and read-only, and initiated a restore.

The process just came back with an error, as below. Tivoli level 2, with
all traces open, could not figure out how this could have happened. We did
audit checks on the volumes. The audit checks were successful :(

select statement
==
select volume_name from volumeusage where node_name='TEKTON-SAP' and
stgpool_name='OS_TAPEPOOL_SERVER'
==

error below
==
08/07/02   14:34:13  ANR1424W Read access denied for volume 220417 -
volume ac

Re: multiple reclaim processes

2002-09-03 Thread bbullock

Yes, I believe that's right: reclamation runs one process per storage
pool.

The only way I know to get around it is to write a script/select statement
to look for the tapes that are least utilized and do a "move data" on those
tapes. I have not automated the process, but I have a simple select that
gives me a list of all the tapes and the "pct_reclaim" from the volumes
table. The "pct_reclaim" is the opposite of "% utilized", so it's kind of a
"% empty" value.

select volume_name, pct_reclaim, stgpool_name from  volumes where
status='FULL' order by 3,2

I run this select when I want to predict how changing the reclamation
threshold on a storage pool will affect processing, i.e. it will show me
how many tapes are above a given threshold and would be reclaimed.
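That prediction step can be sketched in a few lines (a toy illustration;
the volume names and percentages below are invented, standing in for the
output of the select above):

```python
# Count which FULL volumes a given reclamation threshold would reclaim,
# given their pct_reclaim values (sample data below is made up).

def volumes_reclaimed(pct_reclaim_by_volume, threshold):
    """Volumes whose reclaimable percentage meets or exceeds the threshold."""
    return sorted(vol for vol, pct in pct_reclaim_by_volume.items()
                  if pct >= threshold)

sample = {"A00001": 85.0, "A00002": 40.5, "A00003": 92.3, "A00004": 10.0}
print(volumes_reclaimed(sample, 60.0))
```

Dropping the threshold from 60 to 40 in this sample would add A00002 to
the list.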

Thanks,
Ben




Re: Consumption of new scratch tape increase abnormally

2002-09-03 Thread Lai, Kathy KL

Dear Paul,

Thanks for you help. I've got the information base on the SQL you provided.
How can I analyze my information so that I can know where my tapesss gone...
I got around 1400 tapes from the result of the second SQL. Does it mean that
I have great problem on that? If yes, how can I solve it? Thank you very
much.

> Regards,
> Kathy Lai


-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 29, 2002 1:16 PM
To: [EMAIL PROTECTED]
Subject: Re: Consumption of new scratch tape increase abnormally


"I know the name of this tune."

Is expiration running?
What is the reclaim percent for your pools?

Issue this command to see how your volumes fare on utilization on average:

Select cast(avg(pct_utilized) as decimal(5,2)), stgpool_name,
count(stgpool_name) from volumes group by stgpool_name order by 1

Issue this one to figure out what the bad ones are:

Select cast(volume_name as char(7)) as "Volume",
cast(substr(stgpool_name,1,25) as char(25)) as "Storage Pool ",
cast(pct_utilized as decimal(5,2)) as "% Util", cast(status as char(10)) as
"Status   ", cast(access as char(10)) as "Access" from volumes where
pct_utilized < 40 order by 3

And "my all time favorite", find out the tapes that were checked in private
that should be scratch:

select volume_name from libvolumes where status='Private' and
libvolumes.volume_name not in (select volume_name from volumes) and
libvolumes.volume_name not in (select volume_name from volhistory where type
in ('BACKUPFULL', 'BACKUPINCR', 'DBSNAPSHOT', 'EXPORT'))
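To make sense of output like the above, a short script can estimate how many tapes reclamation would return to scratch. This is only an illustrative sketch; the 40% threshold and the utilization figures are hypothetical, not from Paul's message:

```python
# Estimate how many tapes reclamation could free, given per-volume
# utilization percentages (all numbers here are hypothetical).

def tapes_freed_by_reclaim(utilizations, reclaim_threshold=40):
    """Volumes below the threshold get consolidated; the data they
    still hold is repacked onto as few full tapes as possible."""
    candidates = [u for u in utilizations if u < reclaim_threshold]
    data_to_move = sum(candidates) / 100.0        # in whole-tape units
    tapes_needed = int(data_to_move) + (data_to_move % 1 > 0)
    return len(candidates) - tapes_needed

# e.g. 1400 private volumes averaging 15% utilized:
print(tapes_freed_by_reclaim([15] * 1400))  # 1190 tapes back to scratch
```

In other words, a large count from the second query is not necessarily a disaster; most of that space is something reclamation (and expiration) can give back.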

Hope this helps.  The quotes are from the Fifth Element, which character?




Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Lai, Kathy KL [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 28, 2002 9:46 PM
To: [EMAIL PROTECTED]
Subject: Consumption of new scratch tape increase abnormally


Dear all,

I am using ADSM v3.1 with HSM and the DRM option. Recently (this half year) I
have found that the consumption of new scratch tapes has increased abnormally. I
had to initialize 180 tapes in only 23 days!!! However, I have checked that the
backup volume hasn't changed much. What is the cause of this scenario?
Could there be a problem in my daily operation such that tapes that have been
initialized cannot be reused, so the consumption of new tapes increases so
significantly? Please help, as this is quite an urgent issue; my management
will not invest any more in new tapes if the root cause of this scenario
cannot be found!!! Thank you very much.

> Regards,
> Kathy Lai
> Outsourcing & Managed Services
> Pacific Century CyberWorks
> * 852-8101 2790
> [EMAIL PROTECTED]
>



Re: backup performance with db and log on a SAN

2002-09-03 Thread Christo Heuer

Roger,

Some very good points made here - the only thing I'm curious about
is why you do not suggest creating a non-RAID LUN on the
Shark - this would give Eliza the best of both worlds.
I created a non-RAID drawer in our Shark specifically for
database types that require non-RAID disk behaviour,
which works well.
(It is just a pity that the Shark can only format a whole
drawer as non-RAID)...

So - Eliza - just format a non-RAID drawer, allocate
your database on those LUNs, and see if that makes a
difference - then you will know whether it is RAID-5 related
or not.

Regards
Christo


Roger,
The problem here is that we have no idea what type of disk subsystem they
have.  Once we find that out, we will know.

My TSM database is on a Shark 2105-F20 (it is RAID-5 under the covers).  My
database is 85GB and takes 1.3 hours to back up to Magstar drives.  I
consider that good for something that has 4K blocks and is totally random.  We
stripe the database as well; that may not be a good thing to do, but we did it
that way.  We are going to try some other things soon to see how we can
improve performance.
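For reference, those figures work out to roughly 18.6 MB/s sustained:

```python
# Back-of-the-envelope check of the quoted backup rate:
# an 85 GB database backed up in 1.3 hours.
gb, hours = 85, 1.3
mb_per_sec = gb * 1024 / (hours * 3600)
print(round(mb_per_sec, 1))  # 18.6
```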

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Roger Deschner [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 01, 2002 2:32 PM
To: [EMAIL PROTECTED]
Subject: Re: backup performance with db and log on a SAN


What a FASCINATING data point!

I think the problem is simply that it is RAID5. The WDSF/ADSM/TSM/ITSM
Database is accessed rather randomly during both normal operations, and
during database backup. RAID5 is optimized for sequential I/O operations.
It's great for things like conventional email systems that use huge mailbox
files, and read and rewrite them all at once. But not for this particular
database. Massive cache is worse than useless, because not only are you
reading from random disk locations, but each time you do, your RAID box is
doing a bunch of wasted I/O to refill the cache from someplace else as well.
Over and over for each I/O operation.

On our system, I once tried limited RAID on the Database, in software using
the AIX Logical Volume Manager, and it ran 25% slower on Database Backups.
Striping hurts, too. So I went the other way, and got bunches of small, fast
plain old JBOD disks, and it really sped things up. (Ask your used equipment
dealer about a full drawer of IBM 7133-020 9.1GB SSA disk drives - they are
cheap and ideally suited to the TSM DB.) Quite simply, more disk arms mean a
higher multiprogramming level within the server, and better performance.
Seek distances will always be high with a random access pattern, so you want
more arms all seeking those long distances at the same time.
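Roger's "more arms" argument can be sketched with a toy model. The per-operation service times below are assumptions for illustration, not measured values:

```python
# Toy model: random-I/O capacity scales with the number of independent
# disk arms, because each random 4K read pays a full seek + rotation.

def random_iops(spindles, seek_ms=8.0, rotation_ms=4.2, transfer_ms=0.5):
    per_arm = 1000.0 / (seek_ms + rotation_ms + transfer_ms)  # IOs/sec per arm
    return spindles * per_arm

# Two big LUNs vs. a drawer of sixteen small JBOD drives:
print(round(random_iops(2)), round(random_iops(16)))  # 157 1260
```

Seek and rotation times barely improve as disks grow, so spreading the database over more small spindles multiplies the random-read capacity almost linearly.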

OTOH, the Log should do fine with RAID5, since it is much more sequential.
Consider removing TSM Mirroring of the Log when you put it back into RAID5.

Can you disable the cache, or at least make it very small? That might help.

A very good use of your 2TB black box of storage: Disk Storage Pools. The
performance aspects of RAID5 should be well suited to online TSM Storage
Pools. You could finally hold a full day's worth of backups online in them,
which is an ideal situation as far as managing migration and copy pool
operations "by the book". This might even make client backups run faster.
RAID5 would protect this data from media failure, so you don't need to worry
about having only one copy of it for a while. Another good use: Set up a
Reclamation Storage Pool in it, which will free up a tape drive and
generally speed reclamation. Tape volumes are getting huge these days, so
you could use this kind of massive storage, optimized for sequential
operations, very beneficially for this.

So, to summarize, your investment in the SAN-attached Black Box O' Disk
Space is still good, for everything you probably planned to put in it,
EXCEPT for the TSM Database. That's only 36GB in your case, so leaving it
out of the Big Box is removing only 2% of it. If the other 98% works well,
the people who funded it should be happy.

P.S. I'm preparing a presentation for Share in Dallas next spring on this
exact topic; I really appreciate interesting data points like this. Thank
you for sharing it.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


On Sat, 31 Aug 2002, Eliza Lau wrote:

>I recently moved the 36G TSM database and 10G log from attached SCSI
>disk drives to a SAN. Backing the db now takes twice as long as it used
>to (from 40 minutes to 90 minutes).  The old attached disk drives are
>non-RAID and TSM mirrored.  The SAN drives are RAID-5 and TSM mirrored.
>I know I have to pay a penalty for writing to RAID-5.  But considering
>the massive cache of the SAN it should not be too bad.  In fact,
>performance of client backups hasn't suffered.
>
>However, the day after the move, I noticed that backup db ran for twice
>as long.  It just doesn't make sense it will take a 1

Re: backup performance with db and log on a SAN

2002-09-03 Thread Eliza Lau

Thanks Daniel,

The host has 4 HBAs. Two attached to each of the two switches.  One HBA is 
zoned for tape traffic and the other for disk traffic.  Only the db and log
are on the Shark F20.  The disk stgpools are on attached SCSI disk drives.
The disks are 36G.  There are 8 LSSes.  Each LSS is striped across 8 drives
and sliced into 21G partitions.  The db is on two of these 21G slices.
Tape library is a 3494 with 6 3590E FC drives.  All drives are connected
to the two switches, three each.

Come to think of it, the Shark only has 2 HBAs.  I have to verify this, since
I am typing this from home.  But it only has to read from the Shark and write
to tape through another port in the switch.

Eliza

> Hi
> 
> The large disks you are talking about - do you mean large as in 36GB, 72GB 
> and so on, or are you talking about LUN sizes?
> 
> In a Shark, you can have very large LUNs, but they will consist of a 
> large number of smaller SSA-based hard drives. This means that you will 
> not see a performance impact from the disks themselves.
> 
> Normally, performance issues on TSM / SAN have to do with having disks and 
> tapes on the same HBA. DB transactions are written very randomly, so if you 
> for example are doing migration, TSM will write to both disks and tape (DB 
> transactions to disk, migration from disk, migration to tape). This will 
> have a huge impact, as the HBA is arbitrated (which means only one write 
> can be done at a time). Also, doing backups and migration directly to 
> tape assumes the ability to write continuous, sequential data. If you 
> have the DB on the same card, your clients, or the TSM server, won't be 
> able to stream data to the tapes, leading to poor performance.
> 
> Eliza, I'd suggest you use some kind of monitoring tool, like the StorWatch 
> specialist, to see throughput from/to disks and tape. I'm sure that if you 
> separate the disks from the tape, you will see a performance upgrade.
> 
> How many HBAs do you have in your Shark?
> 
> Also, check that the logs, diskpool, and database are not on the same 
> volumes. That will also cause bad performance.
> 
> The best practice is to have db & log on one HBA, diskpool on one HBA, and 
> tape drives on one or more HBAs (depending on the number of tape drives). 
> This is, however, recommended for large environments, with > 200 mid-size 
> clients.
> 
> Best Regards
> 
> Daniel Sparrman
> ---
> Daniel Sparrman
> Exist i Stockholm AB
> Propellervägen 6B
> 183 62 HÄGERNÄS
> Växel: 08 - 754 98 00
> Mobil: 070 - 399 27 51
> 
> 
> 
> 
> Remco Post <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 2002-09-02 10:48
> Please respond to "ADSM: Dist Stor Manager"
> 
>  
> To: [EMAIL PROTECTED]
> cc: 
> Subject:Re: backup performance with db and log on a SAN
> 
> 
> On zaterdag, augustus 31, 2002, at 05:18 , Eliza Lau wrote:
> 
> > I recently moved the 36G TSM database and 10G log from attached SCSI
> > disk
> > drives to a SAN. Backing the db now takes twice as long as it used to
> > (from 40 minutes to 90 minutes).  The old
> > attached disk drives are non-RAID and TSM mirrored.  The SAN drives are
> > RAID-5 and TSM mirrored.  I know I have to pay a penalty for writing to
> > RAID-5.  But considering the massive cache of the SAN it should not be
> > too bad.  In fact, performance of client backups hasn't suffered.
> >
> > However, the day after the move, I noticed that backup db ran for twice
> > as long.  It just doesn't make sense it will take a 100% performance hit
> > from reading from RAID-5 disks.  Our performance guys looked at the sar
> > data and didn't find any bottlenecks, no excessive iowait, paging, etc.
> > The solution is to move the db and log
> > back to where they were.  But now management says: "We purchased this
> > very expensive 2T IBM SAN and you are saying that you can't use it."
> > Meanwhile, our Oracle people happily report that they are seeing
> > the performance of their applications enjoy a 10% increase.
> >
> > Has anyone put their db and log on a SAN and what is your experience?
> 
> Yes we have. SAN is very bad for database performance. We used it as a
> temp storage space while we were reorganizing the ssa disks on our TSM
> server. The reason SAN attached storage gives poor performance is the
> size of the disks. You now have just a few large disks for your
> database, while in the past you probably had a few more smaller disks to
> hold the same amount of data. Disk seek times have not kept pace with
> disk size, so while the disks are a lot bigger, they take about the same
> amount of time to find a block of data. Since almost every database read
> access requires a seek, more spindles give more bandwidth and thus better
> performance. It is even worse in RAID5, since now all disks must have read
> the block of data before the RAID5 controller can return anything.
> 
> Raid5 will give you another performance hit, especi

Re: backup performance with db and log on a SAN

2002-09-03 Thread Eliza Lau

Roger,

Thanks for the detailed analysis.  This is what I was planning to do: move
the db back to attached SCSI drives.  Reconfiguring one drawer in the Shark
to non-RAID, as another person suggested, is out of the question since TSM
is using only a small portion of the Shark.  Please read the other messages
that I posted for our SAN configuration.

Eliza

>
> What a FASCINATING data point!
>
> I think the problem is simply that it is RAID5. The WDSF/ADSM/TSM/ITSM
> Database is accessed rather randomly during both normal operations, and
> during database backup. RAID5 is optimized for sequential I/O
> operations. It's great for things like conventional email systems that
> use huge mailbox files, and read and rewrite them all at once. But not
> for this particular database. Massive cache is worse than useless,
> because not only are you reading from random disk locations, but each
> time you do, your RAID box is doing a bunch of wasted I/O to refill the
> cache from someplace else as well. Over and over for each I/O operation.
>
> On our system, I once tried limited RAID on the Database, in software
> using the AIX Logical Volume Manager, and it ran 25% slower on Database
> Backups. Striping hurts, too. So I went the other way, and got bunches
> of small, fast plain old JBOD disks, and it really sped things up. (Ask
> your used equipment dealer about a full drawer of IBM 7133-020 9.1gb SAA
> disk drives - they are cheap and ideally suited to the TSM DB.) Quite
> simply, more disk arms mean a higher multiprogramming level within the
> server, and better performance. Seek distances will always be high with
> a random access pattern, so you want more arms all seeking those long
> distances at the same time.
>
> OTOH, the Log should do fine with RAID5, since it is much more
> sequential. Consider removing TSM Mirroring of the Log when you put it
> back into RAID5.
>
> Can you disable the cache, or at least make it very small? That might
> help.
>
> A very good use of your 2TB black box of storage: Disk Storage Pools.
> The performance aspects of RAID5 should be well suited to online TSM
> Storage Pools. You could finally hold a full day's worth of backups
> online in them, which is an ideal situation as far as managing migration
> and copy pool operations "by the book". This might even make client
> backups run faster. RAID5 would protect this data from media failure, so
> you don't need to worry about having only one copy of it for a while.
> Another good use: Set up a Reclamation Storage Pool in it, which will
> free up a tape drive and generally speed reclamation. Tape volumes are
> getting huge these days, so you could use this kind of massive storage,
> optimized for sequential operations, very beneficially for this.
>
> So, to summarize, your investment in the SAN-attached Black Box O' Disk
> Space is still good, for everything you probably planned to put in it,
> EXCEPT for the TSM Database. That's only 36GB in your case, so leaving
> it out of the Big Box is removing only 2% of it. If the other 98% works
> well, the people who funded it should be happy.
>
> P.S. I'm preparing a presentation for Share in Dallas next spring on
> this exact topic; I really appreciate interesting data points like this.
> Thank you for sharing it.
>
> Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]
>
>
> On Sat, 31 Aug 2002, Eliza Lau wrote:
>
> >I recently moved the 36G TSM database and 10G log from attached SCSI disk
> >drives to a SAN. Backing the db now takes twice as long as it used to
> >(from 40 minutes to 90 minutes).  The old
> >attached disk drives are non-RAID and TSM mirrored.  The SAN drives are
> >RAID-5 and TSM mirrored.  I know I have to pay a penalty for writing to
> >RAID-5.  But considering the massive cache of the SAN it should not be
> >too bad.  In fact, performance of client backups hasn't suffered.
> >
> >However, the day after the move, I noticed that backup db ran for twice
> >as long.  It just doesn't make sense it will take a 100% performance hit
> >from reading from RAID-5 disks.  Our performance guys looked at the sar
> >data and didn't find any bottlenecks, no excessive iowait, paging, etc.
> >The solution is to move the db and log
> >back to where they were.  But now management says: "We purchased this
> >very expensive 2T IBM SAN and you are saying that you can't use it."
> >Meanwhile, our Oracle people happily report that they are seeing
> >the performance of their applications enjoy a 10% increase.
> >
> >Has anyone put their db and log on a SAN and what is your experience?
> >I have called it in to Tivoli support but has yet to get a callback.
> >Has anyone noticed that support is now very non-responsive?
> >
> >server; AIX 4.3.3,  TSM 4.2.1.15
> >
> >Thanks,
> >Eliza Lau
> >Virginia Tech Computing Center
> >1700 Pratt Drive
> >Blacksburg, VA 24060
> >[EMAIL PROTECTED]
> >
>



Re: Cross platform restores?

2002-09-03 Thread Rafael Mendez

Hi Steve,
Although there are some tricks to do that, I do not think the practice is advisable.
Why? Because "data is money", and restoring data wherever you want means, in my
opinion, data "traveling" without the control of its owner. (Do not assume you are the
"good one"; think of people attacking your data, your network, and so on.)
At the technical level, that practice could cause problems on the TSM server and TSM clients, and
finally, remember that the NT file system (NTFS) is not the same as Unix file systems.

Just my point of view.
Rafael

-- Mensaje original --
to: "Copper, Steve" <[EMAIL PROTECTED]>
cc:
date: 8/30/2002 11:50:52 AM
subject: Cross platform restores?



> Hi All,
>
> Quick question:
>
> Is it possible to do a cross platform restore? If so, how?
>
> I have some users who wish to restore a backup image of an NT4 server onto a
> unix(AIX) server for purposes of DR (not my place to ask why). In my mind
> this could be possible as it is effectively only a binary file. I have been
> trying the "q backup" command with the fromnode option on the AIX server but
> it can't seem to see any files from the NT node. The access is set on the NT
> server to allow all users from all nodes access to the filespaces so I don't
> think this is a problem.
>
> TIA
>
> Steve Copper
>
> _
>
> This email and any files transmitted with it are confidential and
> intended solely for  the use of the individual or  entity to whom
> they  are addressed.  If  you have received  this email in  error
> please notify [EMAIL PROTECTED]
>
> This  footnote also confirms  that this  message  has been  swept
> for all known viruses by the MessageLabs  Virus Scanning Service.
> For further  information  please visit http://www.messagelabs.com


___
Get your free StarMedia Email account. Register today!
http://www.starmedia.com/email



Re: backup performance with db and log on a SAN

2002-09-03 Thread Remco Post

On maandag, september 2, 2002, at 01:49 , Daniel Sparrman wrote:

> Hi Eliza
>
> As I understand it, each "tape-HBA" has 3 3590E FC drives connected to it. The
> two 21G database disks - is each connected to its own HBA?
>
> According to spec sheets, the 3590E FC could handle speed up to 100MB/s
> with 3:1 compression. This means that with 3 drives, you could have up
> to
> 300MB/s. However, the HBA will only theoretically handle 125MB/s, or
> 250MB/s with 2Gb FC.
>
Fortunately, a 3590E will do only about 16MB/s or so, so 3 drives will
not fill up a FC link. 3:1 compression is very rare in my experience;
2:1 or less is far more likely...
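The arithmetic behind these competing figures is straightforward: effective throughput is the native rate times the compression ratio, capped by the HBA link. A sketch, taking the 14MB/s native rate and roughly 100MB/s of usable 1Gb FC bandwidth quoted in this thread as assumptions:

```python
# Effective tape throughput: native rate x compression, capped by the link.

def effective_rate(native_mb_s, compression, link_mb_s, drives=1):
    return min(native_mb_s * compression * drives, link_mb_s)

# Three 3590E drives sharing one 1Gb FC HBA:
print(effective_rate(14, 2.0, 100, drives=3))  # 84.0 (link not saturated)
print(effective_rate(14, 3.0, 100, drives=3))  # 100  (link is the cap)
```

So a single drive only approaches 100MB/s at better than 6:1 compression, which is why realistic 2:1 data leaves the link comfortably underutilized.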


---
Met vriendelijke groeten,

Remco Post

SARA - Stichting Academisch Rekencentrum Amsterdamhttp://www.sara.nl
High Performance Computing  Tel. +31 20 592 8008Fax. +31 20 668 3167
PGP keys at http://home.sara.nl/~remco/keys.asc

"I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer
industry
didn't even foresee that the century was going to end." -- Douglas Adams



Re: backup performance with db and log on a SAN

2002-09-03 Thread Eliza Lau

Daniel,


> 
> Hi Eliza
> 
> As I understand it, each "tape-HBA" has 3 3590E FC drives connected to it. The 
> two 21G database disks - is each connected to its own HBA?

Yes, you are correct.

> 
> According to spec sheets, the 3590E FC could handle speed up to 100MB/s 
> with 3:1 compression. This means that with 3 drives, you could have up to 
> 300MB/s. However, the HBA will only theoretically handle 125MB/s, or 
> 250MB/s with 2Gb FC.

We have 1Gbit FC adapters.  Are you sure that the 3590E FC drives can
stream at 100MB/s?   We were told that the tape drives are the bottleneck
and not the adapters.

> 
> This means that the tape drives will not stream the data, because the data 
> flow will be queued (some data to drive1, some data to drive2, some data to 
> drive3, and so on). There should be some wait % on the drives.
> 
> What kind of machine are you running? The smallest machine I know of 
> that is sold today and could handle this number of adapters is the 
> P-Series 660, which has 3 or 6 PCI busses. Normally, you don't want 
> multiple FC HBA and Gb Ethernet adapters mixed on the same PCI bus, as 
> these are considered high-performance adapters that can each utilize the whole 
> bandwidth of the bus.

We have a P-Series 660 7026-6H1 with 12 PCI slots. The other adapters in use
besides the 4 HBAs are a SCSI RAID adapter and a graphics adapter.
There are also 4 SCSI adapters left over from before the tape drives
were converted to FC. They should be taken out.

> 
> How many other adapters are installed in the machine? Gigabit Ethernet, 
> 10/100 ethernet and so on.

The 10/100 ethernet has its own adapter, not in the PCI slots.

> 
> Best Regards
> 
> Daniel Sparrman
> ---
> Daniel Sparrman
> Exist i Stockholm AB
> Propellervägen 6B
> 183 62 HÄGERNÄS
> Växel: 08 - 754 98 00
> Mobil: 070 - 399 27 51
> 
> 
> 
> 
> Eliza Lau <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 2002-09-02 12:19
> Please respond to "ADSM: Dist Stor Manager"
> 
>  
> To: [EMAIL PROTECTED]
> cc: 
> Subject:Re: backup performance with db and log on a SAN
> 
> 
> Thanks Daniel,
> 
> The host has 4 HBAs. Two attached to each of the two switches.  One HBA is 
> 
> zoned for tape traffic and the other for disk traffic.  Only the db and 
> log
> are on the Shark F20.  The disk stgpools are on attached SCSI disk drives.
> The disks are 36G.  There are 8 LSSes.  Each LSS is striped across 8 
> drives
> and sliced into 21G partitions.  The db is on two of these 21G slices.
> Tape library is a 3494 with 6 3590E FC drives.  All drives are connected
> to the two switches, three each.
> 
> Come to think of it, the Shark only has 2 HBAs.  I have to verify this, 
> since
> I am typing this from home.  But it only has to read from the Shark and 
> write
> to tape through another port in the switch.
> 
> Eliza
> 
> > Hi
> > 
> > The large disks you are talking about, are you meaning large as 36GB, 
> 72GB 
> > an so on, or are you talking about LUN-sizes?
> > 
> > In a shark, you can have very large LUN:s, but they will consist of a 
> > large number of smaller SSA-based hard drives. This means that you will 
> > not have a performance impact on the disks.
> > 
> > Normally performance issues on TSM / SAN has to do with having disks and 
> 
> > tapes on the same HBA. DB transactions is very randomly written, so if 
> you 
> > for example are doing migration, TSM will write to both disks and 
> tape(DB 
> > transactions to disk, migration from disk, migration to tape). This will 
> 
> > have a huge impact, as the HBA is arbitrated(which means only one write 
> > can be done at a time). Also, doing backups and migration directly to 
> > tape, assumes the ablility to write continous, sequential data. If you 
> > have the DB on the same card, your clients, or the TSM server, won't be 
> > able to stream data to the tapes, leading to poor performance.
> > 
> > Eliza, I'd suggest you use somekind of monitoring tool, like the 
> Storwatch 
> > specialist, to see throughput from/to disks and tape. I'm sure that if 
> you 
> > separate the disks from the tape, you will see a performance upgrade.
> > 
> > How many HBA:s do you have in your Shark?
> > 
> > Also, check to see that the logs, diskpook and database is not on the 
> same 
> > volumes. This will also generate bad performance.
> > 
> > The best practice is to have db & log on one HBA, diskpool on one HBA, 
> and 
> > tape drives on one or more HBA:s(depending on amount of tapes drives). 
> > This is however recommended for large environments, with > 200 mid-size 
> > clients.
> > 
> > Best Regards
> > 
> > Daniel Sparrman
> > ---
> > Daniel Sparrman
> > Exist i Stockholm AB
> > Propellervägen 6B
> > 183 62 HÄGERNÄS
> > Växel: 08 - 754 98 00
> > Mobil: 070 - 399 27 51
> > 
> > 
> > 
> > 
> > Remco Post <[EMAIL PROTECTED]>
> > Sent by: "ADSM: Dist Stor Manager" <

Re: How to list files backed up

2002-09-03 Thread Don France

If you use the GUI, you can click on the column heading "Backup Date";  it
will sort the list.  BTW, if your system is setup properly, you can also
just "grep" the dsmsched.log file to find messages for a given file --
combined with tail, you can zero in on the specific date in question.
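Don's grep-the-log suggestion, in script form. The dsmsched.log line layout shown here is only an assumption for illustration:

```python
# Pull the lines of a client schedule log that mention a given file.

def lines_mentioning(log_text, filename):
    return [line for line in log_text.splitlines() if filename in line]

sample = (
    "08/30/2002 21:05:12 Normal File-->  1,024 /home/user/report.txt [Sent]\n"
    "08/30/2002 21:05:13 Normal File-->  2,048 /home/user/notes.txt [Sent]"
)
for line in lines_mentioning(sample, "report.txt"):
    print(line)
```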

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cahill, Ricky
Sent: Friday, August 30, 2002 12:53 AM
To: [EMAIL PROTECTED]
Subject: How to list files backed up


I must be missing something really obvious: all I want to do is list all
the files that were backed up by a node in its last backup, but I can't seem
to find any simple way to do this.

Help!

Thanks in advance

  ..Rikk




Equitas Limited, 33 St Mary Axe, London EC3A 8LL, UK
NOTICE: This message is intended only for use by the named addressee
and may contain privileged and/or confidential information.  If you are
not the named addressee you should not disseminate, copy or take any
action in reliance on it.  If you have received this message in error
please notify [EMAIL PROTECTED] and delete the message and any
attachments accompanying it immediately.

Equitas reserve the right to monitor and/or record emails, (including the
contents thereof) sent and received via its network for any lawful business
purpose to the extent permitted by applicable law

Registered in England: Registered no. 3173352 Registered address above





Re: TSM upgrade

2002-09-03 Thread Don France

I suggest you install the TSM-4.1 (not 5.1), restore the db from "old" TSM
server as part of the migration;  that way, (a) you preserve all the old
backups (without need to export/import), and (b) you get the added
space/performance on Win2K that you currently lack on the old, NT box.
After a successful "migration" to the new TSM server box, upgrade to 5.1.1.5
(or later)... read the README docs (server first, then clients later is only
*one* way to do it) about all the migration considerations.  In this case,
you *may* be required to audit the db before the upgrade is allowed to
proceed.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mario Behring
Sent: Monday, August 19, 2002 7:49 AM
To: [EMAIL PROTECTED]
Subject: TSM upgrade


Hi list,

I have to upgrade my TSM Server and clients from 4.1 to 5.1. The new
server, TSM 5.1, will be running on a different machine under Windows 2000
Server. The old one is now running on a Windows NT Server system. My
storage unit is a IBM 3590 tape library (SCSI connected). The TSM 4.1
database is 17GB in size and the Recovery Log is 2GB.

Do you guys have any tips on how I should do this?  I mean, which is the
best and most secure way to do it.  I've heard that I cannot simply back up
the TSM 4.1 database and restore it on the TSM 5.1 server.  And I can't
install TSM 5.1 over 4.1 because the old server is... well... old, and
there is no space left on the disks.

Any help will be appreciated.

Thanks.

Mario Behring

__
Do You Yahoo!?
HotJobs - Search Thousands of New Jobs
http://www.hotjobs.com



Re: Solaris 9

2002-09-03 Thread Mark Stapleton

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
John Bremer
> Some of our users are asking about a Solaris 9 TSM client?  It is not yet
> listed at the support system page.
>
> Anyone have an idea about when support for this will be available?  Any
> experience with the current TSM Solaris client on 9?

From http://www.tivoli.com/support/storage_mgr/adclsuns.htm:

No currently available version of the TSM client for Solaris supports
Solaris version 9. That's not to say that you couldn't get it to work, but
Tivoli does not officially support it, and will tell you so when you seek
help for it. You probably already know this. Why not try it yourself, and
tell us all how it came out?

IBM/Tivoli employees who are participants in this list will not be able to
make any official announcements or pronouncements about Solaris 9, so all
the responses you'll most likely get here will be about worth the electrons
it took to send the message to you. Relax, and it'll be here when it's here.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: Solaris 9

2002-09-03 Thread Bob Booth - UIUC

I asked the same question several weeks ago.  I have not yet called
ITSM support on the issue, however, I was able to hack the install script
and Solaris 9 seems to work with no problems (so far).

I will let the list know what support tells me about client support for
Solaris 9.

Good Luck.

bob

On Mon, Sep 02, 2002 at 07:25:46AM -0600, John Bremer wrote:
> *SMers,
>
> Some of our users are asking about a Solaris 9 TSM client?  It is not yet
> listed at the support system page.
>
> Anyone have an idea about when support for this will be available?  Any
> experience with the current TSM Solaris client on 9?
>
> Thanks.  John



Re: About Backupset

2002-09-03 Thread Joshua Bassi

No, only one client's data can be stored on a backupset tape.


-- 
Joshua S. Bassi 
IBM Certified - AIX 4/5L, SAN, Shark 
Tivoli Certified Consultant - ADSM/TSM 
eServer Systems Expert -pSeries HACMP 

AIX, HACMP, Storage, TSM Consultant 
Cell (831) 595-3962 
[EMAIL PROTECTED]


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
mephi
Sent: Monday, September 02, 2002 12:20 AM
To: [EMAIL PROTECTED]
Subject: About Backupset

Hello!
Does anyone know whether multiple clients' backupsets can be put onto one tape
volume?
Example: 4 clients need to make weekly backups, using a single 100GB LTO
tape volume for the weekly backupsets.
Thanks for the help

Mephi Liu



Re: test for DRM

2002-09-03 Thread Don France

The recommended way to do a DR test is to take your copy pool (or a subset,
or a special set) to another machine (NOT YOUR PRODUCTION box). Whether or
not you decide to use DRM, follow the instructions found in the Admin
Guide -- this is about the most well-written piece of info across all the
books; even if you are not using the DRM component, it will teach you the
essential parts of performing a successful DR exercise.

Using a backup of your production TSM db, along with the other essential
config files, you install TSM and load the db on an alternate (DR-test)
machine;  if you're platform- and TSM-savvy, you could just install it on
the same system and even use the same library, to do an informal, in-house
test.  It's this alternate system that gets the "DR" treatment of marking
volumes destroyed, etc.; but proceeding down this path without first
ensuring your backups are being done successfully will only lead to
disappointment... like putting the cart in front of the horse.

Meanwhile, Sun (their website, docs, or PS folks) can help you with the specs
needed to configure your solution;  specifically, total system configuration
can be thrown off balance by inadequate distribution of the component
loads -- especially, NICs need a healthy allocation of CPU power (as in
Gigabit Ethernet cards).  JBOD is great, but in some cases you just need
protection -- consider RAID 0+1 (or go straight to tape for data that's
mission-critical and cannot simply be re-backed up the next night).

Hope this helps.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Chetan H. Ravnikar
Sent: Saturday, August 31, 2002 9:09 AM
To: [EMAIL PROTECTED]
Subject: test for DRM


Hi there and thanks in advance for all your tips and recommendations


We have a huge, distributed, new TSM setup, with servers spread across the
campuses. We recently moved from 3 ADSM 3.1 servers to 9 TSM 4.2.2
servers, all direct-attached Sun 280Rs (Solaris 2.8), with Sun T3 storage and
Spectra Logic 64K libraries.

I have a few questions

1. We have TSM working on Solaris 2.8 with Sun T3 storage for mirrored DB
   and storage pools. Our performance is nowhere close to Sun's quoted T3
   sustained write rate of 80MB/s. Recovery logs are on external D130
   disk packs.

   Has anyone seen a setup like this with Sun, and is this normal? My writes
   to disk pools are at 20 to 30 MB/s, and that is slow. I have a RAID 5
   setup for the storage pools, but Tivoli suggests JBOD for storage pools
   rather than RAID 5!? Then how do I protect myself from a disk failure
   during a critical quarter-end financial backup, since the source gets
   overwritten as soon as they throw the data onto my primary storage
   pool (T3)?

2. One such setup has a StorageTek L7000 library, and my customer wanted me
to prove that the tapes from offsite do work.

Tivoli suggests that I not test DRM on a production system. But I had no
choice but to at least test for bad media on the primary tape pool, so I
went ahead and picked *a* node.

With a select statement, I marked all the tapes in the primary tape pool
destroyed (for that node), and started a restore of a filesystem. Beforehand
I had recalled a bunch of tapes from the off-site pool pertinent to the same
node, had them checked in as private, and waited to see whether TSM would
pick those tapes since the onsite ones were marked destroyed. This process
has been rather lengthy, tedious, and unsuccessful.

Has anyone done a simpler test for bad media to prove that the off-site
tapes do work? Worse, the test I performed came back with data integrity
errors; my customers are not happy, and even with all traces set up, Tivoli
was unclear how that happened.

(Tivoli claimed there could be a flaw in my DRM process.)

3. The last question: during a copy storage pool process, if I *cancel* the
process (since it took days), does it pick up from where it stopped the next
time I start it (manually or via a script)?


Thanks for all your responses. Forgive me; my knowledge is pretty limited,
and I only started learning Tivoli when I started this project.

Cheers..
Chetan



Error 10 when installing Web Client

2002-09-03 Thread Bill Dourado

Hi,

I run into "Error 10 updating the registry password" and
"The registry node key couldn't be located"

WHEN

attempting to install the TSM Web Client via the GUI wizard, on an NT (4 SP6)
backup client.

Could somebody please help me ?

I have already tried removing the TSM services, uninstalling TSM, and
reinstalling.

TSM server and backup client are both 4.1.


T.I.A


Bill Dourado









Re: Unload/load db and long expirary

2002-09-03 Thread MC Matt Cooper (2838)

Hi Rodney,
I am at about the same place.  My question is: how long did it take
to do the UNLOAD and RELOAD of your DB?
Matt

-Original Message-
From: Rodney clark [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 02, 2002 3:15 AM
To: [EMAIL PROTECTED]
Subject: Unload/load db and long expirary

Just a quick heads-up.

We had been struggling with a really slow expiration process on a 33 GB
database. Typically the process would inspect/delete about 30 to 50 objects
a second, so expiration was inspecting about a million objects a day. No
good: a client was archiving a million objects a day.

On the weekend we did an unload/load (the second this year) and, bingo, we
are now inspecting about 300 to 400 objects per second; incidentally, most
of the objects are being deleted.

The database went from 70% usage to 50%.
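The improvement is easy to quantify. A hypothetical back-of-envelope helper (the rates and the million-objects-a-day figure come from the post; the function itself is just illustrative):

```python
# Back-of-envelope arithmetic for the expiration rates described above.

def hours_to_expire(objects: int, rate_per_sec: float) -> float:
    """Hours needed to inspect `objects` at `rate_per_sec` objects/second."""
    return objects / rate_per_sec / 3600.0

daily_objects = 1_000_000

before = hours_to_expire(daily_objects, 40)   # ~30-50 obj/s before unload/load
after = hours_to_expire(daily_objects, 350)   # ~300-400 obj/s afterwards

print(f"before: {before:.1f} h/day, after: {after:.1f} h/day")
```

At the old rate, expiration needed roughly seven hours a day just to keep pace with one client's archives; after the unload/load it is under an hour.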

Let's just hope it stays this way.
Server is 4.2.1.11.





Re: Solaris 9

2002-09-03 Thread Remco Post



On maandag, september 2, 2002, at 11:26 , Mark Stapleton wrote:
> IBM/Tivoli employees who are participants in this list will not be able
> to
> make any official announcements or pronouncements about Solaris 9, so
> all
> the responses you'll most likely get here will be about worth the
> electrons
> it took to send the message to you. Relax, and it'll be here when it's
> here.
>

Which is a bit of a problem: Solaris 9 has been publicly available for a
few months now, and the betas were publicly available for well over half
a year before that. So why is there still no support from Tivoli for the
most recent release of one of the most-used server platforms worldwide?
This is actually a problem in our company as well; we're postponing
long-overdue upgrades due to the lack of a TSM client. Hacking the client
install script is IMHO not the way to go. Tivoli should just do a better
job of providing this client.

---
Met vriendelijke groeten,

Remco Post

SARA - Stichting Academisch Rekencentrum Amsterdam    http://www.sara.nl
High Performance Computing    Tel. +31 20 592 8008    Fax. +31 20 668 3167
PGP keys at http://home.sara.nl/~remco/keys.asc

"I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer
industry
didn't even foresee that the century was going to end." -- Douglas Adams





Virtual Volumes...

2002-09-03 Thread asr

Greetings all.  For those of you using virtual volumes, how large do you make
the volumes?  When I started doing this, I had the following opinions:

+ The server volumes should be significantly smaller than the remote physical
  storage.

It would be a real pain for most of the virtual volumes to be spread over
different tapes; multiple physical mounts for each virtual mount?  Ugh.


+ The server volumes should be large enough not to be a huge load.

Each virtual volume is a "file" on the hosting server.  For each TB of data,
that's 1000 files if volumes are 1G; actually probably more like 1300, with
reclamation at 50%.  You certainly don't want them as small as 100M.
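The files-per-TB arithmetic above can be sketched as follows. The 0.75 average fill is an assumption: volumes reclaimed once they are 50% empty range from full to half-full, so they average out around three-quarters full. The helper is purely illustrative.

```python
# Rough sketch of the file-count arithmetic above: how many FILE volumes
# the hosting server ends up holding per TB of stored data.

def volumes_per_tb(vol_size_gb: float, avg_fill: float = 0.75) -> int:
    """Approximate file count per TB when volumes average `avg_fill` full
    (0.75 approximates reclamation triggering at 50% reclaimable)."""
    return round(1024 / (vol_size_gb * avg_fill))

for size in (0.1, 1, 10):
    print(f"{size:>4} GB volumes -> ~{volumes_per_tb(size)} files/TB")
```

At 1GB volumes this gives roughly 1,365 files per TB, which matches the "more like 1300" estimate; at 100MB it balloons past 13,000, and at 10GB it drops to about 140.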


I selected 1GB for my virtual volumes, but am starting to re-think the issue,
maybe going for 10G or so.

Anyone want to share thought processes?


- Allen S. Rout



Re: backup performance with db and log on a SAN

2002-09-03 Thread Roger Deschner

In my experience, "smart load balancing" does not really exist. I
thought I had heard of it, so I went looking on my system to see if it
was doing anything to help. If the Database has plenty of room in it,
and you add a new extent in hope of spreading out the I/O load, that
extent will not be used at all. You cannot spread out the I/O load just
by adding more DBvols. Since I could find no evidence of it helping, I
suspect it can't hurt much either, in a RAID situation.

I have had to use the brute force method - "dumb load balancing". That
is, squeezing the database into the shape I want with DELETE DBVOL.
Making this work takes careful advance planning, but the payoff can be
big. RAID may make this harder, since the underlying physical disk
structure may be hidden from you.

I cannot speak for RAID, because I have avoided it, but for JBOD disks,
is not a big problem to have DB and Log on the same disk, except that
their I/O patterns are very different, so you might want to tune them
separately. It is also not a problem to have multiple DB extents on the
same physical disk, as long as they are adjacent to one another.
Considering the random I/O pattern of the ITSM database, two adjacent
extents should be similar to one larger extent, in terms of average seek
distances. Since it neither helps nor hurts, you can use this as an
opportunity to make your manual load balancing come out the way you want
it.

The performance effect of multiple Logvols is truly neutral, because the
Log is a circular buffer, so generally only one of them is used at a
time. There are exceptions, but not enough to matter for performance.
From a practical standpoint, having the Log split into at least two
parts can give you flexibility in moving it around without taking the
server down. In the case of the Log, I/O load balancing happens
automatically, since the whole thing is one big circular buffer.

Several people here on this list have said that Database unload/reload
is a very great help. The official ITSM manuals also say this. There are
two problems, however. First, it would take more downtime than we could
afford. Second, the benefit is temporary, and will gradually be un-done
as a part of normal processing. There were rumors that IBM was
considering making a background process that could gradually reorg the
database when the system wasn't busy. This would be great, and could
achieve a permanent improvement in database disk performance. I hope
they do this.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


On Mon, 2 Sep 2002, Remco Post wrote:

>On maandag, september 2, 2002, at 11:26 , Daniel Sparrman wrote:
>
>> Hi
>>
>> The large disks you are talking about, are you meaning large as 36GB,
>> 72GB
>> an so on, or are you talking about LUN-sizes?
>>
>
>Disk size, 72 GB or so
>
>> In a shark, you can have very large LUN:s, but they will consist of a
>> large number of smaller SSA-based hard drives. This means that you will
>> not have a performance impact on the disks.
>>
>
>I know, I also know that you will have performance impact on your disks.
>I noticed that especially the IBM ssa raid controller (4-P)  gives very
>bad performance on any kind of raid. I don't have a shark, so I can't
>talk about it's raid controller. Having eg. both the db volumes and the
>logvolumes on the same raidgroup will for sure give you very bad
>performance on the disks. Also, I don't think it's a good idea to have a
>database (or log) spread across multiple partitions on the same
>raidgroup. TSM will try to do 'smart' load balancing, which will
>decrease performance in that case since the disks will have to do more
>seeks.
>
>---
>Met vriendelijke groeten,
>
>Remco Post
>
>SARA - Stichting Academisch Rekencentrum Amsterdam    http://www.sara.nl
>High Performance Computing    Tel. +31 20 592 8008    Fax. +31 20 668 3167
>PGP keys at http://home.sara.nl/~remco/keys.asc
>
>"I really didn't foresee the Internet. But then, neither did the computer
>industry. Not that that tells us very much of course - the computer
>industry
>didn't even foresee that the century was going to end." -- Douglas Adams
>



Help!!! ANR8447E and ANR1401W

2002-09-03 Thread Sandor Maklari

Hi All,

We have a following TSM server configuration:

AIX 4.3.3 ML9
Atape   7.1.5.0
tivoli.tsm.devices.aix43.rte  4.2.2.10
tivoli.tsm.server.rte  4.2.2.10
IBM 3583 LTO library with 2 drives  (Drive firmware: 25D4)

Nodes' maxnummp option value: 2
Device class mount limit: DRIVES
Library drives are online.

When both drives in library LIB03 are in use (state: idle, in use, or
dismounting) and another client tries to access a drive, we get the
following error messages:

08/28/02   23:58:25  ANR0406I Session 33633 started for node RMAN_NODE (TDP
                     Oracle AIX) (Tcp/Ip 1.1.2.3(33538)).
08/28/02   23:58:25  ANR8447E No drives are currently available in library
                     LIB03.
08/28/02   23:58:25  ANR1401W Mount request denied for volume 900019 -
                     mount failed.
08/28/02   23:58:26  ANR8447E No drives are currently available in library
                     LIB03.
08/28/02   23:58:26  ANR1401W Mount request denied for volume 900019 -
                     mount failed.
08/28/02   23:58:26  ANR8447E No drives are currently available in library
                     LIB03.
08/28/02   23:58:26  ANR1401W Mount request denied for volume 900019 -
                     mount failed.
08/28/02   23:58:26  ANR0525W Transaction failed for session 33633 for node
                     RMAN_NODE (TDP Oracle AIX) - storage media
                     inaccessible.

Any ideas?

Sorry for my poor English!

Maklári Sándor

E-mail: [EMAIL PROTECTED]




Re: LAN FREE BACKUPS

2002-09-03 Thread David Gratton

Make sure you stop and restart the dsmsta process every time you make a
change on the TSM server. So stop/start the dsmsta process and try again.


Dave Gratton
IBM
416-774-1396





FRANCISCO ROBLEDO <[EMAIL PROTECTED]> on 09/02/2002 06:04:33 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: David Gratton/IBMSOTS/ScotiabankGroup)
Subject:  LAN FREE BACKUPS



I am having some problems with the LAN Free backups.
Server is 5.1.1 (AIX 4.3.3), client is 5.1.1 , and
Storage Agent 5.1.1.

I have been making node backups for 3 weeks.

This weekend, the backups reported this:

ANR0522W Transaction failed for session 6331 for node
AIXSAP (AIX) - no space available in storage pool
SAN_AIXSAP_INCR and all successor pools.
08/29/02 12:12:39 ANR0403I Session 6331 ended for
node AIXSAP (AIX).
08/29/02 12:12:43 ANE4952I (Session: 6330, Node:
AIXSAP)  Total number of objects inspected:   83,795
08/29/02 12:12:43 ANE4954I (Session: 6330, Node:
AIXSAP)  Total number of objects backed up:420

But the storage pool has empty volumes and the library has scratch
volumes.

If I disable the LAN-free option, the backups are successful.

Thanks for any help

Francisco.




DSMSERV DUMPDB -> DEVCLASS FILE (DEVTYPE=FILE) ---- Possible? If so, what are the requirements ?

2002-09-03 Thread Kent Monthei

I made a test copy of my TSM database volumes on a test server, then tried
running DSMSERV DUMPDB.  The test server doesn't have access to a tape
drive, but has substantial disk storage, so I defined a DEVCLASS FILE
DEVTYPE=FILE in 'devconfig', then ran DSMSERV DUMPDB using FILE as a
target.   That seemed to work for a minute or two (DUMPDB allocated a file
on disk), but then terminated with an out-of-space error.   The target
volume had more than enough space to hold the entire 50GB DB, but only
about 500MB were allocated before the operation failed.
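For what it's worth, a minimal devconfig entry of the kind described might look like this (the device class and directory names are made up; MAXCAPACITY is the parameter that caps the size of each FILE volume, so a small value there is one plausible cause of an early out-of-space error):

```
DEFINE DEVCLASS DUMPFILE DEVTYPE=FILE MAXCAPACITY=2048M DIRECTORY=/tsmdump
```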

Did I miss a requirement, or is this simply not possible?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



Consolidating NT servers to a NAS

2002-09-03 Thread Kauffman, Tom

We're about to replace four NT fileservers with one IBM NAS-box. Is there
any way to migrate or export/import the TSM backup data? The directory
structure is changing along with the UNC name and the drive letter
(filesystem). In addition, we're splitting our biggest server over two
drives in the new box.

I can't see a good way to keep my 'x' generations of \\fileserv1\d$ when the
new system is going to have \\fileserv1\d$\data as one drive (fileserve\z$)
and \\fileserv1\d$\userhome as another (fileserve\y$).

Adding to the fun & frolic, \\fileserv2\d$ will also end up under
\\fileserve\z$.

Suggestions?

Do I just camp on the old data for 45 days and then blow it away? The vast
majority of the files of interest change daily, so my five-version backup
just barely covers the work-week. The rest of the files are huge seldom-used
access databases that (a) probably wouldn't be detected as corrupt for
months and (b) probably won't have a useable copy in TSM by the time we find
out it's corrupt anyhow.

TIA

Tom Kauffman
NIBCO. Inc



select for copy groups??

2002-09-03 Thread Joseph Dawes

Does anyone have a SQL query for finding which servers are associated with
which copy group?





Joseph Dawes
I/T Infrastructure - Unix Technical Support
Chubb & Son, a Division of Federal Insurance Company
15 MountainView Road
Warren,New Jersey 07059

Office:908.903.3890



OpenVMS's backups don't send summary information

2002-09-03 Thread Henrry Aranda

Hi,

I have a TSM server v5.1.1.4 on W2k, ABC client v3.1.0.4 for OpenVMS on
Alpha, TSM Clients v5.1.0.0 on W2k and TDPs for MSExchange and MSSQL.
When I back up an OpenVMS server with the ABC client, summary information
isn't sent to the TSM server; the summary table doesn't contain information
for these backup operations.
The backups with the TSM client and TDPs on Windows work fine. I applied the
latest patch available on the FTP site, but it didn't resolve the problem.

Any help will be appreciated.

Henry Aranda

_
MSN Photos is the easiest way to share and print your photos:
http://photos.msn.com/support/worldwide.aspx



Re: select for copy groups??

2002-09-03 Thread Mark Stapleton

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Joseph Dawes
> Does anyone have a sql for finding which servers are associated with which
> copy group?

Servers are not associated with copygroups. Copygroups are associated with
management classes, and there is a default management class for each policy
domain. Nodes are associated with domains, but a node can easily back up to
non-default management classes by using entries in the include/exclude list.
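A toy sketch of that include/exclude binding, for intuition only. The rule syntax and class names here are invented (the real client option-file syntax is richer), but as in TSM the list is read from the bottom up and the first matching rule wins:

```python
# Simplified sketch of how a client binds a file to a management class
# via an ordered include/exclude list. Rules are (pattern, class) pairs;
# the sentinel "EXCLUDE" marks an exclude rule, and None means "include
# with the default management class".
from fnmatch import fnmatch

def bind(path, rules, default_mc="DEFAULT"):
    """Return (included, mgmtclass) for `path`, reading rules bottom-up."""
    for pattern, mc in reversed(rules):
        if fnmatch(path, pattern):
            if mc == "EXCLUDE":
                return (False, None)
            return (True, mc or default_mc)
    return (True, default_mc)  # unmatched files are included by default

rules = [
    ("/tmp/*", "EXCLUDE"),
    ("/db/*", "ORACLE_MC"),   # hypothetical non-default class
]
print(bind("/db/redo01.log", rules))  # -> (True, 'ORACLE_MC')
```

This is why a node's data can land in several copy groups even though the node itself is only attached to one policy domain.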

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: multiple reclaim processes

2002-09-03 Thread Seay, Paul

I have a set of perl/ksh scripts that have automated this.  They work well.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 4:35 PM
To: [EMAIL PROTECTED]
Subject: Re: multiple reclaim processes


        Yes, I believe that's right: reclamation processes run one per
storage pool.

The only way I know to get around it is to write a script/select
statement to look for the tapes that are the least utilized and do a "move
data" on those tapes. I have not automated the process, but have a simple
select that gives me a list of all the tapes and the "pct_reclaim" from the
volumes table. The "pct_reclaim" is the opposite of "% utilized", so it's
kind of a "% empty" value.

select volume_name, pct_reclaim, stgpool_name from  volumes where
status='FULL' order by 3,2

        I run this script when I want to predict how changing the
reclamation threshold on a storage pool will affect processing; i.e., it
will show me how many tapes are above a given threshold and would be
reclaimed.
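The prediction step can be sketched like this (the volume data is made up for illustration; in practice the rows would come from the select above, run through dsmadmc):

```python
# Sketch of the threshold-prediction idea above: given (volume, pct_reclaim)
# rows like the SELECT returns, list the FULL tapes a given reclamation
# threshold would pick up.

def reclaimable(volumes, threshold):
    """Volumes whose reclaimable percentage exceeds `threshold`."""
    return [v for v, pct in volumes if pct > threshold]

volumes = [("A00001", 82.5), ("A00002", 65.0),
           ("A00003", 12.3), ("A00004", 71.8)]

for thr in (60, 80):
    tapes = reclaimable(volumes, thr)
    print(f"threshold {thr}%: {len(tapes)} tapes -> {tapes}")
```

Lowering the threshold from 80% to 60% in this toy data triples the number of candidate tapes, which is exactly the kind of before/after estimate the select is used for.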

Thanks,
Ben

-Original Message-
From: Malbrough, Demetrius [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 12:58 PM
To: [EMAIL PROTECTED]
Subject: Re: multiple reclaim processes


Donald,

As far as I know, only one reclamation process per stgpool!

Regards,

Demetrius Malbrough
AIX Storage Specialist

-Original Message-
From: Levinson, Donald A. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 11:00 AM
To: [EMAIL PROTECTED]
Subject: multiple reclaim processes


Is there a way to have multiple reclaim processes for the same storage pool?
It seems silly to have an 8-spindle library and only use two of them to try
to keep up with my reclaim processing.

TSM 5.1.1
AIX 4.3.3


