Re: Re. War stories: Restores > 200GB ?

2002-03-17 Thread Daniel Sparrman
Hi

When reaching volumes over 200GB for a server, you need to find ways to minimize the amount of data that has to be restored in case of a disaster. If this is a fileserver, the most efficient way to speed up the restore time of the whole server would be to implement HSM.

Normally, 40-60% of the data on a fileserver is older than 60 days. Moving this information to tape, while still letting the users see the files using, for example, Explorer, would result in only having to restore 40-60% of the data in case of a disaster (the only information that has to be restored is the data that has not been moved by the HSM client). The HSM client migrates data transparently to the users. This means that even if the information is migrated, the user will still see the information as if it were on disk. The only difference is that it will take a few more seconds to open the file.

This means that a restore that would normally take 6-8 hours would only take 3-4 hours. Optimizing the performance of the client would probably save you an hour. However, if this is a critical machine, one hour isn't good enough. Therefore, implementing HSM is the simplest and most efficient way to secure the restore time of the server.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Bergkällavägen 31D
192 79 SOLLENTUNA
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51

-"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: -
To: [EMAIL PROTECTED]
From: "Don France (P.A.C.E.)" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 03/17/2002 12:39AM
Subject: Re. War stories: Restores > 200GB ?

There are several keys to speed in restoring a large number of files with TSM; they are:

  1.. If using WindowsNT/2000 or AIX, be sure to use DIRMC, storing the primary pool on disk, migrate to FILE on disk, then copy-pool both (this avoids tape mounts for the directories not stored in the TSM db due to ACLs). I've seen *two* centralized ways to implement DIRMC -- (1) using a client-option-set, or (2) establish the DIRMC management class as the one with the longest retention (in each affected policy domain);
  2.. Restore the directories first, using -DIRSONLY (this minimizes NTFS db-insert thrashing);
  3.. Consider multiple, parallel restores of high-level directories -- despite potential contention for tapes in common, you want to keep the data flowing on at least one session to maximize restore speed;
  4.. Consider using CLASSIC restore, rather than no-query restore -- this will minimize tape mounts, as classic restore analyzes which files to request and has the server sort the tapes needed -- though tape mounts may not be an issue with your high-performance configuration;
  5.. If you must use RAID-5, realize that you will spend TWO write cycles for every write; if using EMC RAID-S (or ESS), you may want to increase write-cache to as large as allowed (or turn it off altogether). Using 9 or 15 physical disks will help.

A client of mine just had a server disk failure last weekend; it had local disk configured with RAID-5 (hardware RAID controller attached to a Dell-Win2000 server) -- after addressing items 1 to 3, above, we were able to saturate the 100Mbps network, achieving 10-15 GB/Hr for the entire restore -- the only delays incurred were attributable to tape mounts... this customer had an over-committed silo, so tapes not in the silo had to be checked in on demand. 316 GB restored in approx. 30 hours. Their data was stored under 10 high-level directories, so we ran two restore sessions in parallel -- only had two tape drives -- and disabled other client schedules during this exercise.

For your situation, 250 GB and millions of files, and assuming DIRMC (item #1, above), you should be able to see 5-10 GB/Hr -- 50 hours at 5 GB/Hr, 25 hours at 10 GB/Hr. So you are looking at two or three days, typically.

Large numbers of small files is the "Achilles' heel" of any file-based backup/restore operation -- restore is the slowest (since you are fighting with the file system of the client OS). Because of the way file systems traverse directories and reorganize branches "on the fly", it's important to minimize the "re-org" processing (in NTFS, by populating the branches with leaves AFTER first creating all the branches). We did some benchmarks and compared notes with IBM; on another client, we developed the basic expectation that 2-7 GB/Hr was the "standard" for comparison purposes -- you can exceed that number by observing the first 3 recommended configuration items, above.

How to mitigate this: (a) use image backup (now available for Unix, soon to be available on Win2000) in concert with file-level progressive incremental; and (b) limit your file server file systems to either 100 GB or "X" million files, then start a separate file system or server upon reaching that threshold... You need to test for your environment to determine what is the acceptable standard to implement.

Hope this helps.

Don France
Technical Architect - Tivoli Certified Consultant
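The arithmetic behind the HSM claim above is easy to check. This is a back-of-envelope sketch; the 200 GB volume and 25 GB/hr restore rate are illustrative assumptions chosen to match the 8-hour baseline quoted, not figures from the post:

```python
# With HSM, only resident (non-migrated) data must be restored after a
# disaster; data already migrated to tape needs no restore.
def restore_hours(total_gb, migrated_fraction, rate_gb_per_hr):
    resident_gb = total_gb * (1.0 - migrated_fraction)
    return resident_gb / rate_gb_per_hr

print(restore_hours(200, 0.0, 25))  # no HSM: 8.0 hours
print(restore_hours(200, 0.5, 25))  # 50% migrated: 4.0 hours
```

With 40-60% of the data migrated, the 6-8 hour restore drops to 3-4 hours, as stated.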

Local Backupset

2002-03-17 Thread Robert Ouzen

Hi to All

I need the correct steps to create a local backupset for a client in case
the server is down. I have already generated a backupset for the client on
the server, on a DLT.

Any ideas …….

Regards Robert Ouzen
[EMAIL PROTECTED]



Re: Local Backupset

2002-03-17 Thread Daniel Sparrman
Hi

There is no difference between a local and a server-based backupset. You can take the backupset tape from the server library, insert it into a locally attached DLT tape drive on the client, and then restore the backupset from that tape drive.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Bergkällavägen 31D
192 79 SOLLENTUNA
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51

-"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: -
To: [EMAIL PROTECTED]
From: Robert Ouzen <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 03/17/2002 11:27AM
Subject: Local Backupset

Hi to All

I need the correct steps to create a local backupset for a client in case the server is down. I have already generated a backupset for the client on the server, on a DLT.

Any ideas ...

Regards Robert Ouzen
[EMAIL PROTECTED]
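For reference, the server-side generation and client-side local restore described above look roughly like this (TSM 4.x-era syntax; the node name, set name, filespace, and device class are placeholders -- verify the exact syntax for your server and client levels):

```
/* On the TSM server: write the backupset to a DLT device class */
GENERATE BACKUPSET mynode weekly_set /home DEVCLASS=dltclass RETENTION=365

/* On the client, with the tape moved to a locally attached drive: */
dsmc restore backupset weekly_set.12345 -location=tape
```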



Re: Magstar 3494 Tape Library

2002-03-17 Thread Seay, Paul

The 3494 works with TSM, Legato, and NetBackup.  I think ArcServe works
also.  B1A drives are not SAN-ready.  I think you have to convert them to
E1A to get the FC upgrade feature, but I am not sure about that.  The FC
upgrade feature is about 7K.  You can look up the feature upgrade
prerequisites on WWW.IBMLINK.IBM.COM.  You may also be able to use a SAN
Data Gateway, though some customers have trouble getting them implemented.
The issue is that you have to use exactly what IBM SAN Central says to use.

-Original Message-
From: Zosimo Noriega (ADNOC IS&T) [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 17, 2002 2:27 AM
To: [EMAIL PROTECTED]
Subject: Magstar 3494 Tape Library


hi,
I hope anybody can help me.  I have a 3494 TL with 4 3590B1A drives.  Is it
SAN-ready?  Can I connect these tape drives (3590B1A) to the IBM SAN, or do
I need to upgrade the drives to 3590E1A to get SAN connectivity?  Is this
library for exclusive use by ADSM/TSM software only?

thanks in advance,
Zosi Noriega
ADNOC-UAE



Re: Magstar 3494 Tape Library

2002-03-17 Thread Adolph Kahan

1. You have to upgrade to 3590E1A.
2. The library does not have to be exclusive to TSM. You can share it
with other systems, as long as the other system does not want exclusive
use. For example, lots of shops share their 3494 library between OS/390
and TSM running on W2K, AIX, and other supported platforms.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Zosimo Noriega (ADNOC IS&T)
Sent: Sunday, March 17, 2002 2:27 AM
To: [EMAIL PROTECTED]
Subject: Magstar 3494 Tape Library

hi,
I hope anybody can help me.  I have a 3494 TL with 4 3590B1A drives.  Is it
SAN-ready?  Can I connect these tape drives (3590B1A) to the IBM SAN, or do
I need to upgrade the drives to 3590E1A to get SAN connectivity?  Is this
library for exclusive use by ADSM/TSM software only?

thanks in advance,
Zosi Noriega
ADNOC-UAE



Re: Stop with DRM

2002-03-17 Thread Seay, Paul

Actually, I think the issue is that you are taking the default for the MOVE
DRMEDIA command, which is SOURCE=DBFULL.  If you specify SOURCE=DBS, it will
get the DBSNAPSHOT tapes.

I use DBNONE and then issue a specific MOVE DRMEDIA command for each
DBSNAPSHOT volume that I want to eject.  The reason is we run a DBFULL
onsite and a DBSNAPSHOT every day, plus 2 offsite DBSNAPSHOTs.  That way I
am fairly certain that I have a DBSNAPSHOT that I can restore from in a
disaster.
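As a sketch, the approach described above (DBNONE on the bulk move, then an explicit move per snapshot volume) could look like the following -- the volume name is a placeholder and states/options should be checked against your server level:

```
/* Eject offsite media without touching any database backup volumes */
MOVE DRMEDIA * WHERESTATE=MOUNTABLE TOSTATE=VAULT SOURCE=DBNONE

/* Then eject a specific database snapshot volume by name */
MOVE DRMEDIA VOL001 WHERESTATE=MOUNTABLE TOSTATE=VAULT SOURCE=DBSNAPSHOT
```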

-Original Message-
From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 17, 2002 2:14 AM
To: [EMAIL PROTECTED]
Subject: Re: Stop with DRM


On Tue, 29 Jan 2002 10:03:27 +0100, it was written:

>if I make a backup of the TSM database the databasetape is moved to a
>DRM off-site state. Now I want to get rid of that feature.

Easy. If you run

Q DRM

the default flag for source is DBB (full backups). If you don't want to send
your backups to off-site vaulting, run

Q DRM SOURCE=DBS

The only database backups that will go to off-site vaulting then would be
database snapshots.

--
Mark Stapleton ([EMAIL PROTECTED])



Re: Re. War stories: Restores > 200GB ?

2002-03-17 Thread Seay, Paul

On the ESS the write cache is not optional.  You must use it.  That is where
all the write performance comes from.  It eliminates the RAID-5 Write
penalty and basically changes the writes to RAID-3, no reads before write,
when writing sequentially.

Not sure what you mean by 2 write cycles.  RAID-1 has two write cycles.
RAID-5 has one write cycle to all the drives in the stripe set unless all
the parity is on a single drive, but no one does it that way anymore (EMC
kind of does with RAID-S).  The issue with RAID-5 is the read before write
so that you can recalculate the parity.  On sequential write of full
internal 32K blocks (6 or 7 depending on the disk group) you do not need to
read back because you are going to write the whole stripe.  So, the ESS
calculates the parity in the SSA adapters and writes it to the disks as the
IO occurs.

The rest of the stuff in here is really good stuff, though backupsets are
extremely hard to manage for customers with only 3494/3590 libraries,
because the client cannot do the restore.  The reality is they run so fast
anyway that many of the issues backupsets eliminate do not necessarily
apply if you use collocation and parallel restores.

-Original Message-
From: Don France (P.A.C.E.) [mailto:[EMAIL PROTECTED]]
Sent: Saturday, March 16, 2002 6:40 PM
To: [EMAIL PROTECTED]
Subject: Re. War stories: Restores > 200GB ?


There are several keys to speed in restoring a large number of files with
TSM; they are:
  1.. If using WindowsNT/2000 or AIX, be sure to use DIRMC, storing the
primary pool on disk, migrate to FILE on disk, then copy-pool both (this
avoids tape mounts for the directories not stored in the TSM db due to ACLs);
  I've seen *two* centralized ways to implement DIRMC -- (1) using
client-option-set, or (2) establish the DIRMC management class as the one
with the longest retention (in each affected policy domain);
  2.. Restore the directories first, using -DIRSONLY (this minimizes NTFS
db-insert thrashing);
  3.. Consider multiple, parallel restores of high-level directories --
despite potential contention for tapes in common, you want to keep the data
flowing on at least one session to maximize restore speed;
  4.. Consider using CLASSIC restore, rather than no-query restore -- this
will minimize tape mounts, as classic restore analyzes which files to
request and has the server sort the tapes needed -- though tape mounts may
not be an issue with your high-performance configuration;
  5.. If you must use RAID-5, realize that you will spend TWO write cycles
for every write;  if using EMC RAID-S (or ESS), you may want to increase
write-cache to as large as allowed (or turn it off, altogether).  Using 9 or
15 physical disks will help. A client of mine just had a server disk failure
last weekend;  it had local disk configured with RAID-5 (hardware RAID
controller attached to Dell-Win2000 server) -- after addressing items 1 to
3, above, we were able to saturate the 100Mbps network, achieving 10-15
GB/Hr for the entire restore -- only delays incurred were attributable to
tape mounts... this customer had an over-committed silo, so tapes not in
silo had to be checked-in on demand.  316 GB restored in approx. 30 hours.
Their data was stored under 10 high-level directories, so we ran two restore
sessions in parallel -- only had two tape drives -- and disabled other
client schedules during this exercise.
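Steps 2 and 3 above, expressed as client commands, would look roughly like this (a sketch only -- the drive letter and directory names are placeholders, and option spellings should be checked against your client level):

```
rem First pass: restore the directory structure only
dsmc restore -dirsonly "D:\*" -subdir=yes

rem Then run parallel sessions, one per high-level directory
dsmc restore "D:\dir1\*" -subdir=yes -replace=all
dsmc restore "D:\dir2\*" -subdir=yes -replace=all
```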

For your situation, 250 GB and millions of files, and assuming DIRMC (item
#1, above), you should be able to see 5 - 10 GB/Hr -- 50 hours at 5 GB/Hr,
25 hours at 10 GB/Hr.  So you are looking at two or three days, typically.
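The throughput arithmetic in the preceding paragraph works out directly; this trivial sketch just restates the 5-10 GB/Hr rates quoted above:

```python
# Elapsed restore time at a sustained throughput.
def hours_to_restore(total_gb, rate_gb_per_hr):
    return total_gb / rate_gb_per_hr

print(hours_to_restore(250, 5))   # 50.0 hours at 5 GB/Hr
print(hours_to_restore(250, 10))  # 25.0 hours at 10 GB/Hr
```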

Large numbers of small files is the "Achilles' heel" of any file-based
backup/restore operation -- restore is the slowest (since you are fighting
with the file system of the client OS). Because of the way file systems
traverse directories and reorganize branches "on the fly", it's important to
minimize the "re-org" processing (in NTFS, by populating the branches with
leaves AFTER first creating all the branches). We did some benchmarks and
compared notes with IBM; on another client, we developed the basic
expectation that 2-7 GB/Hr was the "standard" for comparison purposes -- you
can exceed that number by observing the first 3 recommended configuration
items, above.

How to mitigate this:  (a) use image backup (now available for Unix, soon to
be available on Win2000) in concert with file-level progressive incremental;
and (b) limit your file server file systems to either 100 GB or "X" million
files, then start a separate file system or server upon reaching that
threshold... You need to test for your environment to determine what is the
acceptable standard to implement.
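Mitigation (a) -- image backup in concert with file-level progressive incremental -- would be driven with commands along these lines (Unix client at the time of writing; the filesystem name is a placeholder):

```
# Periodic full image of the filesystem
dsmc backup image /fs1

# Daily file-level progressive incremental of the same filesystem
dsmc incremental /fs1
```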

Hope this helps.

Don France

Technical Architect - Tivoli Certified Consultant



Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA



Re: Archive Bit on Netware after Tape Backup/Restore

2002-03-17 Thread Cameron Ambrose

Hi Andy

I was under the impression (from a TSM rep) that TSM didn't pay any
attention to the archive bit, but tracked the dates/changes of files
itself.

Regards Cameron



From: Andy Carlson <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: 03/16/2002 03:12 AM
Subject: Archive Bit on Netware after Tape Backup/Restore
Please respond to "ADSM: Dist Stor Manager"

We have some big Netware (running Groupwise) servers that the admins
will backup/upgrade OS/restore using DLT tape instead of TSM.  The
problem is that they claim the archive bit is set off, so TSM backs all
the files up on the next incremental.  Has anyone else experienced
this?  Is there a neat utility to turn the archive bits back
on?  Thanks.

Andy Carlson |\  _,,,---,,_
[EMAIL PROTECTED]ZZZzz /,`.-'`'-.  ;-;;,_
BJC Health System   |,4-  ) )-,_. ,\ (  `'-'
St. Louis, Missouri'---''(_/--'  `-'\_)
Cat Pics: http://andyc.dyndns.org/animal.html



Re: Filespace name is blank

2002-03-17 Thread StorageGroupAdmin StorageGroupAdmin

Suffered the same problem a month or two ago.

This is an effect of the unicode processing within the TSM server code.
This problem is resolved in the latest release (4.2.1.8).  The actual
filespace name is correctly stored within the TSM database and will be
displayed as expected when you upgrade.


Peter Griffin
Sydney Water


>>> [EMAIL PROTECTED] 03/16/02 08:03pm >>>
Does anyone know why the filespace name is ...

Node Name   Filespace   FSID   Platform   Filespace   Is          Capacity   Pct
            Name                          Type        Filespace       (MB)   Util
                                                      Unicode?
---------   ---------   ----   --------   ---------   ---------   --------   ----
ELVISDB     ...            1   WinNT      NTFS        Yes          4 060,1   66,6
ELVISDB     ...            2   WinNT      NTFS        Yes          8 667,9   10,0
ELVISDB     ...            3   WinNT      NTFS        Yes         21 930,0   10,8
ELVISDB     ...            4   WinNT      SYSTEM      Yes              0,0    0,0



Christian Pallinder
Storage Specialist

Wineasy AB, Dalénum, Hus 112, SE-181 70 Lidingö
Phone: +46 8 563 110 00 Direct: +46 8 563 110 44
Cell: +46 701 880 044 Fax: +46 8 563 110 10
[EMAIL PROTECTED]





Bar Code Labels

2002-03-17 Thread Allan J Mills

Folks

I have a dilemma

Does anyone out there know where in Australia I can get bar code labels
for
LTO tapes for an IBM 3583 tape library.

Went through the exercise with a 7337 library and DLT7000, and after 4
attempts just manually labeled the tapes, as none that were supplied were
readable.

Not having much luck this time so thought I would ask the experts.
(IBM Tivoli team were not able to assist)


My Thanks to all

Axm



Re: Bar Code Labels

2002-03-17 Thread Clive Johnson

I have found out that Dell supplies them. If you contact the sales
department I am sure they can help you.

Clive




From: A.Mills@PATRICK.COM.AU
Sent by: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date: 18/03/2002 08:53
Subject: Bar Code Labels
Please respond to ADSM-L

Folks

I have a dilemma

Does anyone out there know where in Australia I can get bar code labels
for
LTO tapes for an IBM 3583 tape library.

Went through the exercise with a 7337 library and DLT7000, and after 4
attempts just manually labeled the tapes, as none that were supplied were
readable.

Not having much luck this time so thought I would ask the experts.
(IBM Tivoli team were not able to assist)


My Thanks to all

Axm






Andreas Buser is out of the office

2002-03-17 Thread Andreas Buser

I will be out of the office from 16.03.2002 and will return on
24.03.2002.

I will answer your message after my return.
For urgent questions, please contact T. Sellner (Tel 285 8646)
Email: [EMAIL PROTECTED]


Re: scheduler service overwrites baclient's password

2002-03-17 Thread Bill Boyer

I've seen problems on some Windows machines where, if the value returned by
%COMPUTERNAME% is in lower-case letters and you didn't specify NODENAME,
different portions of TSM react differently. What I saw was that the
Web client interface would change the encrypted stored password from what
was originally stored by the TSM B/A client and scheduler service.

I don't know if this is your problem, but try specifying NODENAME in
the DSM.OPT file and on the DSMCUTIL command that installs the service.
Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Joel Fuhrman
Sent: Friday, March 15, 2002 9:45 PM
To: [EMAIL PROTECTED]
Subject: scheduler service overwrites baclient's password


The password key which is located in the registry at
HKLM\software\ibm\adsm\backupclient\nodes\W7\adsm is removed
when the scheduler service is started.  The scheduled log contains:

03/15/2002 18:11:11 Querying server for next scheduled event.
03/15/2002 18:11:11 Node Name: W7
03/15/2002 18:11:11 Please enter your user id : ANS1029E
Communications have been dropped.
03/15/2002 18:11:11 Scheduler has been stopped.

Removing and re-installing the Scheduler Service and TSM (version 4.2.1.20)
did not help.


BACKGROUND: The system was created using a clone disk on which TSM was
installed.  After the disk is cloned, the hostname is changed along with
the SIDs.  The host then joins the domain.  The TSM client is used to
register with the TSM server.  Finally, the TSM scheduler service is
created, and it wipes out the password.

Any suggestions would be appreciated.
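Per the NODENAME suggestion made elsewhere in this thread, pinning the node name in both places would look roughly like this (the service name and password are placeholders; verify the dsmcutil option spellings for your client level):

```
rem In dsm.opt, fix the node name explicitly:
rem   NODENAME W7

rem Install the scheduler service with the same node name:
dsmcutil install /name:"TSM Scheduler" /node:W7 /password:secret /autostart:yes
```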



versioning for user's files

2002-03-17 Thread Ken Long

Hello all...

We're migrating our NetWare servers, currently backed up with ArcServe, to
W2K servers which will be backed up with TSM.  These NetWare servers
contain programs, users' home directories, and shared directories.

The existing NetWare server backup is a full backup every day, all to one
DLT tape.  The number of tapes in rotation provides for going about six
weeks, or 30 backups, before a tape is reused.  Also, at the end of each
month the tape is removed from the rotation and held for two years.

My challenge is to provide a reasonable level of backup availability with
TSM.  All I've backed up with TSM thus far are application servers where
users (other than administrators) don't have access.  We keep 7 versions on
those servers.

Our NetWare users sometimes ask for restores of files more than a year old,
and their requests have been fulfilled.  I'm struggling to come up with a
version scheme which will provide a similar level of service.

It seems a bit extreme, but my first thought is to keep 30 versions and do
a monthly backupset of the W2K file server.  But that 30 versions is having
a nasty effect on calculations of the resulting TSM database size.
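The database-size effect of keeping 30 versions can be estimated with a quick sketch. The bytes-per-object figure here is an assumed sizing rule of thumb, not a number from this thread; substitute your own measured value:

```python
# Rough TSM database sizing: each stored object (file version) costs
# roughly a fixed amount of database space.
def db_size_gb(num_files, versions_kept, bytes_per_object=600):
    # bytes_per_object is an ASSUMED rule-of-thumb value
    return num_files * versions_kept * bytes_per_object / (1024 ** 3)

# e.g. 5 million files at 30 versions each:
print(round(db_size_gb(5_000_000, 30), 1))
```

Even at a modest per-object cost, 30 versions of millions of files pushes the database into tens of gigabytes, which is the "nasty effect" described above.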

What kind of versioning do you set for user files, shared documents, etc.,
which are, in my mind, more volatile than application server files because
users are involved?  Any examples of the model you use would be
appreciated.

Thanks... Ken



Re: Restore OS of AIX 4.3.3

2002-03-17 Thread Al'shaebani, Bassam

This is from past experience:
(I'm sure this goes without saying, but you never know.) Initially, you
have to have been backing up the OS filesystems.
Once you install the client, assign the original IP address and highlight
the filesystems you need to restore.
Once the restore begins, you will be prompted to overwrite current
files (DO NOT OVERWRITE).
If you overwrite, it will corrupt your system files (not to mention the
ODM entries) and you will have to rebuild.
After the restore is complete, you should have your original system
back.
regards...
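The "do not overwrite" restore described above corresponds to something like the following (a sketch; the filespec is a placeholder, and existing files are left untouched rather than prompted for):

```
dsmc restore "/usr/*" -subdir=yes -replace=no
```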

-Original Message- 
From: Yahya Ilyas 
Sent: Fri 3/15/2002 11:27 AM 
To: [EMAIL PROTECTED] 
Cc: 
Subject: Restore OS of AIX 4.3.3



Recently we lost both mirror disks on one of our RS6000, AIX 4.3.3
systems.  I installed the base OS and the TSM client, and then restored
selected files on the new system.  How can I restore the complete OS?

We have TSM server level 4.1.4.

I have done a complete restore of an HP system by installing the base OS
on the first disk, then installing the OS on the second disk.  Then I
rebooted from the first disk, mounted the second disk and restored the
OS onto it; after the restore was complete, I rebooted from the second
disk.  Will this same procedure work on the AIX platform?  I am told
that on AIX, after restoring the OS onto a different hdisk, the system
will not boot.

Thanks


>   -
>   Yahya Ilyas
>   Systems Programmer Sr
>   Systems Integration & Management
>   Information Technology
>   Arizona State University, Tempe, AZ 85287-0101
>
>   [EMAIL PROTECTED]
>   Phone: (480) 965-4467
>
>





Re: versioning for user's files

2002-03-17 Thread Seay, Paul

The real question is: what is your business requirement?  Forget what you
have done in the past for a moment and ask your customers what their
business recovery requirement is.

If they could recover a file from forever ago and it changed once a
day, there is a large cost to do that.  Most customers will say they need
something from up to 3 years ago.  We arbitrarily presume we can just save
yearly or monthly tapes and get back what they want.  Let's say they created
a file and worked on it from March 10 through March 25 and needed to keep a
backup of it for 3 years.  Let's just say you saved every full backup at
the beginning of every month.  This file would be completely lost if it got
accidentally corrupted or deleted before April 1.  So who is the fool in
this scenario when that is discovered 3 years later?

TSM avoids all of this: you have a deleted-file policy.  You have
retention versions and a way to expire versions that get really old.  But
it always keeps the copy that matches the active backup on disk, unless you
as an administrator delete the file space or expire directories you do not
want using an exclude.dir.

All this said, you are probably looking at time as your retention
policy for inactive versions of a file.  The only question is how many
versions of a file a user thinks they need to go back to.  Remember,
with TSM you can use different management classes for different data on
the same client and manage it differently.

So, to answer you more directly: think about a very long deleted-file
expiration time and a lower number of retained deleted versions.  Think
about NOLIMIT (or a large number) for the number of versions, and a period
of time that reflects your recovery requirements.  No matter what you come
up with, if you use some kind of backupset type of thing you will lose a
customer's data eventually; it only saves a copy of the current active
data.  You will need a copy pool to send your data offsite with TSM,
meaning at least 2 tape drives and probably a lot more tapes, but the
recovery capability will be immeasurably improved.
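In copy-group terms, the suggestion above (many versions, long retention for inactive and deleted files) might be expressed like this -- the domain, policy set, and management class names and all the numeric values are illustrative, not a recommendation:

```
DEFINE COPYGROUP mydomain mypolicyset userfiles STANDARD TYPE=BACKUP -
  VEREXISTS=NOLIMIT VERDELETED=5 RETEXTRA=1095 RETONLY=1825
```

VEREXISTS/VERDELETED cap the version counts, while RETEXTRA/RETONLY (in days) bound how long inactive and last-remaining versions are retained.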

-Original Message-
From: Ken Long [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 17, 2002 8:22 PM
To: [EMAIL PROTECTED]
Subject: versioning for user's files


Hello all...

We're migrating our NetWare servers, currently backed up with ArcServe, to
W2K servers which will be backed up with TSM.  These NetWare servers contain
programs, users' home directories, and shared directories.

The existing NetWare server backup is a full backup every day, all to one
DLT tape.  The number of tapes in rotation provides for going about six
weeks, or 30 backups, before a tape is reused.  Also, at the end of each
month the tape is removed from the rotation and held for two years.

My challenge is to provide a reasonable level of backup availability with
TSM.  All I've backed up with TSM thus far are application servers where
users (other than administrators) don't have access.  We keep 7 versions on
those servers.

Our NetWare users sometimes ask for restores of files more than a year old,
and their requests have been fulfilled.  I'm struggling to come up with a
version scheme which will provide a similar level of service.

It seems a bit extreme, but my first thought is to keep 30 versions and do a
monthly backupset of the W2K file server.  But that 30 versions is having a
nasty effect on calculations of the resulting TSM database size.

What kind of versioning do you set for user files, shared documents, etc.,
which are, in my mind, more volatile than application server files because
users are involved?  Any examples of the model you use would be appreciated.

Thanks... Ken



Re: Bar Code Labels

2002-03-17 Thread Sis Team Sis Team

Hi,

We use a company called DISCOSOURCE, Australia Pty Ltd.

Richmond, VIC, Aust.
PH - (03) 9429 9355
www.discosource.com.au 

They have LTO, Optical, DLT, etc. tape and barcode labels.

Martin.
/\/\


>>> [EMAIL PROTECTED] 03/18/02 08:53am >>>
Folks

I have a dilemma

Does anyone out there know where in Australia I can get bar code labels
for
LTO tapes for an IBM 3583 tape library.

Went through the exercise with 7337 library and DLT7000 and after 4
attempts
just manually label'd the tapes as none that were supplied were readable.

Not having much luck this time so thought I would ask the experts.
(IBM Tivoli team were not able to assist)


My Thanks to all

Axm






Pooooor TDP for Exchange Performance via SAN

2002-03-17 Thread Karsten Huettmann

Hi everybody! 
I'm sitting here looking at my customer's environment, wondering why TDP
for Exchange is so slow via SAN.

We have a complete Win2000 environment with all the newest patches
installed (Server 4.2.1.12, StorageAgent 4.2.1.12, API 4.2.1.30, TDP
Exch. 2.2). We share two drives of a 3584 library via a SAN Data Gateway
(newest firmware).

The backup via LAN (!) is nearly as fast as it could be (up to 12 MB/s
without compression).

But the backup via SAN is very slow. It starts slow (at 1200 KB/s) and
slows down to 800 KB/s.

Does anyone have recommendations for tuning the dsm.opt file or the
tdpexc.cfg file?

My dsm.opt file:
tcpwindowsize 63
tcpbuffsize 32
txnb 25600

My tdpexc.cfg file:
buffers 4
buffersize 2048

... or any other ideas?
(Are there any rules of thumb for tuning these options?)

Thanx in advance,
Karsten Huettmann, c.a.r.u.s. IT AG, Germany
-
Mit freundlichen Grüssen / with regards
Karsten Hüttmann
--
Karsten Hüttmann
c.a.r.u.s. Information Technology AG
Advanced System Center
Bornbarch 9, 22848 Norderstedt, Germany

E-Mail: [EMAIL PROTECTED]
Firma: +49.40.51435.3231
Mobil: +49.171.7634388
Fax: +49.40.51435.



Re: Pooooor TDP for Exchange Performance via SAN

2002-03-17 Thread Seay, Paul

What else do you have on the SAN Data Gateway?  Does the client connect to
the same switch as the SAN Data Gateway?  There has been a lot of
discussion on this subject over the past weeks; it has always turned out
to be the configuration of the SAN Data Gateway.

-Original Message-
From: Karsten Huettmann [mailto:[EMAIL PROTECTED]] 
Sent: Monday, March 18, 2002 1:35 AM
To: [EMAIL PROTECTED]
Subject: Pooooor TDP for Exchange Performance via SAN


Hi everybody!

I'm sitting here looking at my customer's environment, wondering why TDP
for Exchange is so slow via SAN.

We have a complete Win2000 environment with all the newest patches
installed (Server 4.2.1.12, StorageAgent 4.2.1.12, API 4.2.1.30, TDP
Exch. 2.2). We share two drives of a 3584 library via a SAN Data Gateway
(newest firmware).

The backup via LAN (!) is nearly as fast as it could be (up to 12 MB/s
without compression).

But the backup via SAN is very slow. It starts slow (at 1200 KB/s) and
slows down to 800 KB/s.

Does anyone have recommendations for tuning the dsm.opt file or the
tdpexc.cfg file?

My dsm.opt file:
tcpwindowsize 63
tcpbuffsize 32
txnb 25600

My tdpexc.cfg file:
buffers 4
buffersize 2048

... or any other ideas?
(Are there any rules of thumb for tuning these options?)

Thanx in advance,
Karsten Huettmann, c.a.r.u.s. IT AG, Germany
-
Mit freundlichen Grüssen / with regards
Karsten Hüttmann
--
Karsten Hüttmann
c.a.r.u.s. Information Technology AG
Advanced System Center
Bornbarch 9, 22848 Norderstedt, Germany

E-Mail: [EMAIL PROTECTED]
Firma: +49.40.51435.3231
Mobil: +49.171.7634388
Fax: +49.40.51435.