Re: 8 Characters in TSM

2006-05-22 Thread William

Thanks Dave, that explains everything!

On 5/21/06, David Longo <[EMAIL PROTECTED]> wrote:


I started a thread on this about 9 months ago.

Short summary:  The "tsm_barcode_len" attribute on AIX applies to LTO1 and
LTO2 tapes only - note the comment next to it in your lsattr output.

The output is correct: LTO1/2 tapes will show up as 6 characters if that is
what you were initially using, and they will stay 6 characters, even for new
LTO1/2 tapes you label/initialize.

LTO3 barcodes are 8 characters.  Best practice going forward is to use 8
characters.

There is an IBM Technote, #1217789, on this with a lot of details.
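
Per that Technote, the attribute can be inspected and changed with the
standard AIX device commands - a sketch only (the device must be idle, and
whether you want to change the LTO1/2 handling at all is your call):

```
# show the current setting on the medium changer
lsattr -El smc0 -a tsm_barcode_len

# switch LTO1/LTO2 label handling to 8 characters (device not in use)
chdev -l smc0 -a tsm_barcode_len=8
```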


David Longo

>>> [EMAIL PROTECTED] 05/21/06 3:20 PM >>>
I am seeing strange behavior on my TSM Server (at least it is strange to me).

TSM Server 5.3.3 on AIX 5.3, tape library 3584 with LTO3 drives.

Checking the library on AIX, it shows "tsm_barcode_len 6":
root# lsattr -El smc0
alt_pathing      no                  Enable Alternate Pathing Support                  True
debug_trace      no                  Debug Trace Logging Enabled                       True
dev_status                           N/A                                               False
devtype          03584L32            Device Type                                       False
location                             Location                                          True
lun_id           0x1                 Logical Unit Number                               True
new_name                             New Logical Name                                  True
node_name        0x500507623f0d6e01  World Wide Node Name                              False
primary_device   smc0                Primary Logical Device                            False
reserve_support  yes                 Use Reserve/Release on Open and Close             True
retain_reserve   no                  Retain Reservation                                False
scsi_id          0x3                 SCSI Target ID                                    True
trace_logging    no                  Trace Logging Enabled                             True
tsm_barcode_len  6                   TSM Barcode Length for Ultrium 1/Ultrium 2 Media  True
ww_name          0x500507630f4c6d01  World Wide Port Name                              False


Library definition on TSM Server as:
DEFINE LIBRARY 3584 LIBTYPE=SCSI SHARED=YES AUTOLABEL=NO RESETDRIVE=YES


On the TSM Server, we have old LTO1 tapes named as below, although from the
Library Specialist we see "L1" at the end of them:
A1
A2
...

Yesterday I checked in new LTO3 tapes with the command:
LABEL libv 3584_LIB search=bulk checkin=scratch labels=barcode
The new tapes were labeled as:
B1L3
B2L3
..

How could this happen?

If those LTO1 tapes expire and are reclaimed, will they become 8 characters,
or keep 6?

Thanks in advance.

##
This message is for the named person's use only.  It may
contain confidential, proprietary, or legally privileged
information.  No confidentiality or privilege is waived or
lost by any mistransmission.  If you receive this message
in error, please immediately delete it and all copies of it
from your system, destroy any hard copies of it, and notify
the sender.  You must not, directly or indirectly, use,
disclose, distribute, print, or copy any part of this message
if you are not the intended recipient.  Health First reserves
the right to monitor all e-mail communications through its
networks.  Any views or opinions expressed in this message
are solely those of the individual sender, except (1) where
the message states such views or opinions are on behalf of
a particular entity;  and (2) the sender is authorized by
the entity to give such views or opinions.
##



Re: AW: TDP for Exchange - Management Class

2006-05-22 Thread Del Hoobler
This information is not correct.
You can schedule a Data Protection for Exchange node.
Use the ACTION=COMMAND type schedule.
There is an appendix in the Data Protection for Exchange
User's Guide that shows you exactly how to do it.
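
For the archives, such a schedule might look roughly like the following from
an administrative command line (the domain, schedule, and node names and the
command-file path here are all made up - the User's Guide appendix has the
real procedure and a sample command file):

```
def sched EXCH_DOMAIN EXCH_FULL action=command -
  objects="c:\program files\tivoli\tsm\tdpexchange\excfull.cmd" -
  starttime=21:00 period=1 perunits=days

def association EXCH_DOMAIN EXCH_FULL EXCH_NODE
```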

Thanks,

Del



"ADSM: Dist Stor Manager"  wrote on 05/20/2006
10:16:04 AM:

> Actually you will need a fourth node to provide scheduling.  The TDP for
> mail cannot be scheduled directly; only the backup-archive client can be
> scheduled.  You would schedule the BA client to issue a command to start
> the TDP.  Actually, I update my Exchange nodes once a week to a different
> domain to force them to use a different storage pool, but the retention is
> the same for both policy domains.  This forces the creation of an
> offsite copy of the Exchange data once a week.  This is adequate for my
> office, so your mileage may vary.  We also only have weekly Iron
> Mountain pickups.
>
> -Original Message-
> Hi,
>
> This is becoming a little confusing.  So what is the purpose of allowing
> many management classes per node if they can't be used for the purpose
> described?
>
> Domain           Nodename  Mgm           Include                           Schedule           Opt File
>
> Domain_Exchange  Exchange  EXCH_daily    INCLUDE "*\...\full" EXCH_daily    Sched_Exch_daily   dsm.opt (Default)
> Domain_Exchange  Exchange  EXCH_monthly  INCLUDE "*\...\full" EXCH_monthly  Sched_Exch_montly  dsm_monthly.opt
> Domain_Exchange  Exchange  EXCH_yearly   INCLUDE "*\...\full" EXCH_yearly   Sched_Exch_yearly  dsm_yearly.opt
>
> If I understand correctly, with this configuration every backup will
> rebind the files to the management class in effect!!
>
> And the only way to achieve this is to also create 3 different nodenames.
> Correct?
>
> So wasteful.
>
> Regards Robert Ouzen


Re: AW: TDP for Exchange - Management Class

2006-05-22 Thread Del Hoobler
I can understand how it can be a little confusing,
but if the management classes were used the way you suggest,
it would contradict the whole basis of management classes
in the backup copy groups. This is backup, not archive.
That means each backup object on the TSM server with
the same name gets the same management class.
This is a strict rule that has no exceptions.
It is the same as the BA client backing up a file.
That is why I recommend using a COPY-type backup
for your Exchange "archival" purposes. You can bind the
COPY-type backups to a management class that meets
your longer-term needs, and all of this can be automated
with no manual intervention.
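
A sketch of what that looks like in practice (the file and management class
names here are assumptions, not from Del's note):

```
rem dsm_yearly.opt points the COPY backups at a long-retention class, e.g.:
rem   INCLUDE "*\...\copy" EXCH_YEARLY

tdpexcc backup * copy /tsmoptfile=dsm_yearly.opt
```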

Thanks,

Del



"ADSM: Dist Stor Manager"  wrote on 05/20/2006 
04:56:43 AM:

> Hi ,
> 
> This is becoming a little confusing.  So what is the purpose of allowing
> many management classes per node if they can't be used for the purpose
> described?
> 
> Domain           Nodename  Mgm           Include                           Schedule           Opt File
>
> Domain_Exchange  Exchange  EXCH_daily    INCLUDE "*\...\full" EXCH_daily    Sched_Exch_daily   dsm.opt (Default)
> Domain_Exchange  Exchange  EXCH_monthly  INCLUDE "*\...\full" EXCH_monthly  Sched_Exch_montly  dsm_monthly.opt
> Domain_Exchange  Exchange  EXCH_yearly   INCLUDE "*\...\full" EXCH_yearly   Sched_Exch_yearly  dsm_yearly.opt
> 
> If I understand correctly, with this configuration every backup will
> rebind the files to the management class in effect!!
>
> And the only way to achieve this is to also create 3 different nodenames.
> Correct?
>
> So wasteful.
>
> Regards Robert Ouzen
> 
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
> Behalf Of Volker Maibaum
> Sent: Friday, May 19, 2006 11:57 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: AW: TDP for Exchange - Management Class
> 
> Hi, 
> 
> thanks to all for the very helpful feedback!
> 
> I didn't think of using the "copy backup" for monthly and yearly 
> backups. That will make it a lot easier
> 
> I guess that I will use the monthly policy for copy backups 
> INCLUDE "*\...\copy" MONTHLY
> 
> And use a separate dsm.opt file (dsm.yearly.opt) to bind the yearly 
> copy backups to the proper management class.
> C:\Programme\Tivoli\TSM\TDPExchange\start_full_yearly_backup.cmd
> pointing to dsm.yearly.opt
> 
> regards, 
> 
> Volker
> 
> 
> > 
> On Friday, 19.05.2006 at 11:34 +0200, Salak Juraj wrote:
> > Hi Del!
> > 
> > I might be wrong because I do not use TDP for Mail myself, I am only
> > reading the manuals, but I'd think about a simplified "solution 2 by Del":
> > 
> > Background: 
> > I think the only reason for having different requirements for monthly
> > and yearly backups is TSM storage space; if this were not a problem,
> > keeping monthly backups for as long as yearly backups should be kept
> > would be preferable.
> > 
> > a) create only 1 NODENAME
> > b) define 
> > INCLUDE "*\...\full"  EXCH_STANDARD and maybe
> > INCLUDE "*\...\incr" EXCH_STANDARD and maybe
> > INCLUDE "*\...\diff"  EXCH_STANDARD
> > appropriately to your regular (daily) backup requirements
> > 
> > c) define
> > INCLUDE "*\...\copy" EXCH_MONTHLY_AND_YEARLY appropriately to the
> > maximal combined requirements of your monthly AND yearly backups, AND
> > have EXCH_MONTHLY_AND_YEARLY point to a separate TSM storage pool
> > (EXCH_VERYOLD)
> > 
> > d) on a regular basis (maybe yearly) check out all full tapes of the
> > EXCH_VERYOLD storage pool from the library.
> > Disadvantage: backup storage pool reclamation issues because of
> > offsite tapes in a primary storage pool, but this can be solved as well.
> > 
> > You will end up with a slightly less automated restore (only) for very
> > old data, but with a very clear and simple concept for everyday and
> > every-month backup operations, and with more granularity (monthly) even
> > for data older than a year.
> > 
> > I am interested in your thoughts and doubts about this configuration!
> > 
> > regards
> > Juraj
> > 
> > 
> > 
> > 
> > > -----Original Message-----
> > > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
> > > Behalf Of Del Hoobler
> > > Sent: Friday, May 12, 2006 15:14
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: Re: TDP for Exchange - Management Class
> > > 
> > > Hi Volker,
> > > 
> > > Are you using separate NODENAMEs for each of the different DSM.OPT 
> > > files? If not, your solution won't do what you think.
> > > 
> > > Data Protection for Exchange stores objects in the backup pool, not 
> > > the archive pool. That means, each full backup gets the same TSM 
> > > Server name (similar to backing the same file name up with the BA 
> > > Client.) It follows normal TSM Server policy rules.
> > > That means, if you are performing FULL backups using the same
> > > NODENAME, each time you back up with a different management class,
> > > all previous backups are rebound to that class.

TSM 5.3.3 startup in /etc/inittab on AIX ( problem?)

2006-05-22 Thread Lawrence Clark
Hello,

We upgraded TSM from 5.2 to 5.3.3  over the weekend.

We also upgraded AIX on that server from 5.2 to 5.3.3

After the AIX upgrade, TSM aborts on startup from /etc/inittab

However, opening a terminal window and entering the same startup brings
up TSM with no problem. In fact, it has now been running for several days.

The /etc/inittab entry:

autosrvr:2:once:/usr/tivoli/tsm/server/bin/rc.adsmserv >/dev/console
2>&1 #Start
 the Tivoli Storage Manager server

The startup in the terminal window:

/usr/tivoli/tsm/server/bin/rc.adsmserv >/dev/console 2>&1

Anyone experienced similar problems?














The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain information that is confidential, privileged, and/or otherwise exempt 
from disclosure under applicable law.  If this electronic message is from an 
attorney or someone in the Legal Department, it may also contain confidential 
attorney-client communications which may be privileged and protected from 
disclosure.  If you are not the intended recipient, be advised that you have 
received this message in error and that any use, dissemination, forwarding, 
printing, or copying is strictly prohibited.  Please notify the New York State 
Thruway Authority immediately by either responding to this e-mail or calling 
(518) 436-2700, and destroy all copies of this message and any attachments.


Re: TSM 5.3.3 startup in /etc/inittab on AIX ( problem?)

2006-05-22 Thread Richard Sims

Without resulting messages or AIX Error Log problem indications, we
can't provide a lot of assistance, but these thoughts...

 - It's not a good idea to direct service messages to the console,
   both because they become lost and because there can be issues
   with the console.  It's better to have something like:
     /usr/tivoli/tsm/server/bin/rc.adsmserv >> /var/log/tsmserv/tsmserv.log 2>&1

 - Starting an application like TSM from inittab can be problematic...
   The relative position can be an issue, as for example being too early in
   the system start sequence such that volumes are not yet ready.  Though the
   position may look okay, the subsystem starts may be asynchronous, and your
   process ends up in a race with other establishment processes.
   Further, by starting from Init, you may not have needed environment
   variables in effect, which are in a login session. (You can compensate for
   this by modifying rc.adsmserv.)

Look through your /var/adm/ras/conslog (if not over-cycled) and other
logging areas for substantive evidence of the problem.
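
For what it's worth, that inittab change can be made with the stock AIX
utilities - a sketch, assuming the /var/log/tsmserv directory already exists:

```
rmitab autosrvr
mkitab 'autosrvr:2:once:/usr/tivoli/tsm/server/bin/rc.adsmserv >> /var/log/tsmserv/tsmserv.log 2>&1'
```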

   Richard Sims

On May 22, 2006, at 9:26 AM, Lawrence Clark wrote:


Hello,

We upgraded TSM from 5.2 to 5.3.3  over the weekend.

We also upgraded AIX on that server from 5.2 to 5.3.3

After the AIX upgrade, TSM aborts on startup from /etc/inittab

However, opening a terminal window and entering the same startup brings
up TSM with no problem. In fact, it has now been running for several days.

The /etc/inittab entry:

autosrvr:2:once:/usr/tivoli/tsm/server/bin/rc.adsmserv >/dev/console
2>&1 #Start
 the Tivoli Storage Manager server

The startup in the terminal window:

/usr/tivoli/tsm/server/bin/rc.adsmserv >/dev/console 2>&1

Anyone experienced similar problems?


group collocation on virtual volumes/ remote-server volumes ?

2006-05-22 Thread Rainer Wolf

Hi TSMers,

I have looked through the archives and documentation but did not find this.
Scenario:
nodes back up to a local TSM server with only disk cache (limited space),
and the data migrates via server-to-server communication to a remote TSM
server holding the tapes (with much more space).

The question is now:
is it possible to make use of the group-collocation feature when
the disk cache is the primary storage pool and the next storage pool
is on a remote TSM server's virtual volumes?
...or does it simply have no effect to set this server-device-class storage
pool of virtual volumes to 'collocation=group'?

best regards and thanks in advance
Rainer


--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: TSM 5.3.3 startup in /etc/inittab on AIX ( problem?)

2006-05-22 Thread Lawrence Clark
the conslog shows:

1148172312 May 20 20:45:12 tsmserv syslog:err|error syslogd: unknown priority name "*": errno = 2

the entry in /etc/inittab  has remained much the same since ADSM 3

TSM was upgraded and brought up prior to the AIX  upgrade, so it
appears to be something related to the AIX upgrade.


>>> [EMAIL PROTECTED] 05/22/06 9:57 AM >>>
Without resulting messages or AIX Error Log problem indications, we
can't provide a lot of assistance, but these thoughts...

  - It's not a good idea to direct service messages to the console,
both because they become lost and because there can be issues
with the console.  It's better to have something like:
  /usr/tivoli/tsm/server/bin/rc.adsmserv >> /var/log/tsmserv/tsmserv.log 2>&1

  - Starting an application like TSM from inittab can be problematic...
The relative position can be an issue, as for example being too early in
the system start sequence such that volumes are not yet ready.  Though the
position may look okay, the subsystem starts may be asynchronous, and your
process ends up in a race with other establishment processes.
Further, by starting from Init, you may not have needed environment
variables in effect, which are in a login session. (You can compensate for
this by modifying rc.adsmserv.)

Look through your /var/adm/ras/conslog (if not over-cycled) and other
logging areas for substantive evidence of the problem.

Richard Sims

On May 22, 2006, at 9:26 AM, Lawrence Clark wrote:

> Hello,
>
> We upgraded TSM from 5.2 to 5.3.3  over the weekend.
>
> We also upgraded AIX on that server from 5.2 to 5.3.3
>
> After the AIX upgrade, TSM aborts on startup from /etc/inittab
>
> However, opening a terminal window and entering the same startup
> brings
> up TSM with no problem. In fact, it has now been running for several
> days.
>
> The /etc/inittab entry:
>
> autosrvr:2:once:/usr/tivoli/tsm/server/bin/rc.adsmserv >/dev/console
> 2>&1 #Start
>  the Tivoli Storage Manager server
>
> The startup in the terminal window:
>
> /usr/tivoli/tsm/server/bin/rc.adsmserv >/dev/console 2>&1
>
> Anyone experienced similar problems?




Re: TSM 5.3.3 startup in /etc/inittab on AIX ( problem?)

2006-05-22 Thread Richard Sims

Larry - Review your /etc/syslog.conf file...

The 5.3 AIX level may be more stringent about its historic contents,
or the AIX 5.3 install may have changed something in it.
It sounds like there is an asterisk in a Priority position in the file.
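
For illustration (not your actual file): AIX syslogd accepts '*' as a
facility but not as a priority, so the first form below produces exactly
that error, while the second is valid:

```
*.*       /var/adm/syslog.log     (rejected: "*" is not a priority name)
*.debug   /var/adm/syslog.log     (valid: matches all priorities)
```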

  Richard Sims

On May 22, 2006, at 10:23 AM, Lawrence Clark wrote:


the conslog shows:

1148172312 May 20 20:45:12 tsmserv syslog:err|error syslogd: unknown priority name "*": errno = 2

the entry in /etc/inittab  has remained much the same since ADSM 3

TSM was upgraded and brought up prior to the AIX  upgrade, so it
appears to be something related to the AIX upgrade.


Re: Moving our TSM Server off MVS/ZOS to Win2003

2006-05-22 Thread Zoltan Forray/AC/VCU
Just finished doing the same thing.

We created a new TSM server on RH Linux 4 (we had too many issues with
Windows not wanting to share Fibre Channel-attached tape drives) and
either:

1.  Performed EXPORT TO SERVER
2.  Created new same-named nodes and then deleted the old backups from the
z/OS server.

Sure, moving some nodes took many days (around 100GB per 24 hours is what
we could get out of the mainframe, since it only has a 10/100Mb NIC).

We had about 12TB to move. This has been in progress for 3-4 months.




Shannon Bach <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" 
05/19/2006 12:56 PM
Please respond to
"ADSM: Dist Stor Manager" 


To
ADSM-L@VM.MARIST.EDU
cc

Subject
[ADSM-L] Moving our TSM Server off MVS/ZOS to Win2003







We recently received the okay to move our current TSM Server off the
MVS/ZOS mainframe to a Windows server.  Along with this, we will be
getting an IBM 3584 Library with (6)TS1120 Jaguar tape drives ...this will
be exclusive to TSM and may be located at an offsite location (still
waiting for a decision from above).  And I'll have 800 GB of disk from an
IBM DS6800.  I'll have to export/move the current data from a 3594 Magstar
ATL, plus some older archived data on a VTL, to the Jaguar drives.  That
will consist of moving data from 3590E carts with around 20 GB of data to
cartridges with a capacity of 300 GB.

Having always been a "mainframer" :~)... I am wondering if anyone else
here has gone through this transition and wouldn't mind passing on some
useful tips.  I have been browsing on the Internet for a redbook or white
paper... even a checklist of considerations, but haven't found much as
yet.

Any tips would be greatly appreciated...and feel free to email me
directly.

Thank you...  Shannon





Madison Gas & Electric Co
Operations Analyst -Data Center Services
Information Management Systems
[EMAIL PROTECTED]
Office 608-252-7260


Re: group collocation on virtual volumes/ remote-server volumes ?

2006-05-22 Thread Allen S. Rout
>> On Mon, 22 May 2006 16:14:28 +0200, Rainer Wolf <[EMAIL PROTECTED]> said:



> the question is now:
> is it possible to make use of the group-collocation-feature when
> having disk-cache as Primary StoragePools and the next-storagepool
> is on a remote-tsm-server's virtual volumes ?

I think your question has a category answer: I have to answer
'yes, it works', but I don't believe you'll care once you think about it.

Yes, it works; that is, if nodes Alpha, Alex, and Allen are in
COLLOCGROUP 'A', then they will have all their data directed to the
same virtual volume.

Of course, that doesn't say anything about which physical tape volume
'A' will be sitting on when the time comes to read it.  So it won't
necessarily be on a separate physical tape from a volume holding group
B data.


If you want to get this kind of distinction working on the other side
of a virtual-volume link, you're going to have to split things up by
node on the virtual-volume target side.  Then, you'll have different
SERVER definitions on the source server, and different devclasses, and
different stgpools.
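
For anyone weighing that option, each group would need its own set of
source-side definitions, roughly like this (all names, passwords, and
addresses are placeholders):

```
define server REMOTE_A serverpassword=secret hladdress=remote.example.edu lladdress=1500
define devclass REMOTE_A_CLASS devtype=server servername=REMOTE_A mountlimit=2
define stgpool GROUP_A_POOL REMOTE_A_CLASS maxscratch=100
```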

Probably more pain in the patoot than you desire.

- Allen S. Rout


Re: multiple instance recommendations

2006-05-22 Thread Dave Mussulman
On Fri, May 19, 2006 at 10:05:38PM -0400, Allen S. Rout wrote:
> > I have questions about server sizing and scaling.  I'm planning to
> > transition from Networker to TSM a client pool of around 300 clients,
> > with the last full backup being about 7TB and almost 200M files.  The
> > TSM server platform will be RHEL Linux.
>
> > I realize putting all of that into one TSM database is going to make it
> > large and unwieldy.
>
> You may be underestimating TSM; the size there is well in bounds for
> one server.  The file count is a little high, but I'm not convinced
> it's excessive.  The biggest server in my cluster has 70M files, in a
> 70% full DB of 67GB assigned capacity. So if that means 67GB ~= 100M
> files, you might be talking 130-150GB of database.  I wouldn't do that
> on my SSA, but there's lots of folks in that size range on decent
> recent disk.

I don't have a lot of a priori knowledge of TSM, so any sizing
recommendations I've seen have come from the Performance Guides,
interviews with a few admins, and general consensus from web searches
I've done.  I agree that if the general concern is getting database
operations done in a manageable timeframe, as long as I can architect the
db disk well enough, it should size-scale-up fairly well.  Thoughts on
that?
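
Allen's figures suggest a rule of thumb of very roughly 0.7 KB of database
per stored object; as a back-of-the-envelope check (a rough ratio from one
shop, not an official sizing formula):

```shell
# ~67 GB of DB per ~100M files, i.e. about 0.7 KB of database per object
awk 'BEGIN { objects = 200e6; kb_per_object = 0.7;
             printf "%.0f GB\n", objects * kb_per_object / 1e6 }'
# prints: 140 GB
```

which lands in the 130-150GB range Allen mentions.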


> Do you feel your file size profile is anomalous?  My 70M file DB is
> ~23TB; that ratio is an order of magnitude off mine, and my big server
> is a pretty pedestrian mix of file and mail servers.

My numbers come from the Networker reports on the last full.  Our
domain is largely desktop systems with everything being backed up.
Those large numbers of small files puts our average system at 722k files
and 29GB of storage (averaging 39k a file?)  I'm trying to float towards
just protecting user data (and not backing up C:\Windows 250 times,
especially on systems where we would never do a baremetal restore.)  A
compromise against management that doesn't want to risk data loss by
people putting things in inappropriate (and not backed up) places would
be to use different MCs to provide almost no versioning outside of
defined user data spaces -- but that doesn't get around my high file
count problem.


> I have heard of some dual-attach SCSI setups, but never actually seen
> one in the wild.  If I were going to point at one upgrade to improve
> your life and future upgrade path, getting onto a shareable tape tech
> would be it.  I have drunk the FC kool-ade.  It's tasty, have some. :)

I don't disagree.  Maybe I'll start looking at storage routers to share
my SCSI drives over FC.


> > Tape access-wise, is there a hardware advantage putting multiple
> > instances on the same system?
>
> Yes, it solves your drive sharing problem.  All the TSM instances
> would be looking at /dev/rmtXX.  Your LM instance can do traffic
> direction to figure out who's got the drive, and they are all using
> the same bus, same attach.

Ah, that helps.  I guess I could design one beefy system as a sole-TSM
instance, and if things get too bogged down, split it into two (or
three) TSM instances on that same hardware and not have to worry (as
much) about device sharing because it's all local.  If it got to the
point where I'd want to split that to different hardware, I'd look at
moving the drives to FC and sharing them that way.  That makes sense to
me.
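
For reference, the sharing arrangement described above is defined once on
the library manager and once on each client instance - a rough sketch
(library and server names are placeholders):

```
/* on the library manager instance */
define library MSL6060 libtype=scsi shared=yes

/* on each library client instance */
define library MSL6060 libtype=shared primarylibmanager=LIBMGR1
```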


> I like the 'beefy box' solution for all purposes except test instance.
> Make sure it's got plenty of memory. 6G? 8?

Can you clarify the memory needs?  I was thinking 2G of RAM per
instance; would I need more?

Dave


Re: Journal Testing Tool / Mass file,folder creator

2006-05-22 Thread Andrew Raibeck
> Is there an updated Faq that addresses the unix client JBB?

No, not at this time.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

IBM Tivoli Storage Manager support web page:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager"  wrote on 2006-05-18
06:30:20:

> Andy,
>
> Thanks for the pointer to the JBB Faq. I realize you were addressing a
> Windows question when you replied, but for us unix customers the Faq
> appears to be dated and incorrect, namely:
>
> "What platforms and TSM client versions is Journal Based Backup
> available
> for?
>
> Journal Based Backup is currently only implemented on the Windows
> platform and
> runs on any supported 32-bit version of Windows NT/2000/XP/2003.
>
> It does not work on non-NT based versions of Windows (Win95, Win98,
> WinMe).
>
> Journal Based Backup was introduced in TSM Windows 32-bit client
> version 4.2.2.
>
> As of this writing, the most up to date version of this function is TSM
> client version 5.2.2."
>
> Is there an updated Faq that addresses the unix client JBB?
>
> David
>
> >>> [EMAIL PROTECTED] 5/17/2006 3:56:09 PM >>>
> I would recommend going to the web site in my sig, and searching on
> "journal based backup faq". That should yield an FAQ for journal-based
> backup that you will find informative.
>
> We do not have any such tools for general release to create or delete
> files as you ask. However, it should not be too difficult to script up
> something that can use simple operating system commands like "copy" or
> "echo" to rapidly generate large numbers of files.
>
> Simple example (assumes you have a test file called "c:\testfile") from a
> Windows OS prompt to create 5 files in a directory called c:\testdir:
>
> md c:\testdir
> for /l %a in (1,1,5) do @copy c:\testfile c:\testdir\testfile%a
>
> As for deleting the files, if you contain them within a directory (or a
> manageable number of directories) with no other production files, you can
> use the operating system's "rd" or "del" commands to remove the files.
>
> In general, though, I would advise that you use the information in the
> FAQ
> to get started. Among other things, it includes tools to help you
> determine whether jbb is a fit for a particular system without
> actually
> having to install and run jbb.
>
> Andy Raibeck
> IBM Software Group
> Tivoli Storage Manager Client Development
> Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
> Internet e-mail: [EMAIL PROTECTED]
>
> IBM Tivoli Storage Manager support web page:
> http://www-306.ibm.
> com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
>
>
> The only dumb question is the one that goes unasked.
> The command line is your friend.
> "Good enough" is the enemy of excellence.
>
> "ADSM: Dist Stor Manager"  wrote on 2006-05-17
> 12:13:06:
>
> > Hi,
> >
> > I would like to test journal based backup and incr backup
> > extensively. Does anybody know of a really fast tool that
> > creates/deletes millions of (temp) files and folders so that the
> > journal is filled?
> > Has anybody already a specific concept in mind on how to do
> > that? Would IBM kindly release such a tool to the public?
> >
> > Best Regards
> >

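A unix analogue of Andy's Windows loop above might look like this, for
those asking about the unix client (the directory and source file here are
arbitrary):

```shell
# create 5 small test files by copying an existing file
mkdir -p /tmp/jbb_testdir
i=1
while [ "$i" -le 5 ]; do
    cp /etc/hosts /tmp/jbb_testdir/testfile$i
    i=$((i + 1))
done
ls /tmp/jbb_testdir
```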

Re: multiple instance recommendations

2006-05-22 Thread Allen S. Rout
>> On Mon, 22 May 2006 11:57:20 -0500, Dave Mussulman <[EMAIL PROTECTED]> said:
> On Fri, May 19, 2006 at 10:05:38PM -0400, Allen S. Rout wrote:

> I don't have a lot of a priori knowledge of TSM, so any sizing
> recommendations I've seen have come from the Performance Guides,
> interviews with a few admins, and general consensus from web
> searches I've done.  I agree that if the general concern is getting
> database operations done in a manageable timeframe, as long as I can
> architect the db disk well enough, it should size-scale-up fairly
> well.  Thoughts on that?

Well, I know of instances with 500G databases which are viewed as
"Well ripe for splitting", rather than as "Screaming panic crisis",
which is what that would be on my disk plant.  So the ceiling is high.

If you plan for possible splitting, and implement it as the scaling
becomes inconvenient, you should be good.  And if that's "never", then
your overhead for the e.g. library manager instance is not bad.

You have a luxury which, I anticipate, many onlookers are envying:
you're starting from scratch.  That'll help. :)


> My numbers come from the Networker reports on the last full.  [...]

OK, I'd have to punt that to "not enough data".  But since the
transition won't be instantaneous in any case, you will be able to
sketch the problem beforehand.


>> I like the 'beefy box' solution for all purposes except test instance.
>> Make sure it's got plenty of memory. 6G? 8?

> Can you clarify the memory needs?  I was thinking 2G of RAM per
> instance; would I need more?


Um.  This is a need I have poorly quantified in my own shop.  What I
know is that I've got 4G for my 11 instances with a total DB
allocation in the 250GB range.  I've got 5G of swap, and I use it;
sometimes I use all of it.  I'm not sure where it's going all the
time, because, though I occasionally wait for 3-7 seconds for a
response on 'q proc', the client interaction does not "seem" to
suffer.  But I don't really know what that means.

Your LM instance will consume negligible core.  I have 9 instances
consuming >200M core total, >160M resident. Oh, heck with this.  Here.


Top Processes  Procs=191 mode=4 (1=Basic, 2=CPU 3=Perf 4=Size 5=I/O w=wait-procs
  PID    %CPU  Size    Res    Res   Res    Char   RAM  Paging           Command
               Used    KB     Set   Text   Data   I/O  Use io other repage
 962606   0.0 488244 324760  4464 320296 0  8%000 dsmserv
 741538   7.0 430616 164560  4464 160096 178356  4%000 dsmserv
 761968   2.0 424252 177924  4464 173460 22749814  4%000 dsmserv
 696478   0.0 325512 36608  4464 32144 0  1%000 dsmserv
1310754   0.0 322328 115380  4464 110916 0  3%000 dsmserv
 757808   2.5 252976 171596  4464 167132 169561  4%000 dsmserv
 790706   4.0 248324 180948  4464 176484 600614  4%000 dsmserv
 868510   0.0 244196 163280  4464 158816 0  4%000 dsmserv
 946258   0.0 222660 164312  4464 159848 0  4%000 dsmserv
 397336   9.0 192108 76844  4464 72380 87697718  2%000 dsmserv
 835662   0.0 134928 19676  4464 15212 0  0%000 dsmserv
1409074   0.0 126216 11536  4464  7072 0  0%000 dsmserv

This is quiet, stabilized behavior.

NMON is your friend, AIX folks.


- Allen S. Rout


Re: multiple instance recommendations

2006-05-22 Thread Andy Huebner
With 200M files and only 7TB you must have many small files.  To guess
at how many objects will be in your DB you could use the file count from
an incremental backup.  Just multiply by your retention and add the base
count.
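
Andy's estimate, as arithmetic (the counts here are invented, purely to
show the shape of the calculation):

```shell
# objects ~= base file count + files changed per backup * extra retained versions
awk 'BEGIN { base = 200e6; changed_per_day = 2e6; extra_versions = 14;
             printf "%.0f million objects\n",
                    (base + changed_per_day * extra_versions) / 1e6 }'
# prints: 228 million objects
```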

For comparison purposes, I have 149M objects in a 71% used 94GB DB.  The
source data is about 40TB; TSM has 72TB stored.  TSM backs up
about 1.2TB per night.

To keep LTO3s fed, be sure you have adequate system buses to handle
the simultaneous reads and writes from disk, tape, network, DB and Logs.


Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Dave Mussulman
Sent: Friday, May 19, 2006 3:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] multiple instance recommendations

Hello,

I have questions about server sizing and scaling.  I'm planning to
transition from Networker to TSM a client pool of around 300 clients,
with the last full backup being about 7TB and almost 200M files.  The
TSM server platform will be RHEL Linux.

I realize putting all of that into one TSM database is going to make it
large and unwieldy.  I'm just not sure how best to partition it in my
environment and use the resources available to me.  (Or what resources
to ask for if the addition of X will make a much easier TSM
configuration.) For database and storage pools, I will have a multiple
TB SAN allocation I can divide between instances.  I have one 60 slot HP
MSL6060 library (SCSI), with two LTO-3 SCSI drives.  There is also an
external SCSI LTO-3 drive.

My understanding of a shared SCSI library indicates that the library is
SCSI-attached to a server, but drive allocation is done via SAN
connections or via SCSI drives that are directly attached to the
different instances.  (Meaning the directly attached SCSI drives are not
sharable.) Is that true, at least as far as shared libraries go?  The
data doesn't actually go through the library master to a directly
connected drive, does it?

If not, and I still wanted to use sharing, I could give each instance a
dedicated drive - but since two drives seems like the minimum for TSM
tape operations, I don't really think it's wise to split them.
(However, if the 'best' solution would be to add two more drives to max
out the library, I can look into doing that.)

If the drives need to be defined just on one server, it looks like
server-to-server device classes and virtual volumes are the only
solution.  I don't really like the complexity of one instance storing
anothers' copy pools inside of an archive pool just to use tape, but it
looks like things are heading that way.

Other than the obvious hardware cost savings, I don't really see the
advantage of multiple instances on the same hardware.  (I haven't
decided yet if we would use one beefy server or two medium servers.)  If
you load up multiple instances on the same server, do you give them
different IP interfaces to make distinguishing between them in client
configs and administration tools easier?  Tape access-wise, is there a
hardware advantage putting multiple instances on the same system?

Any recommendations on any of this?  Your help is appreciated.

Dave


This e-mail (including any attachments) is confidential and may be legally 
privileged. If you are not an intended recipient or an authorized 
representative of an intended recipient, you are prohibited from using, copying 
or distributing the information in this e-mail or its attachments. If you have 
received this e-mail in error, please notify the sender immediately by return 
e-mail and delete all copies of this message and any attachments.
Thank you.