Re: Point-In-Time Restore

2001-03-08 Thread Bernhard Unold

Hello Andy,
I understand your arguments, but I want to know the benefit of storing
the directories differently from the files. For example, from a certain
computer we make an archive that we want to keep for 14 days, but the
directories are kept for 365 days, so storage is wasted on tape and in
the database. For special purposes we have a management class without any
limits, and now some clients store their directories there at incremental
backup. Not a good idea!!

What is stored for a directory entry? What can I do with the directory
if the files belonging to it are expired? In your example: what is the
state or content of the directory restored PIT to day 1? Is it possible
to restore MyFile to the version of day 1 if the directory backup
belonging to it is expired?

My idea is to store the directories together with the files. This would
solve a lot of problems.

Best regards,
Bernhard Unold

Andy Raibeck wrote:
> 
> For those of you who remember ADSM Versions 2 or earlier, our
> backup-archive GUI used to obtain a list of *all* backup versions before
> displaying the restore tree. Because the list of all backup versions can
> grow extremely large (e.g., millions of files), this presented two problem
> areas: memory shortages on the client machine (to retain the list of files
> in memory) and performance (because it takes a long time to build the
> list).
> 
> Starting with Version 3, we took a different approach to how we get file
> backup information. Rather than try to obtain a list of all backup versions
> initially, we only obtain a list of the objects that you can immediately
> see on the screen. For example, when you crack open (click on the + sign)
> the "File level" in the restore tree, we just show the available
> filespaces, so that is the only information we get from the server. When
> you click on one of the file spaces, we get a list of files that are in the
> root directory of that filespace, which is then displayed on the right-hand
> side of the GUI. When you crack open the filespace, we get a list of
> available directories directly beneath that filespace. When you click on a
> directory, we get the list of files immediately under that directory. And
> so on and so forth.
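The on-demand expansion described above can be sketched roughly as follows (a hypothetical Python illustration of the idea, not the actual client code):

```python
# Sketch of lazy (on-demand) restore-tree expansion: each node queries the
# server for its children only when it is "cracked open", instead of
# fetching every backup version up front. All names here are hypothetical.

class Node:
    def __init__(self, name, server, path=""):
        self.name = name
        self.server = server      # callable: path -> list of child paths
        self.path = path
        self.children = None      # None means "not yet queried"

    def expand(self):
        """Fetch this node's children from the server, once, on demand."""
        if self.children is None:
            self.children = [
                Node(p.rsplit("/", 1)[-1], self.server, p)
                for p in self.server(self.path)
            ]
        return self.children

# Toy stand-in for the server's backup inventory.
inventory = {
    "": ["C:"],
    "C:": ["C:/MyDir"],
    "C:/MyDir": ["C:/MyDir/MyFile.txt"],
}
server = lambda path: inventory.get(path, [])

root = Node("File level", server)
fs = root.expand()[0]     # only the filespace list was fetched
print(fs.name)            # prints C:
print(fs.children)        # prints None - deeper levels not queried yet
```

Each click costs one small query instead of one huge up-front listing, which is the memory and performance win described above.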
> 
> Because we are only getting lists of files for what you can see on the
> screen, the list is much smaller, so the GUI performance is vastly
> improved.
> 
> The problem you are seeing with PIT restores via the GUI is that in order
> to see the files, you first need to be able to see the directories (so you
> can click on them to see the files or crack them open to view their
> subdirectories). But if there are no directories that were backed up prior
> to your PIT specification, then there is no directory that can be
> displayed. Thus if there is no displayed directory, there is nothing for
> you to click on or crack open.
> 
> The command line interface does not rely on the ability to display
> directories before it can display its files and subdirectories, so this is
> why it does not have the problem.
> 
> Directories are bound to the management class (within the policy domain)
> that has the highest "Retain only version" (RETONLY) setting, without
> regard to the number of versions that are kept. If two management classes
> have the same RETONLY setting, then you can not predict which class will be
> used.
> 
> If the management class with the largest RETONLY setting maintains only 1
> version, this will still be the class to which directories are bound. Call
> this management class CLASS1. On the other hand, you might have files that
> are bound to another management class, say, CLASS2, that has a lower
> RETONLY setting but maintains, say, 10 versions if the file exists (the
> number of versions when the file is deleted is not pertinent here).
> 
> So here is a scenario:
> 
> Day 1: File C:\MyDir\MyFile.txt is backed up. MyDir is bound to CLASS1 and
> MyFile.txt is bound to CLASS2.
> 
> Day 2: File C:\MyDir\MyFile.txt is changed. The MyDir directory is also
> changed. When the backup runs, MyDir will be backed up. Because only 1
> version is kept, the version that was created on Day 1 is deleted.
> MyFile.txt is also backed up and bound to CLASS2. There are now 2 versions
> of MyFile.txt.
> 
> Now you need to do a PIT restore back to Day 1. However, since there is
> only one backup version of MyDir, created on Day 2, it will not be
> displayed in the GUI when your PIT criteria specify Day 1.
> 
> The key for PIT restores from the GUI, then, is to ensure that each
> directory has a backup version that is at least as old as the oldest file
> or subdirectory contained within that directory.
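The Day 1 / Day 2 scenario above can be made concrete with a toy simulation (hypothetical Python, just to illustrate the expiration arithmetic):

```python
# Toy model of "versions exist" expiry: each backup appends a version and
# the oldest versions are dropped once the class's limit is exceeded.

def backup(versions, day, limit):
    versions.append(day)
    while len(versions) > limit:
        versions.pop(0)  # oldest version expires

mydir, myfile = [], []                                   # surviving backup days
backup(mydir, 1, limit=1); backup(myfile, 1, limit=10)   # Day 1
backup(mydir, 2, limit=1); backup(myfile, 2, limit=10)   # Day 2

# A PIT restore to Day 1 needs a version taken on or before Day 1:
pit = 1
print(any(d <= pit for d in myfile))  # prints True  - Day 1 file version exists
print(any(d <= pit for d in mydir))   # prints False - no Day 1 directory
                                      # version, so the GUI shows nothing
```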
> 
> I don't think there is any great way to ensure that you can *always* do a
> PIT restore from the GUI unless you have a management class for directories
> that basically keeps all versions of directory backups forever (NOLIMIT).
> Depending on how often your directories change, this could potentially
> impact the size of your TSM database.
> 
> 

ANS4028E Error processing 'D:': cannot create file/directory entry

2001-03-08 Thread Michael Hack

I get this error on an incremental backup!!!

The schedule status is failed, but the client session backed up 1.77 GB.
Does anybody have an idea which file or directory was not backed up?
And what can I do to solve this problem?

Thanks.
Michael

--

ANS4028E Error processing 'filespace-name path-name file-name': cannot create
 file/directory entry.

Explanation: The directory path for files being restored or retrieved
cannot be created.
System Action: File skipped.
User Response: Ensure that you have the proper authorization to create the
directory for the file being restored or retrieved.

--

03/03/2001 01:03:34 Expiring-->   39,936 D:\Daten\1.xls [Sent]
03/03/2001 01:03:34 Expiring-->   36,864 D:\Daten\2.xls [Sent]
03/03/2001 01:03:34 Expiring-->   18,432 D:\Daten\3.DOC [Sent]
03/03/2001 01:03:34 Expiring-->   25,600 D:\Daten\4.DOC [Sent]
03/03/2001 01:03:34 Expiring-->   12,800 D:\Daten\5.doc [Sent]
03/03/2001 01:03:34 Expiring-->   52,224 D:\Daten\6.doc [Sent]
03/03/2001 01:03:34 Expiring-->1,769,472 D:\Daten\7.DOC [Sent]
03/03/2001 01:03:34 Expiring-->   16,896 D:\Daten\8.DOC [Sent]
03/03/2001 01:03:34 Expiring-->   20,480 D:\Daten\9.doc [Sent]
03/03/2001 01:03:34 Expiring-->   19,968 D:\Daten\10.doc [Sent]
03/03/2001 01:03:34 Expiring-->   19,968 D:\Daten\11.doc [Sent]
03/03/2001 01:03:34 Expiring-->   16,384 D:\Daten\12.DOC [Sent]
03/03/2001 01:03:34 Expiring-->7,168 D:\Daten\13.DOC [Sent]
03/03/2001 01:03:34 Expiring-->   52,224 D:\Daten\14.doc [Sent]
03/03/2001 01:03:34 Expiring-->   38,912 D:\Daten\15.doc [Sent]
03/03/2001 01:03:36 ANS1898I * Processed   348,000 files *
03/03/2001 01:03:41 ANS1898I * Processed   348,500 files *
03/03/2001 01:03:45 ANS1898I * Processed   349,000 files *
03/03/2001 01:03:50 ANS1898I * Processed   349,500 files *
03/03/2001 01:03:55 ANS1898I * Processed   350,000 files *
03/03/2001 01:04:01 ANS1898I * Processed   350,500 files *
03/03/2001 01:04:06 ANS1898I * Processed   351,000 files *
03/03/2001 01:04:12 ANS4028E Error processing 'D:': cannot create
file/directory entry
03/03/2001 01:04:12 --- SCHEDULEREC OBJECT END FFM11580 03/02/2001 23:05:00
03/03/2001 01:04:12 --- SCHEDULEREC STATUS BEGIN FFM11580 03/02/2001
23:05:00
03/03/2001 01:04:12 Total number of objects inspected:  351,440
03/03/2001 01:04:12 Total number of objects backed up:8,351
03/03/2001 01:04:12 Total number of objects updated:  0
03/03/2001 01:04:12 Total number of objects rebound:  0
03/03/2001 01:04:12 Total number of objects deleted:  8,366
03/03/2001 01:04:12 Total number of objects failed:   0
03/03/2001 01:04:12 Total number of bytes transferred: 1.77 GB
03/03/2001 01:04:12 Data transfer time:2,745.66 sec
03/03/2001 01:04:12 Network data transfer rate:  679.71 KB/sec
03/03/2001 01:04:12 Aggregate data transfer rate:275.74 KB/sec
03/03/2001 01:04:12 Objects compressed by:0%
03/03/2001 01:04:12 Elapsed processing time:   01:52:48
03/03/2001 01:04:12 --- SCHEDULEREC STATUS END FFM11580 03/02/2001 23:05:00
03/03/2001 01:04:12 ANS1512E Scheduled event 'FFM11580' failed.  Return
code = 4.
03/03/2001 01:04:12 Sending results for scheduled event 'FFM11580'.
03/03/2001 01:04:12 Results sent to server for scheduled event 'FFM11580'.



Re: Point-In-Time Restore

2001-03-08 Thread Andy Raibeck


Hi Bernhard,

A policy domain may have many management classes, each with different file
retention criteria. Within a given directory, it is possible to have some
files bound to one management class, other files bound to another
management class, still other files bound to a third management class, etc.
Maybe one class says to keep files for 14 days, another says to keep files
for 180 days, and the third management class says to keep files for 365
days. Also, you may add files to the directory over time, and set your
INCLUDE processing to bind them to different management classes.

Directories are bound to the management class with the longest RETONLY
setting so that, after you delete the directory (and its files and
subdirectories) from the client machine, you can at least recover the most
recent backup copy and the directory in which that file resided.

TSM has no way of knowing that you intend to keep *all* files on a
particular machine, now and forever, for only 14 days, so it can not decide
to use a management class for directories with a smaller RETONLY setting.

But if *you* know you intend to keep all files on a particular machine for
only 14 days (like in the example you gave), you can do a couple of things:

1) Use the DIRMC option to bind directories to the same management class
that your files use.

2) Create a new policy domain to which the machine's node will belong that
has only the one management class with the 14-day criterion.
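For option (1), the pieces might look roughly like this. The class, domain, and policy-set names are made up for illustration, and the exact syntax should be verified against your TSM level:

```
* Client options file (dsm.opt / dsm.sys), hypothetical class name:
* bind directory backups to the same class the files use
DIRMC STD14

* Server side (dsmadmc), hypothetical 14-day class for files and dirs:
DEFINE MGMTCLASS mydomain standard STD14
DEFINE COPYGROUP mydomain standard STD14 TYPE=BACKUP RETEXTRA=14 RETONLY=14
ACTIVATE POLICYSET mydomain standard
```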

Regarding your questions about what is in a directory and what is stored
for a directory: in older file systems like the DOS FAT file system,
directories were indeed nothing more than a mechanism for organizing how
files are stored within the file system. But in other file systems like
those on UNIX and NetWare, and newer Windows file systems like NTFS,
directories have attributes (like security, ownership, etc.) associated
with them that files stored within those directories can inherit. So it is
important that TSM be able to back up and restore the attributes for the
directories as well.

In answer to some of your specific questions that may not have been covered
above:

Q: What can I do with the directory if the files belonging to it are
expired?

A: Technically you could restore the directory itself, although that would
most likely be of little practical use. But other than that, there is
nothing more you can really do with it.

Q: What is the state or content of the directory restored PIT to day 1
(from the example below)?

A: Whatever the state or content was at the time it was backed up. (I am
not trying to be "flip" here, but I am not sure what the point of this
question is.)

Q: Is it possible to restore MyFile.txt to the version of day 1 if the
directory backup belonging to it is expired?

A: Yes, with the exception that you can not restore that version with the
PIT restore feature from the GUI.

Regards,

Andy

Andy Raibeck
IBM Tivoli Systems
Tivoli Storage Manager Client Development
e-mail: [EMAIL PROTECTED]
"The only dumb question is the one that goes unasked."


Bernhard Unold <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
03/08/2001 02:17:33 AM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: Point-In-Time Restore




Compression / No Compression ???

2001-03-08 Thread Roy Lake

Hi Chaps,

Just wanted to share my findings with you with regards to TSM compression/no 
compression.

We have a 3575-L18 tape library and use 3570-C format tapes. We used to have CLIENT 
compression set to YES when doing backups, with DRIVE compression OFF. Most of the 
data on our systems is Oracle.  When we had client compression set to YES, each 
cartridge would take about 5GB. 

I have done some testing and found that when I switched compression OFF, we managed to 
get around 21GB on each cart, and also the backups were a LOT quicker.

IBM recommend (and I quote:) "Oracle databases are normally full of white space, so 
compression is required. Either h/w or client compression." 

Could someone please explain WHY compression is required if we get more on tape with 
it switched OFF, and the backups are quicker?

In our environment, TSM has its own 10Meg a sec network, and 99.9% of the backups are 
done overnight, so there is no problem with performance issues.

Am I missing something here, or is it REALLY a better idea to forget about compression 
totally?


Kind Regards,

Roy Lake
TBG European IT
Tel: 0208 526 8883
E-Mail: [EMAIL PROTECTED]



** IMPORTANT INFORMATION **
This message is intended only for the use of the person(s) ("the Intended Recipient")
to whom it is addressed. It may contain information which is privileged and 
confidential
within the meaning of applicable law. Accordingly any dissemination, distribution, 
copying
or other use of this message or any of its content by any person other than the 
Intended
Recipient may constitute a breach of civil or criminal law and is strictly prohibited.

The views in this message or its attachments are those of the sender.

If you are not the Intended Recipient please contact the sender and dispose of this 
email
as soon as possible. If in doubt contact the Tibbett & Britten European IT Helpdesk
on 0870 607 6777 (UK) or +0044 870 607 6777 (Non UK).



Re: Point-In-Time Restore

2001-03-08 Thread Bernhard Unold

Well explained - thanks.


Re: Compression / No Compression ???

2001-03-08 Thread John Naylor

Hi Roy,

My opinion is that unless you have specific reasons (possibly network
constraints), do not run with client compression. It requires extra clock
cycles on the client, and if the client is not particularly powerful it can
significantly increase backup elapsed time. Restores will also take longer,
because of the decompression.

As far as the amount of data recorded on the tape, you will often see large
differences when you back up software-compressed data against
non-software-compressed data. But in TSM terms there may not be a
significant difference in the true amount of client data on a tape.
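The underlying effect - data compresses well once, but already-compressed data looks effectively random and will not shrink again (it can even grow slightly) - is easy to demonstrate with any compressor. Here is a small illustration using Python's zlib (an illustration only, not TSM's actual algorithm):

```python
import os
import zlib

# Repetitive "white space" data compresses dramatically the first time...
text = b"white space " * 10000
once = zlib.compress(text)
print(len(text), len(once))   # the compressed copy is a small fraction

# ...but already-compressed data is effectively random, and compressing it
# again buys nothing; container overhead can even make it slightly larger.
rand = os.urandom(100_000)    # stand-in for already-compressed data
again = zlib.compress(rand)
print(len(rand), len(again))
```

This is why stacking a second round of compression on top of data that is already compressed tends to cost CPU time without saving any tape.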
John




Roy Lake <[EMAIL PROTECTED]> on 03/08/2001 11:29:44 AM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: John Naylor/HAV/SSE)
Subject:  Compression / No Compression ???









**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern
Energy plc.
It is intended solely for the addressees. Access to this E-Mail by
anyone else is unauthorised. If you are not the intended recipient,
any disclosure, copying, distribution or any action taken or omitted
to be taken in reliance on it, is prohibited and may be unlawful.
Any unauthorised recipient should advise the sender immediately of
the error in transmission.

Scottish Hydro-Electric and Southern Electric are trading names of
Scottish and Southern Energy Group.
**



Re: Point-In-Time Restore

2001-03-08 Thread Richard L. Rhodes

Thanks Andy,

We've tried to get an answer to this question since we first put in
TSM (3rd qtr last year), but couldn't find one, even from support.

Your example exactly describes our situation, but our systems don't
appear to act like you're describing.

We have a domain called "AIX" with the following management classes:

          vde  rev  vdd  rto
aix        14   14   14   14   (dflt)
aix1m      32   32   32   32
aix2m      62   62   62   62
aixorac     2    1    1   90   (oracle backups)

All our directories go into the last class aixorac with the largest
RetainOnlyVersion, just as you describe.  When I fire up the gui,
changed the date to point back a month, point to a filespace that
is bound to aix2m (2 month PIT), and go to a directory that changes
every day - every thing looks OK!  The directory shows up in the
gui with all it's files.  I don't seem to be able to duplicate the
situation you describe!

We've wondered about this directory issue, but didn't think there
was a problem given that it appeared to be working.  I figured it
created the directory from the files that exist in it.

Any thoughts?

Rick




On 7 Mar 2001, at 7:58, Andy Raibeck wrote:
> Day 1: File C:\MyDir\MyFile.txt is backed up. MyDir is bound to CLASS1 and
> MyFile.txt is bound to CLASS2.
>
> Day 2: File C:\MyDir\MyFile.txt is changed. The MyDir directory is also
> changed. When the backup runs, MyDir will be backed up. Because only 1
> version is kept, the version that was created on Day 1 is deleted.
> MyFile.txt is also backed up and bound to CLASS2. There are now 2 versions
> of MyFile.txt.
>
> Now you need to do a PIT restore back to Day 1. However, since there is
> only one backup version of MyDir, created on Day 2, it will not be
> displayed in the GUI when your PIT criteria specify Day 1.



Re: ANS4028E Error processing 'D:': cannot create file/directory entry

2001-03-08 Thread Richard Sims

>03/03/2001 01:04:12 ANS4028E Error processing 'D:': cannot create
>file/directory entry

Michael - This is an old problem (search www.adsm.org for the message number
and you will see much discussion about it). You didn't specify what client
level you are using, but it's probably old and should be upgraded. The error
circuitously refers to having encountered a file with "garbage" characters
in its name, somewhere in the volume.

  Richard Sims, BU



Runaway dsmserv

2001-03-08 Thread Richard L. Rhodes

We're having a strange problem.

Over a period of several weeks we saw the CPU utilization of dsmserv rise to
the point it was running our AIX server at 100% utilization.  It was running
at 100% even when nothing was happening on the server - no backups,
migration, reclamation, etc.  We called support - they suggested we reboot
our server, which was our idea also.

After the reboot, everything seemed back to normal.  Now, a week after the
reboot, dsmserv is running at a constant 50% of our server, regardless of
what's happening.  We're going to cycle dsmserv this afternoon after batch
processing.  Then call support, again.

Is anyone else seeing this kind of behavior?

Rick



Portable Barcode Scanner for 3590?

2001-03-08 Thread Shawn M. Drew

I just had the unpleasant job of performing an
inventory of our offsite storage.  The barcode scanner
I borrowed for the job could not read our barcode
labels, so we 10-keyed 5000+ tapes in a 12-hour
stretch (two people).

I remember vaguely reading something about this a
while ago on this list, but can't find it anymore.

Basically I need to find a model of Barcode reader
that can read the labels of our 3590 Tri-optic,
"vibrant" labeled tapes.

I looked all over the place and found references that
this is a code 39 type label, but our code 39 scanner
could not scan it.  I found later that there is
another spec called "code 39 tri-optic" which I
figured was the answer, but after sending a sample of
our labels to Symbol ( http://www.symbol.com/ ), they
said they have no barcode readers to handle this.

Can someone tell me a model that will work?

shawndo

__
Do You Yahoo!?
Get email at your own domain with Yahoo! Mail.
http://personal.mail.yahoo.com/



restore error

2001-03-08 Thread Shekhar Dhotre

Good morning all ,

I am restoring some files from an old ADSM server
and am getting the following errors. Any ideas?


> ANS4029E Error processing '/psoft.PRD/app/oracle': unable to build a
> directory path; a file exists with the same name as a directory
>
> ANS4029E Error processing '/psoft.PRD/app/TUXEDO': unable to build a
> directory path; a file exists with the same name as a directory
>
> ANS4029E Error processing '/fs35': unable to build a directory path; a
> file exists with the same name as a directory
>
> ANS4029E Error processing '/home': unable to build a directory path; a
> file exists with the same name as a directory
>



Re: Portable Barcode Scanner for 3590?

2001-03-08 Thread Richard Sims

>I looked all over the place and found references that
>this is a code 39 type label, but our code 39 scanner
>could not scan it.

Shawn - From info I compiled in http://people.bu.edu/rbs/ADSM.QuickFacts:

3590 barcodeIs formally "Automation Identification
Manufacturers Uniform Symbol Description
Version 3", otherwise known as Code 39.
It runs across the full width of the
label. The two recognized vendors:
Engineered Data Products (EDP) Tri-Optic
Wright Line Tri-Code
Ref: Redbook "IBM Magstar Tape Products
Family: A Practical Guide", topic
Cartridge Labels and Bar Codes.

I should think that those two vendors could either supply or guide you to a
supplier of such a barcode reader.

  Richard Sims, BU



Re: Portable Barcode Scanner for 3590?

2001-03-08 Thread Palmadesso Jack

I am not sure if this is the right place to do this but I'll do it anyway.

Every once in a while some NT admin clobbers the permissions on the root of
some gigantic directory.  What they always end up asking me is "Can I
restore just the ACLs of the affected directories instead of the entire
structure?"  My answer is always no: you have to restore everything, or you
have to change it all back manually.  Neither of these answers pleases them
much.  Obviously it's their fault so it's just their tough luck, but I
think that this would be a great addition to the product.  Are there any
plans to do so?

Or am I completely wrong and there is a way to restore individual ACLs?  In
that case I'll go back into my corner and be quiet.

Thanks

Jack



Re: Swapping Tape Drive

2001-03-08 Thread Holger Bitterlich

Hi,
IBM says that you can use the green and the red cartridges mixed, but it
depends on the firmware (in IBM terminology, microcode).
Regards
HBit



Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:   (bcc: Holger Bitterlich/BK/SK-Koeln)
Subject:  Re: Swapping Tape Drive



Hi Miles,
I replaced the tape drive without many changes; I changed the recording
format to 3570C instead of what we had, which was 3570. Your e-mail made me
wonder: why should it be set to 'drive' and not 3570C? We are using the
green tapes.  Another question: can we use a mixture of red and green tapes
for backup/restore?
Appreciate your input,
Thanks, Salam

> -Original Message-
> From: Miles Purdy [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, February 27, 2001 6:03 PM
> To:   [EMAIL PROTECTED]
> Subject:  Re: Swapping Tape Drive
>
> Hi Salam,
> make sure you set the capacity of the drive(s) to 'drive', if you're going
> to be using the green (5gb) and red (7gb) tapes.
> Modify the '3570' device class and set format to 'drive' not '3570B' nor
> '3570C'.
>
> Note that the drives are not replaced, it is just a chip and firmware that
> get installed.
>
> Miles
>
> --
> -
> Miles Purdy
> System Manager
> Farm Income Programs Directorate
> Winnipeg, MB, CA
> [EMAIL PROTECTED]
> ph: (204) 984-1602 fax: (204) 983-7557
> --
> -
>
> >>> [EMAIL PROTECTED] 27-Feb-01 6:02:55 AM >>>
> Salam, I don't think there is any problem swapping
> the drive, but let's see what others have to say.
> This is what we might be doing in the near future.
>
> Any idea?
>
> Alyn
> --- ABDULSALAM ABDULLA <[EMAIL PROTECTED]> wrote:
> > > We are on ADSM Server for AIX-RS/6000 - Version 3,
> > Release 1, Level 2.40
> > > we want to replace Magstar 3570B02 to 3570C02, do
> > we need to do any
> > > configuration change or any thing has to be done
> > > prior to swap.
> > > Thanks,
> > > Salam
> > >
> > > Salam R. Abdulla
> > > Project Leader
> > > Dubai Aluminum Company
> > > P. O. Box: 3627
> > > Dubai, UAE
> > >
> > >
>
>








__


   Because e-mails can easily be created or manipulated under someone
   else's name, we must, for your protection and ours, exclude any legal
   binding force of the above statements.  The rules applying to the
   Stadtsparkasse Koeln concerning the binding nature of legally relevant
   declarations with obligating content remain unaffected.



Re: Runaway dsmserv

2001-03-08 Thread Sharp, Neil (London)

Are you sure it's dsmserv? Stop it and then run 'vmstat'. If the problem
persists, then run 'PerfPMR'.

Good luck

> -Original Message-
> From: Richard L. Rhodes [SMTP:[EMAIL PROTECTED]]
> Sent: Thursday, March 08, 2001 9:10 AM
> To:   [EMAIL PROTECTED]
> Subject:  Runaway dsmserv
>
> We're having a strange problem.
>
> Over a period of several weeks we saw the CPU utilization of dsmserv rise
> to the point that it was running our AIX server at 100% utilization.  It
> was running at 100% utilization even when nothing was happening on the
> server - no backups, migration, reclamation, etc.  We called support -
> they suggested we reboot our server, which was our idea also.
>
> After the reboot, everything seemed back to normal.  Now, a week after the
> reboot, dsmserv is running at a constant 50% of our server, regardless of
> what's happening.  We're going to cycle dsmserv this afternoon after batch
> processing.  Then, call support, again.
>
> Is anyone else seeing this kind of behavior?
>
> Rick



Patch 4.1.2.12 - Experiences

2001-03-08 Thread Reinhard Mersch

I just applied patch 4.1.2.12 to one of our Windows clients having lots
of "umlaut" files. The cleanup utility worked as described, with the
exception of needing THREE runs, until no more "Incorrectly cased object"
messages occurred in the sequencing view run. The cleanups issued lots of
ANS1228E and ANS1304W messages. Is this normal behaviour?

--
Reinhard Mersch                Westfaelische Wilhelms-Universitaet
Zentrum fuer Informationsverarbeitung - ehemals Universitaetsrechenzentrum
Roentgenstrasse 9-13, D-48149 Muenster, Germany  Tel: +49(251)83-31583
E-Mail: [EMAIL PROTECTED]   Fax: +49(251)83-31653



Management Class Question

2001-03-08 Thread Jeff Rankin

We are looking at doing a massive reorganization of our naming
standards within our TSM servers.  During this reorganization, some
management classes we have will go away and new ones will be created.
Others will just be renamed.

I know that in order to bind data to a specific management class I can
just use an include statement for the clients, and that will start
backing up or archiving files under the new management class assignments,
but my question is: what happens to the old data that is already backed
up to the TSM server?  Will it be rebound to the new management classes,
or will it fall into the grace-period retention of the policy domains?
If the policy domain is what controls the data retention in this case,
would it just be safer to leave all of the existing management classes
in place, start backing up with the new management classes, and
let the old data expire off?
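For what it's worth, the include-statement rebinding works along these lines
(a sketch; NEWMC and /data are placeholder names).  An include statement in
the client's include-exclude list names the new class, and the next
incremental backup rebinds the existing backup versions of matching files.
Archive copies, by contrast, are not rebound; they keep the class they were
archived under.

```
* client include-exclude sketch -- NEWMC and /data are placeholders
include /data/.../* NEWMC
```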

We are running TSM 4.1.2. Any help would be greatly appreciated.

--
Jeff Rankin
Associate Technical Analyst, Excel Corporation
Phone:   316-291-2903
Fax: 316-266-4415
Email:   [EMAIL PROTECTED]



extend problem

2001-03-08 Thread Robert Stephenson (STEPHRF @ GBNUHO)

 Date: 08-03-2001 03:23:10 PM
 Telephone: 684983
 Subject:  extend problem



 Hi

 we are running   Tivoli Storage Manager for MVS  Version 3, Release 7,
 Level 3.0
 when we try to extend we get :

 ANR0252E Error writing logical page 8407040 (physical page 400640) to
          database volume SYSA.VP0ADSM.DBASE56.
 ANRD DBFMT(419): Error formatting database space map page 8210.

 Therefore, we added another volume and tried to extend into it.  It
 carried the problems over into the new volume.

 ANR0252E Error writing logical page 8061232 (physical page 54832) to
          database volume SYSA.VP0ADSM.DBASE57.
 ANR0240E Deletion process failed for database volume SYSA.VP0ADSM.DBASE56.
 ANR0988I Process 6 for DATABASE VOLUME DELETION running in the BACKGROUND
          processed 1,254,096,896 bytes with a completion state of FAILURE at

 Any help or ideas would be appreciated.

 Thanks Rob Stephenson
 email: [EMAIL PROTECTED]



TSM NT Server 4.1.2 not available for download

2001-03-08 Thread Ruddy STOUDER

TSM NT Server 4.1.2 is not available anymore on the Web site.
The maintenance and patch sites are not allowing the 4.1.2 server download.
Does anybody know why?


 Ruddy Stouder
System Engineer
I.R.I.S.
Rue du Bosquet 10 - Parc Scientifique de Louvain-La-Neuve
B- 1435 Mont-Saint-Guibert
[EMAIL PROTECTED]
http://www.irislink.com
Tel: +32 (0)10 48 75 10  -  Fax: +32 (0)10 48 75 40



Re: Portable Barcode Scanner for 3590?

2001-03-08 Thread Cook, Dwight E

But I don't think they have a standard start or stop code/character...
We ran into the same problem a year or two ago.
Good luck in finding something that will read it, 'cause we couldn't.
Dwight

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 08, 2001 8:42 AM
To: [EMAIL PROTECTED]
Subject: Re: Portable Barcode Scanner for 3590?


>I looked all over the place and found references that
>this is a code 39 type label, but our code 39 scanner
>could not scan it.

Shawn - From info I compiled in http://people.bu.edu/rbs/ADSM.QuickFacts:

3590 barcode    Is formally "Automation Identification Manufacturers
                Uniform Symbol Description Version 3", otherwise known
                as Code 39.  It runs across the full width of the
                label.  The two recognized vendors:
                    Engineered Data Products (EDP) Tri-Optic
                    Wright Line Tri-Code
                Ref: Redbook "IBM Magstar Tape Products Family:
                A Practical Guide", topic Cartridge Labels and Bar Codes.

I should think that those two vendors could either supply or guide you to a
supplier of such a barcode reader.

  Richard Sims, BU



Re: Question about versions and retention

2001-03-08 Thread Prather, Wanda

No.  When a backup version expires out of the primary pool, it expires out
of the copy pool as well.

-Original Message-
From: Jane Doe [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 07, 2001 5:25 PM
To: [EMAIL PROTECTED]
Subject: Question about versions and retention


Can the following scenario be done?

Given the following:
Node A with 1 GB of data

Can I send all data via incremental backup to one storage pool and a MC with
7 versions, but have the data copied to an offsite copypool retaining 14
versions?

Thanks
Jane








Re: TSM NT Server 4.1.2 not available for download

2001-03-08 Thread Holger Bitterlich



No, that's not true.

Under
ftp://ftp.boulder.ibm.com/storage/tivoli-storage-management/patches/client/v4r1/windows/v412

I'm currently downloading the files (it's quite slow, but ...)
or you can use this path (if you can read this bitmap...)
(Embedded image moved to file: pic23056.pcx)
regards
HBit



Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:[EMAIL PROTECTED]
Copy:  (Blind copy: Holger Bitterlich/BK/SK-Koeln)
Subject: TSM NT Server 4.1.2 not available for download



TSM NT Server 4.1.2 is not available anymore on the Web site.
The maintenance and patch sites are not allowing the 4.1.2 server download.
Does anybody know why?


 Ruddy Stouder
System Engineer
I.R.I.S.
Rue du Bosquet 10 - Parc Scientifique de Louvain-La-Neuve
B- 1435 Mont-Saint-Guibert
[EMAIL PROTECTED]
http://www.irislink.com
Tel: +32 (0)10 48 75 10  -  Fax: +32 (0)10 48 75 40












Re: FIVE questions for TSM 4.1.2

2001-03-08 Thread asr

=> On Tue, 27 Feb 2001 10:44:04 +1100, Carl Makin <[EMAIL PROTECTED]> said:

> I do think our D40 drawer (5 x 36gb disks) was setup poorly.  I've been over
> the hardware config with our IBM engineer and he thinks it's probably not
> running at the full loop speed (SSA160).  We have a mixture of 40's and 160s
> on the same loop which is crossed over in a HA configuration between two
> serialraid 160 adaptors.  Fast-Write is disabled, I just checked.

Well, then that's quite a few factors that will give you sub-par SSA
performance.  I'd like to address one factor I haven't seen hit on yet: your
RAID setup:


This is of course affected by your environment, but I would recommend that you
_not_ RAID your disk pools: most of the time, the disk pools are only holding
unique copies of data for a very brief period, from the beginning of a backup
until the end of a copy.  This is not a very high exposure.

If you split out your RAID, you not only increase your available space by
33% (or 25% if you have omitted a hot spare), but you can make sure each disk
volume occupies only one spindle.  (Not necessarily one volume per spindle:
36GB volumes may be too big for your taste.)

If you do that, you can actually watch (with e.g. monitor) TSM stripe disk
accesses between the spindles.  It's heartwarming.

Further, most SSA adapters have a pair of loops: I'd recommend you split your
storage loop from your DB/log loop.



I would be ...very surprised... if you found that a well-configured SSA
install did anything but leave a Shark twitching on the sand.



- Allen S. Rout
- NERDC TSM type.



Re: Compression / No Compression ???

2001-03-08 Thread Cook, Dwight E

It all boils down to where your bottleneck is...
If your client(s) have huge amounts of data, big enough engines to compress
it down, and it compresses nicely (such as Oracle DBs), then you will find
that you can compress the data AND send it in less time than you can send
the data uncompressed.
Or, if you don't have much network bandwidth and your users go across the
same network as your backups, then you might want to compress the data to
minimize the traffic.

Example... we have an SAP instance that is about 2.4 TB on an E1 with a
dozen (or more) processors and an independent network for backups... with
compression turned on we can now back this DB up in about 16 hours, which
basically keeps the 100 Mb/sec Fast Ethernet interface maxed out... and
because we can keep it maxed out, if we sent the data uncompressed it would
take four times as long, because it would be sending four times the bytes
(we see a pretty good 4:1 compression ratio).
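As a rough sanity check on those numbers (a sketch; the figures are the
approximate ones quoted above, and protocol/tape overhead is ignored):

```python
# Back-of-envelope check: 2.4 TB database, 4:1 client compression,
# 100 Mb/s Fast Ethernet kept saturated.
db_bytes = 2.4e12
compressed_bytes = db_bytes / 4          # ~0.6 TB actually crosses the wire
link_bytes_per_s = 100e6 / 8             # 100 Mb/s ~= 12.5 MB/s
hours = compressed_bytes / link_bytes_per_s / 3600
print(round(hours, 1))                   # ~13.3 hours, in line with "about 16"
```

The gap between the ideal ~13 hours and the observed 16 is plausibly tape
mounts and protocol overhead; sent uncompressed, the same link would need
roughly four times as long, as the post says.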

So I get to say my favorite thing
It depends !
later,
Dwight

-Original Message-
From: Roy Lake [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 08, 2001 5:30 AM
To: [EMAIL PROTECTED]
Subject: Compression / No Compression ???


Hi Chaps,

Just wanted to share my findings with you with regards to TSM compression/no
compression.

We have a 3575-L18 tape library. We use 3570-C format tapes. We used to have
CLIENT compression set to YES when doing backups, with DRIVE compression
OFF. Most of the data on our systems is Oracle.  When we had client
compression set to YES, each cartridge would take about 5GB.

I have done some testing and found that when I switched compression OFF, we
managed to get around 21GB on each cart, and the backups were also a LOT
quicker.

IBM recommend (and I quote:) "Oracle databases are normally full of white
space, so compression is required. Either h/w or client compression."

Could someone please explain WHY compression is required if we get more on
tape with it switched OFF, and the backups are quicker?

In our environment, TSM has its own 10Meg a sec network, and 99.9% of the
backups are done overnight, so there is no problem with performance issues.

Am I missing something here, or is it REALLY a better idea to forget about
compression totally?


Kind Regards,

Roy Lake
TBG European IT
Tel: 0208 526 8883
E-Mail: [EMAIL PROTECTED]



** IMPORTANT INFORMATION **
This message is intended only for the use of the person(s) ("the Intended
Recipient")
to whom it is addressed. It may contain information which is privileged and
confidential
within the meaning of applicable law. Accordingly any dissemination,
distribution, copying
or other use of this message or any of its content by any person other than
the Intended
Recipient may constitute a breach of civil or criminal law and is strictly
prohibited.

The views in this message or its attachments are those of the sender.

If you are not the Intended Recipient please contact the sender and dispose
of this email
as soon as possible. If in doubt contact the Tibbett & Britten European IT
Helpdesk
on 0870 607 6777 (UK) or +0044 870 607 6777 (Non UK).



Re: TSM Monitoring: Crystal Reports v TEC....

2001-03-08 Thread Mark S.

"Warren, Matthew James" wrote:
>Has anybody looked at using Tivoli Decision Support with TSM?
>
>>From: Jager Frederic [mailto:[EMAIL PROTECTED]]
>>I am currently evaluating a TSM KM by OTL software with
>>Patrol client and
>>console (you can get an evaluation copy). And I do think this is worth
>>investing.

Yes. TDS has some nice bells and whistles, and can output
graphically-based or text reports in multiple formats, including Crystal
Reports, ODBC, comma-delimited, and Excel. It's not much more than
sophisticated select statements, but its trend reports are pretty good.

TDS is NOT a trivial install. It is a long haul, with Tivoli support
almost certainly a necessity.

--
Mark Stapleton ([EMAIL PROTECTED])



Re: Swapping Tape Drive

2001-03-08 Thread William Boyer

Don't forget to update the drive definitions. It would probably be best to
delete the DRIVEs and re-define them. You could also do an UPD DRIVE to
force ADSM to re-query the drive for its attributes and supported options.

Since you're on AIX, it would probably be best to delete the drives from
ADSM, then rmdev -d them from AIX, then do a cfgmgr to re-acquire the drives
in AIX, then define them back in ADSM. That way both the AIX system and ADSM
will be current on the drives.
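That sequence, sketched as commands (the library, drive, and device names
here are placeholders for your own):

```
delete drive lib3570 drive1       (ADSM admin client: drop the definition)
rmdev -dl rmt1                    (AIX: remove the old device from the ODM)
cfgmgr                            (AIX: rediscover the swapped drive)
define drive lib3570 drive1 device=/dev/rmt1
```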

Bill Boyer
DSS, Inc.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
ABDULSALAM ABDULLA
Sent: Wednesday, March 07, 2001 11:01 PM
To: [EMAIL PROTECTED]
Subject: Re: Swapping Tape Drive


Hi Miles,
I replaced the tape drive without many changes; I changed the Recording
Format to 3570C instead of the 3570 we had.  Your e-mail made me wonder:
why should it be set to 'drive' and not 3570C?  We are using the green
tapes.  Another question: can we use a mixture of red and green tapes for
backup/restore?
Appreciate your input,
Thanks, Salam

> -Original Message-
> From: Miles Purdy [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, February 27, 2001 6:03 PM
> To:   [EMAIL PROTECTED]
> Subject:  Re: Swapping Tape Drive
>
> Hi Salam,
> make sure you set the capacity of the drive(s) to 'drive', if you're going
> to be using the green (5GB) and red (7GB) tapes.
> Modify the '3570' device class and set format to 'drive' not '3570B' nor
> '3570C'.
>
> Note that the drives are not replaced, it is just a chip and firmware that
> get installed.
>
> Miles
>
> --
> -
> Miles Purdy
> System Manager
> Farm Income Programs Directorate
> Winnipeg, MB, CA
> [EMAIL PROTECTED]
> ph: (204) 984-1602 fax: (204) 983-7557
> --
> -
>
> >>> [EMAIL PROTECTED] 27-Feb-01 6:02:55 AM >>>
> Salam, I don't think there is any problem swapping
> the drive, but let's see what others have to say.
> This is what we might be doing in the near future.
>
> Any ideas?
>
> Alyn
> --- ABDULSALAM ABDULLA <[EMAIL PROTECTED]> wrote:
> > > We are on ADSM Server for AIX-RS/6000 - Version 3,
> > Release 1, Level 2.40
> > > we want to replace Magstar 3570B02 to 3570C02, do
> > we need to do any
> > > configuration change or any thing has to be done
> > > prior to swap.
> > > Thanks,
> > > Salam
> > >
> > > Salam R. Abdulla
> > > Project Leader
> > > Dubai Aluminum Company
> > > P. O. Box: 3627
> > > Dubai, UAE
> > >
> > >
>
>



Re: Runaway dsmserv

2001-03-08 Thread Jeff J Coskey

Rick,

I too have seen this occurring at many of my clients. What I can surmise is
that you get runaway sessions appearing with a ? when you do a 'q sess'. If
you can cancel these, then you will see that the dsmserv process CPU
utilization drops back down dramatically. I have played around a little
with idletimeout, but I'm not sure if this is the correct solution.

Can someone from Tivoli provide feedback on this one? I've seen the CPU
shoot up to 100% even on SP nodes and 4-way S7A machines with lots of
memory. It will cause the machine to start thrashing. There should be a
cleaner, more automated solution instead of having to reboot the server or
manually canceling the runaway sessions.
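For reference, the manual workaround looks like this from the administrative
command line (the session number shown is a placeholder):

```
tsm: SERVER1> query session
tsm: SERVER1> cancel session 42
```

CANCEL SESSION takes the session number reported by QUERY SESSION; there is
no built-in way to cancel every '?' session at once, which is exactly the
automation gap being described.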

Thanks,

Jeff Coskey
IBM Global Services
Server and Storage Solutions
3109 W. Dr. Martin L. King Jr. Blvd, Tampa, FL  33607
Phone: (813) 801-3868  T/L: 427-3868
Cell: (813) 495-6923
Pager:  (800) 759- pin: 1201907
Email: [EMAIL PROTECTED]


"Richard L. Rhodes" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
03/08/2001 04:10:04 AM

Please respond to [EMAIL PROTECTED]

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:  Runaway dsmserv


We're having a strange problem.

Over a period of several weeks we saw the CPU utilization of dsmserv rise
to the point that it was running our AIX server at 100% utilization.  It was
running at 100% utilization even when nothing was happening on the server -
no backups, migration, reclamation, etc.  We called support - they suggested
we reboot our server, which was our idea also.

After the reboot, everything seemed back to normal.  Now, a week after the
reboot, dsmserv is running at a constant 50% of our server, regardless of
what's happening.  We're going to cycle dsmserv this afternoon after batch
processing.  Then, call support, again.

Is anyone else seeing this kind of behavior?

Rick



Re: Runaway dsmserv

2001-03-08 Thread Steve Schaub

We saw this in our shop also - it ended up being a problem with Win95/NT
clients using IE 5.0 to access the web GUI, producing the "?" sessions.
Three or more of these would drive the CPU to 100% and leave it there.
Upgrading IE to 5.5 or using Win2000 solved the problem for us.

Steve Schaub
Haworth, Inc
email: [EMAIL PROTECTED]

>>> [EMAIL PROTECTED] 03/08 9:28 AM >>>
Rick,

I too have seen this occurring at many of my clients. What I can surmise is
that you get runaway sessions appearing with a ? when you do a 'q sess'. If
you can cancel these, then you will see that the dsmserv process CPU
utilization drops back down dramatically. I have played around a little
with idletimeout, but I'm not sure if this is the correct solution.

Can someone from Tivoli provide feedback on this one? I've seen the CPU
shoot up to 100% even on SP nodes and 4-way S7A machines with lots of
memory. It will cause the machine to start thrashing. There should be a
cleaner, more automated solution instead of having to reboot the server or
manually canceling the runaway sessions.

Thanks,

Jeff Coskey
IBM Global Services
Server and Storage Solutions
3109 W. Dr. Martin L. King Jr. Blvd, Tampa, FL  33607
Phone: (813) 801-3868  T/L: 427-3868
Cell: (813) 495-6923
Pager:  (800) 759- pin: 1201907
Email: [EMAIL PROTECTED] 


"Richard L. Rhodes" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
03/08/2001 04:10:04 AM

Please respond to [EMAIL PROTECTED] 

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED] 
cc:
Subject:  Runaway dsmserv


We're having a strange problem.

Over a period of several weeks we saw the CPU utilization of dsmserv rise
to the point that it was running our AIX server at 100% utilization.  It was
running at 100% utilization even when nothing was happening on the server -
no backups, migration, reclamation, etc.  We called support - they suggested
we reboot our server, which was our idea also.

After the reboot, everything seemed back to normal.  Now, a week after the
reboot, dsmserv is running at a constant 50% of our server, regardless of
what's happening.  We're going to cycle dsmserv this afternoon after batch
processing.  Then, call support, again.

Is anyone else seeing this kind of behavior?

Rick



WindowsNT Disk Imaging.

2001-03-08 Thread Doug Thorneycroft

Quite a while ago, Tivoli announced plans for Tivoli Data Protection for
Workgroups to support sending a disk image to, and retrieving a disk image
from, the TSM server.  If I remember right, this was to be available shortly
after the release of the Version 4 server.
Does anyone know if this is still in the works?



TDP for oracle and Oracle OPS system

2001-03-08 Thread Shen Ou

Does anyone know whether Tivoli Data Protection for Oracle can work with an
Oracle 8.x OPS system?  If it can, what needs attention in the
implementation?  Thanks

   Shen Ou
   IBM China Shanghai Software Center



email appliances and TSM

2001-03-08 Thread Bob Booth - CCSO

Hi all!

I need some help!

We are presently looking for a 'NAS'-type solution for email delivery.  At
this time, we use a large number of UNIX boxen to relay, route, deliver, and
store *vast* amounts of email for 48,000 students/staff on our campus.  We
have been approached by several different companies that provide 'dedicated
appliances' that do everything from soup to nuts when it comes to email.

When I sit in the meetings with these folks, I ask the hard question: how do
we do backups?  Their answer is 'duh.. don't you use Legato or Veritas?  We
support those natively.'  'Grrr,' I say.  'No, we use TSM.'  'Oh,' they say.

Then we start getting technical, and they tell me that they 'expose NDMP'
and TSM can do that.  Well, not exactly true.  Tivoli has not GA'd an NDMP
client yet, and NDMP is not exactly a pretty protocol for backup/restore.

What I am looking for are others, especially Universities, that are using
ADSM/TSM to back up email, that may be interested in looking into integrated
email solutions.  If I can actually tell these vendors that TSM is widely used
by universities (and of course, others) for this type of application, they
might actually think about supporting it.  I am sorry, but Veritas is *not*
the only enterprise backup software solution.  NAS vendors should start
looking for better ways to do backup/restore in large environments.

I would also be interested in hearing what others are doing in large capacity
email shops.  Are people using appliances?  Do they support TSM?  Anyone doing
NDMP?

Feel free to reply to me directly. I will not give out names or email
addresses to sales people either!

Thanks in advance!

Bob Booth
University of Illinois - Urbana



Re: restore error

2001-03-08 Thread Nicholas Cassimatis

Shekkar,

That looks like the filesystems in question are already mounted, and you're
trying to restore the mount points.  You'll either need to unmount the
filesystems in question before the restore (then remount them before you
restore data to them) or just ignore the error.  I'd just ignore it - it's
saying it couldn't overwrite something that was already there, and since
you need it there, I'd say it's OK.

Nick Cassimatis
[EMAIL PROTECTED]



Re: Compression / No Compression ???

2001-03-08 Thread Richard L. Rhodes

Oracle DBs are highly compressible.  We run our Oracle backups
through the Unix compress utility.  I've seen tablespace files on a
newly created instance (no data loaded yet) compress from 1GB down
to 10MB.  A normal tablespace file full of data will typically
compress about 3-to-1.

In general, data can only be compressed once.  If you compress via
software, like the Unix compress utility or TSM's client, then the drive
hardware compression won't add anything.  In this case you would
basically get the native capacity of the tape drive onto a tape.  We
use 3590E drives with tapes that have 40GB native capacity.  Our
tapes that hold Oracle backups generally end up with right around
40GB.  Client-side compression accomplishes the same thing.

When hardware compression is turned on, the tape drive tries to compress
the datastream it receives from the TSM server.  When not using
client compression and not backing up already-compressed files, the
tape drive will attempt to compress the datastream.  On the tapes
with this kind of backup we get anywhere from 50GB up to 120GB.
120GB on a 40GB tape is a 3:1 compression ratio.  An Oracle DB will
compress around 3 to 1.

Client side compression takes cpu cycles and in general will result
in a much slower backup but uses much less network bandwidth.  Hdwr
compression in the tape drive is very fast, much faster than client
side compression (usually).

The big argument is usually whether you should run your tape drive in
compressed mode even if you send already-compressed data to it
(client-side compression, or just backing up .Z or .zip files).  If
you compress a datastream that is already compressed, the datastream
will actually get bigger.  Go ahead, run a Unix compress on an
existing .Z file.  My answer is to always leave it ON.  Modern
compression chips used in tape drives can detect when data received
by the drive is uncompressible, and will stop compressing the data.
AIT drives are like this.  I've got to believe that IBM 3590 drives
are at least that smart!!!  For that matter, the TSM client can also
do this!!! That's the purpose of the "compressalways" option in the
client-side dsm.opt file.  When running "compression yes" and
"compressalways no", the client will attempt to compress files.  If
the client detects that a file is uncompressible, the client stops
compression and just sends the file.
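The "go ahead, try it" above is easy to see in the shell.  This is a sketch,
not TSM: gzip stands in for the Unix compress utility, and random bytes
stand in for an already-compressed file.

```shell
# Compress once, then compress the result: the second pass cannot shrink
# already-compressed (here: incompressible random) data, and the container
# overhead actually makes it slightly bigger.
head -c 1000000 /dev/urandom > sample.bin
gzip -c sample.bin > sample.bin.gz        # first pass
gzip -c sample.bin.gz > sample.bin.gz.gz  # second pass on compressed output
wc -c sample.bin sample.bin.gz sample.bin.gz.gz
```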

The one place I've found client-side compression very useful is when
backing up remote systems on a WAN.  If I run the backup without
client compression I destroy response time for all WAN users.  By
using client-side compression I throttle the backup: the client
systems can't compress/send the data fast enough to dominate the WAN
link. Oh for a client-side bandwidth parm like Veritas has . . . . .

Rick



On 8 Mar 2001, at 11:29, Roy Lake wrote:
> Hi Chaps,
>
> Just wanted to share my findings with you with regards to TSM
> compression/no compression.
>
> We have a 3575-L18 tape library. We use 3570-C format tapes. We used to
> have CLIENT compression set to YES when doing backups, with DRIVE
> compression OFF. Most of the data on our systems is Oracle.  When we had
> client compression set to YES, each cartridge would take about 5GB.
>
> I have done some testing and found that when I switched compression OFF,
> we managed to get around 21GB on each cart, and the backups were also a
> LOT quicker.
>
> IBM recommend (and I quote:) "Oracle databases are normally full of white
> space, so compression is required. Either h/w or client compression."
>
> Could someone please explain WHY compression is required if we get more on
> tape with it switched OFF, and the backups are quicker?
>
> In our environment, TSM has its own 10Meg-a-sec network, and 99.9% of the
> backups are done overnight, so there is no problem with performance
> issues.
>
> Am I missing something here, or is it REALLY a better idea to forget about
> compression totally?
>


TSM 4.1 licensing issues (was: Windows Client - Time Change Bug)

2001-03-08 Thread Prather, Wanda

In an earlier post I complained about problems getting price
quotes/licensing info for TSM 4.1 configurations under the new Tivoli
licensing scheme.

For those of you WITH A SUPPORT CONTRACT THAT GIVES YOU ACCESS TO IBMLINK,
this week they posted a new version of the configurator that includes TSM
4.1.  You can work out an initial configuration for licensing/points/pricing
yourself.

 The configurator works no matter whether you plan to run your TSM server on
an RS6000, Windows, or OS/390.

*   Log in to IBMLINK
*   Go to the ESD section
*   Select RS6000CONFIGS
*   Select RS6000Win95
*   Download all the parts to a directory on your desktop and follow the
install instructions (you just click on self-extracting files, it's very
fast)

Before you run the configurator, you should also:

*   Read the TSM licensing section at the beginning of Chap. 16 in the
TSM 4.1 Admin Guide.  This explains all the different licenses you may need.

*   Download from IBMLINK the TSM 4.1.1 announcement letter 000-245.

From the announcement letter, you will need the explanation of the
Tier scheme on P. 3.
You will also need the product/feature numbers (e.g., 5698-TSM) from
the "Ordering Information" section.  This section also has some additional
clarification of the licensing rules.



Re: restore error

2001-03-08 Thread Shekhar Dhotre


The restore stops after these error messages, so I can't ignore it;
something is going on with symbolic links.
I am looking at the followsymbolic option just suggested by Richard.


> ANS4029E Error processing '/psoft.PRD/app/oracle': unable to build a
> directory p
> ath; a file exists with the same name as a directory
>
> ANS4029E Error processing '/psoft.PRD/app/TUXEDO': unable to build a
> directory p
> ath; a file exists with the same name as a directory
>
>
> ANS4029E Error processing '/fs35': unable to build a directory path; a
> file exis
> ts with the same name as a directory
>
> ANS4029E Error processing '/home': unable to build a directory path; a
> file exis
> ts with the same name as a directory
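For reference, the option Richard suggested can be set in the client options
file or per invocation (a sketch; the restore spec shown is a placeholder):

```
* dsm.opt / dsm.sys (client options file)
followsymbolic yes

* or on the command line:
dsmc restore -followsymbolic=yes -subdir=yes "/psoft.PRD/*"
```

FOLLOWSYMBOLIC controls whether the client will restore over a symbolic
link, which is the "a file exists with the same name as a directory" case
in the errors above.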




Nicholas Cassimatis <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 03/08/2001 01:48:06 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:

Subject:  Re: restore error


Shekkar,

That looks like the filesystems in question are already mounted, and you're
trying to restore the mount points.  You'll either need to unmount the
filesystems in question before the restore (then remount them before you
restore data to them) or just ignore the error.  I'd just ignore it - it's
saying it couldn't overwrite something that was already there, and since
you need it there, I'd say it's OK.

Nick Cassimatis
[EMAIL PROTECTED]




Re: restore error

2001-03-08 Thread Shekhar Dhotre


Found something describing exactly what is happening with my restore:
http://msgs.adsm.org/cgi-bin/get/adsm97/1604.html



shekhar




Nicholas Cassimatis <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 03/08/2001 01:48:06 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:

Subject:  Re: restore error


Shekkar,

That looks like the filesystems in question are already mounted, and you're
trying to restore the mount points.  You'll either need to unmount the
filesystems in question before the restore (then remount them before you
restore data to them) or just ignore the error.  I'd just ignore it - it's
saying it couldn't overwrite something that was already there, and since
you need it there, I'd say it's OK.

Nick Cassimatis
[EMAIL PROTECTED]




Re: Runaway dsmserv

2001-03-08 Thread Richard L. Rhodes

THAT'S IT!

There were a bunch of sessions as described below.  By killing them, the
CPU went back to where we would expect it.

Thanks for all the Help!

Rick

On 8 Mar 2001, at 12:43, Steve Schaub wrote:
> We saw this in our shop also - it ended up being a problem with win95/nt clients 
>using IE5.0 to access the web gui producing the "?" sessions.  Three or more of these 
>would drive the cpu to 100% and leave it there.  Upgrading IE to 5.5 or using 
>win/2000 solved the problem for us.
>
>
> I too have seen this occurring at many of my clients. What I can surmise is
> that you get runaway sessions appearing with a ? when you do a 'q sess'. If
> you can cancel these, then you will see that the dsmserv process CPU
> utilization drops back down dramatically. I have played around a little
> with idletimeout but I'm not sure if this is the correct solution.
>
> Can someone from Tivoli provide feedback on this one? I've seen the CPU
> shoot up to 100% even on SP nodes and 4-way S7A machines with lots of
> memory. It will cause the machine to start thrashing. There should be a
> cleaner, more automated solution instead of having to reboot the server or
> manually canceling the runaway sessions.
>
>



Re: Compression / No Compression ???

2001-03-08 Thread Prather, Wanda

I agree with Richard and Dwight.  It depends.  We have client compression
on, I did a bit of testing, and sending client-compressed data on through
the tape drive compression generally doesn't hurt us, but doesn't help much,
either.  In some simple tests I ran, we got at most an additional 10%
compression on 3490 tape drives.

The problem with TSM and figuring out what compression is doing, is that TSM
only tells you what the CLIENT reports sending to it.  It doesn't KNOW what
the hardware compression is doing.  You can't necessarily rely on the
CAPACITY figures it reports for each volume.

Assume, for the sake of simplicity, that the compression ratio is 2:1 for
either client software compression, or your tape hardware compression.  And
assume your "native", or raw physical tape cartridge capacity is 20 GB.

- On a client that has 40 GB of data to send, if compression is ON at the
client, it will compress the data down to 20 GB, report 20 GB sent to the
server, and the server will report to YOU that it sent 20 GB to the tape,
and you will have 1 full tape.  That tape volume will show "est capacity" at
20 GB.

- On a client that has 40 GB of data to send, if compression is OFF at the
client, it will report 40 GB sent to the server, and the server will tell
you that it sent 40 GB to the tape, and you will still have exactly 1 full
tape, since the hardware will compress the 40 GB down to 20 GB.  That tape
volume will show "est capacity" at 40 GB.

If you have a mixture of clients compressing/ not compressing, you can't
look at the "capacity" figures for your tape  volumes and tell a darn thing.
All you can do is make some controlled tests where you work with a specific
client to send a specific set of data, and see how much data you can send to
a tape before it fills up.  Then you can assume you will get the same
compression ratios on clients with similar data.
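Wanda's 2:1 arithmetic can be sketched in a few lines of Python (illustrative only; the function names and the fixed 2:1 ratio are assumptions for this example, not TSM internals):

```python
# Sketch of the example above: 40 GB of client data, an assumed 2:1
# compression ratio at either the client or the tape drive, and a 20 GB
# native cartridge capacity. Not a TSM interface, just the arithmetic.

NATIVE_GB = 20        # raw physical cartridge capacity
RATIO = 2.0           # assumed 2:1 compression ratio
client_data_gb = 40

def tape_accounting(client_compression: bool):
    """Return (gb_reported_to_server, est_capacity_shown, tapes_used)."""
    if client_compression:
        sent = client_data_gb / RATIO   # client compresses before sending
        on_tape = sent                  # drive can't squeeze it further
    else:
        sent = client_data_gb           # raw data crosses the network
        on_tape = sent / RATIO          # drive compresses 2:1
    tapes = on_tape / NATIVE_GB
    # "est capacity" reflects what the server *thinks* it wrote per tape
    est_capacity = sent / tapes
    return sent, est_capacity, tapes

print(tape_accounting(True))   # client compression on:  (20.0, 20.0, 1.0)
print(tape_accounting(False))  # client compression off: (40.0, 40.0, 1.0)
```

Either way exactly one physical cartridge is filled; only the server's accounting differs.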

-Original Message-
From: Richard L. Rhodes [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 08, 2001 9:24 AM
To: [EMAIL PROTECTED]
Subject: Re: Compression / No Compression ???


Oracle db's are highly compressable.  We run our Oracle backups
through the unix compress utility.  I've seen tablespace files on a
newly created instance (no data loaded yet) compress from 1gb down
to 10mb.  A normal tablespace file full of data will typically
compress about 3-to-1.

In general, data can only be compressed once.  If you compress via
sftw, like the unix compress utility or TSM's client then the drive
hdwr compressions won't add anything.  In this case you would
basically get the native capacity of the tape drive onto a tape.  We
use 3590E drives with tapes that have 40gb native capacity.  Our
tapes that hold oracle backups generally end up with right around
40gb.  Client side compression accomplishes the same thing.

When hdwr compression is turned on, the tape drive tries to compress
the datastream it receives from the tsm server.  When not using
client compression and not backing up already compressed files, the
tape drive will attempt to compress the datastream.  On the tapes
with this kind of backups we get anywhere from 50gb up to 120gb.
120gb on a 40gb tape is a 3:1 compression ratio.  An Oracle db will
compress around 3 to 1.

Client side compression takes cpu cycles and in general will result
in a much slower backup but uses much less network bandwidth.  Hdwr
compression in the tape drive is very fast, much faster than client
side compression (usually).

The big argument is usually whether you should run your tape drive in
compressed mode even if you send already compressed data to it
(client side compression or just backing up .Z or .zip files).  If
you compress a datastream that is already compressed, the datastream
will actually get bigger.  Go ahead, run a unix compress on an
existing .Z file.  My answer is to always leave it ON.  Modern
compression chips used in tape drives can detect when data received
by the drive is uncompressable, and will stop compressing the data.
AIT drives are like this.  I've got to believe that IBM 3590 drives
are at least that smart!!!  For that matter, the TSM client can also
do this!!! That's the purpose of the "compressalways" command for the
client side dsm.opt file.  When running "compression yes" and
"compressalways no", the client will attempt to compress files.  If
the client detects that a file is uncompressable,  the client stops
compression and just send the file.

The one place I've found client side compression very usefull is when
backing up remote systems on a wan.  If I run the backup without
client compression I destroy response time for all wan users.  By
using client side compression I throttle the backup.  The client
systems can't compress/send the data fast enough to dominate the wan
link. Oh for a client side bandwidth parm like Veritas has . . . . .

Rick



On 8 Mar 2001, at 11:29, Roy Lake wrote:
> Hi Chaps,
>
> Just wanted to share my findings with you with regards to TSM

Re: Compression / No Compression ???

2001-03-08 Thread Remeta, Mark

Well I for one want to know exactly what my clients are backing up. The only
way to tell is with compression off; otherwise, as Wanda said, TSM will only
report the amount of compressed data that is backed up. Also, we have
AIT tape drives and I have found that reclamation and tape-to-tape copies
take extraordinary amounts of time if the data is already compressed (client
compression). It is for these reasons that I force compression off at the
server for all my clients.


My .02,
Mark



Re: Compression / No Compression ???

2001-03-08 Thread Walker, Lesley R

Roy Lake wrote:
> We used to have CLIENT compression set to YES when doing backups, with
DRIVE
> compression OFF. Most of the data on our systems is Oracle.  When we had
> client compression set to YES, each cartridge would take about 5GB.
>
> ...when I switched compression OFF, we managed to get around 21GB on each
> cart, and also the backups were a LOT quicker...
>
> Could someone please explain WHY compression is required if we get more
> on tape with it switched OFF, and the backups are quicker?.

Are you SURE drive compression is [still] turned off?  If it is off, you
should get the same amount of data on the same type of tape, no matter what.
So I reckon your drives ARE compressing the data.

The backups could be quicker because (a) the client isn't using CPU cycles
to do compression, and/or (b) maybe your network hardware is doing some
compression.

--
Lesley Walker
Unix Engineering, EDS New Zealand
[EMAIL PROTECTED]
"I feel that there is a world market for as many as five computers"
Thomas Watson, IBM corp. - 1943



Re: Offsite Storage w/out DRM

2001-03-08 Thread Demler, Debra

We looked into alternatives to DRM when we installed TSM/AIX over a year ago
mainly because of the cost.  I really did not want to write and maintain our
own scripts for the DR functions.  We installed AutoVault
(www.coderelief.com) for far less $$.  It has worked nicely for us and they
have continued to add features like backupset vaulting.  I don't think I
would have taken or had the time to add the new features on my own.

Deb Demler
Database and System Administrator
The Reading Hospital and Medical Center



>Offsite Storage w/out DRM
> Forum:   ADSM.ORG - ADSM / TSM Mailing List Archive
> Date:  Mar 06, 17:01
> From:  John Marquart <[EMAIL PROTECTED]>
>
>Hello all,
>I am working on setting up an offsite storage plan, and needed to
>get some clarification on my configuration.
>
>
>1)  Based on the _TSM for AIX: Admin Guide_ Ch. 20, I came up with the
>following schedule:
>
>
>backup stgpool prim_pool copy_pool
>update stgpool copy_pool reclaim=100
>update volume * access=offsite location="offsite" \
>wherestgpool=copy_pool whereaccess=readwrite,readonly \
>wherestatus=filling,full
>
>   
>   
>backup db type=full devclass=3590tape scratch=yes
>backup devconfig filename=dev.config
>backup volhistory filename=vol.history
>query volume stgpool=copy_pool access=offsite status=empty
>
>   
>   
>   
>
>
>2) But then, I heard something about a "move media" command - and after
>reading the _Admin Reference_ and checking out the archives I can up w/
>another 2 possibilities:
>
>A)
>
>backup stgpool prim_pool copy_pool
>update stgpool copy_pool reclaim=100
>move media * stgpool=copy_pool wherestate=mountableinlib \
>wherestatus=full,filling ovflocation="Offsite Location" \
>remove=yes cmd="update volume &vol access=offsite" \
>cmdfile=checkout.mac checklabel=no
>macro checkout.mac
>backup db type=full devclass=3590tape scratch=yes
>backup devconfig filename=dev.config
>backup volhistory filename=vol.history
>
>   
>
>query media stgpool=copy_pool wherestatus=empty \
>wherestate=mountablenotinlib cmd="checkin libvol 3494a &vol \
>status=private devclass=3590 checklabel=no &NL update vol &vol \
>access=readwrite" cmdfile=checkin.mac
>
>   
>
>macro checkin.mac
>
>
>B) begins the same, but after moving the checkout tapes offsite, finishes
>w/ the following instead of the query media, etc. command:
>
>   
>
>move media * stgpool=copy_pool wherestate=mountablenotinlib \
>wherestatus=empty cmd="checkin libvol 3494a &vol status=private \
>devclass=3590 &NL update volume &vol access=readwrite" \
>cmdfile=checkin.mac
>
>   
>
>macro checkin.mac
>
>
>
>
>While my example 1 is the "by the book" method, it seems from the
>varied posts concerning "move media vs. checkout" that the "move media"
>command is the preferred method.  Assuming that is the case, I completely
>do not understand why the "move media" command can check tapes out of the
>library, but not back in.  Given this restraint, is there any advantage of
>my 2B method over my 2A method?
>
>Also, w/ regards to the checkout procedure, is my "move media"
>version the simplest it can be?  The reason I ask, is that it befuddles me
>why I can't set access=offsite when i am using it to check tapes out, but
>rather have to do it via the cmd option.
>
>
>thanks in advance,
>-john marquart
>
>John "Jamie" Marquart   | This message posted 100% MS free.
>Digital Library SysAdmin|  Work: 812-856-5174   Pager: 812-334-6018
>Indiana University Libraries|  ICQ: 1131494 D'net Team:  6265
>irc.kdfs.net - come visit vaix
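The cmd/cmdfile pattern in John's examples simply writes one command per returned volume into a macro that is replayed later. A hypothetical Python helper (none of this is a TSM interface; the file it writes plays the role of checkin.mac) makes the shape of that generated macro explicit:

```python
# Hypothetical generator for the checkin macro produced by John's
# "query media ... cmd=... cmdfile=checkin.mac" step. The library name
# and volume IDs are placeholders.
def write_checkin_macro(volumes, library="3494a", path="/tmp/checkin.mac"):
    """Emit one checkin line and one update line per vaulted volume."""
    lines = []
    for vol in volumes:
        lines.append(f"checkin libvol {library} {vol} "
                     f"status=private devclass=3590 checklabel=no")
        lines.append(f"update volume {vol} access=readwrite")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

cmds = write_checkin_macro(["A00001", "A00002"])
```

Running `macro checkin.mac` on the server then replays those generated commands.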



Re: email appliances and TSM

2001-03-08 Thread Shawn M. Drew

We have been struggling with this for about 2 years now.
Here is some information from my experience.

Our NT admins bought some Auspex NAS servers (NS2k)
and told us about it after the fact.  Luckily, they
only used the NFS portion of the box, until 2 months ago.
We have been NFS mounting to another "supported" server, then backing
up the data with DOMAIN statements in the dsm.opt.
The problem with this (which is affecting us now) is that
we only back up the unix file permissions, and can't
get the NT ACLs.  Without an NDMP-supported client,
it's one or the other, but not both.  This may not
be an issue with email delivery though.
We felt this was better than installing the old Auspex
client on the server, just so the NAS can do what it was made for:
serving files, not running some weird application.

As to NDMP support in TSM: I have talked to the developers
a number of times about this, and they promised me an NDMP
client by the middle of the first quarter this year.  Last I heard
it was the middle of the second quarter.  From what I can tell, the
only benefit NDMP has over NFS mounting/backing up
is that you can get both sets of ACLs.

shawn


___
Shawn M. Drew
[EMAIL PROTECTED]
ADSM/TSM Systems Administrator



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Bob Booth - CCSO
Sent: Thursday, March 08, 2001 12:46 PM
To: [EMAIL PROTECTED]
Subject: email appliances and TSM


Hi all!

I need some help!

We are presently looking for a 'NAS' type solution for email delivery.  At
this time, we use a large number of UNIX boxen to relay, route, deliver, and
store *vast* amounts of email for 48,000 students/staff on our campus.  We have
been approached by several different companies that provide 'dedicated
appliances' that do everything from soup to nuts when it comes to email.

When I sit in the meetings with these folks, I ask the hard question... How
do we do backups?  Their answer is 'duh.. don't you use Legato or Veritas?
We support those natively.'  'Grrr,' I say.  'No, we use TSM.'  'Oh,' they say.

Then, we start getting technical, and they tell me that they do 'expose NDMP'
and TSM can do that.  Well, not exactly true.  Tivoli has not GA'd an NDMP
client yet, and NDMP is not exactly a pretty protocol for backup/restore.

What I am looking for are others, especially Universities, that are using
ADSM/TSM to back up email, that may be interested in looking into integrated
email solutions.  If I can actually tell these vendors that TSM is widely
used
by universities (and of course, others) for this type of application, they
might actually think about supporting it.  I am sorry, but Veritas is *not*
the only enterprise backup software solution.  NAS vendors should start
looking for better ways to do backup/restore in large environments.

I would also be interested in hearing what others are doing in large
capacity
email shops.  Are people using appliances?  Do they support TSM?  Anyone
doing
NDMP?

Feel free to reply to me directly. I will not give out names or email
addresses to sales people either!

Thanks in advance!

Bob Booth
University of Illinois - Urbana


_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com




Re: Portable Barcode Scanner for 3590?

2001-03-08 Thread Shawn M. Drew

I talked with a guy over at EDP, who says this label
is Code 39, but modified at the request of STK.
When IBM made their barcode readers, they decided to
just support the STK standard.
They talked with Symbol a while back to develop barcode readers
that will work with these, and the symbology was called
"Tri-Optic Code 39".
So we found a number of these scanners on Symbol's web site
(considerably more expensive than I thought) that supported this
standard.

However, when we first started looking for this, a reseller
claimed they sent Symbol a sample of our labels, and they said
they didn't have anything that would scan it.

I am now trying to get a demo of the Tri-Optic scanner to try myself.

shawn
___
Shawn M. Drew
[EMAIL PROTECTED]
ADSM/TSM Systems Administrator


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cook, Dwight E
Sent: Thursday, March 08, 2001 9:09 AM
To: [EMAIL PROTECTED]
Subject: Re: Portable Barcode Scanner for 3590?


But I don't think they have a standard start or stop code/char...
We ran into the same problem about a year or two ago.
Good luck in finding something that will read it 'cause we couldn't
Dwight

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 08, 2001 8:42 AM
To: [EMAIL PROTECTED]
Subject: Re: Portable Barcode Scanner for 3590?


>I looked all over the place and found references that
>this is a code 39 type label, but our code 39 scanner
>could not scan it.

Shawn - From info I compiled in http://people.bu.edu/rbs/ADSM.QuickFacts:

3590 barcode    Is formally "Automation Identification
                Manufacturers Uniform Symbol Description
                Version 3", otherwise known as Code 39.
                It runs across the full width of the
                label. The two recognized vendors:
                  Engineered Data Products (EDP) Tri-Optic
                  Wright Line Tri-Code
                Ref: Redbook "IBM Magstar Tape Products
                Family: A Practical Guide", topic
                Cartridge Labels and Bar Codes.

I should think that those two vendors could either supply or guide you to a
supplier of such a barcode reader.

  Richard Sims, BU






Re: Management Class Question

2001-03-08 Thread James Thompson

If the management class no longer exists, data will be expired based on the
default management class for the domain.

During your next full incremental on a given machine the following will
happen.

If the file or directory exists on both machine and the tsm server, then
that file or directory will get bound to the management class specified in
the include/exclude list or to the default management class.

There are a few scenarios you should consider: versions that are inactive
with no active version, API data, etc.  I would suggest creating some temporary
nodenames and testing this out yourself.  Backup some data using your
existing policies.  Then move these nodes over to a different domain.
Re-run the backups.  You can check the management class for the objects
using select queries.
select * from backups where node_name='testnode'
select * from archives where node_name='testnode'
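If you want to script those checks, here is a minimal Python sketch of driving them through the dsmadmc admin CLI (the wrapper, node name, and credentials are placeholder assumptions, not from the original post; verify the dsmadmc options against your client level):

```python
import subprocess

def build_select_cmd(query, admin_id="admin", password="secret"):
    """Build the dsmadmc argument list for a server-side SELECT."""
    return ["dsmadmc", f"-id={admin_id}", f"-password={password}",
            "-dataonly=yes", query]

def run_select(query, **kw):
    # Needs a reachable TSM server and valid credentials to actually run.
    return subprocess.run(build_select_cmd(query, **kw),
                          capture_output=True, text=True).stdout

cmd = build_select_cmd("select * from backups where node_name='TESTNODE'")
```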

James Thompson


>From: Jeff Rankin <[EMAIL PROTECTED]>
>Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: Management Class Question
>Date: Thu, 8 Mar 2001 09:43:15 -0600
>
>We are looking at doing a massive reorganization of our naming
>standards within our TSM servers.  During this reorganization, some
>management class we have will go away and new ones will be created.
>Others will just be renamed.
>
>I know that in order to bind data to a specific management class I can
>just use the include statement for the clients and that will start
>backing up or archive files to the new management class assignments,
>but my question is what happens to the old data that is already backed
>up to the TSM server?  Will it be rebound to the new management classes
>or will it fall into the grace period retention of the policy domains?
>If the policy domain is what controls the data retention in this case,
>would it just be safer to leave all of the existing management classes
>in place and just start backing up with the new management classes and
>let the old data expire off?
>
>We are running TSM 4.1.2. Any help would be greatly appreciated.
>
>--
>Jeff Rankin
>Associate Technical Analyst, Excel Corporation
>Phone:   316-291-2903
>Fax: 316-266-4415
>Email:   [EMAIL PROTECTED]

_
Get your FREE download of MSN Explorer at http://explorer.msn.com



Re: Longlasting tape-reclamation run

2001-03-08 Thread Geoff Fitzhardinge

Joe Faracchio wrote to me:

>Try using a disk-file area and one tape reclamation.
>
>... joe.f.
>
>Joseph A Faracchio,  Systems Programmer, UC Berkeley

Hello Joe,

Have you had good results with this technique to speed things up?

It would be nice if there was some documentation to explain the practical
differences in the behaviour of

(1) Reclaim (tape pool to itself)
(2) Move Data (tape pool to itself)
(3) Move Data (tape pool to disk pool) followed by Migrate (disk pool to
tape pool)

There has been discussion on this list, and some suggestions in Tivoli
documentation,
on differences between Reclaim and Move Data with respect to how much space
is
recovered - Reclaim removes empty space from within aggregates, Move Data
just
copies aggregates without reconstructing them.

What is important to me in the present context (reclaiming tapes with Notes
Agent files
and large numbers of "collocation clusters") is the elapsed time taken by
the operation,
and also to some extent the resource consumption.

I find that (1) and (2) are similar and abysmally slow, but use very little
CPU or database
I/O (as I said in my original posting, most of the elapsed time is waiting
for input tape
positioning).

(3) on the other hand is MUCH faster (typically an hour or two for the Move
Data and
same again for the Migrate, although sometimes the Migrate bogs down a
bit).   CPU
consumption is quite significant.  It is noticeable that the Move Data to
disk does not
issue the ANR1142I messages (counting "clusters") which are issued both by
the
Reclaim and by Move Data to tape.

Why is the Move Data to disk so much faster than the Move Data to tape,
when the input
tape is the same?

I take it that performance using your suggestion (4), Reclaim to Disk File
then Migrate
back to tape pool, will be quite similar to my (3).

I haven't tried (4) because I am reluctant to invest more disk space for a
housekeeping
function, and my normal disk pool (uncached) is mostly free during the day.
Unfortunately
it is a bit manual, I initially hoped I could automate it by defining the
disk pool as the reclaim
pool, but found that this is not allowed to be a random access pool.  Your
method gets
around this.  On the other hand, I can overlap multiple Move Datas, but
only one tape will
Reclaim at a time.

Cheers,
Geoff



Re: Netware/NAV-CE

2001-03-08 Thread Suad Musovich

On Tue, Mar 06, 2001 at 07:40:56AM +0200, Mike Glassman - Admin wrote:
> What do you mean by something adverse?
>
> What symptoms ?
ABENDs. Attached is a sample for NAV below.

NAV runs happily without TSM, and vice versa.  Put the 2 together, and it
causes an ABEND in NAV or TSM freezes.

The third application that is suspected is Mercury (god knows why)

> Are you using the latest TSA files from Novell ?

Cheers, Suad
--

Server REGNOV1 halted Tuesday, March 6, 2001   8:22:27 am
Break 1: Server-4.11a: Page Fault Processor Exception (Error code )

Registers:
CS = 0008 DS = 0010 ES = 0010 FS = 0010 GS = 0010 SS = 0010
EAX = 0002 EBX =  ECX =  EDX = 
ESI = 1E8B36B0 EDI = 1A417010 EBP = 1E84B434 ESP = 1E84B42C
EIP = F163F6ED FLAGS = 00017246
F163F6ED 668B12 MOV DX,[EDX]= ?
EIP in NAVAPI.NLM at code start +A6EDh

Running process: NAV Process
Created by: RTVSCAN.NLM
Stack pointer: 1E84B3EC
Stack limit: 1E847030
Scheduling priority: 0
Wait state: 00
Stack: --001C  ?
   --01BEF010  ?
   --1E84B470  ?
   --1A417010  ?
   --01BEF010  ?
   --01953010  ?
   F164055C  (NAVAPI.NLM|NAVEmptyTypeAhead+AA66)
   --1E8B36B0  ?
   --1E84B450  ?
   --4D54483C  ?
   --0A0D3E4C  ?
   --2D2D213C  ?
   --57544920  ?
   --37312E32  ?
   --5720322E  ?
   --4F206465  ?
   --32207463  ?
   --1E84B40E  ?
   --1E84B810  ?
   --01BEF010  ?
   --0200  ?
   F1640315  (NAVAPI.NLM|NAVEmptyTypeAhead+A81F)
   --01BEF010  ?
   --1E84B48C  ?
   --4D54483C  ?
   --0A0D3E4C  ?
   --2D2D213C  ?
   --57544920  ?
   --37312E32  ?
   --5720322E  ?
   --4F206465  ?
   --32207463  ?

Additional Information:
The CPU encountered a problem executing code in NAVAPI.NLM.  The
problem may be in that module or in data passed to that module
by a process owned by RTVSCAN.NLM.

Loaded Modules:
SERVER.NLM   NetWare Server Operating System
  Version 4.11August 22, 1996
  Code Address: F800h  Length: 0010h
  Data Address: F060h  Length: 000C1000h
PSERVER.NLM  NetWare Print Server PTF v1.07 (990917)
  Version 5.00September 17, 1999
  Code Address: F1481000h  Length: 0001C906h
  Data Address: 19DEh  Length: 000A0298h
NAVAPI.NLM   NAVAPI
  Version 1.00August 25, 2000
  Code Address: F1635000h  Length: 0001BBE6h
  Data Address: 1A157000h  Length: 0B5Ch
I2_LDVP.NLM  Intel LANDesk Virus Protect Glue
  Version 7.50October 6, 2000
  Code Address: F1624000h  Length: 00010D42h
  Data Address: 1A89E000h  Length: 4374h
RTVSCAN.NLM  Norton AntiVirus Server
  Version 7.50October 9, 2000
  Code Address: F15D8000h  Length: 0004B7A2h
  Data Address: 1ED3B000h  Length: 00038914h
SNMPHNDL.NLM Intel LANDesk SNMP Trap Alert Handler
  Version 6.10August 9, 2000
  Code Address: F11AA000h  Length: 0CACh
  Data Address: 1ABA4000h  Length: 1560h
NLMXHNDL.NLM Intel LANDesk Load NLM Handler
  Version 6.10March 14, 2000
  Code Address: F118C000h  Length: 0568h
  Data Address: 1EBCC000h  Length: 02C8h
BCSTHNDL.NLM Intel LANDesk Broadcast Alert Handler
  Version 6.10March 14, 2000
  Code Address: F15D6000h  Length: 11E7h
  Data Address: 01B07000h  Length: 02D8h
IAO.NLM  Intel LANDesk Alert Originator
  Version 6.10August 9, 2000
  Code Address: F15CD000h  Length: 8929h
  Data Address: 01ACE000h  Length: 000367A8h
HNDLRSVC.NLM Intel LANDesk Handler Manager
  Version 6.10March 14, 2000
  Code Address: F159A000h  Length: 3839h
  Data Address: 1EE01000h  Length: 0658h
AMSLIB.NLM   Intel LANDesk AMS Library
  Version 6.10March 14, 2000
  Code Address: F1598000h  Length: 1FCDh
  Data Address: 1EE2h  Length: 03B4h
MSGSYS.NLM   Intel LANDesk Message System
  Version 6.10April 14, 2000
  Code Address: F15C5000h  Length: 778Dh
  Data Address: 1EDD6000h  Length: 1060h
PDS.NLM  Intel LANDesk Ping Discovery Service
  Version 6.10March 14, 2000
  Code Address: F1594000h  Length: 3779h
  Data Address: 011D8000h  Length: 1E06h
VPREG.NLMNorton AntiVirus Registry
  Version 7.50October 9, 2000
  Code Address: F158E000h  Length: 579Fh
  Data Address: 1EE1B000h  Length: 1E64h
BTRIEVE.NLM  Btrieve NLM
  Version 6.10f   May 3, 1996
  Code Address: F16C4000h  Length: 00029A80h
  Data Address: 1AC54000h  Length: 0EA0h
TUI.NLM  Textual User Interface IP0200.G01
  Version 1.05a   June 2, 1997
  Code Address: F159F000h  Length: 9ABEh
  Data Address: 1AC93000h  Length: 053Ch
CALNLM32.NLM NetWare NWCalls Runtime Library
  Version 5.04September 23, 1999
  Code Address: F16AE000h  Length: 00015FECh
  Data Address: 1EBAD000h  Length: 04A0h
DSMC.NLM Tivoli Distributed Storage Manager
  Version 4.01a   August 2, 2000
  Code Address: F14BE000h  Length: 000CF

Manually "inactivate" NT files

2001-03-08 Thread Shawn Drew

I wanted to get an idea of what people do when
clients are taken offline permanently.

Ideally, we would run the backup client one last time excluding everything
(so the files will follow the Management Class setting for Inactive files)

However, we are commonly told a client is offline after the fact.  So we
cannot
run the client one last time.
On unix, it seems (although I haven't done this yet) that it would just be
a matter
of reconfiguring my workstation to "imitate" the removed node, and running
an incremental (with exclusion settings) and it will expire everything.

On NT however, the filespace name is named after the UNC name
(i.e. \\ntserver\c$).  So when I reconfigure my workstation, it creates a new
filespace
with my workstation's unc name.

I see 2 ways to possibly solve this (both of which are a little cumbersome
and ugly):

- rename my workstation to the name of the removed node
- rename the filespace on the server to fit my unc name

Is there any server command or any other way to do this?



shawn


___
Shawn Drew
Tivoli IT - ADSM/TSM Systems Administrator
[EMAIL PROTECTED]



Re: Manually "inactivate" NT files

2001-03-08 Thread Stephen Mackereth

Hi Shawn,

Why don't you leave it alone for some period of time,
like 3 months (if you're not short on space), and then
delete the filespaces & the node permanently.

This depends on
1. how the backup copygroup is defined.
2. how long after the client has gone you are required to restore anything.

q filesp {node_name}

del filesp {node_name} {filespace_name from above command}

rem node {node_name}
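The three steps above lend themselves to scripting. A minimal sketch, with a hypothetical helper name and output format; in practice each generated line would be fed to the dsmadmc administrative client:

```python
def decommission_commands(node, filespaces):
    """Build the server-command sequence for retiring a node:
    list its filespaces, delete each one, then remove the node.
    Sketch only -- the function and its output format are
    illustrative; real use would pipe each line to dsmadmc."""
    cmds = [f"query filespace {node}"]
    cmds += [f'delete filespace {node} "{fs}"' for fs in filespaces]
    cmds.append(f"remove node {node}")
    return cmds
```

Running it for one node with two filespaces yields the query, the two deletes, and the final remove, in that order.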

 Regards

 Stephen Mackereth
 Senior UNIX Consultant / Senior Storage Administrator (TSM)
 ITS Unix Systems Support
 Coles Myer Ltd.


This email and any attachments may contain privileged and
confidential information and are intended for the named
addressee only.  If you have received this e-mail in error,
please notify the sender and delete this e-mail immediately.
Any confidentiality, privilege or copyright is not waived or
lost because this e-mail has been sent to you in error.  It
is your responsibility to check this e-mail and any
attachments for viruses.



Re: Longlasting tape-reclamation run

2001-03-08 Thread Joe Faracchio

High and low points of one-drive reclamation:

1) It's not MOVE DATA, as far as I know.  It looks like this:
  Storage Pool Name: TAPEPOOL
  Storage Pool Type: Primary
  Device Class Name: 3590
Estimated Capacity (MB): 4,149,171.5

   Reclaim Storage Pool: RECLPOOL<  here's the kicker
 Maximum Size Threshold: No Limit
 Access: Read/Write
Description: the pool for all onsite data
  Overflow Location:
  Cache Migrated Files?:
 Collocate?: Filespace
  Reclamation Threshold: 19
Maximum Scratch Volumes Allowed: 1
  Delay Period for Volume Reuse: 6 Day(s)

2) there's a 'bug' where migration back to TAPEPOOL begins almost
immediately if I don't have RECLPOOL set to HI=100 LO=99.
So consequently I use a perl script that watches and does the right thing
(changes 100/99 to 0/0 when it sees *any* data stored in RECLPOOL).

3) the perl script proved to be extra helpful because I do other things
like:  1 - set the tape to Read/Only if it's still FILLING; otherwise it
chases its tail re-writing the data to the same tape until it can't.
   2 - add one or two scratch tapes to the tape pool's maxscratch count
every time it migrates the data back, thereby ensuring a user gets his own
tape if he doesn't have one already.
   3 - use an 'are we busy' algorithm to allow other work, enabling
14 hours for reclamation over the course of the day.
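The threshold-flipping described in items 2 and 3 boils down to one small decision: park RECLPOOL at HI=100/LO=99 while it is empty, and drop to 0/0 as soon as any data lands in it. A sketch of just that decision, with a hypothetical function name (the real watcher is a perl script that issues 'update stgpool' through the admin client):

```python
def next_thresholds(reclpool_pct_util):
    """Pick migration thresholds for the reclaim pool.
    Empty pool: hold at 100/99 so migration stays off.
    Any data present: 0/0, pushing it straight back to TAPEPOOL.
    (Illustrative only; mirrors the logic described above.)"""
    if reclpool_pct_util > 0:
        return (0, 0)
    return (100, 99)
```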

4) You have to have mountret > 1 because in this setup a new tape is
written to a few times if the users don't have data on some other tape.

5) Is it faster? I don't know, don't care, because with only two tape
drives but using only one for reclamation, I can let it run noon to
midnight (after expire inventory and db backup in the a.m.).  I average
about 5 or 6 reclaims a day.

6) It's true about collocation and cluster passes: the more users on a
tape, the more passes the system makes.  But I think I'm saving the tape
read time since it's coming off disk.


7) Using Wanda's pool cap average SQL select statement (THANKS WANDA!!!) I
get numbers like:

 STGPOOL_NAME   STATUS    AVG_PCTU
 ------------   -------   --------
 COPYPOOL       FULL         89.03
 TAPEPOOL       FILLING      51.80
 TAPEPOOL       FULL         88.43
with 138 tapes filling and 60 full (I hope to get it to 150/50
respectively).

What more can I say?  I'm nuts! :-) but with 1000 1-Gig pc/mac users 
it works.
... joe.f.

On Fri, 9 Mar 2001, Geoff Fitzhardinge wrote:

> Joe Faracchio wrote to me:
>
> >Try using a disk-file area and one tape reclamation.
> >
> >... joe.f.
> >
> >Joseph A Faracchio,  Systems Programmer, UC Berkeley
>
> Hello Joe,
>
> Have you had good results with this technique to speed things up?
>
> It would be nice if there was some documentation to explain the practical
> differences in the behaviour of
>
> (1) Reclaim (tape pool to itself)
> (2) Move Data (tape pool to itself)
> (3) Move Data (tape pool to disk pool) followed by Migrate (disk pool to
> tape pool)
>
> There has been discussion on this list, and some suggestions in Tivoli
> documentation,
> on differences between Reclaim and Move Data with respect to how much space
> is
> recovered - Reclaim removes empty space from within aggregates, Move Data
> just
> copies aggregates without reconstructing them.
>
> What is important to me in the present context (reclaiming tapes with Notes
> Agent files
> and large numbers of "collocation clusters") is the elapsed time taken by
> the operation,
> and also to some extent the resource consumption.
>
> I find that (1) and (2) are similar and abysmally slow, but use very little
> CPU or database
> I/O (as I said in my original posting, most of the elapsed time is waiting
> for input tape
> positioning).
>
> (3) on the other hand is MUCH faster (typically an hour or two for the Move
> Data and
> same again for the Migrate, although sometimes the Migrate bogs down a
> bit).   CPU
> consumption is quite significant.  It is noticeable that the Move Data to
> disk does not
> issue the ANR1142I messages (counting "clusters") which are issued both by
> the
> Reclaim and by Move Data to tape.
>
> Why is the Move Data to disk so much faster than the Move Data to tape,
> when the input
> tape is the same?
>
> I take it that performance using your suggestion (4), Reclaim to Disk File
> then Migrate
> back to tape pool, will be quite similar to my (3).
>
> I haven't tried (4) because I am reluctant to invest more disk space for a
> housekeeping
> function, and my normal disk pool (uncached) is mostly free during the day.
> Unfortunately
> it is a bit manual, I initially hoped I could automate it by defining the
> disk pool as the reclaim
> pool, but found that this is not allowed to be a random access pool

Re: Manually "inactivate" NT files

2001-03-08 Thread Joe Faracchio

that's what we do: 3 months and then delete filespace
but we have it as a perl program that runs overnight
because it takes a lot of time and cycles.
Come to think of it, that's the worst time!!!
I need to change it from 3 a.m. to 8 p.m.!!!

.. joe.f.

Joseph A Faracchio,  Systems Programmer, UC Berkeley





Re: Manually "inactivate" NT files

2001-03-08 Thread Shawn Drew

That's what we used to do, but manually.
So you have a script that periodically checks
for nodes that haven't backed up in 3 months,
then automatically removes the filespaces, then the node?

Or are you saying that you periodically check for
nodes ready to be deleted, and then the script does the rest?
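The periodic check described above could be sketched like this. The function name and data shape are hypothetical; in practice the last-backup dates would come from a server SELECT against the nodes table:

```python
from datetime import datetime, timedelta

def stale_nodes(last_backup, now, days=90):
    """Return node names whose most recent backup is older than
    `days`.  `last_backup` maps node name -> datetime of the last
    backup.  Sketch only; names and data source are illustrative."""
    cutoff = now - timedelta(days=days)
    return sorted(name for name, when in last_backup.items()
                  if when < cutoff)
```

The nodes this returns would then be the candidates for the delete-filespace/remove-node pass.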

shawn


___
Shawn Drew
Tivoli IT - ADSM/TSM Systems Administrator
[EMAIL PROTECTED]





Re: WindowsNT Disk Imaging.

2001-03-08 Thread Michael Bruewer

TDP for NT was a third-party product and has been withdrawn due to
license problems. That's what was said at a Tivoli Road Show last week.
They announced that there will be an announcement of a new product
(third party again, I don't remember the name) for this purpose next
month, maybe available Q4/01.

Meanwhile you could use any image software (DriveImage, Ghost, ...)
and dump the images to a TSM HSM partition shared via Samba.

Regards,

Michael Bruewer

On 8 Mar 2001, at 9:45, Doug Thorneycroft wrote:

> Quite a while ago, Tivoli announced plans for Tivoli Data Protection for
> Workgroups to support sending a disk image to, and retrieving a disk image
> from, the TSM server. If I remember right, this was to be available shortly
> after the release of the Version 4 server.
>
> Does anyone know if this is still in the works?




Dr. Michael Brüwer
RZ der Univ. Hohenheim 70593 Stuttgart
[EMAIL PROTECTED]   www.uni-hohenheim.de/~bruewer
Fon: +49-711-459-3838  Fax: -3449
PGP Public Key:
   RSA: http://www.uni-hohenheim.de/~bruewer/pgpkey_V2