SV: Not seeing a console screen for the renamed DSMCAD.NLM

2004-06-29 Thread Hougaard.Flemming FHG
Hi Timothy

It seems to me you have an error in your DSM.OPT file... do you use the
statement "NWWAITONERROR"? If you do, this could be your problem (seen that
a lot ;o) )!

Regards
Flemming

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] on behalf of
Timothy Hughes
Sent: 28 June 2004 19:56
To: [EMAIL PROTECTED]
Subject: Not seeing a console screen for the renamed DSMCAD.NLM


 Hi folks,

While working on our Novell cluster we had to rename our DSMCAD.NLMs
because of a problem with instances being unloaded (see the information
below, which I got from the IBM site).

Yes, this can be prevented if there are multiple copies, each with a
different name, of the DSMC.NLM and/or DSMCAD.NLM.

For example, if there is a need for multiple schedules to be controlled
by DSMCAD.NLM then the following can be done:

1. Make a copy of the DSMCAD.NLM and give it a different name (for example,
DSMCAD2.NLM)
2. Load the first instance with "LOAD DSMCAD
-OPTFILE=VolName:\path\to\dsm.opt"
3. Load the second instance with "LOAD DSMCAD2
-OPTFILE=VolName:\path\to\dsm2.opt"

When the first instance (DSMCAD) gets unloaded, the second instance
(DSMCAD2) will not be unloaded.

The problem is we don't see a console screen for the renamed DSMCAD.NLM.

Does anyone have any ideas why we don't see a console screen?



Novell 6 sp3
TSM novell client 5.2.2

Thanks in advance for any help!




___
www.kmd.dk   www.kundenet.kmd.dk   www.eboks.dk   www.civitas.dk   www.netborger.dk


If you received this e-mail by mistake, please notify me and delete it. Thank you.
Our mission is to enhance the efficiency of the public sector and improve its service 
of the general public. 


DDS4 drive cleaning problem

2004-06-29 Thread Gottfried Scheckenbach
Hi all,
I have:
TSM-Server 5.2.2.5 on Win2003
tsmscsi.sys version 5.2.2.5123
DDS4 Autoloader (HP C5713A)
a regularly checked-in cleaning cartridge
On "clean drive" I get:
ANR8300E I/O error on library LBDDS4
(OP=8401C058, CC=306, KEY=02, ASC=3A, ASCQ=00,
SENSE=70.00.02.00.00.00.00.0E.0-
0.00.00.00.3A.00.00.00.00.F4.00.00.00.00.00.00.00.00.00.-
00.00., Description=Drive or media failure)
And in the event log it shows up (Source: TsmScsi):
A check condition error has occurred on device \Device\lb5.1.0.3
during Move Medium with completion code DD_DRIVE_OR_MEDIA_FAILURE.
Refer to the device's SCSI reference for appropriate action.
Dump Data: byte 0x3E=KEY, byte 0x3D=ASC, byte 0x3C=ASCQ
Normal tape movements and read/write operations work without any error.
What's wrong here? Does anybody have a hint?
Regards,
Gottfried

--
   ___
 \/ [EMAIL PROTECTED]
  \  / Consultant Open Systems <> Mobil 0172-6710891
   \/
   /\ Xtelligent IT Consulting GmbH
  /  \ Am Kalkofen 8 <> D-61206 Woellstadt
 /\ Tel./Fax. 0-700-98355443 <> http://www.xtelligent.de




tcpadminport

2004-06-29 Thread Remco Post
Hi all,

I just thought I had found _the_ solution for preventing admin access to our
tsm server from just any system that can connect to port 1500: setting
tcpadminport on our server to something different from tcpport.

Well, great: now we have two ports that allow admin connections (tcpport
and tcpadminport) and one (tcpport) that allows backup/restore-style
client connections.

Did I miss something, or did the TSM server development team have
something different in mind when they thought up this option? I'd like
to have one port for client connections (tcpport) and one for admin
connections (tcpadminport), so I can actually limit access to our
admin interface based on IP address.

Reading the manual entry for tcpport: "Using different port numbers for
the options TCPPORT and TCPADMINPORT enables you to create one set of
firewall rules for client sessions and another set for other session
types (administrative sessions, server-to-server sessions, SNMP subagent
sessions, storage agent sessions, library client sessions, managed
server sessions, and event server sessions)." TSM development did have
exactly what I want in mind, but when I read "By using the
SESSIONINITIATION parameter of REGISTER and UPDATE NODE, you can close
the port specified by TCPPORT at the firewall, and specify nodes whose
scheduled sessions will be started from the server." I get confused and
start to think that either I missed something or somebody else did ;-)
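For reference, the split being discussed is set in dsmserv.opt along these
lines (the port numbers here are only examples):

```
TCPPort      1500
TCPADMINPort 1510
```

The firewall can then restrict the admin port to known workstations while leaving the client port open to backup/restore sessions.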



--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams


Speeding up my SQL statement

2004-06-29 Thread Loon, E.J. van - SPLXM
Hi *SM-ers!
I'm using the following SQL statement to retrieve obsolete Oracle backup
files:

select node_name, filespace_name, ll_name, date(backup_date) from backups
where ((days(current_date) - days(backup_date) >= 100)) and hl_name='//'

This returns all Oracle backup files, created more than 100 days ago. These
should not exist anymore.
Since this statement scans ALL (millions!!) backup objects for a hit, it
runs for more than a day!
I'm looking for a way to reduce this, but I don't know how to do this.
If I could limit the scan to only the objects belonging to Oracle
nodes (in our shop, node names end with -ORC) it would finish much
quicker, but I don't know how.
Can anybody tell me if this is possible at all?
Thank you very much for any reply in advance!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
For information, services and offers, please visit our web site: http://www.klm.com. 
This e-mail and any attachment may contain confidential and privileged material 
intended for the addressee only. If you are not the addressee, you are notified that 
no part of the e-mail or any attachment may be disclosed, copied or distributed, and 
that any other action related to this e-mail or attachment is strictly prohibited, and 
may be unlawful. If you have received this e-mail by error, please notify the sender 
immediately by return e-mail, and delete this message. Koninklijke Luchtvaart 
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for 
the incorrect or incomplete transmission of this e-mail or any attachments, nor 
responsible for any delay in receipt.
**


Re: Speeding up my SQL statement

2004-06-29 Thread Lambelet,Rene,VEVEY,GLOBE Center CSC
hi Eric, you could add 

node_name like '%ORC' to the where clause...
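Folded into the original statement, the suggestion would read roughly as
follows (untested; the -ORC suffix is site-specific):

```
select node_name, filespace_name, ll_name, date(backup_date) from backups
where days(current_date) - days(backup_date) >= 100
  and hl_name='//' and node_name like '%ORC'
```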

best regards,

René LAMBELET
NESTEC  SA
GLOBE - Global Business Excellence
Central Support Center
SD/ESN
Av. Nestlé 55  CH-1800 Vevey (Switzerland) 
tél +41 (0)21 924'35'43   fax +41 (0)21 924'45'89   local
REL-5 01
mailto:[EMAIL PROTECTED]

This message is intended only for the use of the addressee
and may contain information that is privileged and
confidential.




Re: SV: Not seeing a console screen for the renamed DSMCAD.NLM

2004-06-29 Thread Timothy Hughes
Hi Hougaard,

We added the statement "nwexitnlmprompt no" yesterday; it did not make a difference.
I believe the nwexitnlmprompt option replaced nwwaitonerror.


"Hougaard.Flemming FHG" wrote:

> Hi Timothy
>
> It seems to me you have an error in you DSM.OPT file... do you use the statement 
> "NWWAITONERROR"?? If you do this could be your problem (seen that a lot ;o) )!
>
> Regards
> Flemming
>
> -Oprindelig meddelelse-
> Fra: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] vegne af
> Timothy Hughes
> Sendt: 28. juni 2004 19:56
> Til: [EMAIL PROTECTED]
> Emne: Not seeing a console screen for the renamed DSMCAD.NLM
>
>  Hi folks,
>
> In working on our (novell) cluster we had to rename our DSMCAD.NLM's
> because of
> a unloading of instances problem. (see below info I got off the IBM
> site)
>
> Yes, this can be prevented if there are multiple copies, each with a
> different name, of the DSMC.NLM and/or DSMCAD.NLM.
>
> For example, if there is a need for multiple schedules to be controlled
> by DSMCAD.NLM then the following can be done:
>
> 1. Make a copy of the DSMCAD.NLM and give it a different name (that is.,
> DSMCAD2.NLM)
> 2. Load the first instance with "LOAD DSMCAD
> -OPTFILE=VolName:\path\to\dsm.opt"
> 3. Load the second instance with "LOAD DSMCAD2
> -OPTFILE=VolName:\path\to\dsm2.opt"
>
> When the first instance (DSMCAD) gets unloaded, the second instance
> (DSMCAD2) will not be unloaded.
>
> The problem is we don't see a console screen for the renamed DSMCAD.NLM.
>
> Does anyone have any Ideas why we don't see a console screen?
>
> Novell 6 sp3
> TSM novell client 5.2.2
>
> Thanks in Advance for any help!
>
> ___
> www.kmd.dk   www.kundenet.kmd.dk   www.eboks.dk   www.civitas.dk   www.netborger.dk
>
> Hvis du har modtaget denne mail ved en fejl vil jeg gerne, at du informerer mig og 
> sletter den.
> KMD skaber it-services, der fremmer effektivitet hos det offentlige, erhvervslivet 
> og borgerne.
>
> If you received this e-mail by mistake, please notify me and delete it. Thank you.
> Our mission is to enhance the efficiency of the public sector and improve its 
> service of the general public.


Re: Speeding up my SQL statement

2004-06-29 Thread Loon, E.J. van - SPLXM
Hi Rene!
I thought about that, but would that help? If TSM still has to scan every
object for a match, it wouldn't help much... That's the problem, I don't
know how SQL works...
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: Speeding up my SQL statement

2004-06-29 Thread Richard Sims
>I thought about that, but would that help? If TSM still has to scan every
>object for a match, it wouldn't help much... That's the problem, I don't know
>how SQL works...

Eric - Your perception is correct: if you scan a table, it will traverse the
   whole thing.  Because the Backups table is the predominant (= huge)
table in a TSM system, that will take a long time.  Some optimization can be
had through well-formulated queries, but the opportunities for doing so are
rather rare.  The only thing that really helps SQL performance is indexing,
where short, key columns are also kept in a hash.  Because TSM SQL is an
overlay on a B-tree database, I don't believe there is any indexing
opportunity, and so SQL scans are painful.

Sometimes, the best thing to do is perform Query Backup from the client side,
where dedicated logic gets results faster.  It is often possible to accomplish
that by masquerading as each defined TSM node, via VIRTUALNodename.
Another approach to finding flotsam, of course, is to inspect the last backup
time in filespaces, which helps narrow down the search arena.
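A client-side query of the kind described might look something like this (a
sketch; the node name is a placeholder, and the exact option spelling should
be checked against the client manual):

```
dsmc query backup "*" -subdir=yes -virtualnodename=SOMENODE
```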

   Richard Sims


Advice needed - different backup's from same node ?

2004-06-29 Thread Brian Ipsen
Hi,

 I need some advice on how to handle backup for a specific node... On
weekdays, the backup should run "normally" but ignore specific file types,
e.g. .pst. The .pst files should be backed up during the weekend instead...
Is this possible without installing two instances of the scheduler, or do I
have to register two nodes for this server: one for the weekday backups and
another for handling the .pst files on Saturday/Sunday?
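One possible approach, sketched here with made-up schedule names and paths
(verify the syntax against the admin reference for your server level), is to
exclude *.pst in the regular option file and run the weekend backup as a
command schedule using a second option file that includes them:

```
define schedule STANDARD WEEKDAY_INCR action=incremental starttime=20:00 dayofweek=weekday
define schedule STANDARD WEEKEND_PST action=command objects='dsmc incremental -optfile=SYS:\tsm\dsm_pst.opt' starttime=20:00 dayofweek=saturday
```

This keeps a single scheduler instance and a single registered node; only the option file used for the weekend run differs.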

 From a licensing point of view, I would assume that registering two
schedulers/nodes on the same host counts as one CPU (as far as I remember,
IBM counts CPUs rather than hosts)...

**
Mvh/Rgds.
Brian Ipsen
PROGRESSIVE IT A/S

Århusgade 88, 3.sal Tel: +45 3525 5070
DK-2100 København Ø Fax: +45 3525 5090
Denmark Dir. +45 3525 5080
Email:[EMAIL PROTECTED]

***



---
This mail was scanned for virus and spam by Progressive IT A/S.
---


Re: Speeding up my SQL statement

2004-06-29 Thread Loon, E.J. van - SPLXM
Hi Richard!
> Sometimes, the best thing to do is perform Query Backup from the client
> side, where dedicated logic gets results faster.  It is often possible to
> accomplish that by masquerading as each defined TSM node, via
> VIRTUALNodename.

True, but since the Oracle backups are made through the TDP client, a Q
BACKUP from a BA client will return zilch...

> Another approach to finding flotsam, of course, is to inspect the last
> backup time in filespaces, which helps narrow down the search arena.

Also true (and an implemented approach here) but that will only show you if
the backup is working. I'm trying to narrow things down to the database
backup level. I want to see if there are obsolete backup pieces or if Oracle
delete jobs are running fine. A quick scan showed a lot of backup pieces
dating back to February for a specific node. I bet I have to contact the
database guys for this and my guess is that the Oracle delete jobs are not
running on this machine... Wouldn't be the first time...
Thank you very much for your reply!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




File location

2004-06-29 Thread Joni Moyer
Hello!

If the options file for the tsm server is located at
/usr/tivoli/tsm/server/bin, then where would be the recommended location
for the device configuration and volume history file?  The database and
recovery log will be located at /tsmdev/log/** and /tsmdev/db/** and the
storage pools will be /tsmdev/stgpool/**.  Would this be a correct
location/setup for an AIX TSM server 5.2?  I am very new to this
environment and I am trying to set this up and move off of MVS.  Any
suggestions/ideas would be appreciated!  Thanks!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



raw partitions

2004-06-29 Thread Joni Moyer
Hello all!

I was reading the performance tuning guide and it states that we should use
raw partitions for server db, log and disk storage pool volumes for an AIX
server and I was just wondering if this is true and what the benefits are
of configuring volumes in this manner?  As I understand it, if we configure
raw logical volumes, the AIX volume group will need to be carved into raw
logical volumes, as opposed to standard UNIX filesystems.  When defining TSM
volumes, would we then need to define and format them on the raw logical
volume?


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: File location

2004-06-29 Thread Lloyd Dieter
Joni,

There probably isn't a "correct" location.

If using filesystems, I usually put my primary db copy under "/tsmdbpri",
with the mirror copy under "/tsmdbmir", and the recovery log under
"/tsmlogpri" and "/tsmlogmir" respectively.

I also have customers that stick them under "/usr/tivoli/tsm/server/db"
and "/usr/tivoli/tsm/server/log".

If you have mirrored storage for the DB and log, then you probably won't
use the TSM mirroring facilities.

And, of course, if you are running raw volumes, then there is no
corresponding mount point.

So...it depends.  You can put it where you like, but I'd recommend
something easy to type, because you will have to type the path in sooner
or later, and "/tsmsyspri/db0.dsm" is a lot easier to type (and remember)
than "/usr/tivoli/tsm/server/etc/etc/etc".

As to volhist and devconfig, I put them in a couple of different
locations, and on different disk devices, and make sure to get a copy that
winds up off of the machine, whether by e-mail, ftp, or a DRM plan file
that gets sent off each day.

HTH!

-Lloyd


On Tue, 29 Jun 2004 09:13:18 -0400
Joni Moyer <[EMAIL PROTECTED]> wrote thusly:



--
-
Lloyd Dieter-   Senior Technology Consultant
 Registered Linux User 285528
   Synergy, Inc.   http://www.synergyinc.cc   [EMAIL PROTECTED]
 Main:585-389-1260fax:585-389-1267
-


Re: File location

2004-06-29 Thread Richard van Denzel
Joni,

Preferably you want to store the devconfig and volhist in multiple
locations, so you can put multiple stanzas in dsmserv.opt (one for the
default location and one for e.g. /tsmdev/db).
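The multiple stanzas would look something like this in dsmserv.opt (the
paths here are examples):

```
VOLUMEHISTORY /usr/tivoli/tsm/server/bin/volhist.out
VOLUMEHISTORY /tsmdev/db/volhist.out
DEVCONFIG     /usr/tivoli/tsm/server/bin/devconfig.out
DEVCONFIG     /tsmdev/db/devconfig.out
```

With two stanzas per file, the server keeps a copy on each location every time it updates them.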

Also, you don't want to have the TSM DB and the TSM log on the same set of
disks. Put the log on a separate set of disks.
Richard.








Re: Speeding up my SQL statement

2004-06-29 Thread Prather, Wanda
Hi Guys,

The SQL tables we have to play with in TSM are indeed indexed.

If you do select * from syscat.columns, you will see there are fields
called INDEX_KEYSEQ and INDEX_ORDER.

The BACKUPS table is indexed on NODE_NAME, then FILESPACE_NAME, then
FILESPACE_ID, then STATE, in that order.
Speaking from experience, I can tell you the query DOES run faster if you
select on an indexed field.
So if you could select on a specific NODE_NAME, you would do a lot better.

What I don't know is the effect of using a generic match like %ORC%; I don't
know if that negates the indexing or not.

What I have done in the past was to write a host script that generated the
list of node_names for me, then iteratively ran the SELECT on the backups
table with "where node_name=BLAH", sending the output to a file.

Running the individual queries against one node_name at a time finished in
about 3 hours, whereas running against the entire backups table (as in your
original query) ran for over 24 hours (before I gave up and cancelled it!).
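The host-script approach Wanda describes could be sketched along these lines
in Python (a sketch only, not her actual script; the dsmadmc credentials,
flags, and node names are placeholders for your own site's values):

```python
import subprocess

# Template for the per-node SELECT: restricting on NODE_NAME (an indexed
# column) lets each query avoid a full scan of the BACKUPS table.
QUERY = ("select node_name, filespace_name, ll_name, date(backup_date) "
         "from backups where node_name='{node}' and hl_name='//' "
         "and days(current_date) - days(backup_date) >= 100")

def per_node_queries(nodes):
    """Build one SELECT statement per node name."""
    return [QUERY.format(node=n) for n in nodes]

def run(nodes, outfile="obsolete_pieces.txt"):
    # In real use the node list would itself come from dsmadmc, e.g.
    # "select node_name from nodes where node_name like '%-ORC'".
    with open(outfile, "w") as out:
        for sql in per_node_queries(nodes):
            subprocess.run(["dsmadmc", "-id=admin", "-password=secret",
                            "-dataonly=yes", sql],
                           stdout=out, check=False)

if __name__ == "__main__":
    for sql in per_node_queries(["DB1-ORC", "DB2-ORC"]):
        print(sql)
```

Each iteration hands dsmadmc a query constrained to a single node, which is the pattern Wanda reports finishing in about 3 hours.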

Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)





Scheduled restore not working

2004-06-29 Thread Troy Frank
I've got a schedule that backups up certain "ServerA" directories at
noon.  Another schedule is supposed to then restore those directories to
"ServerB" at 1PM.  The restore seems to be going to ServerA , instead of
ServerB.  Server A & B are both Netware6.  However, serverA is
Traditional Volumes, and serverB is NSS volumes.

In the web administration for TSM the restore looks like this
(abbreviated)...

Action   - Restore

Options - -ifnewer -subdir=yes

Objects - "Vol1:\data\share\*" "serverB/vol1:\data\"


I also tried it with different variations of quotes/no quotes around
the Objects, which didn't seem to matter.  Both schedules, the backup
and the restore, are associated with ServerA.

Troy
UW Medical Foundation



Confidentiality Notice follows:

The information in this message (and the documents attached to it, if any)
is confidential and may be legally privileged. It is intended solely for
the addressee. Access to this message by anyone else is unauthorized. If
you are not the intended recipient, any disclosure, copying, distribution
or any action taken, or omitted to be taken in reliance on it is
prohibited and may be unlawful. If you have received this message in
error, please delete all electronic copies of this message (and the
documents attached to it, if any), destroy any hard copies you may have
created and notify me immediately by replying to this email. Thank you.


Re: DDS4 drive cleaning problem

2004-06-29 Thread Joe Crnjanski
We used to have an IBM DDS4 six-tape autoloader.
I remember the biggest challenge was telling TSM that one of the tapes is a
cleaning tape.
Are you sure that when you checked in the cleaner, TSM knew it was a
cleaning tape?
If you issue "q libv", you should see "Cleaner" under the Status column for
the cleaning tape.


Maybe I'm on the wrong track, but this is one suggestion.
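For comparison, a cleaner cartridge is normally checked in with something
along these lines (library name, volume name, and cleanings count here are
placeholders):

```
checkin libvolume LBDDS4 CLN001 status=cleaner cleanings=5 checklabel=no
```

Afterwards, q libv should show the volume with status "Cleaner" and the remaining cleanings count.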

Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web:  www.infinitynetwork.com





Summary database not containing all information

2004-06-29 Thread Cain, Jason (Corporate)
I have the activity summary retention period set to 30 days; however, when
I run an archive or backup, the information does not show up.  It does,
however, show up sometimes.  Is there something I must run to update the
summary table?  Also, what kind of extra space or overhead is needed to
keep summary info for a year or so?  I can't seem to find any kind of
calculation to determine the extra overhead this may cause on the server.

Any help answering these two questions would be greatly appreciated.

Thanks,
Jason Cain


From An ATL7100 to a P3000

2004-06-29 Thread Argeropoulos, Bonnie
Hello,

We are planning to change our tape library from an ATL7100 to a P3000, and
from four DLT7000 tape drives to six DLT8000 tape drives, on an F50 running
AIX 4.3.3 and TSM 5.1.62.  We are currently putting our process for this
change together and have a few questions.  We thought we could delete the
paths to the drives, the path to the library, the drives, and then the
library.  We would then connect the new library, physically move the DLT IV
tapes over to the new library, and redefine everything.  We are now
concerned that if we delete the library, the database will no longer know
about the tapes.  Would anyone have any suggestions or see any problems
with this plan?

Thanks for any help,

Bonnie

CONFIDENTIAL COMMUNICATION - PLEASE READ PRIVACY NOTICE
This communication is confidential and may be read only by its intended
recipient(s). It may contain legally privileged and protected information.
If you believe you have received this communication in error, please "Reply"
to the Sender and so indicate or call (603) 663-2800. Then, please promptly
"Delete" this communication from your computer. This communication, and any
information contained herein, may only be forwarded, printed, disclosed,
copied or disseminated by those specifically authorized to do so.
UNAUTHORIZED DISCLOSURE MAY RESULT IN LEGAL LIABILITY FOR THOSE PERSONS
RESPONSIBLE.


Remove

2004-06-29 Thread De La Uz, Jose
Please remove me from the list.

Thanks!

Jose E. de la Uz
LAN/Mac Administrator
Jackson National Life Insurance
Denver Office
303.846.3813
[EMAIL PROTECTED]


Re: From An ATL7100 to a P3000

2004-06-29 Thread Bos, Karel
Hi,

The database will still know the tapes (and their contents). But you will have
to do a checkin libv stat=scratch before doing a checkin libv stat=priv: ITSM
will let you check in empty volumes as private, but will not accept private
tapes being checked in as scratch.

Regards,

Karel.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Argeropoulos, Bonnie
Sent: dinsdag 29 juni 2004 16:34
To: [EMAIL PROTECTED]
Subject: From An ATL7100 to a P3000


Hello,

We are planning on changing our tape library from an ATL7100 to a P3000, and
from four DLT7000 tape drives to six DLT8000 tape drives, on an F50 running
AIX 4.3.3 and TSM 5.1.62.  We are currently putting together our process for
this change and have found we have a few questions.  We thought we could delete
the paths to the drives, the path to the library, the drives, and then the
library.  We would then connect the new library, physically move the DLT IV
tapes over to the new library, and redefine everything.  We are now concerned
that if we delete the library, the database will no longer know about the
tapes.  Would anyone have any suggestions or see any problems with this plan?

Thanks for any help,

Bonnie



Re: Summary database not containing all information

2004-06-29 Thread Prather, Wanda
Jason,

What level is your server and what level are your clients?

There was a bug that caused TSM V4 clients to not update the summary table,
or to always record 0 bytes.
Is it possible that your "sometimes" data is coming from later level
clients, and your "missing" information is from older clients?

When you get the server and the clients to TSM 5.2, the summary information
is recorded correctly again.
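If you want to check what is actually landing in the table, a server-side SELECT along these lines should show it (an untested sketch; trim the WHERE clause to taste):

   select start_time, activity, entity, bytes from summary where activity='BACKUP'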

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Cain, Jason (Corporate)
Sent: Tuesday, June 29, 2004 10:29 AM
To: [EMAIL PROTECTED]
Subject: Summary database not containing all information


I have the Activity for Summary Retention period set to 30 days, however
when I run an archive or backup the information does not show up.  It does
however, show up sometimes.  Is there something I must run to update the
summary table?  Also, what kind of extra space or overhead is needed to keep
summary info for a year or so.  I can't seem to find any kind of
calculation to determine the extra overhead this may cause on the server.

Any help answering these two questions would be greatly appreciated.

Thanks,
Jason Cain


Re: File location

2004-06-29 Thread Remco Post
On Tue, 29 Jun 2004 09:13:18 -0400
Joni Moyer <[EMAIL PROTECTED]> wrote:

> Hello!
>
> If the options file for the tsm server is located at
> /usr/tivoli/tsm/server/bin, then where would be the recommended
> location for the device configuration and volume history file?  The
> database and recovery log will be located at /tsmdev/log/** and
> /tsmdev/db/** and the storage pools will be /tsmdev/stgpool/**.  Would
> this be a correct location/setup for an AIX TSM server 5.2?  I am very
> new to this environment and I am trying to set this up and move off of
> MVS.  Any suggestions/ideas would be appreciated!  Thanks!
>
> 
> Joni Moyer
> Highmark
> Storage Systems
> Work:(717)302-6603
> Fax:(717)302-5974
> [EMAIL PROTECTED]
> 


What I just did on our new server:

/tsm/inst<x>/  for start/stop scripts, dsmserv.opt and dsmserv.dsk


/tsm/db<n>a/  for one mirror of the dbvols (n = 1 to 8)
/tsm/db<n>b/  for the other side of the mirror
/tsm/data<m>/ where m is 01 to 16, for disk storage pools
/tsm/log<x>{a|b}/ for the log volumes; x matches the x in /tsm/inst<x>/,
  a and b are each one side of the mirror

-- each of these is a separate disk; as you may have guessed by now, we
   use TSM mirroring for db and log volumes.

/tsm/scripts  for a load of server scripts, macros, monitoring scripts
  and whatnot

/var/tsm/server   for some extra stuff like an extra copy of the volhist
  and the devconf

The most important thing is that _you_ know what does what, and what goes where.

An alternative set-up I have on a smaller test environment goes like this:

/tsm/<inst>/  as a root for a TSM server instance
/tsm/<inst>/etc/  for start/stop scripts, devconf and volhist backups,
  dsmserv.opt and dsmserv.dsk
/tsm/<inst>/db{a|b}/  for database volumes
/tsm/<inst>/log{a|b}/ for log volumes

/tsm/data/ for storage pools
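The test-environment layout above can be sketched with a few mkdir calls; this is only an illustration, created under /tmp so it is harmless to run, with "inst1" as a hypothetical instance name:

```shell
# Build the test-environment layout from the post under a throwaway base dir.
BASE=/tmp/tsm-layout-demo
for d in etc dba dbb loga logb; do
    mkdir -p "$BASE/tsm/inst1/$d"   # per-instance root: etc, db mirrors, log mirrors
done
mkdir -p "$BASE/tsm/data"           # storage pool volumes sit outside the instance root
ls "$BASE/tsm/inst1"
```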

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams


Re: Scheduled restore not working

2004-06-29 Thread Stapleton, Mark
NetWare is notorious for not allowing files backed up from NSS or
compressed volumes to be restored to volumes that are otherwise. Try running
the process manually and see what happens, and also take a look at the
schedule log, dsmsched.log.

--
Mark Stapleton 

>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
>Behalf Of Troy Frank
>Sent: Tuesday, June 29, 2004 9:23 AM
>To: [EMAIL PROTECTED]
>Subject: Scheduled restore not working
>
>I've got a schedule that backups up certain "ServerA" directories at
>noon.  Another schedule is supposed to then restore those 
>directories to
>"ServerB" at 1PM.  The restore seems to be going to ServerA , 
>instead of
>ServerB.  Server A & B are both Netware6.  However, serverA is
>Traditional Volumes, and serverB is NSS volumes.
>
>In the web administration for TSM the restore looks like this
>(abbreviated)...
>
>Action   - Restore
>
>Options - -ifnewer -subdir=yes
>
>Objects - "Vol1:\data\share\*" "serverB/vol1:\data\"
>
>
>I also tried it with different variations of quotes/no quotes around
>the Objects, which didn't seem to matter.  Both schedules, the backup
>and the restore, are associated with ServerA.
>
>Troy
>UW Medical Foundation
>
>
>
>Confidentiality Notice follows:
>
>The information in this message (and the documents attached to 
>it, if any)
>is confidential and may be legally privileged. It is intended 
>solely for
>the addressee. Access to this message by anyone else is 
>unauthorized. If
>you are not the intended recipient, any disclosure, copying, 
>distribution
>or any action taken, or omitted to be taken in reliance on it is
>prohibited and may be unlawful. If you have received this message in
>error, please delete all electronic copies of this message (and the
>documents attached to it, if any), destroy any hard copies you may have
>created and notify me immediately by replying to this email. Thank you.
>
>


Backup copygroups

2004-06-29 Thread Moses Show
Hi people,
I have a burning question which hopefully somebody can provide an
answer to. I have been asked to back up SQL servers daily using TSM. The
details and instructions for how these backups are performed, and any
characteristics, are contained in the respective management class of the
policy set in the policy domain. I also need to back up these servers at
month-end, and some every five weeks. What I am trying to find out is whether
it is possible to have more than one backup copygroup under this management
class. The reason is that the periodic backups will have different
retention times from the dailies, and possibly different retain-only-version
values.

Is it feasible to create two separate backup copygroups to manage this?
If so, how would you get the server to differentiate between whether a backup
is run daily or at the end of a period?

Once again any help would be gratefully received, examples would be even
more gratefully received.
==
This communication, together with any attachments hereto or links contained herein, is 
for the sole use of the intended recipient(s) and may contain information that is 
confidential or legally protected. If you are not the intended recipient, you are 
hereby notified that any review, disclosure, copying, dissemination, distribution or 
use of this communication is STRICTLY PROHIBITED.  If you have received this 
communication in error, please notify the sender immediately by return e-mail message 
and delete the original and all copies of the communication, along with any 
attachments hereto or links herein, from your system.

==
The St. Paul Travelers e-mail system made this annotation on 06/29/2004, 11:56:58 AM.


Re: Summary database not containing all information

2004-06-29 Thread Cain, Jason (Corporate)
Version 5.2.0.1, and most of our clients are at that level also.  Do you know what
kind of degradation would be caused by setting the summary retention to around a
year?  I don't know how much DB overhead is involved.

Jason 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Prather, Wanda
Sent: Tuesday, June 29, 2004 10:42 AM
To: [EMAIL PROTECTED]
Subject: Re: Summary database not containing all information


Jason,

What level is your server and what level are your clients?

There was a bug that caused TSM V4 clients to not update the summary table,
or to always record 0 bytes.
Is it possible that your "sometimes" data is coming from later level
clients, and your "missing" information is from older clients?

When you get the server and the clients to TSM 5.2, the summary information
is recorded correctly again.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Cain, Jason (Corporate)
Sent: Tuesday, June 29, 2004 10:29 AM
To: [EMAIL PROTECTED]
Subject: Summary database not containing all information


I have the Activity for Summary Retention period set to 30 days, however
when I run an archive or backup the information does not show up.  It does
however, show up sometimes.  Is there something I must run to update the
summary table?  Also, what kind of extra space or overhead is needed to keep
summary info for a year or so.  I can't seem to find any kind of
calculation to determine the extra overhead this may cause on the server.

Any help answering these two questions would be greatly appreciated.

Thanks,
Jason Cain


about db2 online restore

2004-06-29 Thread jianyu he
Hi,
Need your help!

When I restored the database online, I got the following information:

db2 => rollforward database hjy218 to end of logs and stop
SQL0956C  Not enough storage is available in the database heap to process the
statement.  SQLSTATE=57011

So my database hjy218 is always left in rollforward pending state. Could you
tell me what I can do?
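SQL0956C usually points at the DBHEAP database configuration parameter being too small for the rollforward. One possible fix is to raise it and retry; the value 2400 pages below is only an example, so size it for your own system:

   db2 update db cfg for hjy218 using DBHEAP 2400
   db2 rollforward database hjy218 to end of logs and stop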


Thanks very much

Andy





Re: DDS4 drive cleaning problem

2004-06-29 Thread Gottfried Scheckenbach
Thanks Joe!
> If you issue "q libv" under status column for cleaning tape you should
> see "Cleaner"
Yes, the tape status is "cleaner"... I checked it in with "checkin
libvol lbdds4 clean stat=clean checklabel=no cleanings=50". Ok, I'll get
the tape checked tomorrow by someone who is onsite...
Regards,
Gottfried

Joe Crnjanski wrote:
We had an IBM DDS4 six-tape autoloader before.
I remember the biggest challenge was telling TSM that one of the tapes is the
cleaning tape.
Are you sure that when you checked in the cleaner, TSM knew it was a cleaning
tape?
If you issue "q libv", under the status column for the cleaning tape you should
see "Cleaner".
Maybe I'm on the wrong track, but this is one of the suggestions.
Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web:  www.infinitynetwork.com

-Original Message-
From: Gottfried Scheckenbach
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 5:19 AM
To: [EMAIL PROTECTED]
Subject: DDS4 drive cleaning problem
Hi all,
I have:
TSM-Server 5.2.2.5 on Win2003
tsmscsi.sys version 5.2.2.5123
DDS4 Autoloader (HP C5713A)
regulary checked in cleaning cartridge
On "clean drive" I get:
ANR8300E I/O error on library LBDDS4
(OP=8401C058, CC=306, KEY=02, ASC=3A, ASCQ=00,
SENSE=70.00.02.00.00.00.00.0E.0-
0.00.00.00.3A.00.00.00.00.F4.00.00.00.00.00.00.00.00.00.-
00.00., Description=Drive or media failure)

And in the event log it shows up (Source: TsmScsi):
A check condition error has occured on device \Device\lb5.1.0.3
during Move Medium with completion code DD_DRIVE_OR_MEDIA_FAILURE.
Refer to the device's SCSI reference for appropriate action.
Dump Data: byte 0x3E=KEY, byte 0x3D=ASC, byte 0x3C=ASCQ

Normal tape movements and read/write operations work without any error.
What's wrong here? Has anybody some hint?
Regards,
Gottfried
--
   ___
 \/ [EMAIL PROTECTED]
  \  / Consultant Open Systems <> Mobil 0172-6710891
   \/
   /\ Xtelligent IT Consulting GmbH
  /  \ Am Kalkofen 8 <> D-61206 Woellstadt
 /\ Tel./Fax. 0-700-98355443 <> http://www.xtelligent.de




Re: Backup copygroups

2004-06-29 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Moses Show
>Have been asked to backup SQL servers daily using TSM . The
>details and instructions for how these backups are performed and any
>characteristics are contained in the respective management class of the
>policy set in the policy domain. I also need to backup these servers at
>monthend and some every five weeks. What I am trying to find 
>out is if it
>is possible to have more than one backup copygroup under this 
>management
>class. The reason is because the period backups will have different
>retention times to the dailys and possibly different retain 
>only version values.
>
>Is it feasible to create two separate backup copygroups to 
>manage this ?
>If so how would you get the server to differentiate between if 
>a backup is run daily or at the end of periods.

No, you can't have multiple active copygroups for a given management
class. What you'll need to do is create a second nodename (with its
corresponding option file which points at a non-default management
class).

However, you might want to consider short-term archives of data, rather
than backups. With that, you can use the ARCHMC flag to control archive
destinations and retention.
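The ARCHMC route could look something like the following from the client command line; the path, management class name, and description are placeholders, not anything from the original post:

   dsmc archive "/data/sql/*" -subdir=yes -archmc=MONTHLY -description="Month-end archive"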

--
Mark Stapleton


Re: Backup copygroups

2004-06-29 Thread Del Hoobler
Mark,

Data Protection for SQL does not support ARCHIVE.
It only supports BACKUP.

You can use a second NODENAME like you suggested or...
you can also look into using SET type backups which get
a unique name and can be bound to a separate management class.
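As a rough illustration of the set-backup idea, the Data Protection for SQL command line might be driven something like this; the node option file, database name, and exact syntax are from memory and should be verified against your DP for SQL version:

   tdpsqlc backup MyDatabase set /tsmoptfile=dsm_monthly.opt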

Thanks,

Del


"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote on 06/29/2004
12:24:56 PM:

> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
> Behalf Of Moses Show
> >Have been asked to backup SQL servers daily using TSM . The
> >details and instructions for how these backups are performed and any
> >characteristics are contained in the respective management class of the
> >policy set in the policy domain. I also need to backup these servers at
> >monthend and some every five weeks. What I am trying to find
> >out is if it
> >is possible to have more than one backup copygroup under this
> >management
> >class. The reason is because the period backups will have different
> >retention times to the dailys and possibly different retain
> >only version values.
> >
> >Is it feasible to create two separate backup copygroups to
> >manage this ?
> >If so how would you get the server to differentiate between if
> >a backup is run daily or at the end of periods.
>
> No, you can't have multiple active copygroups for a given management
> class. What you'll need to do is create a second nodename (with its
> corresponding option file which points at a non-default management
> class).
>
> However, you might want to consider short-term archives of data, rather
> than backups. With that, you can use the ARCHMC flag to control archive
> destinations and retention.
>
> --
> Mark Stapleton


Re: From An ATL7100 to a P3000

2004-06-29 Thread Argeropoulos, Bonnie
Hello,

I don't understand... why would I want to check the tapes in as scratch if they
already have data on them? Why wouldn't I just physically move them and then
run an audit library? At first you say to check in my data tapes as scratch,
but then you say that private tapes cannot be checked in as scratch...

Thanks,

Bonnie

-Original Message-
From: Bos, Karel [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 11:00 AM
To: [EMAIL PROTECTED]
Subject: Re: From An ATL7100 to a P3000


Hi,

The database will still know the tapes (and the content of the tapes). But
you will have to do a checkin libv stat=scratch before doing checkin libv
stat=priv. ITSM will let you checkin empty volumes as private, but will not
accept private tapes being checked in as scratch.

Regards,

Karel.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Argeropoulos, Bonnie
Sent: dinsdag 29 juni 2004 16:34
To: [EMAIL PROTECTED]
Subject: From An ATL7100 to a P3000


Hello,

We are planning on changing our tape library from an ATL7100 to a P3000, and
from four DLT7000 tape drives to six DLT8000 tape drives, on an F50 running
AIX 4.3.3 and TSM 5.1.62.  We are currently putting together our process for
this change and have found we have a few questions.  We thought we could delete
the paths to the drives, the path to the library, the drives, and then the
library.  We would then connect the new library, physically move the DLT IV
tapes over to the new library, and redefine everything.  We are now concerned
that if we delete the library, the database will no longer know about the
tapes.  Would anyone have any suggestions or see any problems with this plan?

Thanks for any help,

Bonnie



Re: Strange LTO2 behaviour on Linux

2004-06-29 Thread Ben Bullock
Hmmm, I'll hazard a guess... Seeing tapes fill up before they
should typically has to do with the way the drive is defined. We had
similar problems on an initial installation. I don't have an LTO library
at this site to compare against, but I believe we had to set the read and
write formats to "ULTRIUMC".

Do a "q drive f=d" and see what yours looks like.
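In TSM the read/write format actually lives on the device class rather than on the drive, so alongside "q drive f=d" it is worth checking the device class; "ltoclass" below is a placeholder name, and ULTRIUM2C is the LTO2-with-compression format value:

   q devclass ltoclass f=d
   update devclass ltoclass format=ultrium2c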

Ben


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard van Denzel
Sent: Monday, June 28, 2004 2:21 AM
To: [EMAIL PROTECTED]
Subject: Strange LTO2 behaviour on Linux


Hi All,

Has anyone seen this behaviour before?

Archiving on LTO2 (in an IBM 3584): when the first tape is full, a second
tape is assigned and the last file of the archive is written to that tape.
Right after that, the second tape also gets the status full, even though
there is only around 15GB of data on it. Using LTO1 tapes with the same
operation (on the same drives in the same library) works fine.

The server is a RH Linux 3.0 AS with TSM 5.2.2.0.

Can anyone tell if this is normal behaviour or if there is a fix for
this?

Regards,

Richard.


Re: raw partitions

2004-06-29 Thread asr
==> In article <[EMAIL PROTECTED]>, Joni Moyer <[EMAIL PROTECTED]> writes:


> Hello all!

> I was reading the performance tuning guide and it states that we should use
> raw partitions for server db, log and disk storage pool volumes for an AIX
> server and I was just wondering if this is true and what the benefits are of
> configuring volumes in this manner?

Simpler, faster, less space overhead.


Liabilities:

- Someone who doesn't understand TSM and LVs might _change_ the size of an LV,
  and that will bust it, as far as TSM is concerned

- LVs have 15 character name limits.  If you have several TSM instances on one
  piece of hardware, this can get confusing unless you think about it first.
  Here's my standard so far:

  /dev/rtglmaildblv01a
    r       -  raw LV
    t       -  'TSM lv'
    glmail  -  instance name
    db      -  database, as opposed to log or data
    lv01    -  DB volume 1
    a       -  mirror a

  This breaks down if you want to label the LV according to instance -and-
  domain.  So far, this hasn't been too much of a problem for me, and will
  only really be an issue for data volumes.



> As I understand it, if we configure raw logical volumes, the AIX volume
> group will need to be applied to a raw logical volume, as opposed to a
> standard UNIX filesytem.

I can't parse this.

> When defining TSM volumes, would we then need to define and format them to
> the raw logical volume?!?


You don't need to format them (which is the single most emotionally important
reason to use them, as far as I'm concerned: speed of execution).  You can:

root# mklv -y 'tfoobardblv01a' [volume group] [# PP's] [hdisk of residence]

then you can immediately

YOUR_SERV> def dbvol /dev/rtfoobardblv01a


- Allen S. Rout


automation for drive cleaning

2004-06-29 Thread Luc Beaudoin
Hi everybody

I have a TSM 5.2.0.2 server on Win2k.
I have an IBM library 3583-R72 with 6 drives.

Is there an easy way to schedule a job to, say, clean all the drives every 2
weeks or so?

Thanks

Luc Beaudoin
Network Administrator/SAN/TSM
Hopital General Juif S.M.B.D.
Tel: (514) 340-8222 ext:8254


Re: automation for drive cleaning

2004-06-29 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Luc Beaudoin
>I have TSM 5.2.0.2 server on  win2k.
>I have a IBM library 3583-R72 with 6 drives .
>
>Is there a easy way to schedule a job to let say clean all 
>drives every 2
>weeks or 

3583 libraries are designed to be self-cleaning. Please consult the
latest version of the 3583 operator's guide on how to properly check in
a cleaning tape, and how to set up the library to use it.

--
Mark Stapleton


Re: automation for drive cleaning

2004-06-29 Thread Coats, Jack
Sure,

Put a scheduled administrative command in for each drive, whenever you want.
The command is

clean drive LIBRARY TAPEDRIVE0

Make sure to space them so there is enough time between commands for the drive
to clean.

Or put them in a command script with a wait (or 10 minutes of expiration :)
between each clean command.
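A per-drive administrative schedule could look like the line below; the schedule, library, and drive names and the times are placeholders, and you would define one schedule per drive, staggered so the cleanings don't overlap:

   define schedule clean_dr0 type=administrative cmd="clean drive LIB3583 DRIVE0" active=yes starttime=06:00 period=14 perunits=days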

-Original Message-
From: Luc Beaudoin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 2:20 PM
To: [EMAIL PROTECTED]
Subject: automation for drive cleaning


Hi everybody

I have a TSM 5.2.0.2 server on Win2k.
I have an IBM library 3583-R72 with 6 drives.

Is there an easy way to schedule a job to, say, clean all the drives every 2
weeks or so?

Thanks

Luc Beaudoin
Network Administrator/SAN/TSM
Hopital General Juif S.M.B.D.
Tel: (514) 340-8222 ext:8254


TSM CLUSTER?

2004-06-29 Thread Tae Kim
Hi guys, I was wondering if any of you are clustering your TSM servers. What
HA solutions are you using other than HACMP on AIX?


Re: Scheduled restore not working

2004-06-29 Thread Troy Frank
Yeah, I've run into that before.  It's actually not really NetWare's
fault.  They have a switch in the SMS modules so that backup software
can tell SMS to force an uncompress of files before they get backed up.
That way, the files can get restored anywhere.  ARCserve, for instance,
supports this.

It would be nice **HINT HINT IBM** if the TSM NetWare client also
supported it.  As it stands now, in a disaster situation, we'd have to
remember what type of volumes each server had (NSS, TFS, compressed,
uncompressed) and recreate them EXACTLY before we'd be able to restore
anything.   Yes, it's in our documentation, but still.  Yuck.

At any rate, I don't think that's the problem here, as the files I'm
interested in are not getting compressed.  Compression's turned on at
the volume level, but only 244K is actually being compressed on the
entire drive.  The dsmsched.log, by the way, just says "File
blah\blah.bla exists, skipping"  for everything.  It completes, and
seems to think everything is fine.

--Troy

>>> [EMAIL PROTECTED] 6/29/2004 10:54:22 AM >>>
NetWare is notorious for not allowing files backed up with NSS or
compressed volumes to restore to volumes that are otherwise. Try
running
the process manually and see what happens, and also take a look at the
schedule log dsmsched.log.

--
Mark Stapleton

>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
>Behalf Of Troy Frank
>Sent: Tuesday, June 29, 2004 9:23 AM
>To: [EMAIL PROTECTED]
>Subject: Scheduled restore not working
>
>I've got a schedule that backups up certain "ServerA" directories at
>noon. Another schedule is supposed to then restore those
>directories to
>"ServerB" at 1PM. The restore seems to be going to ServerA ,
>instead of
>ServerB. Server A & B are both Netware6. However, serverA is
>Traditional Volumes, and serverB is NSS volumes.
>
>In the web administration for TSM the restore looks like this
>(abbreviated)...
>
>Action - Restore
>
>Options - -ifnewer -subdir=yes
>
>Objects - "Vol1:\data\share\*" "serverB/vol1:\data\"
>
>
>I also tried it with different variations of quotes/no quotes around
>the Objects, which didn't seem to matter. Both schedules, the backup
>and the restore, are associated with ServerA.
>
>Troy
>UW Medical Foundation
>




Re: From An ATL7100 to a P3000

2004-06-29 Thread Bos, Karel
Hi,

If you delete a library, all volumes are deleted from the inventory (no
library = no inventory). If you define your new library, it will be an empty
one (new library = no inventory). Audit library will not check in volumes; it
will only remove missing volumes from the library inventory.

Now you have an empty library (in ITSM) and your volumes are sitting in the new
library (mixed scratch and private).
- If you run an audit library, no volumes will be checked in to ITSM;
- If you run a checkin with status=private, both your private tapes and your
scratch tapes will be checked in as private, leaving you with a new library and
no scratch tapes available. I don't think you will want to do this;
- If you run a checkin with status=scratch, ITSM will not accept the private
tapes as scratch and will ONLY check in the scratch tapes as scratch. Then run
another checkin, this time with status=private, for the remaining (private)
tapes.
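In command form, the two-pass check-in described above would be along these lines; the library name is a placeholder, and checklabel=barcode assumes the library has a barcode reader:

   checkin libvolume NEWLIB search=yes status=scratch checklabel=barcode
   checkin libvolume NEWLIB search=yes status=private checklabel=barcode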

Another way is to manually sort your tapes and check them in as either scratch
or private.

Hope the above helps.

Regards,

Karel

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Argeropoulos, Bonnie
Sent: dinsdag 29 juni 2004 19:09
To: [EMAIL PROTECTED]
Subject: Re: From An ATL7100 to a P3000


Hello,

I don't understand... why would I want to check the tapes in as scratch if they
already have data on them? Why wouldn't I just physically move them and then
run an audit library? At first you say to check in my data tapes as scratch,
but then you say that private tapes cannot be checked in as scratch...

Thanks,

Bonnie

-Original Message-
From: Bos, Karel [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 11:00 AM
To: [EMAIL PROTECTED]
Subject: Re: From An ATL7100 to a P3000


Hi,

The database will still know the tapes (and the content of the tapes). But
you will have to do a checkin libv stat=scratch before doing checkin libv
stat=priv. ITSM will let you checkin empty volumes as private, but will not
accept private tapes being checked in as scratch.

Regards,

Karel.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Argeropoulos, Bonnie
Sent: dinsdag 29 juni 2004 16:34
To: [EMAIL PROTECTED]
Subject: From An ATL7100 to a P3000


Hello,

We are planning on changing our tape library from an ATL7100 to a P3000, and
from four DLT7000 tape drives to six DLT8000 tape drives, on an F50 running
AIX 4.3.3 and TSM 5.1.62.  We are currently putting together our process for
this change and have found we have a few questions.  We thought we could delete
the paths to the drives, the path to the library, the drives, and then the
library.  We would then connect the new library, physically move the DLT IV
tapes over to the new library, and redefine everything.  We are now concerned
that if we delete the library, the database will no longer know about the
tapes.  Would anyone have any suggestions or see any problems with this plan?

Thanks for any help,

Bonnie



LTO estimated capacity drops

2004-06-29 Thread Gordon Woodward
Just analysing our TSM install today, as tapes have been getting chewed up quicker
than normal, despite no configuration changes being made on the server or client
side. We have client-side compression turned on (hardware compression off) and we
were getting about 140-180GB of data on each tape before it was marked as full.
Looking today, though, capacity has practically halved (or worse in some cases),
with tapes holding anywhere from 45-90GB of data before being labeled full.

Any ideas what might cause this? The nature of our data hasn't changed at all.
Could dirty drives cause such a drastic reduction in capacity?

TIA!

Gordon


--

This e-mail may contain confidential and/or privileged information. If you are not the 
intended recipient (or have received this e-mail in error) please notify the sender 
immediately and destroy this e-mail. Any unauthorized copying, disclosure or 
distribution of the material in this e-mail is strictly forbidden.