Re: Stats

2003-07-22 Thread P Baines
This should also show you the amount of active data on a node in MB:

select sum(capacity*pct_util/100) from filespaces where node_name=''
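
For a quick one-off check, this can be run straight from the
administrative command line; the admin ID, password and node name below
are placeholders:

dsmadmc -id=admin -password=secret "select sum(capacity*pct_util/100)
from filespaces where node_name='NODEX'"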



-Original Message-
From: William Rosette [mailto:[EMAIL PROTECTED]
Sent: 21 July 2003 19:36
To: [EMAIL PROTECTED]
Subject: Re: Stats


Thanks,
Good one,
Bill Rosette
Data Center/IS/Papa Johns International
WWJD



Richard Sims <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
07/21/2003 01:39 PM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
Subject: Re: Stats

>I am looking for a way to figure a total Gb of active files on a
particular
>node. ...

Hello, Bill - This question came up a few weeks ago, and the excellent
suggestion was this expedient:

Active files, number and bytes:  Do 'EXPort Node NodeName
                                 FILESpace=FileSpaceName
                                 FILEData=BACKUPActive Preview=Yes'.
                                 Message ANR0986I will report the
                                 number of files and bytes.
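
For example, with a hypothetical node and filespace (the ANR0986I
message lands in the activity log):

export node NODEX filespace=/home filedata=backupactive preview=yes
q actlog search=ANR0986I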

(This is immortalized in http://people.bu.edu/rbs/ADSM.QuickFacts)

   Richard Sims, BU



Re: SQL for Node Total/Used space

2003-07-23 Thread P Baines
Maybe this:
SELECT nodes.NODE_NAME, sum(filespaces.CAPACITY) as capacity,
sum(filespaces.CAPACITY*filespaces.PCT_UTIL/100) AS "MB USED"
FROM NODES, FILESPACES
WHERE NODES.NODE_NAME=FILESPACES.NODE_NAME
AND PLATFORM_NAME LIKE 'W%NT'
AND CAST(LASTACC_TIME AS DATE)>'07/01/2003'
GROUP BY nodes.node_name

-Original Message-
From: John Naylor [mailto:[EMAIL PROTECTED]
Sent: 23 July 2003 13:19
To: [EMAIL PROTECTED]
Subject: SQL for Node Total/Used space


People,
I have been playing around with some SQL to extract, for a particular
platform, the total capacity/used per filespace for recently accessed
clients. What I have below does this. I would also like to pull this
out summed per node instead of showing the individual filespaces, and
I cannot get it to work.
Suggestions gratefully received.
SELECT NODES.NODE_NAME,FILESPACE_NAME,CAPACITY, -
(CAPACITY*PCT_UTIL/100) AS "MB USED" -
FROM NODES,FILESPACES  -
WHERE NODES.NODE_NAME=FILESPACES.NODE_NAME -
AND PLATFORM_NAME LIKE 'W%NT' -
AND CAST(LASTACC_TIME AS DATE)>'07/01/2003'

John








Re: select from actlog VS query actlog performance

2004-11-19 Thread P Baines
Rather the other way round: the SQL is being converted to a native
database call. I would presume most query commands would be quicker
than their equivalent "SQL" queries.

For tuning SQL queries you can look at the indexing of the columns in a
table:

select tabname, colname, colno, index_keyseq, index_order from columns
where tabname='ACTLOG'

TABNAME  COLNAME     COLNO  INDEX_KEYSEQ  INDEX_ORDER
-------  ----------  -----  ------------  -----------
ACTLOG   DATE_TIME       1             1  A
ACTLOG   MSGNO           2
ACTLOG   SEVERITY        3
ACTLOG   MESSAGE         4
ACTLOG   ORIGINATOR      5
ACTLOG   NODENAME        6
ACTLOG   OWNERNAME       7
ACTLOG   SCHEDNAME       8
ACTLOG   DOMAINNAME      9
ACTLOG   SESSID         10
ACTLOG   SERVERNAME     11

Here you can see it is only indexed on DATE_TIME. Other tables have more
indexed columns. Running functions on where-clause columns may well
cause the query to do a full table scan anyway (not using the index),
but that's just a guess.
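
If that guess is right, then comparing the raw DATE_TIME column against
a timestamp expression, rather than wrapping the column in cast(),
should at least give the index a chance. A sketch:

select date_time, message from actlog
where date_time >= current_timestamp - 24 hours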

(I notice that you are using process=1234 in your where clause, so maybe
you have a later release of TSM, I'm on 5.1 and don't have that column!)

Remember as well that SQL queries use the free space in your database,
so make sure you have plenty if you're doing big queries.

Paul. 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Warren, Matthew (Retail)
Sent: Friday 19 November 2004 15:55
To: [EMAIL PROTECTED]
Subject: select from actlog VS query actlog performance


Hello TSM'ers



I'm doing some scripting that uses actlog queries fairly heavily, and I
have noticed that

Select * from actlog where cast(date_time as date)=current_date and
process=1234

is a lot slower than

Q actlog begint=-08:00 se=1234 (say, it's 8am in the morning...)


Although you need to be careful that you are actually getting what you
want with the latter version.


Is TSM doing anything internally to generate a SQL statement that works
quicker than mine but gives the same/similar result? - I am assuming
that internally TSM takes q actlog (and other q commands) and generates
a SQL statement it then processes against the TSM DB, formatting the
result to generate the query output as non-tables.


Thanks,

Matt.










Re: TSM DR Exercise Restore Ideas

2005-01-13 Thread P Baines
Hello Adam,

An idea might be to define two node definitions for each client,
NODENAME and NODENAME_ACTIVE. NODENAME would be your normal daily
backups keeping x versions for y months. NODENAME_ACTIVE would be a
separate incremental backup of the same client, bound to a management
class that keeps one version of the backup (no inactive versions). This
has the disadvantage that you are doing two incremental backups per day
of the same client; the advantage is that the NODENAME_ACTIVE restore
would only search through active data, so search times and the number
of mounts would be greatly reduced (search times should be next to
nothing for a full restore).

I haven't implemented this, so I don't know how it works in practice,
but it might be an idea for you. If you are currently retaining backup
versions for a month or more, then it should use less library space
than you use now with full backups each week.
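
A rough sketch of the server-side policy for the _ACTIVE node, with all
names hypothetical; the key is a backup copy group that keeps exactly
one version and drops deleted files immediately:

register node NODEX_ACTIVE secretpw domain=ACTIVE_DOM
define mgmtclass ACTIVE_DOM STANDARD ACTIVE_MC
define copygroup ACTIVE_DOM STANDARD ACTIVE_MC type=backup -
  destination=TAPEPOOL verexists=1 verdeleted=0 retextra=0 retonly=0
assign defmgmtclass ACTIVE_DOM STANDARD ACTIVE_MC
activate policyset ACTIVE_DOM STANDARD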

Regards,
Paul.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Adam Sulis
Sent: Thursday 13 January 2005 16:56
To: ADSM-L@VM.MARIST.EDU
Subject: TSM DR Exercise Restore Ideas


Hi All:

I am looking for suggestions to improve efficiency in our Backup
strategy:

Current:
We backup 200 nodes to a DASD diskpool (large enough for one day's
backup), then migrate to a VTS, with an offsite copy going to 3490
(uncollocated, of course). To improve day-to-day restores, we
implemented a weekly full backup for our nodes. This means that
restores come from a maximum of the last week's worth of tapes (we only
need the active versions of the files). Also, these full backups make
our Disaster Recovery restores easier to work with (when we get to the
DR location, we call for tapes from the past weekend, and all
incrementals since). The problem is that we only have the city-to-city
bandwidth to offer weekly full backups to the nodes participating in
the DR plan...

Proposed:
What I would like to do is remove the VTS, have the backups go direct
to DASD, then migrate to a collocated 3490, with an offsite copy same
as before. Having done this, we would have improved backup performance
for all nodes, and provided great restore performance for all nodes.

Problem: Without weekly full backups, the offsite tapes required for a
restore would grow to a huge number (Operating System files would be on
a very early tape, and every tape generated since would have
incremental data for that node). Is there any way to make a shorter
list of tapes required for DR? How is everyone else dealing with the
offsite tapes? Again, we are only interested in the "active versions".
I could aggressively reclaim the offsite pool to create newer tapes,
but that's a lot of TSM thrashing for little gain (we still cannot say
to the vault "send me all tapes since last Saturday"). Daily backupsets
do not work within the DR plan, as far as I can tell. (Unless I can
generate a backupset to remain inside TSM, which would get copied to
the offsite daily, and still be available at the DR site...)

I must be missing a command or a concept - how is everyone else doing
this?
Any thoughts appreciated.


Adam Sulis
DNIS6-5
2nd Floor, Tunney's Pasture L002
Tel (613) 998-9093
[EMAIL PROTECTED]





Re: How to get a tape drive to stream?

2005-01-26 Thread P Baines
Have a look at your dsmaccnt.log, in particular the MediaWait and
CommWait columns. If you see large figures in both, then this suggests
to me that the network connection is not feeding the TSM server fast
enough to support streaming. You must either increase your network
throughput or back up to disk and migrate to tape. Some drives do
compression, so the native streaming rate might not equal the
real-world streaming rate (I see between 15 MB/sec and 48 MB/sec for a
3590 fibre-attached drive.) You might try compressing on the client
first?
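
If you do try client compression, it is a single option in the client
options file (a sketch; it trades client CPU for effective wire
throughput):

compression yes
compressalways yes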

Good luck,
Paul.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Coats, Jack
Sent: Wednesday 26 January 2005 17:16
To: ADSM-L@VM.MARIST.EDU
Subject: How to get a tape drive to stream?


But does someone have a pointer to some 'tips & tricks' or FAQ to help
me get these LTO tape drives streaming with TSM on Windows 2000? 
  
The drives seem to be doing about 3MB/sec per drive, no matter what is
going on in the server.  This is about the right speed for stop/start
programmed I/O processing.

 

My config is TSM 4.3 on a Windows 2K server (2 x 1.2 GHz Xeon, 1 GB RAM)

My SCSI-attached LTO-1 drives are on Adaptec controllers (IBM branded),
with two tape drives per controller.  I have 3 identical SCSI
controllers with two drives each (one also has my 3583 library on it).

 

Disks (6 72G 10K drives, in two partitions) are attached to one RAID
card, and are RAID 5.

 

I can't get even a single tape drive to stream, with nothing else going
on in my system (DB backup only: no client backups, no reclamation, no
expiration, etc.)

 

Any advice/suggestions are appreciated. ...

 

... Desperate in Houston ... JC





Re: Poor performance with TSM Storage Agent on Solaris

2005-02-24 Thread P Baines
Have you run a client performance trace? This may give you an idea about
where the client is spending its time. What type of disk is the data
stored on that you want to back up from / restore to? Are these the same
type of disks where you see 60 MB/sec? (How many parallel sessions do
you run to get 60 MB/sec?) I also have problems getting good backup
rates LAN-free, and my investigations are leading me to believe that the
bottleneck is the client disk.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of David 
Hendén
Sent: Wednesday 23 February 2005 17:43
To: ADSM-L@VM.MARIST.EDU
Subject: Poor performance with TSM Storage Agent on Solaris


Dear all,

We are experiencing performance problems with the TSM Storage Agent for
Solaris.

This is regardless of whether we are doing restores or backups. The
problem manifests itself mainly when restoring or backing up data with
DB2, but I get the same poor performance when sending gig-sized files
from the file system.

Performance seems to be CPU bound, and each restore/backup session
takes 100% of one CPU. So, on a 400 MHz machine I can get around
10-15 MB/s LAN-free, and on the faster machines with 1200 MHz CPUs
we're seeing speeds of around 20 MB/s. When specifying parallelism in
the DB2 databases to use multiple sessions, we get 2 x 10-15 MB/s and
also 2 CPUs at 100%. Truss says that almost all of this CPU time is
spent in userland.

The native speed of the 9840C drives is 35 MB/s, and on AIX machines
and Slowlaris machines with Oracle we see speeds of about 60 MB/s per
session over the SAN.

At first I thought it could be the loopback interface, but I didn't see
any performance gain when switching to shared memory. I have also tried
all the performance recommendations from IBM.

I am going to trace the storage agent tomorrow to see if I can shed
some light on what all the CPU time is spent on.

On to my questions:

Has anyone experienced the same extreme CPU load when using the storage
agent on Solaris?

Could it possibly be a patch-related problem, since the Solaris Oracle
machines are more heavily patched than the DB2 dittos?

The environment:

Serverside:
TSM server 5.2.3.2 on AIX 5.2.
16 StorageTek 9840C tape drives in powderhorn libraries using ACSLS.
Everything is SAN connected with Cisco directors.

Clientside:
Solaris 5.8 64bit kernel.
Gresham EDT 6.4.3.0 used to connect to the ACSLS.
Storage Agent 5.2.3.5 on Solaris 5.8.
TSM client 5.2.3.5.
A range of different SUN hardware: different machines, different HBAs 
(both
Sbus and PCI).

-David

--
David Hendén
Exist AB, Sweden
+46 70 3992759





Re: If we all complain, do you think they will add the WEB gui back?

2005-03-11 Thread P Baines
Yes, I agree most strongly. Please IBM, give us access to the admin API.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Steven Harris
Sent: Friday 11 March 2005 01:49
To: ADSM-L@VM.MARIST.EDU
Subject: Re: If we all complain, do you think they will add the WEB gui
back?


Mark,

That is a great post from you, and yes, ISC is Version 1 and will
improve with time and use.

But, if you will allow me to reiterate a previous post:
There is a new DSMAPI admin API that is used by the ISC to perform TSM
commands. One additional way forward would be to expose this API, so
that the talented members of the user community can develop the
interfaces that *they* desire using perl/python/PHP/java or whatever.

This is an almost zero-cost solution for IBM that will make life very
much easier for them and for us. Look at the utility of the standard
TSM API. The adsmpipe program that was written for version 2 still
works well on version 5. Many applications have been built on top of
this API and it provides great functionality for both IBM (e.g. the
i5/OS BRMS interface) and others. The admin API could be as good - to
see the sort of thing that is do-able, look at
http://www.phpmyadmin.net/home_page/index.php , and particularly have a
play under the demos tab.

Please IBM, stop treating us like mushrooms and let in the light.

Steve

Steve Harris
TSM and AIX Admin
Between jobs at the moment
Brisbane Australia

- Original Message -
From: "Mark D. Rodriguez" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 11, 2005 4:19 AM
Subject: Re: [ADSM-L] If we all complain, do you think they will add the
WEB
gui back?


> Hi Everyone,
>
> I am an IBM Business Partner.  I have been listening to what everyone
> has been saying about the ISC/AC.  I, also, have some concerns about
> this since I not only have to use it, I have to be able to sell it to
> others to use.  I have been talking with several IBM'ers in this
> regard.  The people I have been talking to are on both the TSM
> development side and on the channel (sales and marketing) side of the
> house.  Obviously the channel people are very concerned when anything
> might possibly affect the ability of IBM BP's to sell their products.
> As such, I have been seeking to get them to put pressure on the
> development side to get some sort of improvements made.  I have talked
> with the developers to help them see the issues that I see with my
> customers as well as what I have learned from all of you on this list.
> Also, you should know that IBM is listening and they are willing to
> make the necessary changes to resolve these issues.  They are
> monitoring this list all the time, so the only real survey you need to
> do is keep posting to the list!
>
> Now before I go too much further, I must make this statement (i.e.
> here comes the legal disclaimer): anything that I am about to disclose
> here is simply the results and/or contexts of conversations that I had
> with various IBM'ers and in no way implies any commitment on their or
> my part to provide any of the things we discussed.  In other words, we
> were just talking, but they were not promising anything.  The biggest
> problem I see with the ISC/AC is not the application itself; change is
> inevitable and in fact in this case somewhat overdue.  The problem
> with the ISC/AC is that there is not any reasonable migration path
> from the Web Admin GUI to the ISC/AC.  They just flipped a switch and
> now you use the ISC/AC, and oh by the way, it doesn't support any of
> your older TSM servers.  Not a good plan, and I think they recognize
> it as well.  However, I will defend the developers to the point that
> there were very good reasons for the decisions that they made and how
> we wound up where we are today.  Given a similar situation I would
> have made similar choices, with the exception that I would have spent
> the time and resources to have a better migration path.  As you all
> have probably guessed by now, the ISC/AC isn't going away any time
> soon, nor should it.  We have been long overdue for an improved GUI
> admin interface.  The ISC/AC isn't perfect by any stretch of the
> imagination, but I have every confidence that IBM will develop it into
> a very mature tool as quickly as possible.  I will mention some of the
> "POSSIBLE" enhancements that are upcoming later in this note.
>
> The focus of my discussion with the IBM powers that be was around how
> do we give the TSM community a better migration path to the ISC/AC
> environment.  The key issue we focused on for creating a better
> migration path was the re-release of the Web Admin GUI.  Obviously the
> best thing would be to re-release it and have it support all of the
> 5.3 enhancements, but that comes at a cost.  The trade-off would be to
> take resources away from the ISC/AC development in order to uplift the
> Web Admin GUI.  I don't think that is in the best interests of the TSM
> community as a whole.  I suspect that what will h

Re: 4.2.2.12 to 5.1.0.0 upgrade and PATHS

2003-09-11 Thread P Baines
If you are upgrading a library manager and its clients, you should be
careful. The upgrade process will define paths on all your servers, and
it all looks and works OK - until you want to define a new drive (or
delete/redefine a drive) on a library client. Then you will discover
that the 4.2 Admin Guide is wrong (5.1 is correct): all drive and path
definitions now need to be done on the library manager. Only a library
definition is done on the clients. If you don't use library managers
and clients, the upgrade process should define your paths and you
should be OK. (My experience on AIX.)
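
For reference, a sketch of the 5.1-style definitions, all done on the
library manager (the server, library, drive and device names here are
made up):

define library LIB3494 libtype=349x shared=yes
define path SERVER1 LIB3494 srctype=server desttype=library device=/dev/lmcp0
define drive LIB3494 DRIVE1
define path SERVER1 DRIVE1 srctype=server desttype=drive library=LIB3494 device=/dev/rmt1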

Paul.

-Original Message-
From: David Nicholson [mailto:[EMAIL PROTECTED]
Sent: 11 September 2003 19:17
To: [EMAIL PROTECTED]
Subject: Re: 4.2.2.12 to 5.1.0.0 upgrade and PATHS


My recollection is that we simply had to define the paths. However, SOP
says be prepared for anything. I also recall that we were told to
remove the DRIVE defs from all TSM library clients. The DRIVEs only
need to be defined to the library manager/owner in a library-sharing
environment now. The devices still need to be defined to the OS on the
library client, but TSM does not require the DRIVE to be defined to
library clients. I am not sure that it is required to delete the DRIVE
defs from clients, but it was recommended.

Dave Nicholson
Whirlpool Corporation





Farren Minns <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
09/11/2003 11:34 AM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: 4.2.2.12 to 5.1.0.0 upgrade and PATHS


I'm assuming that the Library and Device definitions will still be in
place, so then it'll just be the paths that need defining. Or do I need
to define new drives and lib too?

Farren
Lawrence Clark <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
09/11/2003 04:07 PM
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
Subject: Re: 4.2.2.12 to 5.1.0.0 upgrade and PATHS







On the other hand, we upgraded from 4.2 to 5.1, have two 3494 libraries,
and did not have to manually define the paths..

>>> [EMAIL PROTECTED] 09/11/03 10:22AM >>>
We made the same upgrade 4.2 to 5.1 on our AIX server. We also have a
3494 library. We definitely had to manually define the paths after the
upgrade.


Dave Nicholson
Whirlpool Corporation





Farren Minns <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
09/11/2003 07:53 AM
Please respond to "ADSM: Dist Stor Manager"


To:     [EMAIL PROTECTED]
cc:
Subject:        4.2.2.12 to 5.1.0.0 upgrade and PATHS


Hi TSMers

Ok, my questions about the 4.2 to 5.1 upgrade are prolly getting
boring, but I always like to make sure I've asked every silly question
I can so as not to get caught out.

So, next week I'm going from 4.2.2.12 to 5.1.6.2 on a Solaris 2.7 node.

I understand that PATHS are a new feature (or complication) of 5.1, but
I want to know if, after the upgrade, I HAVE to define these. I was
speaking to an IBM rep the other day and he said yes, but I have heard
others say they didn't have to do it (sorry to keep hassling you Gary).
Our setup is one TCP-attached 3494 library with two SCSI 3590 tape
drives. I'm surprised that the upgrade would not see these devices
already set up and create the paths accordingly.

Anyone got a definite answer here? Also, looking at the quick-start
guide on the 5.1.0.0 media, it doesn't mention paths. The Admin guide
does, but not the quick start.

Hmm.

Anyway, thanks for any pointers

Farren Minns - John Wiley & Sons Ltd




Re: How to Change the status of a volume from private to scratch out of a storage pool

2003-11-14 Thread P Baines
You can use the DELETE VOLUME command.
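
DELETE VOLUME removes an (empty) volume from its storage pool; once the
volume no longer belongs to a pool, UPDATE LIBVOLUME can flip it to
scratch. A sketch, with a hypothetical library and volume name:

delete volume VOL001
update libvolume LIB3494 VOL001 status=scratch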

-Original Message-
From: ZENG Brian (800043) [mailto:[EMAIL PROTECTED]
Sent: 14 November 2003 15:59
To: [EMAIL PROTECTED]
Subject: How to Change the status of a volume from private to scratch
out of a storage pool


The TSM 4.2 Admin Guide states:

"the UPDATE LIBVOLUME command lets you change the status of a volume in
an automated lib from scratch to private, or private to scratch.
However, you cannot change the status of a volume from private to
scratch if the volume belongs to a storage pool."

Well, when there is a real need to pull some (empty) private volume out
of a storage pool and re-use it for other backups, how can you do that?





Re: timestamps in select

2004-01-08 Thread P Baines
Hi Matthew,

something like this may help you:
where cast((current_timestamp - start_time)hours as integer) <= 4
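
Plugged into your original statement, it would look something like this
(a sketch; 4 is the number of preceding hours):

select entity,((sum(bytes)/1024)/1024) as MB from summary where entity
in (select node_name from nodes where domain_name like 'DM%') and
cast((current_timestamp - start_time)hours as integer) <= 4 and
activity='BACKUP' group by entity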


Cheers,
Paul.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Warren, Matthew (Retail)
Sent: 08 January 2004 11:24
To: [EMAIL PROTECTED]
Subject: timestamps in select


Hallo,

I am using the following select statement;

select entity,((sum(bytes)/1024)/1024) as MB from summary where entity
in (select node_name from nodes where domain_name like 'DM%') and
start_time>timestamp(current_date - 1   days) and activity='BACKUP'
group by entity


I would like to be able to specify a period of hours preceding the
current date/time, rather than a whole number of days
[ timestamp(current_date - 1 days) ]. My SQL's not so hot; if anyone
could show me how to do it I would be very grateful.

Thanks,

Matt.







Re: SQL / big numbers ?

2004-03-19 Thread P Baines
Hello Goran,

You named the column "attempts" but it's actually called AFFECTED. The
number of attempts (for the DB backup) would be a count() rather than a
sum() function. The number you summed probably refers to the number of
database pages backed up.
Paul.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
goran
Sent: 19 March 2004 15:59
To: [EMAIL PROTECTED]
Subject: SQL / big numbers ?


  hi all,

I have a script which shows some kind of percentage of weekly activity
from the summary table, but I'm confused by the big numbers that come
out. Here's the SQL:

select activity, cast(sum(affected) as integer)as "Attempts",
cast(sum(failed) as integer) as "Failed", cast((1-(cast(sum(failed) as
decimal(18,3))/sum(affected)))*100 as decimal(8,2)) as "% Success" from
summary where start_time>timestamp(current_date-8 day,'00:00:00') group by
activity

my output looks like that >

ACTIVITY  Attempts  Failed  % Success
-- --- --- --
ARCHIVE  48918   0 100.00
BACKUP 1423353  55  99.99
EXPIRATION 1223316   0 100.00
FULL_DBBACKUP 51254126   0 100.00
MIGRATION   754404   0 100.00
RECLAMATION 349309 189  99.94
RESTORE  15325   0 100.00
RETRIEVE 2   0 100.00
STGPOOL BACKUP  649593   0 100.00
ANR2947E Division by zero was detected for operator '/'.

|
 ...V...
 t(sum(failed) as decimal(18,3))/sum(affected)))*100 as decimal(


The error is okay, but for instance FULL_DBBACKUP having 51254126
attempts? Hmmm ... surely the SQL has to be rewritten ... huh?

thanks on answers !

goran konjich
senior unix systems admin
vipnet
-
TSM 5.2.2.2
AIX 5.2 ML02
LTO3584
2109 FC





Re: SQL / big numbers ?

2004-03-19 Thread P Baines
Well, the sum() would be correct for the DB backup "attempts"; however,
what is stored in the summary table has a different context depending
on what the activity is. It isn't really documented as far as I know,
but AFFECTED for the DB backup is probably pages backed up, whilst
AFFECTED in the client backup records is probably the number of files
backed up. FAILED for client backups is probably failed files, but for
the DB backup this doesn't make any sense and the column is probably
zero in all cases for DB backup. I think you'll have to treat different
activities in different ways with the summary table.
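
So, as a sketch, you would probably count rows for "attempts" but keep
summing AFFECTED and FAILED, and look at one activity at a time, e.g.:

select activity, count(*) as "Runs", cast(sum(affected) as integer) as
"Affected", cast(sum(failed) as integer) as "Failed" from summary where
activity='BACKUP' and start_time>timestamp(current_date-8 day,'00:00:00')
group by activity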

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
goran
Sent: 19 March 2004 16:27
To: [EMAIL PROTECTED]
Subject: Re: SQL / big numbers ?


OK, thanks ... I replaced all "sum" with "count" and this is what I got ...

ACTIVITY  Attempts  Failed  % Success
-- --- --- --
ARCHIVE112 112   0.00
BACKUP 765 765   0.00
EXPIRATION   8   8   0.00
FULL_DBBACKUP   25  25   0.00
MIGRATION   78  78   0.00
RECLAMATION270 270   0.00
RESTORE 70  70   0.00
RETRIEVE 2   2   0.00
STGPOOL BACKUP  47  47   0.00
TAPE MOUNT 720 720   0.00

The numbers are probably OK for the last 7/8 days ... but something in
the SQL is obviously wrong, nah? It's Friday and spring has come ...
let's enjoy the weekend and bother with this next week ...

have a nice weekend.

g.


- Original Message -
From: "P Baines" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, March 19, 2004 4:08 PM
Subject: Re: SQL / big numbers ?


Hello Goran,

you named the column "attempts" but it's actually called affected. The
number of attempts (for the db backup) would be a count() rather than a
sum() fuction. The number you summed probably refers to the number of
database pages backed up.

Paul.






Re: Select to find what tape a single file is on

2004-09-10 Thread P Baines
If the object_ID is nnn, then issue the command:

SHOW BFO 0 nnn

which will show you the volume name(s). (SHOW commands are undocumented
and unsupported, so use with care.)
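
To get nnn in the first place, you can pull OBJECT_ID from the backups
table; a sketch, with hypothetical node, filespace and file names:

select object_id from backups where node_name='EXCH01' and
filespace_name='/fs' and ll_name='FILE.EDB' and state='ACTIVE_VERSION'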


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
TSM_User
Sent: Thursday 09 September 2004 03:59
To: [EMAIL PROTECTED]
Subject: Select to find what tape a single file is on


We have a 60 GB Exchange server that we keep 90 versions of.  They want
me to pull out the tape that has the backup from 7/20 on it and a
database backup, and send it to another location for restore. Now I
know I could run an export with fromdate, but that would run for 7 days
and produce 20 tapes with 3 TB of data.

I see that the backups table has the file_name, backup_date and
something called object_ID. I see the contents table has file_name and
file_hexname.

I know that when I use the TSM GUI to select a single file for restore,
TSM determines what tape needs to be mounted in seconds. So where is
the table that helps link the file requested for restore to the tape
volume it is on?

Has anyone had to do this select before? I know that many have
discussed at great length the long query you would need to run to
determine what tapes you would need to restore an entire server. I'm
not asking for all that. I have one specific file name. OK, actually 6,
but I know each name and I don't mind running a select on each one.









Re: schedule of SQL LOG backup

2004-09-21 Thread P Baines
Hello Luc,

I think the only way to do this from the TSM scheduler is to define
three schedules: one with a start time of 00:00, one at 00:20 and one
at 00:40. Then set the periodicity to one hour for each of the three
schedules.
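
A sketch of the three definitions (the domain name and the log-backup
command script are hypothetical, and each schedule still needs a
define association for the node):

define schedule SQLDOM SQLLOG_00 action=command objects="sqllogbk.cmd" -
  starttime=00:00 period=1 perunits=hours
define schedule SQLDOM SQLLOG_20 action=command objects="sqllogbk.cmd" -
  starttime=00:20 period=1 perunits=hours
define schedule SQLDOM SQLLOG_40 action=command objects="sqllogbk.cmd" -
  starttime=00:40 period=1 perunits=hours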

Cheers,
Paul.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Luc Beaudoin
Sent: Tuesday 21 September 2004 22:33
To: [EMAIL PROTECTED]
Subject: Re: schedule of SQL LOG backup


Hi Mark ...
So with that setup ... worst case ... they will lose 4 hours of work???

I'm working in a hospital ... so even 1 hour of lost lab results or
patient appointments can be kind of hell.

Anyway ... if there is no way of putting minutes, I will put the
minimum ... 1 hour ...

Thanks a lot Mark

Luc





"Stapleton, Mark" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-09-21 04:28 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: schedule of SQL LOG backup


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
Behalf Of Luc Beaudoin
>I thought of doing Full backup every 8 hours and LOG backup
>every 20 minutes ...
>Is there a best practice for SQL backup ??

What works best is whatever meets your business needs. Most of my
customers do a full backup of databases once a day, and periodic log
backups (every 4 hours, for example) throughout the day.

YMMV.

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627





Re: Node data in wrong stgpool

2004-10-06 Thread P Baines
Hello,

Assuming for NODEX the "q occ nodex /filespace" output looks like this:

NODEX  Bkup  /filespace  5  NEWPOOL-COPY  1,047  278,526.2  278,526.2
NODEX  Bkup  /filespace  5  NEWPOOL       1,047  278,526.2  278,526.2
NODEX  Bkup  /filespace  5  OLDPOOL-COPY    986  262,209.0  262,205.9

1. Upd stgp OLDPOOL-COPY collocate=fi

2. move nodedata nodex fromstgpool=oldpool-copy fi=/filespace

This will move all data in the old copy pool for that filespace to its
own tape. After the process ends, run:

3. select volume_name from volumeusage where node_name='NODEX' and
copy_type='BACKUP' and filespace_name='/filespace' and
stgpool_name='OLDPOOL-COPY'

The volume(s) listed should contain data only for the filespace you
moved. Sanity check this with:

4. select distinct(node_name) from contents where volume_name='VOLXYZ'

5. del vol VOLXYZ discarddata=yes

Now q occ should show:

NODEX  Bkup  /filespace  5  NEWPOOL-COPY  1,047  278,526.2  278,526.2
NODEX  Bkup  /filespace  5  NEWPOOL       1,047  278,526.2  278,526.2

6. Upd stgp OLDPOOL-COPY collocate=no

(or whatever collocation was originally set to...)

This process is quite long-winded, and if your retention is only 30
days (or whatever) it's less effort to simply wait for the stuff to
expire.
Cheers,
Paul.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Barnes, Kenny
Sent: Tuesday 05 October 2004 22:09
To: [EMAIL PROTECTED]
Subject: Re: Node data in wrong stgpool


We are not talking about any one client having a lot of data, but
rather a large number of clients with smaller filespaces that add up.

Thanks, 

Kenny Barnes
Systems Analyst
GmacInsurance
336-770-8280
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Prather, Wanda
Sent: Tuesday, October 05, 2004 4:03 PM
To: [EMAIL PROTECTED]
Subject: Re: Node data in wrong stgpool

You can't exactly do that, but you don't really have to.
If you do:

backup stgpool correctonsitepool correctoffsitecopypool

TSM will copy the data for that node to the desired offsite copy pool,
because TSM is happy to put a node's data in multiple copy pools.

Then you are covered for DR of that client, which is, after all, the
highest priority.

I don't know any way to get rid of the stuff that is already in the
wrong copy pool except to:

1) bring back the tapes and mark them readonly
2) MOVE NODEDATA to force TSM to put the data for that node on a new
   copy pool tape
3) delete the new copy pool tape with discarddata=yes

Unless you are talking about a client with HUGE amounts of data, it's
not worth your time to bother with the delete.

Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Barnes, Kenny
Sent: Tuesday, October 05, 2004 3:44 PM
To: [EMAIL PROTECTED]
Subject: Node data in wrong stgpool


How can I delete data stored in the wrong off-site copy_pool for a
certain node?  Move nodedata works for on-site copies, but I do not want
to move data for off-site copies.  I would rather delete the data and
copy data from the correct on-site pool to correct off-site copy_pool.

Any help appreciated.

Kenny Barnes
Systems Analyst
GmacInsurance
336-770-8280
[EMAIL PROTECTED]









Re: To Library Share Or Not?

2004-11-12 Thread P Baines
Yes, this is true. IBM's take on this is that it is up to the customer
to make the library "master" server highly available - so cluster the
TSM server. If you've got fibre-attached disks for your TSM DB, log and
disk pools, then include both of your p630s in the same zone as the TSM
disks so that you could manually fail over to the second TSM server in
case of problems.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hart, Charles
Sent: Friday 12 November 2004 11:15
To: [EMAIL PROTECTED]
Subject: To Library Share Or Not?


Backup Env Info:
2 TSM servers running on p630s, TSM ver 5.2.x.
Both servers use the same 3494 lib with 8 x 3592 FC drives zoned to
each TSM box, but the robot is being shared by both TSM servers.

We are looking at setting up library sharing between both backup
servers. While it sounds great, the downside we are struggling with is
that if the TSM server that is the library "master", so to speak, is
not available, then the other TSM server sharing the library cannot
access the library.

Is anyone else using library sharing? If so, what are your
experiences...

Thanks a Bunch!


