Disaster recovery - unable to restore

2001-06-29 Thread Praveen Kumar

Hi all,

We are running SAP 4.0B with Oracle 8.0.x as the database; the OS is Solaris 7.
We installed TSM 4.1 and TDP for SAP R/3 2.7 for backup. Backup works
without any problem. My problem is restoring the production server's backup
onto the disaster recovery server.

To restore the backup onto the disaster recovery server, I first have to
restore the summary and detailed backup logs. I am restoring these two files
using the backfm utility of TDP. Then, to start the actual Oracle database
restoration, we use SAP's brrestore utility. After the selection of a
particular backup, brrestore terminates with an error saying "bad format in
detail BRBACKUP log file, filename /oracle/SID/sapbackup/bdfpmgin.anf". If I
copy the detailed backup log from the production server to the DR server,
restoration starts without any problem. After comparing the backup logs
restored from the tape loader with the ones from the production server, we
found that the detailed backup log restored from the tape loader is
incomplete. This is because the detailed log is not backed up at the end of
the SAP backup process: after the actual data-file backup, TDP backs up the
SAP logs and profiles, and during that step it backs up the detailed log
first, then the summary log and the .ora, .sap, and .utl files. So the
backed-up detailed log doesn't contain the complete backup information.

One way to solve this problem is to back up the summary log and detailed
log with the dsmc selective command after the SAP backup completes. Are
there any other methods to get rid of this problem?
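As a sketch, that workaround could be appended to the backup job like this (the SID, paths, and log-file names are placeholders, not taken from a working system):

```
# After brbackup and the TDP log/profile backup have finished,
# re-send the now-complete BRBACKUP logs:
dsmc selective "/oracle/SID/sapbackup/back*.log"
dsmc selective "/oracle/SID/sapbackup/*.anf"
```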

Thanks in advance
Regards

Pavikumar


_
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.



Re: scripting client's data???

2001-06-29 Thread Lindsay Morris

Summing up LASTSESS_RECVD will miss some data, though.
If a node had two or more sessions in one day, only the last session will be
counted.
And many nodes do this: nodes using HSM, database nodes where the archive
logs are saved every hour ...

A more accurate way is to sum up the ANE4961 messages from the activity log.

Another accurate way is to dig through the
/dsmaccnt.log.
Its layout is described in the Admin Guide.
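For the activity-log approach, a sketch of the summing step; the exact ANE4961I message wording here is an assumption, so two sample lines stand in for real "dsmadmc ... query actlog ... search=ANE4961I" output — check the wording your server level produces:

```shell
# Pipe actlog query output through this awk.  Sample lines simulate
# the dsmadmc output (message wording assumed).
printf '%s\n%s\n' \
  'ANE4961I (Session: 71, Node: NODEA) Total number of bytes transferred: 1,048,576' \
  'ANE4961I (Session: 72, Node: NODEB) Total number of bytes transferred: 2,097,152' |
awk '/ANE4961I/ { gsub(/,/, "", $NF); sum += $NF }   # strip commas, sum bytes
     END { printf "%.1f MB\n", sum / 1048576 }'
# prints: 3.0 MB
```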


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
> Lawrence Clark
> Sent: Thursday, June 28, 2001 11:18 AM
> To: [EMAIL PROTECTED]
> Subject: Re: scripting client's data???
>
>
> #!/usr/bin/ksh
>
> # Define the location of the log.
> LOC=/home/root/tsmfiles
>
> # Get the total number of bytes received by the TSM server last night.
> tempv=`/bin/dsmadmc -password=admin -id=uoieax "select sum(LASTSESS_RECVD)
> as BytesYesterday from nodes" | /bin/grep -A2 BYTESYESTERDAY |
> /bin/tail -1 | /bin/head -1`
>
> # Convert bytes to MB.  Use awk since it can handle decimal numbers.
> bytes=`echo $tempv | /usr/bin/awk 'BEGIN {sum=0} {sum = $1 / 1048576} \
> END {print sum}'`
>
> # Create an entry in the log.
> print `/usr/bin/date +%m/%d/%Y` $bytes >>  $LOC/daily.log
>
> exit 0
>
>
> ~
> ~
> ~
> "tsmmegs" 18 lines, 572 characters
> [backup] /home/root/bin # tsmmegs
> grep: Not a recognized flag: A
> grep: Not a recognized flag: 2
> Usage: grep [-E|-F] [-c|-l|-q] [-insvxbhwy] [-p[parasep]] -e
> pattern_list...
> [-f pattern_file...] [file...]
>
>
> >>> [EMAIL PROTECTED] 06/28/01 10:44AM >>>
> Try this script.  It'll create a log called daily.log in the $LOC
> directory.  This log will contain two things: the date and the number of MB
> received.  Import the log file into a spreadsheet and graph it.
>
> In the script, change the location of the directory (LOC) where you want
> to save the log file.  One thing the script does not do is prune the log
> after a certain number of days/months/years worth of entries.
>
> == Script Starts Here =
>
> #!/usr/bin/ksh
>
> # Define the location of the log.
> LOC=/whereever/bytesBackup
>
> # Get the total number of bytes received by the TSM server last night.
> tempv=`/bin/dsmadmc -password=admin -id=admin "select sum(LASTSESS_RECVD)
> as BytesYesterday \
>from nodes" | /usr/local/bin/grep -A2 BYTESYESTERDAY | /usr/bin/tail
> -1 | /usr/bin/head -1`
>
> # Convert bytes to MB.  Use awk since it can handle decimal numbers
> unlike ksh.
> bytes=`echo $tempv | /usr/bin/awk 'BEGIN {sum=0} {sum = $1 / 1048576} END
> {print sum}'`
>
> # Create an entry in the log.
> print `/usr/bin/date +%m/%d/%Y` $bytes >>  $LOC/daily.log
>
> exit 0
>
> === Script Ends Here =
>
> Add the script as a cron entry and you're done.  Hope this does it for
> you.
>
>
> Mahesh Tailor
> WAN Administrator
> Carilion Health System
> Voice: 540-224-3929
> Fax: 540-224-3954
>
> >>> [EMAIL PROTECTED] 06/28/01 08:36AM >>>
> Hi All-
>
> Every morning I check the activity log with the following query:
>
> query actlog begindate=today-1 begintime=20:00 enddate=today endtime=now
> search=ANE4961I originator=client
>
> This tells me total number of bytes transferred from each client the
> previous night.
>
> QUESTION:  How can I create a script to automate this?  I would like to
> keep this data in a spread sheet for trending purposes.
> Has anyone successfully done this?
>
> Thanks!
> Marc Levitan
>



Re: MS Exchange related question

2001-06-29 Thread Del Hoobler

Yahya,

The "Platform" field changes to the LAST client type that connects with
that node name.
You must be attaching with the same nodename using one of the baclient
programs (the base backup/archive client, or maybe even the base client
scheduler).

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

"It's a beautiful day.  Don't let it get away."  -- Bono




From: Yahya Ilyas
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]IST.EDU>
Date: 06/28/2001 05:14 PM
Subject: MS Exchange related question
Please respond to: "ADSM: Dist Stor Manager"


I have installed the TSM module for MS Exchange and the Windows client on a
WinNT machine. When I issue "query node" for the Exchange node, it shows the
platform as WinNT, the same as it shows for the WinNT client.

Some time ago I installed the MS Exchange module on a Win2000 machine, and
the "query node" command shows the platform as TDP MSExchg NT. Is this
difference because of the different Windows platforms, or did I not install
MS Exchange properly on the WinNT machine?

Thanks

>   -
>   Yahya Ilyas
>   Systems Programmer Senior
>   Systems Integration & Management
>   Information Technology
>   Arizona State University, Tempe, AZ 85287-0101
>
>   [EMAIL PROTECTED]
>   Phone: (480) 965-4467
>
>



Re: Disaster recovery - unable to restore

2001-06-29 Thread Nicholas Cassimatis

I had an older setup (when it was still called Backint) and ran the
incremental backups of the SAP system as the last command in the brbackup
script - basically the same as you're suggesting with the selective backup,
and it worked. We have three DR tests of this install under our belt, all
three successful, so it at least works. I don't know if there's a better
way or not.

Nick Cassimatis
[EMAIL PROTECTED]

Today is the tomorrow of yesterday.



total # of bytes by each client on a daily basis

2001-06-29 Thread Tony Jules

I am backing up about 100 nodes on a daily basis. Every morning, I run the
following command for each node at the server command line:

q act begind=today-1 search=nodename

How can I create a script to perform the same operation on all the nodes at
once and present it in a spreadsheet-like format?

Thank you


Tony Jules
ITS / Olympus America Inc.
631-844-5887
[EMAIL PROTECTED]



Creation of Multiple Domains

2001-06-29 Thread Joe Cascanette

I have to create multiple Domains in TSM v4.1 to keep my DRP servers
separate from the rest of the servers.

I currently have:
COPYPOOL1 tapes (offsite)      357pool1 tapes (onsite in library)
COPYPOOL_DRP tapes (offsite)   357pool_DRP tapes (onsite in library)

I have no problems with this, all my DRP (disaster recovery plan) tapes are
separate from the rest of the backups.

Question is:

I am running my first archive since I added and moved the DRP servers, and
what I want to achieve is to have all the servers (including the DRP ones)
in one ARCHIVE POOL.
Is this possible?

I have noticed that I will need to add the DRP servers into my STANDARD
domain (which uses COPYPOOL1 and 357pool1), but will this affect the
normal daily backups?

Is there any way to add the DRP clients to this STANDARD domain just for
archives?

When I perform a DRP I use the copypool tapes. This saves me many hours
compared to first restoring an archive and going from there.

Thanks

Joe Cascanette



Comparing results from Q LIBV against Q VOL looking for percent utilized tapes outside of library

2001-06-29 Thread Lu Ann Mezera

We are running TSM 4.1 and have a MagStar 3575 tape library.  Unfortunately,
we have had to dismount many of our private volume tapes due to increasing
storage needs (which will be resolved with the install of an additional tape
library next month).  It seems like the space reclamation process doesn't
ask for all of these dismounted private tapes after a certain time.

I would like to run a query of all tapes that the library knows about to see
the volumes that are below the threshold I want to have reclaimed.  I would
then like to compare these volumes to what is actually mounted in the
library.  We have been doing manual comparisons for the last few weeks and
this is very time consuming.  I am thinking we could do a SQL select
statement but I don't know what the syntax would be.

Basically, I want to do a Q VOLUME and look for any tapes that are below 40%
utilized and then compare the results to a Q LIBV 3575LIB command.  If I get
a hit in both queries, I want to know the tape number.  Any suggestions?
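If the server's SQL gets awkward, one hedged approach is two sorted lists plus comm(1). The select statements in the comments use column names assumed from the TSM 4.1 VOLUMES and LIBVOLUMES tables (verify them on your server); sample data stands in for the dsmadmc output here:

```shell
# In practice the two lists would come from dsmadmc selects, e.g.:
#   select volume_name from volumes where pct_utilized < 40
#   select volume_name from libvolumes where library_name = '3575LIB'
printf 'VOL001\nVOL003\nVOL007\n' | sort > /tmp/lowutil.$$
printf 'VOL002\nVOL003\nVOL007\n' | sort > /tmp/inlib.$$
# Volumes on both lists: under 40% utilized AND still in the library.
comm -12 /tmp/lowutil.$$ /tmp/inlib.$$
rm -f /tmp/lowutil.$$ /tmp/inlib.$$
# prints: VOL003 and VOL007, one per line
```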

Thanks for your help.

Lu Ann Mezera
Data Center Supervisor
Lab Safety Supply
608-757-4909
608-757-4652 Fax
E-mail: [EMAIL PROTECTED]
www.labsafety.com



DLT 7000 ON HP-UX

2001-06-29 Thread Jorge Rodríguez

I need to know if I can install TSM 4.1.3 on HP-UX using a DLT 7000. I
have seen information about an incompatibility between the DLT 7000 and
TSM on HP-UX.

Thanks in advance...

Jorge Rodriguez
Caracas, Venezuela...



Re: Disaster recovery - unable to restore

2001-06-29 Thread Kauffman, Tom

My backup script runs brarchive followed by selective backups of
/oracle/SID/saparch and /oracle/SID/sapbackup after brbackup completes.

In addition, to speed up the PIT restore of some of the SAP directories, I
run an archive every Sunday of all the non-database filesystems to the
management classes (two copies) I use for the redo logs. At the D/R site we
retrieve the archives and run a PIT restore of /usr/sap/SID,
/oracle/SID/sapbackup, and /oracle/SID/saparch. Then we use backfm to pull
down the most current .anf or .aff for the backup and .svd for the redo
logs.

We've never run into the 'bad format' error. OTOH, the documentation for
backint does imply using TSM to capture these logs.

Tom Kauffman
NIBCO, Inc




Re: Comparing results from Q LIBV against Q VOL looking for percent utilized tapes outside of library

2001-06-29 Thread Lindsay Morris

If David's answer didn't help you, here's a shell script that you can use as
a model.

(it's an old script - the usual caveats apply...)

The basic trick is to make two sorted lists of tapes, then use "comm" to
compare them.
"comm" is a great unix filter most people don't use - try "man comm" to read
about it.

Good luck...


#!/bin/ksh
# Sometimes a drive error will cause a perfectly good tape to be
# "marked as private to prevent re-access".
# Sometimes errant scripts or people can leave scratch tapes in the library
# without having checked them in.
# Note: "cleanhdrs" is a local filter that strips dsmadmc header/footer
# lines; substitute whatever you use for that.

libs=`dsmadmc -id=adsm -pas=adsm -tabd q libr | cleanhdrs | awk '{print $1}'`
dsmadmc -id=adsm -pas=adsm -tabd q vol | cleanhdrs | awk '{print $1}' | \
    sort >/tmp/findscr$$.vol
for i in $libs
do
    dsmadmc -id=adsm -pas=adsm -tabd q libvol $i | cleanhdrs | \
        awk '{print $2}' | sort >/tmp/findscr$$.libv
    # Tapes checked into the library but unknown to "q vol" are scratch
    # candidates ("status=scratch" added per the comments above).
    comm -23 /tmp/findscr$$.libv /tmp/findscr$$.vol | \
        awk -v lib=$i '{print "update libv " lib " " $1 " status=scratch"}' \
        >/tmp/findscr$$.mac
    # Uncomment if you want it to really do anything.  Otherwise just look
    # at /tmp/findscr$$.mac.
    # dsmadmc -id=admin -pas=`cat /.ap` -itemcommit macro /tmp/findscr$$.mac
done






Re: Comparing results from Q LIBV against Q VOL looking for percent utilized tapes outside of library

2001-06-29 Thread David Longo

I have a 3575 library also, with a TSM 3.7.4.0 server on AIX. How are you
dismounting your tapes? We use the MOVE MEDIA command with the DAYS option
to take out tapes that haven't been accessed in, say, 8 days. Reclamation
then issues a request for these tapes when it needs them for reclamation or
for another tape. You have to monitor the *SM server for REQUESTS with
Q REQ to see if it is requesting tapes from your OVERFLOW area.
Do you have an OVERFLOW area configured?

Note: when you move tapes out of the library with the MOVE MEDIA command,
it changes the ACCESS to READONLY. I simply do a:
q vol access=readonly
to see if any tapes there are EMPTY or have low utilization.



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]





"MMS " made the following
 annotations on 06/29/01 11:18:31
--
This message is for the named person's use only.  It may contain confidential, 
proprietary, or legally privileged information.  No confidentiality or privilege is 
waived or lost by any mistransmission.  If you receive this message in error, please 
immediately delete it and all copies of it from your system, destroy any hard copies 
of it, and notify the sender.  You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient.  Health First reserves the right to monitor all e-mail communications 
through its networks.  Any views or opinions expressed in this message are solely 
those of the individual sender, except (1) where the message states such views or 
opinions are on behalf of a particular entity;  and (2) the sender is authorized by 
the entity to give such views or opinions.

==



Bare Metal Restore of NT/2000 and Scheduled Tasks

2001-06-29 Thread Rushforth, Tim

Has anyone done a Bare Metal Restore of NT or 2000 and also used the
Windows Scheduled Tasks?

After doing a full restore, the Scheduled Tasks seem to have lost their
password.
Re-entering the password allows the Scheduled Tasks to run again.

Has anyone run into this?  Does anyone know where the passwords are stored
for scheduled tasks?

Thanks,

Tim Rushforth
City of Winnipeg



Re: total # of bytes by each client on a daily basis

2001-06-29 Thread Cook, Dwight E

What exactly are you looking for out of your query?

You might be better off turning accounting on and just looking at the
accounting records; there will be one for each session a client initiates.

It is really easy to pull accounting records onto a PC and load them into
Excel (or a similar spreadsheet).
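A sketch of turning accounting records into per-node megabyte totals. The field positions ($6 for node name, $15 for bytes) and the sample records are assumptions — check the accounting-record layout in the Admin Guide for your server level, and point the awk at your dsmaccnt.log:

```shell
# Sample comma-delimited records simulate dsmaccnt.log (layout assumed).
printf '%s\n%s\n%s\n' \
  '4,0,ADSM,07/01/2001,21:05:00,NODEA,x,x,x,x,x,x,x,x,1048576' \
  '4,0,ADSM,07/01/2001,21:06:00,NODEA,x,x,x,x,x,x,x,x,1048576' \
  '4,0,ADSM,07/01/2001,21:07:00,NODEB,x,x,x,x,x,x,x,x,524288' |
awk -F, '{ mb[$6] += $15 }                       # sum bytes per node
         END { for (n in mb) printf "%s,%.2f MB\n", n, mb[n]/1048576 }' |
sort
# prints: NODEA,2.00 MB then NODEB,0.50 MB
```

The comma-separated output loads straight into Excel.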

Dwight




Re: Simple archival problem (I hope)

2001-06-29 Thread Lindsay Morris

Create a backupset for the archive...
Delete the association to stop the scheduled backups in the future...
And you COULD delete the filespace - but understand that you will NEVER
AGAIN RESTORE anything for MIS_10 if you delete the filespace, with the
exception of restoring from the backupset you made earlier.




Simple archival problem (I hope)

2001-06-29 Thread Sheets, Jerald

Ladies and Gentlemen,

I am encountering an issue that doesn't seem to be addressed
(directly) in the documentation.  The scenario is simple, but I am not sure
of the method to enact what I'm trying to do.

We are divided into 2 teams here, and the other team removed and
shipped off a box that was included in nightly backup.  This machine name is
MIS_10.  I have automated reports firing daily, and one of the items I track
is whether the backup was successful the previous night, or how long since a
backup has occured.  Obviously, our MIS_10 is now gone, so it is not backing
up anymore.  So, 3 things need to happen.  MIS_10 needs to be removed from
the backup set, I need to archive the last good backup indefinitely, and I
need to disable all future backups.

Any ideas as to how to accomplish this?


Jerald Sheets, Systems Analyst TIS
Our Lady of the Lake Regional Medical Center
5000 Hennessy Blvd, Baton Rouge, LA 70808
Ph.225.765.8734..Fax.225.765.8784
E-mail: [EMAIL PROTECTED]



Re: Simple archival problem (I hope)

2001-06-29 Thread Cook, Dwight E

Uh... by "backup set" are you just referring to a nightly incremental
schedule?
From an administrative session, try a
q sch * * node=MIS_10
and for each schedule found, perform a
delete assoc blah blahh MIS_10
That will clear any TSM server scheduled activity (which might be your #1
and #3).
For #2, you may leave the client registered on the TSM server as long as
you wish; as long as no node under the same name pushes any new
incrementals into the server, the latest copy of all "backed up" data will
remain for as long as that node remains registered.
If you want to clear that old junk out of your system, just export that
node... once exported, you can delete all the associated filespaces of
MIS_10 and remove the node. If you ever needed anything back, you would
have to import the node and then you could retrieve the data again. If you
export the node, make sure to keep a piece of paper with the tapes showing
what in the heck they are and their proper sequence (you will need that
for any future import you might have to do).
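A sketch of that export/import (the device class and volume names are placeholders; check EXPORT NODE and IMPORT NODE syntax for your server level):

```
export node MIS_10 filedata=all devclass=DLTCLASS scratch=yes
/* Later, if the data is ever needed again: */
import node MIS_10 filedata=all devclass=DLTCLASS volumenames=VOL001,VOL002
```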

Dwight




Re: Disk pool size vs large file

2001-06-29 Thread Mark Stapleton

On Mon, 25 Jun 2001 09:26:51 -0500, you wrote:
>ADSM SUPPORT PEOPLE,
>
>How do we find out about all of these "FEATURES" of design
>that cause ADSM to break?  

I had to laugh when I read the above statement. I suddenly summoned
this mental image of a Tivoli level 2 support engineer reading it,
leaning over his cubicle wall to his neighbor, and asking, "Say,
Susan, I'm reading an email here that's asking for a list of all those
features we build into TSM to hose it. I can't find my copy--can I
photocopy yours?"

And really, Jeff and others (you know who you are), *please* trim your
email responses. The lengths of them are getting quite tiresome to
page through, and our European brethren and sistren who pay by the
minute for dial-up access will thank you profusely.

--
Mark Stapleton ([EMAIL PROTECTED])



Re: Removing tapes for tape library move

2001-06-29 Thread David Longo

Yes, just take them out, put them in the new library, and run "audit library".
Of course you will need to do the configuration of (I guess) your new library.

Or are you just doing a physical move of the *SM system to a new location?
If it's just a real short move (next room), just leave the tapes in, and at
the new location open up and make sure the tapes are all still securely in
place.
I would still run an audit - it's good to do occasionally anyhow.

David Longo







Re: dsmsched.log results in act log

2001-06-29 Thread Mark Stapleton

On Fri, 22 Jun 2001 14:21:37 +0100, you wrote:
>I'm running *SM 3.1.2.58 under AIX 4.3.3 and writing the results of the
>schedules to a log, but it seems that is the only way to collect valid
>information about the command/macro I'm scheduling. In other words, the
>server always tells me that the command completed, but if I look in
>dsmsched.log it ends with a return code of 256.
>
>It must be possible, because a scheduled incremental backup returns
>"failed" if it fails. But commands and macros always say "completed".
>
>So is it possible to get that result into my activity log (or some other
>place where I can get it from my server)?

Keep in mind, sir, that the success message is derived from the
completion of the script or macro, not the success of what the script
or macro are doing.

Oh, and please adjust your margins for your email messages. Per
standards, they should be no more than 80 characters per line. Thanks.

--
Mark Stapleton ([EMAIL PROTECTED])



Removing tapes for tape library move

2001-06-29 Thread Scott Foley

In a few weeks we will be moving an HP DLT8000 tape library.  I want to take
all of the tapes out for the move (205 of them).  I assume that I can simply
turn the system off and take the tapes out as long as I put them back into
the same place, but I don't want to keep track of the location of the tapes.
Can I simply run "audit library" when I put them back in?  Is there a
command that I should run before shutting the system down?  Is it better to
eject them all into the CAP (holds 20) using TSM and then read them in again
from the CAP?

HP-UX 11.0
TSM 3.7 R3.8
HP Surestore E DLT 8000 Library

Thanks
Scott Foley
NetVoyage Corp.



Re: Disk pool size vs large file

2001-06-29 Thread Richard Sims

>...*please* trim your email responses...

Amen, Mark.  It's dismaying to see, like, a one-sentence response to an issue
which includes the entire thread of a discussion along with it, running for
hundreds of lines.  If you've visited www.adsm.org to like Browse Current Month,
you've seen how postings are nicely related by the email Subject and time.
There is no need to append everything that came before.

Eliminating needless bulk in email does a lot to help eliminate waste in
Internet traffic, Sendmail processing times, mail spool space, and human
travail.  If nothing else, think of all the Listserv archive space that your
gracious list host, Marist, is giving over to all this data.

   thanks,  Richard Sims, BU



filesystem sharing between TSM and UDB

2001-06-29 Thread Glass, Peter

Our DBAs are proposing that we move our TSM database and recovery logs into
the same filesystem with their udb databases. (Long story as to why.) This
is on an AIX node that they share in an SP complex.
Maybe this is OK, but something tells me that this may not be such a good
idea.
Does anybody have any insight as to the feasibility of this idea?
Thanks, in advance.

Peter Glass
Distributed Storage Management (DSM)
Wells Fargo Services Company
> * 612-667-0086  * 866-249-8568
> * [EMAIL PROTECTED]
>



Re: Disk pool size vs large file

2001-06-29 Thread Lisa Cabanas


And backing up ;-)

Sorry-- I couldn't resist.  I will not blindly reply with the entire
history heretofore.

lisa











Re: filesystem sharing between TSM and UDB

2001-06-29 Thread David Longo

The SAME filesystem?  NO WAY!!  Tivoli's standard recommendation is for the DB and logs to
be on separate disks, and you will find most users agree in practice.

I don't know your DBAs' reason - disk space would be my only guess.  If it's that tight,
though, you've got problems anyhow.

The only way I would consider this is if you have a REALLY small server and a REALLY small
number of *SM clients/data.  AND had no money!



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]





"MMS " made the following
 annotations on 06/29/01 15:57:40
--
This message is for the named person's use only.  It may contain confidential, 
proprietary, or legally privileged information.  No confidentiality or privilege is 
waived or lost by any mistransmission.  If you receive this message in error, please 
immediately delete it and all copies of it from your system, destroy any hard copies 
of it, and notify the sender.  You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient.  Health First reserves the right to monitor all e-mail communications 
through its networks.  Any views or opinions expressed in this message are solely 
those of the individual sender, except (1) where the message states such views or 
opinions are on behalf of a particular entity;  and (2) the sender is authorized by 
the entity to give such views or opinions.

==



Re: total # of bytes by each client on a daily basis

2001-06-29 Thread William Boyer

You could also code a SELECT command to run against the SUMMARY table. I
have created several reports in Crystal Reports from this table. For client
sessions, it seems to hold the same information as the accounting records, but
this table also includes server processes. I produce reports for client
activity and server processes for a day from this table.

Bill Boyer

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cook, Dwight E
Sent: Friday, June 29, 2001 11:33 AM
To: [EMAIL PROTECTED]
Subject: Re: total # of bytes by each client on a daily basis


What exactly are you looking for out of your query ?

You might be better off turning accounting on and just look at accounting
records, there will be one for each session a client initiates.

It is real easy to pull accounting records onto a PC and load them into
Excel (or some similar spread sheet)

Dwight

-Original Message-
From: Tony Jules [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 29, 2001 9:53 AM
To: [EMAIL PROTECTED]
Subject: total # of bytes by each client on a daily basis


I am backing up about 100 nodes on a daily basis. Every morning, I run the
following command for each node at the server command line:

q act begind=today-1 search=nodename

How can I create a script to perform the same operation on all the nodes at
once and present it in a spreadsheet-like format?

Thank you


Tony Jules
ITS / Olympus America Inc.
631-844-5887
[EMAIL PROTECTED]
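Bill's SELECT-against-SUMMARY approach answers Tony's per-node daily totals in one query. A sketch, hedged: the ENTITY/ACTIVITY/BYTES/AFFECTED column names are from the SUMMARY table as shipped with TSM 3.7 and later, so confirm them on your own server with `select * from columns where tabname='SUMMARY'`; the trailing dashes are the server macro continuation character.

```
/* Total bytes per node for backup sessions in the last 24 hours */
select entity as node_name, count(*) as sessions, -
       sum(bytes) as total_bytes, -
       sum(affected) as files_backed_up -
  from summary -
 where activity='BACKUP' -
   and start_time>current_timestamp - 24 hours -
 group by entity
```

Running it through the administrative client with the -commadelimited option (where your dsmadmc level supports it) gives output that drops straight into a spreadsheet. The GROUP BY also matters because RESOURCEUTILIZATION can split one night's backup into several sessions: summing per node, rather than picking a single session, keeps the totals honest.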



Re: total # of bytes by each client on a daily basis

2001-06-29 Thread Jeff Connor

Bill,

I've just started to experiment with Crystal and the summary table as well.
Do you use the Resourceutilization client option at all?  If so, how do you
combine the summary table entries for all the sessions generated during a
given client's nightly incremental backup to produce the type of report Tony
is looking for?  Can you send me a copy of your select statement?

Thanks,
Jeff Connor
Niagara Mohawk Power Corp
Syracuse, NY









Re: 3583 LTO Tape Library

2001-06-29 Thread Jeff Bach

The fix is now available.

Jeff Bach

-Original Message-
From:   Jeff Bach
Sent:   Wednesday, June 13, 2001 10:13 AM
To: 'ADSM: Dist Stor Manager'
Subject:RE: 3583 LTO Tape Library

They are writing some code.  No fix is available yet.
Jeff Bach
Home Office Open Systems Engineering
Wal-Mart Stores, Inc.

WAL-MART CONFIDENTIAL


-Original Message-
From:   Mahesh Tailor [SMTP:[EMAIL PROTECTED]]
Sent:   Wednesday, June 13, 2001 8:53 AM
To: [EMAIL PROTECTED]
Subject:Re: 3583 LTO Tape Library

Jeff,

What did you do to fix the problem?
TIA
Mahesh Tailor
WAN Administrator
Carilion Health System
Voice: 540-224-3929
Fax: 540-224-3954

>>> [EMAIL PROTECTED]
  06/13/01 08:38AM >>>
There is a problem with 1550.  It affects the 3rd and 4th
frames in the library.  The robot spazzes out.  We found it last week ... I'd
take credit.
("Hey CE, Charles, is the robot supposed to do that???  Ya ... of course.")
The next day I got a call: "I found a problem with the
library microcode affecting the 3rd and 4th frames."
It was good for a laugh.
Jeff Bach
Home Office Open Systems Engineering
Wal-Mart Stores, Inc.

WAL-MART CONFIDENTIAL

-Original Message-
From:   Suad Musovich
[SMTP:[EMAIL PROTECTED]] 
Sent:   Wednesday, June 13, 2001 4:16 AM
To: [EMAIL PROTECTED]

Subject:Re: 3583 LTO Tape Library

The drives are the same as in a standalone or a 3584.
There have been problems with firmware levels.  One of the firmware levels
gave me errors on a few tapes that did not go away.  The drives ended up
freezing on "3" and would not recover until they got power cycled.
Check your firmware level (latest is 1550):


http://ssddom01.storage.ibm.com/techsup/swtechsup.nsf/support/ultriumfmr_ftp



Cheers, Suad
--

On Tue, Jun 12, 2001 at 12:47:49PM -0500, Sam Schrage wrote:
> We purchased a IBM 3583 LTO in Jan, 2001.
We've had 2 tape
drives
replaced
> already and a third one that is acting up
occasionally.  The
last
drive
> failure 'ate' a tape that I just spent 30
hours creating from
an
import.
>
> Any others 3583 LTO users having similar
experiences?
>
> Sam Schrage
> TRW Systems
> 615-360-4716
> [EMAIL PROTECTED]




**
This email and any files transmitted with it are
confidential and intended solely for the individual or entity to whom they
are addressed.  If you have received this email in error destroy it
immediately.

**



Tape Volume full with UtilPct in 0% and status filling

2001-06-29 Thread Angelica Tulipano

Hi!
I have TSM v3.1 on AS/400, and I'm having this problem: the tape volume
seems to be full, but it is in a filling state with a utilization of 0%,
and it cannot really be full.  Does anyone have an idea how to solve
this?  Also, do you know where I can find out what a CPF exception code means?

06/27/2001 02:17:40   ANR8328I 244: 3570 volume 03E99F mounted in drive
TAPMLB1
   (DRVRSRC).
06/27/2001 02:17:53   ANR8214E Session open with 10.1.1.26 failed due to
   connection refusal.
06/27/2001 02:18:02   ANR8341I End-of-volume reached for 3570 volume
03E99F.
06/27/2001 02:18:02   ANR7808W FLUSH TAPE 03E99F returned exception
CPF5386.
06/27/2001 02:18:03   ANR7808W CLOSE TAPE 03E99F returned exception
CPF4405.
06/27/2001 02:18:10   ANR8468I 3570 volume 03E99F dismounted from drive
TAPMLB1
   (DRVRSRC) in library TAPMLB1.


Ing. Angélica Tulipano
GBM de Panamá, S.A.
Phone (507) 263-9977 ext 202
Fax (507) 269-3604
e-mail: [EMAIL PROTECTED]


Re: total # of bytes by each client on a daily basis

2001-06-29 Thread Lindsay Morris

The parallel backup feature (set by resourceutilization) means that ONE
backup session makes SEVERAL accounting log entries / summary table entries.
This is indeed confusing.

Don't you wish there were a product that would take care of this? ;-}

--Lindsay Morris, at servergraph.com






Re: total # of bytes by each client on a daily basis

2001-06-29 Thread Rajesh Oak

William,
Can you post the Select statements and reports on this site?

Rajesh Oak







Re: total # of bytes by each client on a daily basis

2001-06-29 Thread Sheets, Jerald

I'm a little confused here...

I was actually looking to figure out how to do this when you folks started
talking about it.  So, I pulled out my trusty data-extraction tools and
found out that I don't have a SUMMARY table.

Anybody wanna take a crack at that?

Jerald Sheets, Systems Analyst TIS
Our Lady of the Lake Regional Medical Center
5000 Hennessy Blvd, Baton Rouge, LA 70808
Ph.225.765.8734..Fax.225.765.8784
E-mail: [EMAIL PROTECTED]





No Subject

2001-06-29 Thread Jeff Bach

Help,

I am running a 3.1.2.41 ADSM server on AIX 4.3.2.  After trying to
label a library volume with a "label libvol " command, the command hung.
Now "q pro", "q libr", and other commands hang.  Is there a solution other
than bouncing the ADSM instance (the application)?

Jeff Bach
Home Office Open Systems Engineering
Wal-Mart Stores, Inc.

WAL-MART CONFIDENTIAL






DEFINE DBCOPY - why would this operation take >20 hours to sync two 8GB volumes

2001-06-29 Thread Kent J. Monthei

We have a very large application server (Sun E10K, 12 processors, 8GB
physical memory) which runs a 24x7 Oracle data warehouse application.  The
server also runs TSM 3.7.3 Server for backups to a local/private tape
library.

A 'DEFINE DBCOPY' operation for an 8GB database volume mirror on this
server saturated an entire processor - hovered at 8% CPU according to 'top'
- for over 20 hours straight!  The really odd thing is that the same
operation was performed last week on the same volume-pair, and was 95%
complete after nearly 8 hours - but had to be terminated because it was
interfering with production.  Another oddity - that same day last week, two
other 4GB volume mirrors were also created (all 3 ran concurrently, in
fact) and those two finished in under 2 hours.  I already confirmed that
all 3 volume-pairs are on the same two controllers and that each of the
three 2-volume mirrors uses 1 volume from each controller.

Our internal debate is whether this is a volume configuration problem, a
TSM defect, or just resource contention between TSM and other applications.
And I'm still wondering why the same TSM operation that took 8 hours the
other day (and seemed excessive then), took over 20 hours overnight last
night.  Again, the TSM server was otherwise idle - no backups were run
during that 20-hour period.

Has anyone had similar experience using 'DEFINE DBCOPY' on a large volume ?
Anyone have any insights?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline
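For anyone comparing notes, the operation under discussion amounts to the following; the raw-device paths are hypothetical, and QUERY DBVOLUME with FORMAT=DETAILED is the usual way to see whether a new copy is still synchronizing or has reached Sync'd status.

```
/* Add a mirror copy for an existing database volume */
define dbcopy /dev/rdsk/db_vol1 /dev/rdsk/db_vol1_copy

/* Watch synchronization progress (copy status per volume) */
query dbvolume format=detailed
```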



Out of Office Response: Re: total # of bytes by each client on a daily basis

2001-06-29 Thread Luci Ziebart

Luci Ziebart will be away from Friday June 29, 2001 to Monday July 9, 2001.

Mail is being forwarded to jeff mours,pxl.



Re: 'label libvol' hung

2001-06-29 Thread Richard Sims

>I am running a 3.1.2.41 ADSM server on AIX 4.3.2.  After trying to
>label a library volume with a "label libvol " command, the command hung.
>Now " q pro", "q libr", and other commands hang.  Is there a solution other
>than bouncing the ADSM instance? (the application)

Jeff - I/O conflicts and problems can cause device-related commands to hang.
   In this case I'd go have a look at the physical device, first trying
any problem-clearing to make it interact with the server again, or resetting
it if nothing else works, to try to break the stalemate.
I've also seen very ugly conflicts and hangs created where multiple
Define Drive's were done for the same physical drives, with both trying
to be Online at the same time and the admin trying to execute TSM server
commands to deal with the mess.

   Richard Sims, BU

(p.s.: Please post with Subject identifiers so that postings track well
   in the List archives.  thanks)