migration vs move data

2008-02-26 Thread Keith Arbogast

What would be the consequential differences between using migration
versus using 'move data' to transfer backup files from one primary
tape pool to another? I see that with 'move data' I would specify
volume names myself, whereas migration would pick the volumes to
move. Are there other differences that would favor one technique over
the other? A difference in throughput rate would be especially
interesting.

The files are moving between data centers, and are now in virtual
volumes. The target tape pool consists of physical 3592 cartridges.

This is all running on RHEL 4, TSM 5.4.0.3, over a 10 Gb network link.

Thank you,
Keith Arbogast
Indiana University


Re: Run expire inventory before TSM Server upgrade

2008-02-26 Thread Bell, Charles (Chip)
Cool, because we run it to completion daily, followed by reclamation
processing.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Monday, February 25, 2008 4:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Run expire inventory before TSM Server upgrade

The IBM advisory to run Expire Inventory prior to an upgrade is
probably there for customers who don't run it regularly, and who
may be flustered by the time taken by any database updating involved
in the upgrade.  I would imagine that all of us are running Expire
Inventory to completion as frequently as feasible, so it's a non-
instruction for us.

Richard Sims at Boston University



Orphaned DB2 data

2008-02-26 Thread Collins, Brenda
Hi Everyone!

I have some issues with DB2 (v8 & v9) data.  If I run a select statement to 
show backups on a particular node, it shows many more backups than the DBA 
sees when querying through db2adutl.  When the DBA tried to delete 
backups older than 45 days, he received the following message:

The current delete transaction failed. You do not have
sufficient authorization. Attempting to deactivate
 backup image(s) instead...

Success.

I do know that over time, they have upgraded DB2 from v8 to v9 but I am not 
sure that is the reason why I have orphaned data.

Any ideas on how to determine what data is orphaned and how to get rid of it in 
DB2 would be greatly appreciated!

TSM Server: 5.4.0.2
OS = AIX 5.3
DB2 - V9 (at this time)
TSM Client 5.4.0.0

Thanks,
Brenda Collins



How to schedule dsmc for "always backup"

2008-02-26 Thread TSTLTDTD
What you need to do is ask the administrator why he wants to do a Selective 
backup.  Then you need to convince him that this is not necessary because of 
the way TSM works: progressive incremental already backs up anything new or changed.



Re: Orphaned DB2 data

2008-02-26 Thread Jacques Van Den Berg
Hi,

Do your SQL select and 'query occupancy' give the same result?
We are running DB2 v8 & v9. I delete the backups with db2adutl. This
only marks them as inactive in TSM. TSM's expire inventory will then delete
them from TSM.
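As a quick cross-check (a sketch only; the admin ID, password, and node name
below are placeholders), you can count active vs. inactive objects for the
node straight from the TSM BACKUPS table with dsmadmc:

#!/usr/bin/env python
# Sketch: after db2adutl "deletes" (deactivates) images, the objects should
# show as INACTIVE_VERSION in TSM and only disappear after the next
# EXPIRE INVENTORY honors the management-class retention.
import subprocess

select = ("select state, count(*) as objects from backups "
          "where node_name='DB2NODE' group by state")
out = subprocess.check_output(
    ["dsmadmc", "-id=admin", "-password=secret",
     "-dataonly=yes", "-commadelimited", select])
print(out.decode())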

Regards,

Jacques van den Berg
TSM / Storage / SAP Basis Administrator
Pick 'n Pay IT
Email   : [EMAIL PROTECTED]
Tel  : +2721 - 658 1711
Fax : +2721 - 658 1676
Mobile  : +2782 - 653 8164 
 
It is always spring in the heart of the person who loves God
and his fellow man (John Vianney).
 



Re: migration vs move data

2008-02-26 Thread Richard Rhodes
When I moved our primary pools from 3494/3590's to 3584/3592's, I used
MOVE DATA. I found it much easier to control the process. I coded a
script that would vary the number of MOVE DATA processes running
concurrently depending on the number of free/available tape drives.
During the day it would run 3-4 moves, while during the night I had 6-8
running. Our problem was that the pools were very big (several thousand
3494/3590 tapes) and we wanted to migrate the data ASAP. If your pools
are not very big, it's probably not worth the effort to take control via
MOVE DATA.
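In case it helps, the control loop was roughly the following (a minimal
sketch, not the production script; the admin credentials, target pool name,
and volumes.txt input list are placeholders, and the time-of-day limit
stands in for a real free-drive count):

#!/usr/bin/env python
# Keep up to N concurrent MOVE DATA server processes running.
import subprocess, time

ADMIN, PASSWORD, TARGET = "admin", "secret", "3592POOL"

def adm(cmd):
    return subprocess.check_output(
        ["dsmadmc", "-id=" + ADMIN, "-password=" + PASSWORD,
         "-dataonly=yes", cmd])

def moves_running():
    try:
        out = adm("query process")
    except subprocess.CalledProcessError:
        return 0   # dsmadmc exits nonzero when no processes match
    return sum(1 for line in out.splitlines() if b"Move Data" in line)

def limit_now():
    hour = time.localtime().tm_hour
    return 4 if 8 <= hour < 18 else 8   # 3-4 by day, 6-8 at night

volumes = [v.strip() for v in open("volumes.txt") if v.strip()]
while volumes:
    if moves_running() < limit_now():
        vol = volumes.pop(0)
        adm("move data %s stgpool=%s wait=no" % (vol, TARGET))
    time.sleep(60)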

Rick







Re: Orphaned DB2 data

2008-02-26 Thread Collins, Brenda
The q occ only provides the total space.  The select command I was using shows 
actual dates.

Select node_name,backup_date from backups where node_name='xx'

This tells me there are backups out there from almost a year ago, even though 
their scripts are designed to delete backups older than 45 days.  When the DBA 
tried to delete them, he received the authorization error quoted earlier in the thread.
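One way to see which of those old objects are truly orphaned (still
ACTIVE_VERSION, so expiration will never touch them) versus merely awaiting
expiration is to pull STATE and DEACTIVATE_DATE along with the date. A
sketch, reusing the placeholder node name 'xx' from the select above and
assuming the usual TSM SQL interval idiom:

#!/usr/bin/env python
# Rows still ACTIVE_VERSION long past the 45-day window are the orphans;
# INACTIVE_VERSION rows just need EXPIRE INVENTORY to age them out.
import subprocess

select = ("select backup_date, state, deactivate_date from backups "
          "where node_name='xx' "
          "and backup_date < current_timestamp - 45 days "
          "order by backup_date")
out = subprocess.check_output(
    ["dsmadmc", "-id=admin", "-password=secret", "-dataonly=yes", select])
print(out.decode())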

Brenda



Re: migration vs move data

2008-02-26 Thread Richard Mochnaczewski
Hi Rick,

I'm just about to write such a script (moving data from a 3494 library to an 
SL8500). Can you send me a copy of it?

Rich





Re: migration vs move data

2008-02-26 Thread Robert Ouzen Ouzen
Hi Rick

Can you send the script to the forum? I'd like to see it.

Regards

Robert



Restoring a "Renamed Exchange TDP" Backup

2008-02-26 Thread Hart, Charles A
Got a fun one; maybe someone has run into it. We've been performing daily
full Exchange backups with an indefinite retention. We finally received
word that we can put the Exchange backups on a 35-day retention, but we
must preserve all existing backups.

1) Renamed the existing Exchange TDP client node names in TSM from
Exchange to Exchange.lgl.

2) Copied the existing Exchange domain to a new domain called
Exchange.lgl, as it has the NOLIMIT retention settings.

3) We then updated the Exchange.lgl nodes with dom=exchange.lgl, so now we
have the .lgl backup data segregated.

4) Updated the existing Exchange domain to a 35-day retention.

5) Re-registered the original Exchange node names to the original 35-day
retention domain.

So far so good: the Exchange admin can pull up the old backup data under
the Exchange.lgl name via the TDP GUI, but when he tries to restore to his
recovery storage group he gets the error "ACN5241E The MS Exchange
Information Store is currently not running."

What appears to be happening is that Exchange is expecting the restore
for \\exchange\stggroup1 while we are trying to restore
\\exchange.lgl\stggroup.

So we faked out TSM with the node name, but now Exchange needs to be faked
out as well. Has anyone run into a similar situation?





Sharing 3494 library with two TSM instances

2008-02-26 Thread Daad Ali
Can anyone point me to documentation on how to set up sharing of a 3494
library between an old TSM instance and a new one?

Library: 3494
Drives: 3490
TSM: 5.4.1

Thanks as always,
Daad

   


Re: Orphaned DB2 data

2008-02-26 Thread Bob Booth
On Tue, Feb 26, 2008 at 10:00:09AM -0600, Collins, Brenda wrote (quoted in
full earlier in this digest).

The DB2 node running the scripts needs to have backdel=yes authority,
I believe.  That is the way our Oracle, SQL, and DB2 TDP nodes are set up,
since the recovery programs actually manage the data retention, not the
TSM server policies.

update node <nodename> backdel=yes

Should fix you up.
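To confirm it took (a sketch; the admin credentials and node name are
placeholders), query the node in detail and look for the backup-delete field:

#!/usr/bin/env python
# "query node ... format=detailed" output includes a line like
# "Backup Delete Allowed?: Yes" once backdel=yes is set.
import subprocess

out = subprocess.check_output(
    ["dsmadmc", "-id=admin", "-password=secret",
     "query node DB2NODE format=detailed"])
for line in out.splitlines():
    if b"Backup Delete Allowed" in line:
        print(line.decode().strip())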

hth,

bob


LTO3 Tape Drives Unavailable on AIX

2008-02-26 Thread Lamb, Charles P.
Hi,

We have a TSM system (V5.3.4.2) on an IBM 9133-55A with four IBM 7311-D20
drawers using FC5758 4Gb FC PCI-X adapters.  All fourteen FC adapters are
connected directly to fourteen IBM LTO3 tape drives.  All firmware is
current: LTO3 -> 73P5, 3584 -> 7360.  Here is what is happening at my
site; I'm wondering if anyone else is experiencing the same:

 

1) stopped TSM system
2) loaded Atape V11.0.2.0
3) loaded AIX V5.3 ML7 SP2 -- 77 filesets to be installed
4) re-booted server
5) half of the rmtX devices and smc0 went into Defined mode; the others were Available
6) re-cycled the IBM 3584 library
7) ran cfgmgr -v
8) all rmtX and smcX devices are in the Available state
9) started TSM system

 

Any time we need to re-boot the server, we have to run through steps #4 -
#8.  IBM p-series personnel came up with this process (PMR #13QJ2QQ);
however, is power-cycling the IBM 3584 really a good thing?
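For what it's worth, a minimal sketch of a post-reboot sanity check along
those lines (lsdev and cfgmgr are standard AIX commands; this doesn't
replace the library power cycle the PMR procedure calls for):

#!/usr/bin/env python
# List Atape devices and re-run cfgmgr if any rmt/smc device came up
# Defined instead of Available after the reboot. Needs root for cfgmgr.
import subprocess

out = subprocess.check_output(["lsdev", "-Cc", "tape"])
defined = [l.split()[0] for l in out.splitlines() if b" Defined " in l]
if defined:
    print("Defined (not Available): %s -- re-running cfgmgr" %
          b", ".join(defined).decode())
    subprocess.check_call(["cfgmgr", "-v"])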


Re: TSM dream setup

2008-02-26 Thread Ben Bullock
Ok, I thought I would reply back here about our experience in
implementing a DataDomain 580 appliance into our TSM environment.

Setup - Easy. Put it on the network and NFS mounted it to our AIX/TSM
server.

TSM config - Easy. Created some "FILE" device classes and pointed them
to the NFS mount points.
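For anyone curious what that looks like server-side, a minimal sketch (the
mount point, device-class and pool names, capacity, and mount limit are
illustrative placeholders, not Ben's actual values):

#!/usr/bin/env python
# Carve a FILE device class and a sequential storage pool out of the
# NFS-mounted DataDomain filesystem.
import subprocess

for cmd in [
    "define devclass ddfile devtype=file "
    "directory=/ddr/tsm/dbdumps maxcapacity=20g mountlimit=32",
    "define stgpool dd_dbdumps ddfile maxscratch=500",
]:
    subprocess.check_call(["dsmadmc", "-id=admin", "-password=secret", cmd])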

Migration of data from tape to DataDomain appliance - Easy. "Move data",
"move nodedata", etc. work great.

Performance - We are getting a consistent 90MB/sec writing to the
device and a little better on reads. This is pretty much the limit of
the 1Gb NIC we are running the data through. That equates to about
8TB of data movement a day, acceptable for our environment. NICs could
be combined for better throughput.

Dedupe/Compression - Here is where the answer from the vendor is always
"it depends on the data". And indeed it does, but here is what we are
getting:

DB dumps - full SQL, Sybase, and Exchange server DB dumps:
  Original bytes:      49,945,140,504,962  (TSM says there is this much data)
  Globally compressed:  5,956,953,849,746  (size after deduplication)
  Locally compressed:   2,792,002,425,204  (size after lz compression)
About an 18 to 1 compression ratio.

Filesystems - OS files, document repositories, image scans, Windows
fileservers, etc.:
  Original bytes:      27,051,578,287,711  (TSM says there is this much data)
  Globally compressed:  7,907,366,156,093  (size after deduplication)
  Locally compressed:   4,499,161,648,844  (size after lz compression)
About a 6 to 1 compression ratio.

Overall deduplication/compression on our TSM backups: ~ 10 to 1
compression.
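The arithmetic checks out against the raw byte counts; a trivial sketch
using only the numbers quoted above (the blended figure for these two
groups comes out around 10.6:1, consistent with the ~10:1 overall):

#!/usr/bin/env python
# Recompute the ratios: original bytes vs. locally compressed bytes
# (i.e. after both dedupe and lz).
db_dumps    = (49945140504962, 2792002425204)
filesystems = (27051578287711, 4499161648844)
for name, (orig, stored) in [("DB dumps", db_dumps),
                             ("Filesystems", filesystems)]:
    print("%-12s %4.1f : 1" % (name, float(orig) / stored))   # 17.9, 6.0
total_orig = db_dumps[0] + filesystems[0]
total_stored = db_dumps[1] + filesystems[1]
print("Overall      %4.1f : 1" % (float(total_orig) / total_stored))  # 10.6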

It's kinda like night and day between the fileserver and database
compression rates. We have found that some server's data is very
un-deduplicatable (is that a made-up word, or what?). Here are some
examples:

- A 6TB document repository with TIFF and PDF documents is only getting
about 5 to 1 compression.
- The VMWARE ESXRANGER backups are compressed so we get virtually NO
dedupe when the data goes to the appliance. We are in the process of
re-working this.
- A large application in our environment puts out data files that are
also non-deduplicatable. Who knew? No way to tell until you shovel it to
the appliance, see that it sucked, and shovel it back out to tape for the
time being.

We were well aware that some data isn't really fit for this expensive
appliance, so we are looking into other ways to put that TSM data on
disk and replicate it for DR (perhaps a NAS appliance). 

Overall, we are pleased with the appliance. The ability to replace a
whole tape library with a 6U appliance frees up a lot of computer room
space. And using 1/10th of the power to keep disks spinning (we are
fitting about 100TB of data onto a 10TB DataDomain) feels very "green"
and saves money in HVAC and power.

Oh ya, and restores are almost instantaneous for individual files, and I
can restore whole filesystems now in a reasonable amount of time. YMMV
of course, it still depends on the number and size of the files. But it
is even faster than before, when we were using collocated tape pools on
LTO2.

Neat new technology.

Ben


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Zarnowski
Sent: Friday, February 15, 2008 6:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM dream setup

>About deduplication, Mark Stapleton said:
>
> > It's highly overrated with TSM, since TSM doesn't do absolute (full)

> > backups unless such are forced.

>At 12:04 AM 2/15/2008, Curtis Preston wrote:
>Depending on your mix of databases and other application backup data, 
>you can actually get quite a bit of commonality in a TSM datastore.

I've been thinking a lot about dedup in a TSM environment.  While it's
true that TSM has progressive-incremental and no full backups, in our
environment anyway, we have hundreds or thousands of systems with lots
of common files across them.  We have hundreds of desktop systems that
have a lot of common OS and application files.  We have local e-mail
stores that have a lot of common attachments.

While it may be true that overall, you will see less duplication in a
TSM environment than with other backup applications, with TSM you also
have the ability to associate different management classes with
different files, and thereby target different files to different storage
pools.  Wouldn't it be great if we could target only the
files/directories that we *know* have a high likelihood of duplication
to a storage pool that has deduplication capability?  You can actually
do this with TSM.  I'd like to see an option in TSM that can target
files/directories to different back-end storage pools that is
independent of the "management class" concept, which also affects
versions & retentions and other management attributes.


..Paul



--
Paul Zarnowski   

Re: TSM dream setup

2008-02-26 Thread Ben Bullock
In the "best practices" document from DataDomain
http://www.datadomain.com/pdf/GlassHouse-TSM-DataDomain-Whitepaper.pdf
and what other things I read, it sounded like the VTL "mimicry" is not
really needed in a TSM environment unless you are doing LAN-free or NDMP
backups. This shop does neither, so there was no need to buy the VTL
option. I just made deviceclasses and storagepools and slipped it into
the storage pool hierarchy.

You are also dead on for the analysis. They recommend that you create
storagepools for the various types of data to keep track of what kind of
compression you are getting. In our case, it was "databases",
"fileservers" and "archives". Our installer said some folks go much more
granular, even as far as one storage pool per host, but I didn't want to
create a nightmare of storage pools, so I kept it simple.

 Each storage pool on the TSM server points to a different directory on
the DataDomain appliance, and the appliance can tell you what the
compression rates are: overall, for each filesystem on the appliance,
and even for each "FILE" the TSM server creates. With that data and a
drill-down using the "q content" command you can pick out the
uncooperative data.

Ben
 



Re: TSM dream setup

2008-02-26 Thread Paul Zarnowski

Ben,
First, thanks for sharing your experiences.  Very enlightening.

I'm curious why you decided to use devclass=file volumes, instead of using
the DD580 as a VTL.  It does have a VTL personality, doesn't it?

At 05:39 PM 2/26/2008, Ben Bullock wrote:

TSM config - Easy. Created some "FILE" device classes and pointed them
to the NFS mount points.



You also did some analysis about what type of data got what level of
reduction.  How did you go about testing this?  Do you use separate storage
pools to segregate the data and then run a test?  I don't see any other way
to do it.

Thanks.
..Paul



--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: [EMAIL PROTECTED]


Re: Orphaned DB2 data

2008-02-26 Thread Jacques Van Den Berg
Q occ also tells you the number of files; divide that by the number of
sessions you use for your DB2 backup and you get the number of backups
you have in TSM.

Jacques van den Berg
TSM / Storage / SAP Basis Administrator
Pick 'n Pay IT
Email   : [EMAIL PROTECTED]
Tel  : +2721 - 658 1711
Fax : +2721 - 658 1676
Mobile  : +2782 - 653 8164 
 
It is always spring in the heart of the person who loves God
and his fellow man (John Vianney).
 
