Our Oracle DBAs want to do multi-stream restores using TDP for Oracle. I
have read in other posts that with TDP for SQL all you have to do is set the
dest stgpool to collocate by filespace. OK, sounds easy, but I don't think
that would work for the Oracle DB backups. I say this as in the TDPO.opt
yo
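(A rough sketch of the server-side piece, with ORA_VTPOOL as a placeholder pool name; note that TDP for Oracle writes everything under the single filespace named by TDPO_FS in tdpo.opt, so collocation by filespace may not split a single node's Oracle backups the way it can for TDP for SQL:

   update stgpool ORA_VTPOOL collocate=filespace
   query stgpool ORA_VTPOOL format=detailed )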
David,
Was your 4-5 times faster going direct to a physical tape drive? I ask
because, as we move to all virtual tape, we are finding that LAN-free
backups to any of the VTL heads (Diligent / FalconStor, etc.) become the
bottleneck. I can see a stream to a 3592 or maybe an LTO3 drive doing better than
Our Unix admin has been able to script / automate the TDP for Oracle
install but is wondering if anyone has been able to automate the
./tdpoconf password portion of the config. Has anyone done this? If so, were
you able to automate the TDPO password set on the client side?
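(The direction we are leaning is an expect wrapper; a rough, untested sketch, where the install path and the prompt wording are assumptions and may need adjusting for your version:

   #!/usr/bin/expect -f
   # Hypothetical wrapper: feed the same password to each prompt from tdpoconf.
   # Adjust if your version asks for the current password separately.
   set pw [lindex $argv 0]
   spawn /usr/tivoli/tsm/client/oracle/bin64/tdpoconf password
   expect "*password*" { send "$pw\r" }
   expect "*password*" { send "$pw\r" }
   expect "*password*" { send "$pw\r" }
   expect eof )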
Thanks
Charles Hart
UH
One of our TSM environments that is backing up Unix and DB2 data is a bit
overrun at this point (lots of MediaW's). I was talking to our DB2 DBA and
he states that the final step of an online DB2 backup is to include the
transaction logs that occurred since the backup began. The DB2 log files
are
and should be shipped offsite. And the MediaW will disappear when the
DB2 log files go to disk.
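(For reference, the DB2 side of that usually looks something like the line below; the database name and session count are placeholders. With INCLUDE LOGS, available since DB2 8.2, the logs needed to roll forward are bundled into the backup image itself:

   db2 backup db PRODDB online use tsm open 4 sessions include logs )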
best regards,
Kurt
________
From: ADSM: Dist Stor Manager on behalf of Charles A Hart
Sent: Tue 2/01/2007 16:56
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM DB2 API Back
Got a strange one. We have two libraries (VT1 and VT2, virtual tape
emulating DLT7000). The active library, VT1, died; in haste we added the
other vtape library, VT2, and updated the devclass to point to library VT2.
Since then we've been able to get VT1 working again, but any time a vtape
req comes in
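(Pointing back, as far as I can tell, is just the devclass update plus a drive/path sanity check; VT_DEVC is a placeholder devclass name:

   update devclass VT_DEVC library=VT1
   query drive VT1
   query path format=detailed )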
We had a situation where the same devclass was used for two different
libraries. We then created two devclasses and associated one library with the
original devclass and the other with a new devclass. The library with the
new devclass cannot mount any volumes because the volumes are still
associated
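(If the volumes are still checked into the old library definition, one way to re-associate them is a checkout / checkin pass; the names below are placeholders and this is only a sketch:

   checkout libvolume OLDLIB VOL001 remove=no checklabel=no
   checkin libvolume NEWLIB search=yes checklabel=barcode status=private )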
I grew up with a 3494 (some consider it slow) but recently have been working
with a 3584. The 3584 is "dumb" in comparison; once a week we have to
idle down TSM and re-inventory the 3584 because it always loses its
inventory and can't mount tapes or puts them in the wrong slots! Sooo
frustrating! I
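(The TSM side of that weekly re-sync, for what it's worth, is basically an audit against the barcodes; 3584LIB is a placeholder library name:

   audit library 3584LIB checklabel=barcode )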
3592 dual-path drives on a Windows box? I say this because of the way
Windows uses devices, especially if the 3592s are dual-pathed... Windows
isn't the most robust / reliable OS, so I'm not seeing why you'd even
enable dual path. The last place I was at we had 20 3592s single-pathed
(dual path in ca
We are in the midst of doing server-side includes / excludes with the
Force option = Yes. Would the following scenario work? (We'll test it
soon, but a sanity check from the group would be great!)
Server side:
Option   Sequence   Use Option   Value
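(Roughly the definitions we have in mind, with placeholder set, node, and path names; one thing we will check in the test is whether FORCE is actually honored for inclexcl, since I have read it is ignored for additive options:

   define cloptset UNIXSTD description="Standard Unix excludes"
   define clientopt UNIXSTD inclexcl "exclude /tmp/.../*" force=yes seqnumber=10
   define clientopt UNIXSTD inclexcl "exclude /var/tmp/.../*" force=yes seqnumber=20
   update node SOMEUNIXNODE cloptset=UNIXSTD )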
We too are using ProtecTIER; we started with VTF Open. VTF Open in a
clustered environment is fast: a 2-node cluster, 100MBS. We expected better
factoring; in addition to removing client-side compression, try to place
like data with like data (i.e., Prod Unix Box A on the same ProtecTIER
library with U
Use wildcards: q occ aps*
Or, if you have a specific list, use the TSM ODBC driver via MS Excel against the
occupancy table; then you can apply filters in Excel, etc.
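(Or straight SQL through the ODBC driver or an admin session; the APS% pattern is just an example:

   select node_name, filespace_name, stgpool_name, sum(logical_mb)
     from occupancy
    where node_name like 'APS%'
    group by node_name, filespace_name, stgpool_name )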
Charles Hart
UHT - Data Protection
(763)744-2263
Sharepoint:
http://unitedteams.uhc.com/uht/EnterpriseStorage/DataProtection/default.asp
We had an issue with Imation LTO3s. Imation's tape-cutting machine was
leaving tape artifacts behind, so when a tape got mounted the extra tape
piece / artifact would hang the drive...
Charles Hart
UHT - Data Protection
(763)744-2263
Sharepoint:
http://unitedteams.uhc.com/uht/EnterpriseStorage/D
Using the TSM ODBC driver with Excel, query the occupancy table, put the totals
aside, and after a few weeks / months you should see a growing trend. It's free,
too.
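(One form of the total to track, as a hedged example built on the standard occupancy columns; figures come back in MB:

   select stgpool_name, sum(logical_mb)
     from occupancy
    group by stgpool_name

Run it on a schedule, keep the dates alongside the numbers, and the growth curve falls out of the chart.)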
Charles Hart
UHT - Data Protection
(763)744-2263
Sharepoint:
http://unitedteams.uhc.com/uht/EnterpriseStorage/DataProtection/default.aspx
According to IBM tech notes, the block sizes are:
Random-access disk pool, devtype=disk ==> 4KB block
Sequential-access disk pool, devtype=file ==> 256KB block
Sequential tape pool, devtype=tape (e.g. LTO) ==> 256KB block
How or can you set the "Sequential tape pool" V
There are options you can use to limit the number of days to keep the schedule
and error logs...
SCHEDLOGname /path/dsmsched2.log
SCHEDLOGRetention 5
ERRORLOGname /path/dsmerror2.log
ERRORLOGRetention 33
Charles Hart
UHT - Data Protection
(763)744-2263
Sharepoint:
http://un
Yep... Also maxscratch does play a role; you may have a gap between used and
allocated scratch that is still too small as a whole for the amount of data
you are trying to migrate, e.g. 10 tapes @ 100GB = 1TB,
but you are trying to migrate 2TB...
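(Worth a quick look at the pool itself; PRIMTAPE is a placeholder name. The detailed query shows the maximum scratch volumes allowed versus what is already in use:

   query stgpool PRIMTAPE format=detailed
   update stgpool PRIMTAPE maxscratch=200 )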
Charles Hart
UHT - Data Protection
(76
Good question; we looked at the Copan solution in our last RFP. For our
environment it would not have panned out, as one of our locations does
28TB a night, which would essentially break the MAID concept
(i.e., the disks would all be spinning). To have MAID work in a large env, it
would be saf
Another methodology we employ for our offsite backup copy process is a 12Gige
Fibre-over-IP link (FCIP using a Cisco 9513 w/FCIP blade) that spans
approx 12 miles. So far we are seeing 20 to 75MBS per tape device. Tape
devices are "zoned" across the FCIP link for the primary DC TSM server to
write the Of
Thank you for sharing... it seems rare that people come back and post the
resolutions!
Thanks again!
Charles Hart
UHT - Data Protection
(763)744-2263
Sharepoint:
http://unitedteams.uhc.com/uht/EnterpriseStorage/DataProtection/default.aspx
"Schneider, John" <[EMAIL PROTECTED]>
Sent by: "ADSM:
Very well put...
I was surprised, and disappointed, to see that an actual ADSM member had a
"concern": "(One concern
that some people stated was that they thought that newbies would jump in
and out of a forum more often than they would a mailing list, that
joining the list was considered "sweat eq
Yep!
Charles Hart
UHT - Data Protection
(763)744-2263
Sharepoint:
http://unitedteams.uhc.com/uht/EnterpriseStorage/DataProtection/default.aspx
"Allen S. Rout" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
05/25/2007 09:00 AM
Please respond to
"Allen S. Rout" <[EMAIL PROTECTED]>
This is why we still front-end with diskpools... as we have had CDL and
VTF Open VTLs go to lunch. Our philosophy is that we need to get a good backup
and do not want to have our DBAs move logs around while we fix the VTL...
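(The shape of it, with placeholder pool names and path; the disk pool drains to the VTL pool, so a VTL outage only stalls migration rather than the client backups:

   define stgpool DISKPOOL disk nextstgpool=VTLPOOL highmig=70 lowmig=30
   define volume DISKPOOL /tsmdisk/diskpool_01.dsm formatsize=51200 )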
Charles Hart
Rajesh Oak <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor
Validate with EMC whether the 1200MBps is native or compressed; we had
the same discussion with our CDL 740, and the actual max was 469MBS. I'm
not sure if the newer 4100 is clustering the VTL heads to get an actual
1200MBS.
Charles Hart
lowneil <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor
SATA does not handle reads and writes at the same time well (i.e., migration or
backup stgpool running alongside client backup sessions). It can be
done, but careful consideration is needed when the SATA disk frame is
carved up (i.e., striped across drawers, small RAID groups, etc.).
The easiest is a Ti
Your DB and log should be raw as well, and in small vols (i.e., a 12GB log
should be in 2-3GB vols; DB vols, depending on the size of the DB, should be
5-10GB vols). Also try to make sure the raw logical vols are evenly spread
across as many LUNs as possible.
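(Assuming TSM 5.x, the definitions themselves are simple once the raw LVs exist; the /dev names below are placeholders for your own raw logical volumes:

   define dbvolume /dev/rtsm_db01
   define logvolume /dev/rtsm_log01
   extend db 4096
   extend log 2048 )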
Charles Hart
"Stapleton, Mark" <[EMAIL PROTEC
We recently found out that if we have a 1TB DB that runs 4 RMAN channels to
disk, it will only migrate as one "stream" to the onsite virtual tape pool, as
migration streams are processed as one migration process per
client even if you use multiple channels to the diskpool. That being
said is
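(For reference, the multi-channel RMAN side looks roughly like this; the tdpo.opt path is a placeholder. The four channels give four sessions into the disk pool, but migration off that pool still runs as a single process for the node:

   run {
     allocate channel t1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
     allocate channel t2 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
     allocate channel t3 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
     allocate channel t4 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
     backup database;
   } )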
Being that TSM does incrementals, your de-dupe ratio will be lower than with
other full / incremental backup products. Here are a few lessons learned with
TSM and a Diligent ProtecTIER.
1) Do the best you can to put like data together (i.e., all Oracle
DB backups go to the same de-dupe virtual tape head (R
According to Diligent, when RMAN uses multiplexing, it intermingles the
data from each RMAN channel, so the data blocks will be different every time
and will not repeat, similar to multiplexing with NetBackup... I'm not
an RMAN expert; I'm just trusting what the vendor is stating.
The following link
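(If the vendor is right, the RMAN knob that limits multiplexing is FILESPERSET (MAXOPENFILES on the channel is the other one); a hedged sketch, not something we have benchmarked, with a placeholder tdpo.opt path:

   run {
     allocate channel t1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
     backup database filesperset 1;
   }

With one file per backup set, RMAN should not interleave datafiles, which ought to give the dedupe engine repeatable blocks.)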
House Technologies
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Charles A Hart
Sent: Monday, August 27, 2007 10:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Deduplication
According to Dilligent, when RMAN uses Multiplexing, it intermi
The compression challenge is more related to creating unique backup
objects that cannot be de-duped. Compression does cause a CPU
performance hit on the client. Performance-related experience: with a
Diligent ProtecTIER running on a Sun V40 with 4 x dual-core procs and 32GB of
memory, we see a max of 250
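(The client-side switch itself is just the compression option in dsm.sys / dsm.opt; whether turning it off is worth it depends on your network versus the dedupe gain:

   compression no )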
You're correct in that there are products that can provide a more global
repository. We used the Diligent VTF Open in a 2-node cluster and achieved
a 1200MBS write speed! Impressive; so if you don't need the de-dupe, the
VTF Open product really screams.
In one of a few large data centers we see 25TB