"ADSM: Dist Stor Manager" wrote on 05/22/2008
12:36:11 PM:
>
> 3) We put the client's name into a tracking file with the decommission
> date, so we don't forget when 90 days is up.
We rename the node with a prefix so decommissioned nodes are easy to spot
in a "q node" listing.
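For anyone new to the approach, a rough sketch of what that looks like from
the admin command line; the DECOM_ prefix and the node name here are
invented for illustration, not what the poster actually uses:

  /* flag the node and keep it from connecting again */
  rename node payroll01 decom_payroll01
  lock node decom_payroll01
  /* decommissioned nodes then stand out in the listing */
  query node decom_*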
For some reason, I have the impression that only the backup process pays
attention to the Management Class, which is why I have some doubt that a
restore will know to look in the right place for the TOC.
Regards,
Shawn
Shawn Drew
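For what it's worth, my understanding is that the TOC is bound to a
management class only at backup time, via the TOCDESTINATION parameter of
the backup copy group, and the server records where the TOC object landed,
so a restore should find it regardless. Roughly like this, where the
domain, policy set, class, pool, node, and filespace names are all made up:

  define copygroup nasdom standard nasmc type=backup destination=nastapepool tocdestination=tocdiskpool
  query toc nasnode /vol/vol1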
We do this also; we put a comment in the node's CONTACT field with the date
the node can be deleted from TSM, and the ticket number of the request
that asked us to retire the node.
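Presumably something along these lines; the node name, date, and ticket
number below are just placeholders:

  update node decom_payroll01 contact="ok to delete after 08/20/2008 - ticket 123456"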
It's interesting how the CONTACT field gets used for general comments. I've
often wished TSM had a general comment section (variable length) where we
could keep track of changes made to a node over time. This would be SO
helpful!
Rick
Timothy Hughes wrote:
We attach the date (as in \\nodename\c$_20080523) to our old filespaces and
set a tickler for deleting them.
We will also occasionally rename a filespace to force a full backup if a
filespace is becoming too scattered on the offsite media.
We set the tickler a little past when the last inactive file expires.
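In case anyone wants the syntax, the rename looks roughly like this from
the admin command line (node and filespace names here are placeholders);
the next incremental then treats the original filespace name as brand new
and backs it up in full:

  rename filespace nodename \\nodename\c$ \\nodename\c$_20080523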
If you want to make sure the schedules don't interfere with each other, you
should probably make 2 node names and 2 scheduler services so there is no
conflict.
Also, you meant "backup" instead of "restore", correct? It was a little
confusing reading this if that isn't the case.
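The server-side half of that suggestion would look something like the
sketch below; the node, domain, and schedule names are invented, and the
second scheduler service on the client would simply point at the
RESTORE_NODE name in its own option file:

  register node restore_node secretpw domain=standard
  define schedule standard nightly_restore action=restore objects="/data/*" starttime=02:00
  define association standard nightly_restore restore_node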
For posterity:
NetApp has an existing bug (it has been around for several years and it
doesn't look like it will be fixed anytime soon).
When a tape drive is zoned to a NetApp, it binds using the WWNN and not
the WWPN. This cannot be changed.
We zone using the WWPN, which seems to be the default
Our TSM system is eating tapes.
We have a very large number of "filling" tapes. I have sorted them by
collocation group, and some collocation groups have as many as 6 tapes
in "filling" status. We only have 4 tape drives, and we never migrate
with more than 2 drives per storage pool, so I cannot understand how we
end up with so many partially filled volumes.
The utilization of tapes in Collocation Groups is particularly
tricky. See Technote 1268766 for IBM's explanation. You may have to
dig into the specifics of the evidenced volumes and contextual events
to see what contributed to the situation.
Richard Sims
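As a starting point for that digging, something like the following will
list the filling volumes and how full they actually are (the pool name is
a placeholder):

  query volume stgpool=offsitepool status=filling
  select volume_name, stgpool_name, pct_utilized from volumes where status='FILLING' order by stgpool_name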
No, I meant restore. These are scheduled restores.
Shawn Drew wrote:
Also, you meant "backup" instead of "restore", correct?