>>I am intrigued by the possibility of using node replication to migrate a TSM
>>server from one architecture to another. We have archive data mixed in with
>>our backup data, so just starting with fresh backups is a bit of a problem
>>for us. Node replication may provide a way to do this.
Howdy.
I'd be interested in hearing from anyone who's done node replication
about their experiences. I'm approaching the jump from v5 to v6,
and am musing about what my configuration will be.
I'm currently doing virtual volume copy pools, which have protected me
adequately, but replicating everything
I haven't done node replication yet.
My opinion is that node replication and copy stgpools do different things.
When I first saw node replication, I was thinking I could get rid of my copy
stgpools (stored in a remotely attached tape library), but then I realized that
it wasn't that simple.
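For what it's worth, here is a minimal sketch of the server-side commands involved
in setting up node replication (TSM 6.3 or later). The server name, node name, and
DEFINE SERVER parameters below are made up and trimmed down, so treat it as an
outline rather than a working configuration:

   /* hypothetical names; issued on the source server */
   define server tsmtarget hladdress=tsmtarget.example.edu lladdress=1500 serverpassword=xxxxx
   set replserver tsmtarget
   update node mynode replstate=enabled
   replicate node mynode

The point being that REPLICATE NODE sends a node's data and metadata to a second
server, while BACKUP STGPOOL maintains a copy pool on the same server, so the two
really do solve different problems.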
On 8/8/2012 6:43 AM, Allen S. Rout wrote:
> On 08/07/2012 03:26 PM, Arbogast, Warren K wrote:
>
>> There are 4500+ directories under /ip, so virtualmountpoints aren't
>> workable either.
>
> ... Why? It's not hard, it's just big.
>
> Envision this:
>
> find /ip -maxdepth 1 -type d | awk '{ print "virtualmountpoint ",$1}' > /var/tmp/dsm.sys.vmp
"Support for vFiler volumes with be in the TSM 6.40 client. Note that this
support will require ONTAP version 8.1.1 or greater."
Wow, this is huge! Is there a feature list posted for 6.4 somewhere?
Regards,
Shawn
Shawn Drew
Internet
tanen...@
"Thanks for taking the time to respond. I'm thinking that if I want to do
this, I'll probably abandon the TSM Scheduler in favor of home-grown
scripting. But I may just wait to see if snapdiff gets supported on
vFilers, at which point this issue becomes moot for me."
Support for vFiler volumes will be in the TSM 6.40 client. Note that this
support will require ONTAP version 8.1.1 or greater.
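For anyone who does go the home-grown scripting route in the meantime, a wrapper
around snapdiff incrementals would presumably boil down to something like the
following. The volume paths and log file are placeholders, and I haven't tested
this against vFilers:

   #!/bin/sh
   # hypothetical cron job used instead of the TSM scheduler
   for vol in /nas/vol1 /nas/vol2; do
       # -snapdiff uses NetApp snapshot differencing to find changed files;
       # -diffsnapshot=latest reuses the most recent existing snapshot
       dsmc incremental "$vol" -snapdiff -diffsnapshot=latest >> /var/log/dsmc-snapdiff.log 2>&1
   done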
On 08/07/2012 03:26 PM, Arbogast, Warren K wrote:
There are 4500+ directories under /ip, so virtualmountpoints aren't
workable either.
... Why? It's not hard, it's just big.
Envision this:
find /ip -maxdepth 1 -type d | awk '{ print "virtualmountpoint ",$1}' > /var/tmp/dsm.sys.vmp
cat /
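The cat step is cut off above, but presumably the idea is just to splice the
generated statements into the client options file. A sketch of the whole thing,
with the dsm.sys path assumed (it varies by platform and install location):

   #!/bin/sh
   # one virtualmountpoint statement per top-level directory under /ip
   # (note: the list will include /ip itself; add -mindepth 1 if that's not wanted)
   find /ip -maxdepth 1 -type d | awk '{ print "virtualmountpoint", $1 }' > /var/tmp/dsm.sys.vmp

   # append the generated statements to dsm.sys (path is an assumption)
   cat /var/tmp/dsm.sys.vmp >> /opt/tivoli/tsm/client/ba/bin/dsm.sys

Rerunning the find as directories come and go is what keeps 4500+ entries manageable.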
Allen,
No, I'm sure you could put "-snapshotroot=xxx" in the options argument to the
client scheduler. But your response made me realize that I left out one
relevant point: We are backing up several NAS volumes via the scheduler, not
just one. We use the pre- and post- scheduler exits to mou
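For anyone following along, the pre/post hooks mentioned there are the
preschedulecmd and postschedulecmd client options. A sketch of how that might look
in dsm.sys (the script names are made up):

   * in the relevant server stanza of dsm.sys
   preschedulecmd  "/usr/local/bin/mount_nas_snapshots.sh"
   postschedulecmd "/usr/local/bin/umount_nas_snapshots.sh"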
Thanks Allen.
At 10:05 AM 8/8/2012, Allen S. Rout wrote:
>You're in the position that the snapshot you want to use has a stable
>name. There are folks who have snapshots named related to the date of
>consistency-point.
As it turns out, NetApp / nSeries snapshots do have predictable/static names.
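A static snapshot name makes the command-line form that snapshotroot needs easy to
script; a rough sketch (the volume and snapshot names are placeholders):

   # back up the volume's contents as seen in a named, static snapshot
   dsmc incremental /nas/vol1 -snapshotroot=/nas/vol1/.snapshot/nightly.0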
Hi Allen,
This is quite impressive. I will discuss it with the client admin.
Thank you,
Keith
On Aug 8, 2012, at 9:43 AM, Allen S. Rout wrote:
> On 08/07/2012 03:26 PM, Arbogast, Warren K wrote:
>
>> There are 4500+ directories under /ip, so virtualmountpoints aren't
>> workable either.
>
> ...
On 08/08/2012 09:54 AM, Paul Zarnowski wrote:
Allen,
No, I'm sure you could put "-snapshotroot=xxx" in the options
argument to the client scheduler. But your response made me realize
that I left out one relevant point: We are backing up several NAS
volumes via the scheduler, not just one. W
Thanks Del, will do...
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del
Hoobler
Sent: Wednesday, August 08, 2012 7:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP for Exchange 6.1.3 - need help finding out how to
find out what I nee
On 08/07/2012 03:20 PM, Paul Zarnowski wrote:
It seems that the snapshotroot option would be perfect for doing
this, except for the fact that it only seems to work for 'selective'
or 'incremental' backups run from the command line. I don't see a
way to do this for scheduled backups.
Paul, doe
Hi Wanda,
I recommend that you enable tracing, open a PMR,
and send the trace to IBM Support.
You can do that by adding the following to
the "TDPEXCC BACKUP" command:
/TRACEFLAG=SERVICE /TRACEFILE=TRACE.TXT
The trace will tell the IBM Support team
exactly where the code is crashing.
Thank
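For reference, a full invocation with the tracing options added would look roughly
like this; the storage group wildcard and option file name are placeholders, and
the trace flags are as Del gave them:

   rem hypothetical example; substitute your storage group name for *
   tdpexcc backup * full /tsmoptfile=dsm.opt /traceflag=service /tracefile=trace.txt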