I'm out for vacation

2008-12-22 Thread Julien Sauvanet
I will be out of the office starting  22/12/2008 and will not return until
06/01/2009.

You can contact the France USBS or IMT France Storage:
about BROD, contact Michel Isnard
about SOD, contact Olivier Ranchet


Re: 5.4 --> 6.0 (server)

2008-12-22 Thread Kauffman, Tom
Original Message
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Gee, Norman
Sent: Sunday, December 21, 2008 3:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 5.4 --> 6.0 (server)

> What is your opinion of placing the DB2 database on a DS8300 or any
> other high-end SAN (RAID 5 only, all 15K drives), given the parity
> write overhead?
> Is it possible to access both a DS-4200 and a DS-8300 from the same
> pair of FC cards?  They use different multipath drivers.

I don't know anything about the DS-4200 -- but we've been running our TSM database
on ESS RAID 5 disk for a long time. I figure that if it's good enough for SAP R/3
on Oracle, it's good enough for TSM.

Tom Kauffman
NIBCO, Inc



Re: 5.4 --> 6.0 (server)

2008-12-22 Thread Allen S. Rout
>> On Fri, 19 Dec 2008 12:17:32 -0500, Richard Rhodes said:


> Recently (3 weeks ago) we had a meeting with IBM folks about TSM issues.
> In talking about the coming v6.1 they said that TSM v5.5 would be required
> for the upgrade to v6.1.  Take it with a large grain of salt, but that's
> what they said.

You'll certainly have to pass through 5.5, however briefly.

- Allen S. Rout


Re: 5.4 --> 6.0 (server)

2008-12-22 Thread Allen S. Rout
>> On Sat, 20 Dec 2008 23:56:33 -0800, "Gee, Norman" said:

> I guess with the rumored conversion to DB2, it would no longer be
> feasible to place the database on JBOD mirrored by TSM.  The
> recommendation may be RAID 5 or 6 storage with fibre channel disks
> and not SATA disks. Am I close?

It's my understanding that there will be some documentation about the
recommended DB2 practices, tuned for TSM admins, Real Soon Now.  At
least for the beta audience.

As you can imagine, IBM really would prefer to be able to control the
flow of information on this, so folks don't make plans based on old
information and then get disappointed.  I imagine those of us in the
beta are staying silent for exactly this reason. :)


- Allen S. Rout


Re: 5.4 --> 6.0 (server)

2008-12-22 Thread Richard Sims

We'll see what transition steps are actually required once the 6.1
announcement is made.  Many customers (most?) are not at the 5.5
level, and having to establish that level before going on to 6 will be
a major hassle if it is imposed: IBM would have to commit to making
the 5.5 level available for an extended period, and customers would
have to acquire it, however briefly.  And I would not want to be in
TSM Support when the 6.x problem reports start coming in and they have
to ascertain what transpired during the customer's stay at the 5.5
level, before going on to 6, that may have contributed to the
problems.

If this is a TSM database preparedness step, it would make FAR more
sense for IBM to create a legacy database conversion utility than to
require the major work involved in trudging through 5.5 territory
as a circuitous route to 6.x land.

   Richard Sims


Re: 5.4 --> 6.0 (server)

2008-12-22 Thread Remco Post

On Dec 22, 2008, at 21:16 , Richard Sims wrote:


> We'll see what transition steps are actually required once the 6.1
> announcement is made.  [...]


OK, you guys really have to be either in the beta program or a bit
more patient.  The current documentation is a lot less worrying than
the current assumptions I see on this list.  The 6.1 announcement is
just around the corner.





--
With kind regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: NDMP policies/schedules

2008-12-22 Thread Shawn Drew
I just queried my backups table (it's not bad at all when you're only
dealing with NAS nodes).  The same node can have backups bound to
different management classes.

However, I use virtual filespaces (VIRTUALFSMAPPING) to point to
different snapshots, so I can't verify whether the same volume can have
multiple management classes.
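
For reference, a query along these lines (run from dsmadmc) shows which
management class each image is bound to; the node name is just a
placeholder:

   select node_name, filespace_name, class_name from backups where node_name='NAS_NODE1'

The CLASS_NAME column is what changes when an image gets rebound.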


Regards,
Shawn

Shawn Drew





From:    david-bron...@uiowa.edu (sent by ADSM-L@VM.MARIST.EDU)
Date:    12/18/2008 08:19 PM
To:      ADSM-L
Subject: [ADSM-L] NDMP policies/schedules





I'm wondering what those of you using TSM for NDMP backups are using
for your policy settings and managing your backup schedules.  I figure
some of you must have found some kind of workable policy and schedule
configuration...

I was asked to configure NDMP for our NetApps to retain monthly full
backups for a year, weekly fulls for a month, and daily differentials
for a week.  Welcome back to the old days.  Anyway...

So I set up a policy domain for the filers with management classes for
daily (retextra=6), weekly (retextra=31) and monthly (retextra=366)
backups.  Then I set up scripts for each node to run the "backup node"
commands, with arguments specifying mode (full/diff) and mgmt. class,
and set up admin schedules to run the scripts with the right arguments
on the right days of the week/month.
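
To make that concrete, the definitions looked roughly like the
following, as a dsmadmc macro (domain, storage pool, node, and volume
names here are made-up placeholders, and the parameters are from memory
rather than cut-and-pasted):

   /* One management class per retention; the backup copy group carries RETEXTRA */
   define domain ndmpdom
   define policyset ndmpdom standard
   define mgmtclass ndmpdom standard daily
   define copygroup ndmpdom standard daily type=backup destination=naspool tocdestination=diskpool retextra=6 retonly=6
   define mgmtclass ndmpdom standard weekly
   define copygroup ndmpdom standard weekly type=backup destination=naspool tocdestination=diskpool retextra=31 retonly=31
   define mgmtclass ndmpdom standard monthly
   define copygroup ndmpdom standard monthly type=backup destination=naspool tocdestination=diskpool retextra=366 retonly=366
   assign defmgmtclass ndmpdom standard daily
   activate policyset ndmpdom standard

   /* Per-node script; mode and management class are passed in as $1 and $2 */
   define script ndmp_filer1 "backup node filer1 /vol/vol1 mode=$1 mgmtclass=$2 wait=yes"

   /* Admin schedules supply the right arguments on the right days     */
   /* (the monthly schedule, and keeping daily/weekly runs from        */
   /* overlapping, follow the same pattern and are omitted here)       */
   define schedule filer1_daily type=administrative cmd="run ndmp_filer1 differential daily" active=yes starttime=22:00
   define schedule filer1_weekly type=administrative cmd="run ndmp_filer1 full weekly" active=yes starttime=22:00 dayofweek=saturday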

It all sounds good (as good as NDMP gets on TSM).  Except it doesn't
work.  It turns out that specifying the mgmt. class for an NDMP backup
rebinds all previous images for that volume/filespace (and now support
is telling me it in fact rebinds for all filespaces on that node, but
I haven't verified that assertion).  So my weekly full from two weeks
back is gone, because subsequent dailies have rebound it to a 6-day
retention.  This is detailed in Technote 1240848.

So I opened a PMR about how to get the desired retentions, assuming
it's possible.  The initial feedback from support was to use two nodes
in TSM:  one for monthly fulls with 1 year retention, one for weekly
fulls and daily diffs with 1 month retention (for both full and diffs).
The week following the monthly, the diffs would supposedly be based
off the prior weekly full (now support is unsure about this point).

I'm also exploring using a VIRTUALFSMAPPING to map the volumes into a
/monthly/vol/volname path and run monthly fulls using that path and
corresponding mgmt. class.  This is where the question arises about
whether specifying a different mgmt. class rebinds all filespaces for
the node, or just previous backups of the same filespace.  I plan to
test this soon.
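
A sketch of the mapping and the monthly run I have in mind (the node,
volume, and virtual filespace names are placeholders, and whether
mapping the volume root like this behaves as hoped is exactly what I
still need to test):

   /* Expose the volume under a separate virtual filespace for the monthly fulls */
   define virtualfsmapping filer1 /monthly_vol1 /vol/vol1 /

   /* Monthly full against the virtual filespace, bound to the monthly class */
   backup node filer1 /monthly_vol1 mode=full mgmtclass=monthly wait=yes

If rebinding really is per filespace rather than per node, the images
under /monthly_vol1 should keep their 366-day retention.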

Unfortunately, I have to use NDMP for these backups.  The files are
being migrated from a Windows fileshare cluster to the NetApp.  There
are many millions of files, and on the Windows cluster we absolutely
had to have the TSM journal service in use for the backups to finish
in a reasonable time.  So using a TSM B/A client on a Windows box to
back up the shares doesn't seem viable, since the journal service only
works with local filesystems.

(Oh, to have a native DataONTAP TSM client with journal service...)

Those of you who read this far without your head exploding, congrats
and thanks. :)  So for anyone with similar NDMP requirements, how
have you implemented your solution?

=Dave

--
Hello World.                                       David Bronder - Systems Admin
Segmentation Fault                                 ITS-SPA, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu

