Re: Windows TSM server 6.1.2.0 after clean install : ANR2968E Database backup terminated. DB2 sqlcode: -2033.

2009-08-29 Thread Stefan Folkerts
I get your point Bill but if something this basic is broken and has been
broken for a while (I gather) somebody would just take it out for the
time being, it's not that much work to just disable a function like that
in the software.
People who see the wizard and used to use it will use it again, because
it's what they know from the past.
I mean, it's not like you need major testing to figure it out; just
deploy and do a db backup, and the failure will occur. :)

I didn't work with the Windows 6.1 TSM instance before but now I know
how to get it to work so no more problems.

I am actually happy with TSM 6.1.2.0 now that I have the DB backup
running; things are sailing along just fine and I LOVE the dedupe on my
filepool. I'm getting about 20% reduction in storage usage, but that
will increase once I get more full Exchange DBs in there. :)

Oh, and about what you said on the ISC/AC thing: neither I nor my
customers use that stuff, we are TSMmanager lovers!
Amazing how a 10MB tool can do the same job faster and more
resource-efficiently... TSMmanager rocks! :)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Bill Boyer
Sent: Thursday, August 27, 2009 15:10
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Windows TSM server 6.1.2.0 after clean install :
ANR2968E Database backup terminated. DB2 sqlcode: -2033.

The document describes this as being a problem using the Microsoft
Management Console (MMC) plug-in for TSM. In the past this was the
easiest way to create a new TSM instance on Winders. With TSM 6.1 (and I
was part of the beta, too) I reported issues with the MMC crashing and
not doing the instance creation correctly. This was where I learned
about the new DSMICFGX, which worked every time I used it in the beta
program. I have not done a 6.1.x install since the beta program ended,
so I can't say how it's been since. But according to Bill Colwell
(below) the wizard works just fine, every time. I'm trying to think who
it was that posted to the TSM beta forum about the MMC, he was also the
author of the wizard... Dave Cannan maybe??... but the impression I got
was that the wizard should be used in place of the MMC to create
instances, and maybe the MMC would be upgraded if they got around to it.

They have this wizard now, don't distribute TOR with 6.1, and
administration is through the ISC/Admin Console. So what's left in the
MMC to even bother with?


Bill Boyer
"Some days you're the windshield and some days you're the bug" - ??


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Stefan Folkerts
Sent: Thursday, August 27, 2009 2:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Windows TSM server 6.1.2.0 after clean install : ANR2968E
Database backup terminated. DB2 sqlcode: -2033.

I was in the beta as well Bill. :)

But I am sorry to say IBM did not do a good job on the 6.1.2.0 Windows
release instance wizard; Wanda pointed me to the problem on the IBM page:
http://www-01.ibm.com/support/docview.wss?uid=swg21390301

There they confirm the problem. I did two clean installs and it just
doesn't work out of the box.
I can confirm that it still doesn't work with the TSM 6.1.2.0 64bit
install package.

Stefan

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Colwell, William F.
Sent: Wednesday, August 26, 2009 17:55
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Windows TSM server 6.1.2.0 after clean install :
ANR2968E Database backup terminated. DB2 sqlcode: -2033.

Stefan,

I was in the beta and I never got a database to back up because of the
API config difficulties.  Fortunately this is all handled now by the
instance creation wizard, dsmicfgx.
I have run it 3 times, and in every case the instance is created
successfully, starts up, and the DB backs up because the API is
configured.  I normally don't do wizards, but IBM did a good job on this
one.

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Wanda Prather
Sent: Wednesday, August 26, 2009 9:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Windows TSM server 6.1.2.0 after clean install : ANR2968E
Database backup terminated. DB2 sqlcode: -2033.

Been there, done that.
Go to www.ibm.com and in the search window put: 1390301
That says what you need to do to get the problem fixed.
What it omits to say is that you must be logged in with the DB2 userid
when you do it.


On Wed, Aug 26, 2009 at 6:24 AM, Stefan Folkerts
wrote:

> I have done a clean install of the 64bit Windows version of TSM
> 6.1.2.0 on Windows 2008 DC + SP1 + Windows patches.
> After I do a minimal setup of the server instance I am able to connect
> to the instance using TSMmanager.
> When I set the dbrecovery option to the default file device class I
> should be able to back up the TSM database with the 'ba db type=full
> devclass=FILEDEV1' command (FILEDEV1 is the name of th

TSM 6.1 + HACMP

2009-08-29 Thread Mehdi Salehi
Hi,
I am planning to put TSM 6.1 under the control of HACMP 5.4 in a
two-node cluster. The AIX version is 6.1. This task is more complex than
with previous TSM versions because of DB2, as you know. The question is:
do I need DB2 HADR?

Regards,
Mehdi Salehi


Seeking wisdom on dedupe..filepool file size client compression and reclaims

2009-08-29 Thread Stefan Folkerts
TSM gurus of the world,

I am toying around with a new TSM server we have, and I am pondering
some options I'd like your thoughts on.
I am setting up a 6.1.2.0 TSM server with a filepool only, planning on
using deduplication.

When I set up a filepool I usually make the volume size fairly small:
10G, maybe ~20G, depending on the expected size of the TSM environment.
I do this because if a 100G volume is full and starts expiring, reclaim
won't occur for a while, and that makes up to 49% (49GB) of the volume
space useless and wasted.
So I set up 10G volumes in our shop (very small server) and just accept
the fact that I have a lot of volumes; no problem, TSM can handle a lot
of volumes.

Now I am thinking: dedupe only occurs when you move data off the volumes
or reclaim them, but 10G volumes might not get reclaimed for a LONG
time. Since they contain so little data, the chance of a volume getting
reclaimed, and thus deduplicated, is relatively smaller than for a 100G
volume.

As an example: I migrated all the data from our old 5.5 TSM server to
the new one using an export node command. Once it was done I scripted a
move data for all the volumes, and I went from 0% to 20% dedupe savings
in 8 hours.
Had I let TSM handle this, it would have taken a LONG time to get there.

If I do a full Exchange backup I fill 10 volumes with data; identify
will mark data on them for deduplication, but it won't have any effect
at all since the data will expire before the volumes are reclaimed.
This full Exchange backup will happen every week and is held for 1
month; that means the bulk of my data gets no benefit from deduplication
with this setup. Or am I missing something here? :)

So I am thinking: with a 10G volume being filled above the reclaim
threshold so easily, and therefore missing the dedupe action, what
should one do?
I would almost consider a query nodedata script that would identify
Exchange node data and move that around for some dedupe action.

Also, client compression: does anybody have any figures on how this
affects the effectiveness of deduplication?
Both are of interest in a filepool; if deduplication works just as well
in combination with compression, that would be great.

Regards,

  Stefan


Re: Daily TSM maintenance schedules

2009-08-29 Thread Roger Deschner
We're big (2TB/day of backup data Mon-Fri nights). Expiration is the
20-ton elephant in the room. If it doesn't run to completion every once
in a while, we're in trouble and we have the dreaded database bloat. It
has happened, and it wasn't pretty.

You can call this schedule crazy, but it's what works for a big TSM system:

CLIENT BACKUP + EXPIRATION + RECLAMATION (5PM)
BACKUP STGPOOLS + EXPIRATION
MIGRATION + EXPIRATION
BACKUP DEVCONFIG + BACKUP VOLHIST ***
BACKUP DB (incr on weekdays, full on Sun) + RECLAMATION
DELETE VOLHIST (triggered by end of BACKUP DB) + RECLAMATION
RECLAMATION + EXPIRATION (1-5PM - a relatively slow time. This is my
  maintenance window.)

*** These are fast. They run while the tape lib is dismounting the
migration tapes and mounting the DB BACKUP tape.
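A chain like the one above could be laid down as TSM administrative schedules. The sketch below just generates the DEFINE SCHEDULE commands; the schedule names, start times, and pool/device class names are invented examples, not Roger's actual definitions.

```python
# Sketch: render a maintenance chain as TSM administrative schedule
# definitions. All names, times, pools, and device classes are examples.

def admin_schedule(name, cmd, starttime):
    """One DEFINE SCHEDULE admin command (daily, active)."""
    return (f'define schedule {name} type=administrative '
            f'cmd="{cmd}" active=yes starttime={starttime}')

chain = [
    ("expire_eve", "expire inventory",                         "17:00"),
    ("backup_stg", "backup stgpool DISKPOOL COPYPOOL",         "20:00"),
    ("migrate",    "migrate stgpool DISKPOOL lowmig=0",        "23:00"),
    ("backup_db",  "backup db type=incremental devclass=LTO3", "03:00"),
    ("reclaim",    "reclaim stgpool TAPEPOOL threshold=60",    "13:00"),
]

for name, cmd, start in chain:
    print(admin_schedule(name, cmd, start))
```

Each printed line can be pasted into an admin session; staggering the start times is what keeps the drives and the database busy around the clock.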

Saturday morning only: DELETE FILESPACE (we queue them up all week)
You've got to work DELETE FILESPACE into your schedule, because it
involves very heavy database I/O. You can't EXPIRE INVENTORY or BACKUP
DB during DELETE FILESPACE. MIGRATION won't even start if DELETE
FILESPACE is running. We do allow users to do it themselves, but in
actual practice they never do. We've got to nag them about ancient,
abandoned filespaces, and then we do it for them Sat AM. This is also
when we remove old nodes.

We use a tape reuse delay of 2 days, as disaster protection for doing
things in the "wrong" order.

The idea here is to keep the CPU and the 15,000rpm database disks busy
24/7, and to use as many tape drives as possible for as much of the time
as possible. I am constantly tuning this schedule. The basic problem is
that there are only 24 hours in a day.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=


On Fri, 28 Aug 2009, Howard Coles wrote:

>Correction, it's "marked for expiration" and it is still recoverable,
>until the "Expire" process runs and removes it from the database.  I
>know this from experience, as we disabled our expiration process for a
>few days due to a server failure, and once due to a legal request.  The
>Expire process actually removes the DB entry for that version of the
>file.
>
>See Ya'
>Howard
>
>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
>> Of Wanda Prather
>> Sent: Friday, August 28, 2009 11:21 AM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: Re: [ADSM-L] Daily TSM maintenance schedules
>>
>> Agreed.
>> "expire inventory" is actually something of a misnomer.
>>
>> If you have your retention set to 14 versions, and someone takes the
>> 15th backup, the oldest version expires right then; you can't get it
>> back.  That type of "expiration" takes place whether EXPIRE INVENTORY
>> runs or not.
>>
>> The EXPIRE INVENTORY is what updates the %utilization on your tapes,
>> based on the files that have expired.  I think it also does some
>> cleanup to make space in the DB for the expired files reusable.
>>
>> So you want to run EXPIRE INVENTORY before reclaim.
>> But I don't think it affects your migration in any way.
>>
>> W
>>
>> On Fri, Aug 28, 2009 at 11:23 AM, Thomas Denier <
>> thomas.den...@jeffersonhospital.org> wrote:
>>
>> > -Sergio O. Fuentes wrote: -
>> >
>> > >I'm revising my TSM administrative schedules and wanted to take an
>> > >informal poll on how many of you lay out your daily TSM maintenance
>> > >routines.  Functions I'm talking about here include:
>> > >
>> > >BACKUP DISK STGPOOLS
>> > >BACKUP TAPE STGPOOLS
>> > >BACKUP DEVCONFIG
>> > >BACKUP VOLHIST
>> > >BACKUP DB TYPE=FULL
>> > >PREPARE
>> > >DELETE VOLHIST
>> > >MIGRATE STG
>> > >EXPIRE INV
>> > >RECLAIM TAPE
>> > >RECLAIM OFFSITES
>> > >CLIENT BACKUP WINDOW STARTS
>> > >
>> > >The above sequence is roughly how I handle our maintenance and is
>> > >based off of the IBM Redbook (sg247379) TSM Deployment Guide for
>5.5
>> > >(page 300).  I'm seriously considering altering it in this manner:
>> > >
>> > >BACKUP STGPOOLS
>> > >BACKUP DEVCONFIG
>> > >BACKUP VOLHIST
>> > >BACKUP DB
>> > >PREPARE
>> > >DELETE VOLHIST
>> > >EXPIRE INV
>> > >RECLAIM
>> > >MIGRATE STG
>> > >CLIENT BACKUP WINDOW STARTS
>> > >
>> > >The key difference here, is that I'd be expiring right after the DB
>> > >Backups, and reclaiming space before migration.  I feel that this
>> > >would be more efficient in terms of processing actual unexpired
>data
>> > >and data storage (since reclamation would have freed up storage
>> > >space).  I would be concerned that migration would run in
>perpetuity
>> > >in cases where the migration window runs into the client backup
>> > >window.  Therefore, I might have migrations run before
>reclamations.
>> > >Does anyone else expire data right after your DB backups on a daily
>> > >basis?  Suggestions from anyone?  Thank you kindly.
>> >
>> > I don

Re: Seeking wisdom on dedupe..filepool file size client compression and reclaims

2009-08-29 Thread Allen S. Rout
>> On Sat, 29 Aug 2009 09:24:11 +0200, Stefan Folkerts 
>>  said:


> Now I am thinking, dedupe only occurs when you move data the volumes
> or reclaim them but 10G volumes might not get reclaimed for a LONG
> time since they contain so little data the chance of that getting
> reclaimed and thus deduplicated is relatively smaller than that
> happening on a 100G volume.

I think that, to a first approximation, the size of the volume is
irrelevant to the issues you're discussing here.

Do a gedankenexperiment: Split 100TB into 100G vols, and then into 10G
vols.  Then randomly expire data from them.

What you'll have is a bunch of volumes ranging from (say) 0% to 49%
reclaimable.  You will reclaim your _first_ volume a skewtch sooner in
the 10G case. But on the average, you'll reclaim 500G of space in
about the same number of days.  Or said differently: in a week you'll
reclaim about the same amount of space in each case.

I need to publish a simulator.
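In the meantime, here is a toy simulator along those lines. It is a sketch only: the daily expiry rate, the 50% reclaim threshold, and the uniform-random expiry model are all assumptions, not TSM's actual reclamation behavior.

```python
import random

# Toy model: N equal-size volumes, each expiring a small random slice of
# its data per day; a volume is "reclaimed" (its expired space freed)
# once its reclaimable fraction passes the threshold.

def days_to_reclaim(total_gb, vol_gb, target_gb,
                    daily_expiry=0.02, threshold=0.5, seed=0):
    """Days until target_gb of space has been freed by reclamation."""
    rng = random.Random(seed)
    nvols = total_gb // vol_gb
    expired = [0.0] * nvols          # reclaimable fraction per volume
    freed, days = 0.0, 0
    while freed < target_gb:
        days += 1
        for i in range(nvols):
            if expired[i] is None:   # already reclaimed
                continue
            expired[i] = min(1.0, expired[i] + rng.random() * 2 * daily_expiry)
            if expired[i] >= threshold:
                freed += expired[i] * vol_gb
                expired[i] = None    # volume emptied, back to scratch
    return days

# Same 100TB pool, same 500GB reclaim goal, two volume sizes:
small = days_to_reclaim(100_000, 10, target_gb=500)
large = days_to_reclaim(100_000, 100, target_gb=500)
print(small, large)
```

Under this (random-expiry) model the two day counts come out roughly comparable, which is the point of the thought experiment; a workload with long-lived database volumes would need a different expiry model.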


So pick volume sizes that avoid being silly in any direction.

- Allen S. Rout


Re: Seeking wisdom on dedupe..filepool file size client compression and reclaims

2009-08-29 Thread Stefan Folkerts
Interesting ideas, and a simulator would be fun for this purpose.
You could be right, and your example does make sense in a way, but
still... I do wonder if it works out in the real world.

Let's say you have normal data that expires (user files etc.) and large
databases, some of which you keep for many months and sometimes even
years.

If you use 200G volumes and a database fills a volume to 60%+, this
volume might not be reclaimed for a long time even after the rest of the
data has expired; that leaves 80G wasted.

If you use 18G volumes and the database fills 6 volumes to 100% and one
volume to 60%, that leaves a waste of only 7.2GB.
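A quick check of that arithmetic (a throwaway calculation of mine, assuming the database is the only long-lived data on its volumes): with equal-size volumes, only the last, partially-filled volume strands space, so the waste is bounded by one volume size.

```python
# Space stranded on the last, partially-filled volume that holds a
# long-lived database of db_gb GB, when volumes are vol_gb GB each.

def stranded_gb(db_gb, vol_gb):
    leftover = db_gb % vol_gb
    return round(0.0 if leftover == 0 else vol_gb - leftover, 1)

# The example above: 6 x 18G full volumes plus one 60%-full volume.
db = 6 * 18 + 0.6 * 18           # 118.8 GB of database data
print(stranded_gb(db, 200))      # single 200G volume -> 81.2
print(stranded_gb(db, 18))       # 18G volumes -> 7.2
```

So the small-volume layout wastes 7.2GB (the empty 40% of the seventh volume), while parking the same database on one 200G volume strands roughly 80GB, matching the figures in the text.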

Also, I don't have a clue what the downside of small volumes could be;
is there a disadvantage to having a few hundred volumes instead of 30
large ones?
I can't think of a problem, except maybe fs performance if the number of
volumes becomes insane, or a slight TSM performance impact when you
start the reclaim process or do a query nodedata or something like that.

This remark: "Do a gedankenexperiment: Split 100TB into 100G vols, and
then into 10G vols. Then randomly expire data from them." is not how I
think real-world data works. Data doesn't expire randomly; parts of it
do, but large chunks of it (databases) don't.

Please prove me wrong, I'd love to learn new stuff! :)


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Allen S. Rout
Sent: Sunday, August 30, 2009 1:57
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Seeking wisdom on dedupe..filepool file size
client compression and reclaims

>> On Sat, 29 Aug 2009 09:24:11 +0200, Stefan Folkerts
 said:


> Now I am thinking, dedupe only occurs when you move data the volumes
> or reclaim them but 10G volumes might not get reclaimed for a LONG
> time since they contain so little data the chance of that getting
> reclaimed and thus deduplicated is relatively smaller than that
> happening on a 100G volume.

I think that, to a first approximation, the size of the volume is
irrelevant to the issues you're discussing here.

Do a gedankenexperiment: Split 100TB into 100G vols, and then into 10G
vols.  Then randomly expire data from them.

What you'll have is a bunch of volumes ranging from (say) 0% to 49%
reclaimable.  You will reclaim your _first_ volume a skewtch sooner in
the 10G case. But on the average, you'll reclaim 500G of space in
about the same number of days.  Or said differently: in a week you'll
reclaim about the same amount of space in each case.

I need to publish a simulator.


So pick volume sizes that avoid being silly in any direction.

- Allen S. Rout