Re: delete filespace and LOGMODE NORMAL

2011-11-22 Thread Loon, EJ van - SPLXO
Hi Sascha!
Indeed this sounds strange. I can imagine that the delete filespace pins
the log, which causes the log to grow, but as soon as you cancel the
delete filespace, the pinning should be gone and thus the log
utilization should be back to 0.
This only proves my point: I have a PMR open for months about log
utilization. Our log continues to grow and triggers a backup several
times a night. We switched to normal mode, just to see what happened,
but this also causes the log to grow. Less (up to 60/70%) but still, the
log grows more than expected. When running in normal mode, the log only
contains uncommitted transactions. Typically large SQL Server client
backups tend to pin the log for a long time, but I also saw that the log
space isn't returned after a pinning transaction is completed.
Development explained that the recovery log uses a sort of a round robin
method and that this is the reason why space isn't returned straight
away.
The fact that a canceled delete filespace doesn't free the log only
proves to me that something is definitely broken/not working correctly
in TSM, but I can't seem to convince development...
Kind regards,
Eric van Loon

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Sascha Askani
Sent: maandag 21 november 2011 15:20
To: ADSM-L@VM.MARIST.EDU
Subject: delete filespace and LOGMODE NORMAL

Hi List,

Again, I'd like to tap into your (almost) infinite wisdom regarding TSM
;)

I have a 5.5.4.1 / Solaris server running here with a node that has
approx. 36 million objects stored. I already exported this node to a
newer (in terms of hardware and TSM version) server and switched
backups.

Now I want to get rid of the filespace on the old server. Since deleting
36 million objects will fill up the recovery log quickly and repeatedly,
I thought setting the logmode to "normal" would result in a nice cleanup
without triggering the DBBACKUPTRIGGER every n minutes.

So I set the logmode to normal, but this didn't keep the log from
filling up: at 78% I cancelled the DELETE FILESPACE. Interestingly, the
log didn't go back to 0% utilization as I would have expected. So I did
a manual DBBACKUP, which zeroed the log.

I also opened a PMR with IBM, my contact told me that cancelling the
DELETE FILESPACE, backing up the DB and resuming the DELETE FILESPACE
was the correct way to do it. So I set the logmode back to ROLLFORWARD
and defined the trigger to run 32 incremental backups; this way I didn't
have to keep an eye on the server while deleting the filespace.
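
In case it helps anyone scripting the same dance, this is roughly what
that cancel/backup/resume cycle looks like from the admin console (the
node, filespace, device class and process names below are placeholders,
not my real ones):

```
/* sketch of the cancel/backup/resume cycle -- all names are placeholders */
set logmode rollforward
define dbbackuptrigger devclass=FILEDEV logfullpct=75 numincremental=32
delete filespace OLDNODE /bigfs
/* when the log gets tight: find and cancel the background process */
query process
cancel process 42
backup db devclass=FILEDEV type=incremental
/* re-issue the delete; it carries on with whatever objects are left */
delete filespace OLDNODE /bigfs
```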

So (finally) my question is: Does "Logmode Normal" not prevent a fill-up
of the log in this case? Sounds like a bug to me somehow. And why did
the log not revert back to zero when I cancelled the DELETE?

Sorry for the long mail and thanks in advance!

Sascha





Re: delete filespace and LOGMODE NORMAL

2011-11-22 Thread Sascha Askani
Am 22.11.2011 12:27, schrieb Loon, EJ van - SPLXO:
> Hi Sascha!
> Indeed this sounds strange. I can imagine that the delete filespace pins
> the log, which causes the log to grow, but as soon as you cancel the
> delete filespace, the pinning should be gone and thus the log
> utilization should be back to 0.

Yes, that's what I was expecting.

> This only proves my point: I have a PMR open for months about log
> utilization. Our log continues to grow and triggers a backup several
> times a night. We switched to normal mode, just to see what happened,
> but this also causes the log to grow. Less (up to 60/70%) but still, the
> log grows more than expected. When running in normal mode, the log only
> contains uncommitted transactions. Typically large SQL Server client
> backups tend to pin the log for a long time, but I also saw that the log
> space isn't returned after a pinning transaction is completed.
> Development explained that the recovery log uses a sort of a round robin
> method and that this is the reason why space isn't returned straight
> away.
> The fact that a canceled delete filespace doesn't free the log only
> proves to me that something is definitely broken/not working correctly
> in TSM, but I can't seem to convince development...

While thinking about your answer I remembered I had a strange behaviour
(yes, yet another one!):

After cancelling the "DELETE FILESPACE" and the log not returning to
zero, I tried a "DELETE VOL n DISCARDD=yes" in the STGPool affected;
after that the log returned to zero immediately, but unfortunately, I
could not reproduce this, so maybe it was just coincidence, who knows?

However, it feels good not being the only one having this type of problem :)

BR,

Sascha


Re: Stupid question about TSM server-side dedup

2011-11-22 Thread Allen S. Rout

On 11/21/2011 11:40 PM, Prather, Wanda wrote:


So here's the question.  NDMP backups come into the filepool and
identify duplicates is running.  But because of those long retention
times, all the volumes in the filepool are FULL, but 0% reclaimable,
and they will continue to be that way for 6 months, as no dumps will
expire until then.  Since the dedup occurs as part of reclaim, and
the volumes won't reclaim -how do we "prime the pump" and get this
data to dedup?  Should we do a few MOVE DATAs to get the volumes
partially empty?



Would RECLAIMSTGPOOL help you here?

The original use case was a disk stgpool to permit those with a single
drive to put the data somewhere whilst reclaiming; the reclaim stgpool
would then eventually drain back to the primary stgpool.

But you might have the NDMP data filter into a pool in which you force
reclamation of _everything_, and have it debouch into another stgpool.
I know that dedupe is per-pool, but I seem to recall that moving from a
dedupe pool to another dedupe pool works?
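
If I follow the idea, something along these lines (pool names invented
for illustration, and I haven't verified the dedupe-to-dedupe move
myself):

```
/* force everything in the NDMP landing pool to be reclaim-eligible */
/* and let reclamation drain it via the reclaim stgpool             */
update stgpool NDMPLANDING reclaim=1 reclaimstgpool=NDMPDEDUP2
```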

- Allen S. Rout


Re: Stupid question about TSM server-side dedup

2011-11-22 Thread Colwell, William F.
Wanda,

when id dup finds duplicate chunks in the same storagepool, it will
raise the pct_reclaim
value for the volume it is working on.  If the pct_reclaim isn't going
up, that means there
are no duplicate chunks being found.  Id dup is still chunking the
backups up (watch your database grow!)
but all the chunks are unique.

Is it possible that the ndmp agent in the storage appliance is putting
in unique metadata with each file?
This would make every backup appear to be unique in chunk-speak.

I remember from the v6 beta that the standard v6 clients were enhanced
so that the metadata could
be better identified by id dup and skipped over so that it could just
work on the files and get
better dedup ratios.  If id dup doesn't know how to skip over the
metadata in an ndmp stream, and
the metadata is always changed, then you will get very low dedup ratios.

If you do a 'q pr' while the id dup is running, do the processes say
they are finding duplicates?

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Monday, November 21, 2011 11:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Stupid question about TSM server-side dedup

I have a customer who would like to go to all-disk backups using TSM dedup.  This
would be a benefit to them in several respects, not the least in having
the ability to replicate to another TSM server using the features in
6.3.

The customer has a requirement to keep their NDMP dumps 6 months.  (I
know that's not desirable, but the backup group has no choice in the
matter right now, it's imposed by a higher level of management.)

The NDMP dumps come via TCP/IP into a regular TSM sequential filepool.
They should dedup like crazy, but client-side dedup is not an option (as
there is no client).

So here's the question.  NDMP backups come into the filepool and
identify duplicates is running.  But because of those long retention
times, all the volumes in the filepool are FULL, but 0% reclaimable, and
they will continue to be that way for 6 months, as no dumps will expire
until then.  Since the dedup occurs as part of reclaim, and the volumes
won't reclaim -how do we "prime the pump" and get this data to dedup?
Should we do a few MOVE DATAs to get the volumes partially empty?


Wanda Prather  |  Senior Technical Specialist  |
wprat...@icfi.com  |
www.icf.com
ICF International  | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 |
410.539.1135 (o)
Connect with us on social media


Re: Windows 2008 client ..TSM schedule won't run

2011-11-22 Thread Hughes, Timothy
Harold, Richard, Gary and Wanda (and everyone else who replied)

Now all four Windows 2008 servers are completing successfully. The partial
dsm.opt is below; all I did was comment out the TCPCLIENTADDRESS line (on two of
the servers' dsm.opt files):

*TCPCLIENTADDRESS xxx.x.x.xxx

Before that I did put in the following

tcpclientport 1501
Webport 1552 1553
HTTPport 1581
TCPport 1500


LANG AMENG
DOMAIN ALL-LOCAL
TCPSERVERADDRESS .xxx..xx.xxs
PASSWORDACCESS GENERATE
*TCPCLIENTADDRESS xxx.x.x.xxx
NODENAME 
tcpclientport 1501
Webport 1552 1553
HTTPport 1581
TCPport 1500
Errorlogname c:\progra~1\tivoli\tsm\baclient\dsmerror.log
SCHEDlogname c:\progra~1\tivoli\tsm\baclient\dsmsched.log
Errorlogretention 14
schedlogretention 7
LARGECOMMbuffers  yes
changingretries 2
subdir yes
txnbytelimit 25600
tcpb 32
tcpw 63
tcpnodelay yes
schedmode prompted
replace prompt
*exclude "*:\MSSQL\DATA\MSSQL.1\MSSQL\Data\*.*"
MANAGEDSERVICES WEBCLIENT SCHEDULE



However, here is another dsm.opt from one of the four that was missing it (but
now the backup is completing successfully), and I did not have to comment out the
TCPCLIENTADDRESS xxx.x.x.xxx

and I DON'T have the following.

Webport 1552 1553
 

I am still confused because on this one I have the following:

tcpclientp 1581, and not HTTPport 1581


LANG AMENG
DOMAIN ALL-LOCAL
TCPSERVERADDRESS .xxx..xxx
TCPCLIENTADDRESS xxx.x.x.xxx
tcpclientp 1581
webports 1500 1501


this one looks like this

LANG AMENG
DOMAIN ALL-LOCAL
TCPSERVERADDRESS .xxx..xx.xx
webports 1500 1501 1552 1553 
tcpclientp 1581

Thanks  for all your help!

Regards
-Original Message-
From: Hughes, Timothy 
Sent: Monday, November 21, 2011 11:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: RE: Windows 2008 client ..TSM schedule won't run

Thanks

I performed the 'dsmc q inclexcl' command to test communication, as Richard
suggested, and I believe the connection was successful. (This is the only one
left to fix; see below.)

Session established with server TSM: AIX
  Server Version 6, Release 2, Level 2.0
  Server date/time: 11/21/2011 10:47:31  Last access: 11/21/2011 10:26:23

*** FILE INCLUDE/EXCLUDE ***
Mode Function  Pattern (match from top down)  Source File
 - -- -
Excl Directory *\...\History.IE5  Server
Excl Directory *\TEMPORARYServer
Excl Directory *\I386\Server
Excl Directory *\RECYCLER Server
Excl Directory *\RECYCLED Server
Excl Directory *\...\Temporary Internet Files Server
Excl Directory *\System Volume InformationServer
Excl Directory *\...\.TsmCacheDir TSM
Excl Directory c:\Windows\system32\microsoft\protect Operating System
Excl Directory \\?\Volume{d7559fa6-87ef-11e0-9592-806e6f6e6963}\Boot Operating S
ystem
Excl Directory C:\Windows\winsxs  Operating System
Excl Directory C:\Windows\Vss\Writers Operating System
Excl Directory C:\Windows\Tasks   Operating System


F.Y.I -

One of the two clients with the remaining issues backed up successfully last 
night

Notice I had  commented out the TCPCLIENTADDRESS which someone suggested wasn't 
really needed.

With this edited dsm.opt file this client worked

LANG AMENG
DOMAIN ALL-LOCAL
TCPSERVERADDRESS .xxx..xx.xx
PASSWORDACCESS GENERATE
*TCPCLIENTADDRESS 516.3.x.x.100
tcpclientport 1501
Webport 1552 1553
HTTPport 1581
TCPport 1500
NODENAME caplira

However, this backup did not work (notice I left the TCPCLIENTADDRESS
126.2.2.188 in):

LANG AMENG
DOMAIN ALL-LOCAL
TCPSERVERADDRESS .xxx..xx.xx 
PASSWORDACCESS GENERATE
TCPCLIENTADDRESS xxx.x.x.xxx
NODENAME malina22
tcpclientport 1501
Webport 1552 1553
HTTPport 1581
TCPport 1500

Wanda



Thanks again Harold, Richard and Wanda



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Monday, November 21, 2011 10:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Windows 2008 client ..TSM schedule won't run

Another thing I've done in cases like this is temporarily switch from prompted 
to polling mode.  That's usually easier to get working, as it eliminates a 
couple of the problems, like incorrect/changed IP address, and needs only 1 
port in the firewall (1500).
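
For the record, polling mode is a one-line dsm.opt change plus the single port
(a sketch; 1500 is just the default, adjust to your environment):

```
* dsm.opt: switch the scheduler from server-prompted to client-polling
SCHEDMODE POLLING
TCPPORT 1500
```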

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [BS]
Sent: Monday, November 21, 2011 9:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Windows 2008 client ..TSM schedule won't run

Good details from Richard.

Are all the clients on the same remote network segment?

We had a case once where an agency setup new clients, but put them on a 
different network segment.  Firewall became the issue, again, even though the 
firewall mgr said "not the firewall."  The FW manager didn't know there were 
new IPs involved.

We also had a case where the client name in the DSM.OPT f

Re: Stupid question about TSM server-side dedup

2011-11-22 Thread Ian Smith

Wanda,

Are the identify processes issuing any failure notices in the activity log ?

You can check whether id dup processes have found duplicate chunks yet to be
reclaimed by running 'show deduppending '. WARNING: this can
take a long time to return if the stgpool is large, don't panic!

I am unfamiliar with NDMP backup but off the top of my head a couple of
other (simple) things to check would be:
is the server-side SERVERDEDUPETXNLIMIT option set very low  and
preventing dedup id ?

Have these dumps been backed up to copypool yet ? ( perhaps you've
overlooked the deduperequiresbackup option at the server )?
- IIRC the identify processes run but find nothing if this option is set
and the data has not yet been backed up to copypool.
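
All of these checks can be run from an admin session; a sketch (SHOW commands
are undocumented, and the pool name here is a placeholder):

```
/* identify failures, if any, land in the activity log */
query actlog search=identify begindate=today-1
/* duplicate chunks found but not yet reclaimed (can run long) */
show deduppending NDMPFILEPOOL
/* list server options; look for the dedup txn limit and    */
/* deduprequiresbackup settings in the output               */
query option
```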

Ian Smith


On 22/11/11 15:17, Colwell, William F. wrote:

Wanda,

when id dup finds duplicate chunks in the same storagepool, it will
raise the pct_reclaim
value for the volume it is working on.  If the pct_reclaim isn't going
up, that means there
are no duplicate chunks being found.  Id dup is still chunking the
backups up (watch your database grow!)
but all the chunks are unique.

Is it possible that the ndmp agent in the storage appliance is putting
in unique metadata with each file?
This would make every backup appear to be unique in chunk-speak.

I remember from the v6 beta that the standard v6 clients were enhanced
so that the metadata could
be better identified by id dup and skipped over so that it could just
work on the files and get
better dedup ratios.  If id dup doesn't know how to skip over the
metadata in an ndmp stream, and
the metadata is always changed, then you will get very low dedup ratios.

If you do a 'q pr' while the id dup is running, do the processes say
they are finding duplicates?

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Monday, November 21, 2011 11:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Stupid question about TSM server-side dedup

I have a customer who would like to go to all-disk backups using TSM dedup.  This
would be a benefit to them in several respects, not the least in having
the ability to replicate to another TSM server using the features in
6.3.

The customer has a requirement to keep their NDMP dumps 6 months.  (I
know that's not desirable, but the backup group has no choice in the
matter right now, it's imposed by a higher level of management.)

The NDMP dumps come via TCP/IP into a regular TSM sequential filepool.
They should dedup like crazy, but client-side dedup is not an option (as
there is no client).

So here's the question.  NDMP backups come into the filepool and
identify duplicates is running.  But because of those long retention
times, all the volumes in the filepool are FULL, but 0% reclaimable, and
they will continue to be that way for 6 months, as no dumps will expire
until then.  Since the dedup occurs as part of reclaim, and the volumes
won't reclaim -how do we "prime the pump" and get this data to dedup?
Should we do a few MOVE DATAs to get the volumes partially empty?


Wanda Prather  |  Senior Technical Specialist  |
wprat...@icfi.com   |
www.icf.com
ICF International  | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 |
410.539.1135 (o)
Connect with us on social media


Re: delete filespace and LOGMODE NORMAL

2011-11-22 Thread Roger Deschner
In TSM V5, DELETE FILESPACE is extremely resource-intensive. To get rid
of this huge filespace you may have to plan to schedule it in pieces.
Set a schedule to start it every day at a quiet time, and then cancel
the process when the server needs to do something else. Doing it in
small pieces will also keep it from filling the log. Repeat for as many
days as it takes to get rid of the filespace. I don't know if it's
faster in V6, but I sure hope so, since the average Apple Mac client
node has about 1,000,000 files, and we've got many Macs. The larger log
size in V6 should help.
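
For the record, the daily kick-off can be wired up as an administrative
schedule, with the cancel done by hand when the server needs the resources back
(the schedule name, node, filespace and times are only an example):

```
/* start the delete each night at 02:00; cancel the process manually */
/* (cancel process n) when the server is needed for real work        */
define schedule DEL_OLDFS type=administrative cmd="delete filespace OLDNODE /bigfs" active=yes starttime=02:00 period=1 perunits=days
```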

More comments below.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== "You will finish your project ahead of schedule." ===
= (Best fortune-cookie fortune ever.) ==


On Tue, 22 Nov 2011, Sascha Askani wrote:

>Am 22.11.2011 12:27, schrieb Loon, EJ van - SPLXO:
>> Hi Sascha!
>> Indeed this sounds strange. I can imagine that the delete filespace pins
>> the log, which causes the log to grow, but as soon as you cancel the
>> delete filespace, the pinning should be gone and thus the log
>> utilization should be back to 0.
>
>Yes, that's what I was expecting.
>
>> This only proves my point: I have a PMR open for months about log
>> utilization. Our log continues to grow and triggers a backup several
>> times a night. We switched to normal mode, just to see what happened,
>> but this also causes the log to grow. Less (up to 60/70%) but still, the
>> log grows more than expected. When running in normal mode, the log only
>> contains uncommitted transactions. Typically large SQL Server client
>> backups tend to pin the log for a long time, but I also saw that the log
>> space isn't returned after a pinning transaction is completed.
>> Development explained that the recovery log uses a sort of a round robin
>> method and that this is the reason why space isn't returned straight
>> away.
>> The fact that a canceled delete filespace doesn't free the log only
>> proves to me that something is definitely broken/not working correctly
>> in TSM, but I can't seem to convince development...
>
>While thinking about your answer I remembered I had a strange behaviour
>(yes, yet another one!):
>
>After cancelling the "DELETE FILESPACE" and the log not returning to
>zero, I tried a "DELETE VOL n DISCARDD=yes" in the STGPool affected;
>after that the log returned to zero immediately, but unfortunately, I
>could not reproduce this, so maybe it was just coincidence, who knows?

I have seen this kind of behavior. It was a coincidence. The thing that
caused the log to go back down to 0 was the simple passage of time, so
don't bother with DELETE VOL or any other trick. Sometimes it can take
several hours.

This is how TSM behaves about a lot of things, including CANCEL PROCESS.
It acts on them when it gets around to it and it decides that all the
related locks have expired, or it gets to the next file to be moved, or
whatever else it is that makes it take its time. A lot of things can be
held up by a client node restore that is underway, which will preempt
most other processes. Relax and go with the flow.

--Roger


>
>However, it feels good not being the only one having this type of problem :)
>
>BR,
>
>Sascha
>


Re: Stupid question about TSM server-side dedup

2011-11-22 Thread Stefan Folkerts
NDMP data is not dedupable by TSM when using filepools (as opposed to a VTL
like the ProtecTIER, which does a great job at it) because it is stuffed with
date/time stamps and TSM can't parse the files correctly for hashing at the
moment; when TSM sees NDMP data it doesn't even try to dedupe it.

So NDMP with TSM dedupe is not happening at the moment; I don't know about
the roadmap for this feature.




On Tue, Nov 22, 2011 at 5:23 PM, Ian Smith  wrote:

> Wanda,
>
> Are the identify processes issuing any failure notices in the activity log
> ?
>
> You can check whether id dup processes have found duplicate chunks yet to be
> reclaimed by running 'show deduppending '. WARNING: this can
> take a long time to return if the stgpool is large, don't panic!
>
> I am unfamiliar with NDMP backup but off the top of my head a couple of
> other (simple) things to check would be:
> is the server-side SERVERDEDUPETXNLIMIT option set very low  and
> preventing dedup id ?
>
> Have these dumps been backed up to copypool yet ? ( perhaps you've
> overlooked the deduperequiresbackup option at the server )?
> - IIRC the identify processes run but find nothing if this option is set
> and the data has not yet been backed up to copypool.
>
> Ian Smith
>
>
>
> On 22/11/11 15:17, Colwell, William F. wrote:
>
>> Wanda,
>>
>> when id dup finds duplicate chunks in the same storagepool, it will
>> raise the pct_reclaim
>> value for the volume it is working on.  If the pct_reclaim isn't going
>> up, that means there
>> are no duplicate chunks being found.  Id dup is still chunking the
>> backups up (watch your database grow!)
>> but all the chunks are unique.
>>
>> Is it possible that the ndmp agent in the storage appliance is putting
>> in unique metadata with each file?
>> This would make every backup appear to be unique in chunk-speak.
>>
>> I remember from the v6 beta that the standard v6 clients were enhanced
>> so that the metadata could
>> be better identified by id dup and skipped over so that it could just
>> work on the files and get
>> better dedup ratios.  If id dup doesn't know how to skip over the
>> metadata in an ndmp stream, and
>> the metadata is always changed, then you will get very low dedup ratios.
>>
>> If you do a 'q pr' while the id dup is running, do the processes say
>> they are finding duplicates?
>>
>> Bill Colwell
>> Draper Lab
>>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
>> Prather, Wanda
>> Sent: Monday, November 21, 2011 11:41 PM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: Stupid question about TSM server-side dedup
>>
>> I have a customer who would like to go to all-disk backups using TSM dedup.  This
>> would be a benefit to them in several respects, not the least in having
>> the ability to replicate to another TSM server using the features in
>> 6.3.
>>
>> The customer has a requirement to keep their NDMP dumps 6 months.  (I
>> know that's not desirable, but the backup group has no choice in the
>> matter right now, it's imposed by a higher level of management.)
>>
>> The NDMP dumps come via TCP/IP into a regular TSM sequential filepool.
>> They should dedup like crazy, but client-side dedup is not an option (as
>> there is no client).
>>
>> So here's the question.  NDMP backups come into the filepool and
>> identify duplicates is running.  But because of those long retention
>> times, all the volumes in the filepool are FULL, but 0% reclaimable, and
>> they will continue to be that way for 6 months, as no dumps will expire
>> until then.  Since the dedup occurs as part of reclaim, and the volumes
>> won't reclaim -how do we "prime the pump" and get this data to dedup?
>> Should we do a few MOVE DATAs to get the volumes partially empty?
>>
>>
>> Wanda Prather  |  Senior Technical Specialist  |
>> wprat...@icfi.com   |
>> www.icf.com
>> ICF International  | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 |
>> 410.539.1135 (o)
>> Connect with us on social media
>>
>