Hi Sascha!
Indeed this sounds strange. I can imagine that the delete filespace pins
the log, which causes the log to grow, but as soon as you cancel the
delete filespace, the pinning should be gone and thus the log
utilization should be back to 0.
This only proves my point: I have a PMR open for months now.
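As an aside, one way to watch whether the log actually frees up after the cancel is to query the log utilization from the admin command line (a sketch; the admin id and password are placeholders for your own):

```
dsmadmc -id=admin -password=xxxxx "query log format=detailed"
```

In V5 the detailed output includes the Pct Util and Max Pct Util columns, so repeating the query before and after the cancel should show whether the pinning really went away.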
On 22.11.2011 12:27, Loon, EJ van - SPLXO wrote:
> Hi Sascha!
> Indeed this sounds strange. I can imagine that the delete filespace pins
> the log, which causes the log to grow, but as soon as you cancel the
> delete filespace, the pinning should be gone and thus the log
> utilization should be back to 0.
On 11/21/2011 11:40 PM, Prather, Wanda wrote:
So here's the question. NDMP backups come into the filepool and
identify duplicates is running. But because of those long retention
times, all the volumes in the filepool are FULL but 0% reclaimable, and
they will continue to be that way for 6 months.
Wanda,
when id dup finds duplicate chunks in the same storage pool, it will
raise the pct_reclaim value for the volume it is working on. If the
pct_reclaim isn't going up, that means there are no duplicate chunks
being found. Id dup is still chunking the backups up (watch your
database grow!), but it isn't finding any matches.
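If it helps, the per-volume reclaim values can be watched from the admin command line with a select against the VOLUMES table (a sketch; FILEPOOL is a placeholder storage pool name):

```
select volume_name, pct_utilized, pct_reclaim from volumes where stgpool_name='FILEPOOL'
```

If pct_reclaim stays at 0 across runs of identify duplicates, that matches the symptom above: chunking is happening but no duplicates are being matched.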
Harold, Richard, Gary and Wanda (and everyone else who replied)
Now all four Windows 2008 servers are completing successfully. The partial
dsm.opt is below; all I did was comment out the TCPCLIENTADDRESS line (in two of
the servers' dsm.opt files):
*TCPCLIENTADDRESS xxx.x.x.xxx
Before that I did pu
Wanda,
Are the identify processes issuing any failure notices in the activity log?
You can check whether the id dup processes have found duplicate chunks yet to be
reclaimed by running 'show deduppending'. WARNING: this can
take a long time to return if the stgpool is large, so don't panic!
I am unfamiliar with
In TSM V5, DELETE FILESPACE is extremely resource-intensive. To get rid
of this huge filespace you may have to plan to schedule it in pieces.
Set a schedule to start it every day at a quiet time, and then cancel
the process when the server needs to do something else. Doing it in
small pieces will allow the deletion to make progress without tying up the server.
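For example, an administrative schedule along these lines could kick the deletion off nightly (a sketch; the schedule name, node name, and filespace are placeholders, and you would still cancel the process by hand when the server gets busy):

```
define schedule del_fs type=administrative cmd="delete filespace BIGNODE /big/fs" active=yes starttime=22:00 period=1 perunits=days
query process
cancel process 12
```

Because DELETE FILESPACE picks up where it left off, repeated start/cancel cycles gradually whittle the filespace away without monopolizing a production window.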
NDMP data is not dedupable by TSM when using filepools (as opposed to a VTL
like the ProtecTIER, which does a great job at it) because it is stuffed with
date/time stamps and TSM can't parse the files correctly for hashing at the
moment. When TSM sees NDMP data it doesn't even try to dedupe it.