We have two TSM v7.1.7.0 servers running on different RHEL 7.x servers. The 
primary storage pool is BACKUPPOOL, which has its volumes mounted in the local 
OS as NFS volumes across a 10 Gb network connection. The volumes live on the 
Data Domain, which does its own deduplication in the background. We have a 
schedule that does a full TSM DB backup daily. The target is a separate file 
system, but it is also NFS-mounted from the Data Domain across a 10 Gb network 
connection. The TSM active log is on local disk. The TSM DB is also local, on a 
RAID of SSD drives for performance.
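In case it helps, this is roughly how I check which pieces are NFS-mounted from 
the Data Domain and which are local (any path other than /tsmactivelog below is 
illustrative, not necessarily our exact layout):

    # show which file systems are NFS (Data Domain) vs. local disk
    df -hT | grep -Ei 'nfs|xfs|ext4'
    # NFS mount options in use (rsize/wsize, hard/soft, vers)
    mount | grep -i nfs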
The issue I am seeing is that although there are only 261 S00*.LOG files in 
/tsmactivelog, they all appear to be open multiple times. The command
"lsof | grep -i tsmactive | wc -l"
tells me that there are 94576 open entries for /tsmactivelog. The process that 
has the /tsmactivelog files open is db2sysc. I never saw this on our TSM 6.x 
server. It is almost as if the active log files are opened but never closed. It 
isn't a gradual climb to a high number of open files either: ten minutes after 
booting the server there is already an excessive number of open files in 
/tsmactivelog. This happens even when the server is extremely idle, with very 
few (or no) sessions and/or processes running.
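To cross-check what the lsof pipeline reports, I have also been counting the 
descriptor entries directly under /proc for the db2sysc process(es). This is 
only a rough sketch (it assumes db2sysc is found by pgrep and that root can 
read /proc/<pid>/fd):

    # count distinct open descriptors per db2sysc PID that point
    # at files under /tsmactivelog
    for pid in $(pgrep db2sysc); do
        n=$(ls -l /proc/$pid/fd 2>/dev/null | grep -ci tsmactivelog)
        echo "db2sysc pid $pid: $n descriptors on /tsmactivelog"
    done

The two counts do not have to match line for line (lsof can also report 
memory-mapped entries), but it gives a second data point on how many times each 
S00*.LOG file is really held open.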
The other issue we keep seeing every few days is that a process such as the 
full TSM DB backup runs for a few minutes and then progress just stops. After I 
cancel it, the process never goes away, so I am forced to HALT the server and 
reboot it. The excessive number of open files in /tsmactivelog and the hanging 
DB backup feel related, but I am not sure.
I've been working with IBM on the hanging processes; so far they are also 
stumped, but they agree the two issues seem like they should be related.
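At the next hang, this is roughly what I was thinking of collecting before the 
HALT, mainly to see where the db2sysc threads are stuck (for example, in 
uninterruptible D-state sleep on I/O). The <pid>/<tid> placeholders come from 
the first command:

    # list db2sysc threads stuck in uninterruptible sleep
    ps -eLo pid,lwp,stat,wchan:30,comm | awk '$5 == "db2sysc" && $3 ~ /^D/'
    # kernel stack of one stuck thread (run as root)
    cat /proc/<pid>/task/<tid>/stack

Even if it doesn't explain the open-file counts, it would at least be something 
concrete to hand to IBM.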

I'm hoping someone out there might have some ideas.
