On 6.12.2018. 6:20, Schneck Dennis wrote:
> Hello Sasa,
>
>> did you tune kernel parameters for your Linux TSM server?
>
> yes, here is my /etc/sysctl.conf
>
> kernel.randomize_va_space=0
> vm.swappiness=0
> vm.overcommit_memory=0
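[Editor's note: the sysctl.conf above sets only three of the parameters the Knowledge Center page linked later in this thread recommends. As a sketch of what the full DB2-style set looks like - the 16 GB RAM figure is an assumption, since the thread never states the server's memory, and the formulas should be double-checked against the IBM page for your release:]

```shell
# Sketch: generate the IBM-recommended kernel settings for a TSM/Spectrum
# Protect server, scaled from RAM size. RAM_GB=16 is an assumed example
# value, not taken from the thread.
RAM_GB=16
SHMMNI=$((RAM_GB * 256))                 # one IPC segment slot per 4 MB of RAM
SHMMAX=$((RAM_GB * 1024 * 1024 * 1024))  # max segment size = RAM in bytes
SHMALL=$((2 * SHMMAX / 4096))            # total shared memory, in 4 KB pages
MSGMNI=$((RAM_GB * 1024))                # message queue identifiers
SEMMNI=$((RAM_GB * 256))                 # semaphore arrays

cat <<EOF
kernel.shmmni = $SHMMNI
kernel.shmmax = $SHMMAX
kernel.shmall = $SHMALL
kernel.sem = 250 256000 32 $SEMMNI
kernel.msgmni = $MSGMNI
kernel.msgmax = 65536
kernel.msgmnb = 65536
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
EOF
```

The generated lines would go into /etc/sysctl.conf and be applied with `sysctl -p`.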
My ipcs is somewhat different, and I doubt it has anything to do with your
problem - here it is:

# ipcs -l

------ Messages Limits --------
max queues system wide = 131072
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536

------ Shared Memory Limits --------
max number of segments = 32768
max seg size (kbytes) = 134217728
max total shared memory (kbytes) = 18014398442373116
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 32768
max semaphores per array = 300
max semaphores system wide = 9830400
max ops per semop call = 35
semaphore max value = 32767
----------------------------------------------

And your TSMDB1 looks just fine...

Also, I doubt the following is connected with your issue, but here it is:
did you tune the TSM instance user in /etc/security/limits.conf?
This is mine:

tsminst01 soft nofile 26400
tsminst01 hard nofile 66000

--
Sasa

> tsm01:~ # ipcs -l
>
> ------ Messages Limits --------
> max queues system wide = 64512
> max size of message (bytes) = 65536
> default max size of queue (bytes) = 65536
>
> ------ Shared Memory Limits --------
> max number of segments = 16128
> max seg size (kbytes) = 18014398509481983
> max total shared memory (kbytes) = 18014398509480960
> min seg size (bytes) = 1
>
> ------ Semaphore Limits --------
> max number of arrays = 32000
> max semaphores per array = 32000
> max semaphores system wide = 1024000000
> max ops per semop call = 500
> semaphore max value = 32767
>
>> And what is the size of your TSM DB and on what type of storage and
>> filesystem is it located?
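[Editor's note: when comparing two `ipcs -l` listings like the ones above, a small parser saves eyeballing. `check_min` below is a hypothetical helper, not a TSM tool; the sample values are Dennis's from this thread, and the target minimums are illustrative, not IBM's official requirements:]

```shell
# Sketch: flag ipcs -l values that fall below a chosen minimum.
# The sample is a fragment of Dennis's ipcs -l output from the thread.
sample='max queues system wide = 64512
max semaphores per array = 32000
max ops per semop call = 500'

check_min() {  # usage: check_min "ipcs label" minimum
    val=$(printf '%s\n' "$sample" | awk -F' = ' -v k="$1" '$1 == k {print $2}')
    if [ "$val" -lt "$2" ]; then
        echo "LOW: $1 = $val (want >= $2)"
    else
        echo "OK: $1 = $val"
    fi
}

check_min "max queues system wide" 131072
check_min "max ops per semop call" 32
```

On a live system, `sample=$(ipcs -l)` would replace the hard-coded fragment.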
> tsm: TSM01>q db
>
> Database Name     Total Pages    Usable Pages    Used Pages    Free Pages
> -------------     -----------    ------------    ----------    ----------
> TSMDB1                102,416          99,312        43,100        56,212
>
> tsm: TSM01>q db f=d
>
>                         Database Name: TSMDB1
>       Total Space of File System (MB): 197,939
>         Space Used on File System (MB): 11,444
>           Space Used by Database (MB): 1,280
>             Free Space Available (MB): 186,495
>                           Total Pages: 102,416
>                          Usable Pages: 99,312
>                            Used Pages: 43,100
>                            Free Pages: 56,212
>                 Buffer Pool Hit Ratio: 99.9
>                 Total Buffer Requests: 4,971,182
>                        Sort Overflows: 0
>               Package Cache Hit Ratio: 87.1
>          Last Database Reorganization: 12/04/18 16:44:07
>                Full Device Class Name: 3310L1DEV
>     Number of Database Backup Streams: 1
>         Incrementals Since Last Full: 0
>       Last Complete Backup Date/Time: 12/05/18 15:06:39
>             Compress Database Backups: No
>         Protect Master Encryption Key: No
>
> Filesystem                    Size  Used Avail Use% Mounted on
> /dev/mapper/SLESVG-lvtsmdb01   49G  373M   46G   1% /tsminst1/db01
> /dev/mapper/SLESVG-lvtsmdb02   49G  373M   46G   1% /tsminst1/db02
> /dev/mapper/SLESVG-lvtsmdb03   49G  373M   46G   1% /tsminst1/db03
> /dev/mapper/SLESVG-lvtsmdb04   49G  373M   46G   1% /tsminst1/db04
>
>
> On 05.12.18 at 16:45, Sasa Drnjevic wrote:
>> On 2018-12-05 15:06, Schneck Dennis wrote:
>>> Hello Sasa
>>>
>>>> Check for errors on FC switch ports...
>>> OK, I will inform the colleague at the branch.
>>>
>>>> When did the problems start?
>>> It's a new server - it did not work before.
>>
>> Hi Dennis,
>>
>> did you tune kernel parameters for your Linux TSM server?
>>
>> https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.8/srv.install/t_srv_krnlparms_lnx-linux.html
>>
>> And what is the size of your TSM DB and on what type of storage and
>> filesystem is it located?
>>
>> --
>> Sasa Drnjevic
>> www.srce.unizg.hr
>>
>>
>>> On 05.12.18 at 14:59, Sasa Drnjevic wrote:
>>>> Check for errors on FC switch ports...
>>>>
>>>> When did the problems start?
>>>>
>>>> --
>>>> Sasa Drnjevic
>>>> www.srce.unizg.hr
>>>>
>>>>
>>>> On 5.12.2018. 14:15, Schneck Dennis wrote:
>>>>> Hello,
>>>>>
>>>>> I am starting with a new TSM server 7.1.9.000 on SLES12SP3 x86_64
>>>>> with LIN_TAPE 3.033.
>>>>>
>>>>> The tape library is a TS3310 with LTO5 drives.
>>>>>
>>>>> The server has Emulex AJ762B/AH402A FC adapters, and both run at
>>>>> 4 Gbit (lspci | grep -i fibre).
>>>>>
>>>>> tsm: TSM01>SET DBRECOVERY 3310L1DEV PROTECTKeys=no
>>>>> ANR2784W Specifying PROTECTKEYS=NO requires the server's encryption
>>>>> keys to be backed up manually.
>>>>>
>>>>> Do you wish to proceed? (Yes (Y)/No (N)) y
>>>>> ANR2782I SET DBRECOVERY completed successfully and device class for
>>>>> automatic DB backup is set to 3310L1DEV.
>>>>>
>>>>> tsm: TSM01>backup db devclass=3310L1DEV type=full PROTECTKeys=no
>>>>> ANR2784W Specifying PROTECTKEYS=NO requires the server's encryption
>>>>> keys to be backed up manually.
>>>>>
>>>>> Do you wish to proceed? (Yes (Y)/No (N)) y
>>>>> ANR2280I Full database backup started as process 3.
>>>>> ANS8003I Process number 3 started.
>>>>>
>>>>> tsm: TSM01>q act
>>>>>
>>>>> 12/05/18 15:07:08 ANR8337I LTO volume 000260L5 mounted in drive
>>>>>                   3310L1D2 (/dev/IBMtape2). (SESSION: 1, PROCESS: 3)
>>>>> 12/05/18 15:07:08 ANR0513I Process 3 opened output volume 000260L5.
>>>>>
>>>>> After ~30 minutes, only 34 MB:
>>>>>
>>>>> tsm: TSM01>q pro
>>>>>
>>>>>  Process   Process Description   Process Status
>>>>>   Number
>>>>> --------   --------------------  ---------------------------------------
>>>>>        3   Database Backup       TYPE=FULL in progress. Bytes backed
>>>>>                                  up: 34,048 KB. Current output
>>>>>                                  volume(s): 000260L5.
>>>>>
>>>>> If I look with nmon, I see only one CPU at 100% in use; all the others
>>>>> are at 0%.
>>>>>
>>>>> How do I find the bottleneck?
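[Editor's note: for scale, the `q pro` figures above work out to roughly 18 KB/s, against LTO5's ~140 MB/s native rate. A small sketch of that arithmetic, with some follow-up diagnostics in the comments; the sysfs path assumes a standard Linux FC HBA driver, and the dd test is destructive - it overwrites whatever tape is loaded:]

```shell
# Observed in the thread: 34,048 KB written after ~30 minutes.
KB=34048
SECS=$((30 * 60))
RATE=$((KB / SECS))
echo "throughput: ${RATE} KB/s (LTO5 native is ~140 MB/s)"

# Possible next steps on the server itself (not runnable here):
#   cat /sys/class/fc_host/host*/speed        # confirm negotiated FC link speed
#   cat /sys/class/fc_host/host*/port_state   # ports should be "Online"
#   top -H -p $(pgrep dsmserv)                # which dsmserv thread pins the CPU
#   # Raw tape write test - OVERWRITES the loaded tape, use a scratch volume:
#   dd if=/dev/zero of=/dev/IBMtape2 bs=262144 count=4096
```

If the raw dd write is also slow, the problem is below TSM (HBA, zoning, lin_tape); if dd is fast, the single pinned CPU points at the server process itself.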