Hi Arno!

On Tuesday 07 August 2007, Arno Lehmann wrote:
> 07.08.2007 16:11, Andreas Kopecki wrote:
> ...
>
> >> Perhaps you have different priorities.
> >>
> >> Anyway, we will need more detailed information for further help...
> >> either a complete job output from 'show job=...' for two jobs that
> >> should run in parallel, but don't, or the complete job setup from the
> >> configuration file.
> >
> > Here is the show job output for two jobs. Both share the same pool. I
> > have jobs that use different pools, also, but that doesn't change
> > anything.
> >
> > Job: name=VISPER-RAID-home6.Backup-Full JobType=66 level=Full Priority=10
>
> Job priority 10
>
> > Enabled=1
> > MaxJobs=4 Resched=0 Times=0 Interval=1,800 Spool=1
> > WritePartAfterJob=1 --> Client: name=visper-fd address=visper.hlrs.de
> > FDport=9102 MaxJobs=20
>
> Client max jobs 20 (the DIR's perspective)
>
> ...
>
> > --> Storage: name=GRAU address=hadrian.hlrs.de SDport=9103 MaxJobs=5
>
> Storage jobs simultaneously 5
>
> ...
>
> > Job: name=VISPER-RAID-home7.Backup-Full JobType=66 level=Full Priority=10
>
> Identical priorities. Good.
>
> > Enabled=1
> > MaxJobs=4 Resched=0 Times=0 Interval=1,800 Spool=1
> > WritePartAfterJob=1 --> Client: name=visper-fd address=visper.hlrs.de
> > FDport=9102 MaxJobs=20
>
> The same client... the FD configuration allows more than one
> simultaneous job?
>
> What does 'sta client=visper-fd' in bconsole report?
#sta client=visper-fd
Connecting to Client visper-fd at visper.hlrs.de:9102

visper-fd Version: 2.0.3 (06 March 2007)  x86_64-redhat-linux-gnu redhat (Stentz)
Daemon started 07-Aug-07 16:17, 21 Jobs run since started.
 Heap: bytes=243,665 max_bytes=888,881 bufs=155 max_bufs=752
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0

Running Jobs:
JobId 920 Job VISPER-RAID-var2.Backup-Full.2007-08-07_21.55.19 is running.
    Backup Job started: 08-Aug-07 08:56
    Files=264,073 Bytes=19,446,301,000 Bytes/sec=14,632,280
    Files Examined=264,073
    Processing file: [...]
    SDReadSeqNo=5 fd=6
Director connected at: 08-Aug-07 09:18
====

> > --> Storage: name=GRAU address=hadrian.hlrs.de SDport=9103 MaxJobs=5
>
> Ok, 5 again, same storage device... what's the output of 'sta sd=GRAU'?

#sta sd=GRAU
Connecting to Storage daemon GRAU at hadrian.hlrs.de:9103

hadrian-sd Version: 2.0.3 (06 March 2007)  i486-pc-linux-gnu debian 4.0
Daemon started 27-Jul-07 15:40, 139 Jobs run since started.
 Heap: bytes=157,022 max_bytes=385,525 bufs=118 max_bufs=136

Running Jobs:
Writing: Full Backup job VISPER-RAID-var2.Backup-Full JobId=920 Volume="000438"
    pool="GRAU-WF" device=""SONY-1" (/dev/nst0)"
    spooling=1 despooling=0 despool_wait=0
    Files=277,064 Bytes=20,356,118,966 Bytes/sec=14,880,203
    FDReadSeqNo=2,510,101 in_msg=1766274 out_msg=5 fd=7
====

Jobs waiting to reserve a drive:
====

Terminated Jobs:
 JobId  Level     Files      Bytes  Status   Finished        Name
===================================================================
   910  Incr        235    2.133 G  OK       07-Aug-07 23:12 VISPER-RAID-home3.Backup-Full
   911  Incr          0          0  OK       07-Aug-07 23:14 VISPER-RAID-home4.Backup-Full
   912  Incr         18    573.2 K  OK       07-Aug-07 23:20 VISPER-RAID-home5.Backup-Full
   913  Incr        183    66.45 M  OK       07-Aug-07 23:23 VISPER-RAID-home6.Backup-Full
   914  Incr          0          0  OK       07-Aug-07 23:30 VISPER-RAID-home7.Backup-Full
   915  Incr          0          0  OK       07-Aug-07 23:36 VISPER-RAID-media-soft.Backup-Full
   916  Incr          3    413.6 K  OK       07-Aug-07 23:43 VISPER-RAID-old.Backup-Full
   917  Full    790,386    66.54 G  OK       08-Aug-07 04:02 VISPER-RAID-svn.Backup-Full
   918  Incr     10,880    1.708 G  OK       08-Aug-07 04:16 VISPER-RAID-tmp.Backup-Full
   919  Full    729,427    97.38 G  OK       08-Aug-07 08:56 VISPER-RAID-var.Backup-Full
====

Device status:
Autochanger "GRAU" with devices:
   "SONY-1" (/dev/nst0)
Device "SONY-1" (/dev/nst0) is mounted with Volume="000438" Pool="GRAU-WF"
    Slot 18 is loaded in drive 0.
    Total Bytes=607,190,621,184 Blocks=9,412,056 Bytes/block=64,512
    Positioned at File=630 Block=0
====

In Use Volume status:
000438 on device "SONY-1" (/dev/nst0)
====

Data spooling: 1 active jobs, 20,383,517,845 bytes; 103 total jobs, 42,949,688,716 max bytes/job.
Attr spooling: 1 active jobs, 0 bytes; 104 total jobs, 468,689,920 max bytes.

> I didn't notice anything. Unless you forgot a FD or SD configuration
> setting, or the necessary restart after these changes.

I restarted them multiple times.
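One thing I still want to rule out is the Director side: as far as I understand,
every resource in the chain has its own Maximum Concurrent Jobs, and the
effective limit is the smallest value along the chain -- including the
Director {} resource in bacula-dir.conf, which defaults to 1 if the directive
is omitted. Just so we are talking about the same thing, this is the shape I
mean (the values are only illustrative, taken from the 'show job' output above,
not a paste of my actual bacula-dir.conf):

Director {
  Name = hadrian-dir
  # ... other required directives omitted in this sketch
  Maximum Concurrent Jobs = 20      # defaults to 1 if left out
}

Storage {
  Name = GRAU
  # ...
  Maximum Concurrent Jobs = 5       # matches MaxJobs=5 in 'show job'
}

Client {
  Name = visper-fd
  # ...
  Maximum Concurrent Jobs = 20      # matches MaxJobs=20 in 'show job'
}

Job {
  Name = "VISPER-RAID-home6.Backup-Full"
  # ...
  Maximum Concurrent Jobs = 4       # matches MaxJobs=4 in 'show job'
}

If any one of these is missing or set to 1, that single value would be enough
to serialize the jobs, no matter what the SD and FD allow.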
This is my sd.conf:

Storage {                           # definition of myself
  Name = hadrian-sd
  SDAddress = hadrian.hlrs.de
  SDPort = 9103                     # Director's port
  WorkingDirectory = "/var/bacula"
  Pid Directory = "/var/run"
  Maximum Concurrent Jobs = 20
}

Director {
  Name = hadrian-dir
}

Director {
  Name = hadrian-mon
  Monitor = yes
}

Director {
  Name = viscose-mon
  Monitor = yes
}

Device {
  Name = SONY-1
  Drive Index = 0
  Device Type = Tape
  Media Type = SDZ-130
  Archive Device = /dev/nst0
  AutomaticMount = yes              # when device opened, read it
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  AutoChanger = yes
  Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
  Maximum Spool Size = 180G
  Maximum Job Spool Size = 40G
  Spool Directory = /mnt/spool/bacula
}

Autochanger {
  Name = GRAU
  Device = SONY-1
  Changer Device = /dev/changer
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
}

Messages {
  Name = Standard
  director = hadrian-dir = all
}

The bacula-fd.conf at the client side:

Director {
  Name = hadrian-dir
}

Director {
  Name = hadrian-mon
  Monitor = yes
}

Director {
  Name = viscose-mon                # password for Directors
  Monitor = yes
}

FileDaemon {
  Name = visper-fd
  FDport = 9102                     # where we listen for the director
  WorkingDirectory = /var/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
}

Messages {
  Name = Standard
  director = hadrian-dir = all, !skipped
}

[...]

> Ok, the jobs definitions look good. I suspect it's either the SD or FD
> stalling the jobs. The status outputs from above should have some
> details... hopefully.

Andreas
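PS: I'll also keep an eye on 'status dir' in bconsole while the jobs queue up.
If I remember correctly, when the Director itself is holding a job back because
of a concurrency limit, it reports it with something like "is waiting on max
Storage jobs" rather than just "is waiting execution", which should narrow down
which limit is biting.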