On 2020-01-28 10:08, Jean Mark Orfali wrote:
> Hello Phil and Thomas,
>
> Here is the output of the bconsole status command:
>
> Select daemon type for status (1-5): 5
> bacula-dir Version: 7.0.5 (28 July 2014) x86_64-redhat-linux-gnu redhat Enterprise release
> Daemon started 27-Jan-20 14:53. Jobs: run=0, running=5 mode=0,0
> Heap: heap=270,336 smbytes=652,990 max_bytes=737,558 bufs=557 max_bufs=563
>
> Scheduled Jobs:
> Level        Type    Pri  Scheduled        Job Name            Volume
> ===================================================================================
> Incremental  Backup  10   28-Jan-20 23:00  BackOVH             *unknown*
> Incremental  Backup  10   28-Jan-20 23:00  BackphppROD         *unknown*
> Incremental  Backup  10   28-Jan-20 23:00  BackServeurFichier  *unknown*
> Incremental  Backup  10   28-Jan-20 23:00  BackBlanchard       *unknown*
> ====
>
> Running Jobs:
> Console connected at 27-Jan-20 15:36
> Console connected at 28-Jan-20 10:01
>  JobId  Type  Level  Files  Bytes  Name                Status
> ======================================================================
>     75  Back   Full      0      0  BackphppROD         is running
>     76  Back   Incr      0      0  BackOVH             is waiting on Storage "File1"
>     77  Back   Full      0      0  BackphppROD         is waiting on max Client jobs
>     78  Back   Full      0      0  BackServeurFichier  is waiting on Storage "File1"
>     79  Back   Full      0      0  BackBlanchard       is waiting on Storage "File1"
> ====
OK, this makes it pretty clear that something is not working as intended in your Storage configuration. You have a Full backup running on BackphppROD, and your Storage is configured such that nothing else can write while that job is running. What's more, you have a second copy of BackphppROD queued behind it that can't run. Now we have some idea where to look.

For one thing, you should probably be using the following directives in your Director configuration:

  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes

These tell the Director not to queue or run a new copy of a job, at the same or a different level, while another copy of that job is already running. That will prevent the situation above, where you have the Full backup job BackphppROD running and a second Full copy (probably promoted from an Incremental) queued to run as soon as it finishes. (I've appended some illustrative config fragments at the end of this message.)

Now on to your Storage configuration. Let's see... What it looks like to me is that your fundamental problem is that you have assigned each Client its own Pool, with its own set of Volumes, but you have only one Storage device, which can have only one Volume open at a time. This means that EVERY job will block on ANY other running job: your Storage can only service one Client at a time, because your Clients each have their own unique Pools. (I see you've set up the virtual autochanger, but I don't use that and have no experience with it, so I can't speak about it.)

Do you have an OPERATIONAL NEED to keep the backup data from different Clients separated on different Volumes?

--
Phil Stracchino
Babylon Communications
ph...@caerllewys.net
p...@co.ordinate.org
Landline: +1.603.293.8485
Mobile:   +1.603.998.6958
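P.S. Here are the illustrative fragments I mentioned. Treat them as a rough sketch, not your actual config: apart from the names BackphppROD and "File1", which come from your status output, every resource name, address, password, and path below is a placeholder for you to adapt.

The duplicate-job control goes in each Job resource in bacula-dir.conf:

  Job {
    Name = "BackphppROD"             # job name from your status output
    JobDefs = "DefaultJob"           # placeholder; whatever your jobs inherit from
    Allow Duplicate Jobs = no        # never queue a second copy of this job
    Cancel Queued Duplicates = yes   # and cancel any copy already waiting
  }

And if you do want several jobs writing through the one File device at once, both the Director and the SD have to allow it, and the concurrent jobs must be able to share the mounted Volume (i.e. draw from the same Pool), since the device mounts only one Volume at a time:

  # bacula-dir.conf
  Storage {
    Name = "File1"                   # from your status output
    Address = sd.example.com         # placeholder
    Password = "sd-password"         # placeholder
    Device = "FileStorage"           # must match the Device Name in bacula-sd.conf
    Media Type = File
    Maximum Concurrent Jobs = 5      # let the Director send several jobs here at once
  }

  # bacula-sd.conf
  Device {
    Name = "FileStorage"
    Media Type = File
    Archive Device = /backup         # placeholder path
    LabelMedia = yes                 # let Bacula label unlabeled media
    Random Access = yes
    AutomaticMount = yes
    RemovableMedia = no
    AlwaysOpen = no
    Maximum Concurrent Jobs = 5      # interleave up to 5 jobs on the mounted Volume
  }

Bear in mind that interleaving several jobs on one Volume can slow restores, and that none of this helps while each Client writes to its own Pool: the device still has to finish with one Volume before it can mount the next.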