Spooling can reduce overall throughput because the data is written sequentially to disk and then read back before it goes to tape.
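In case it helps, the spool area is simply whatever "Spool Directory" points
at in each Device resource in bacula-sd.conf, so that is also the knob to
change if you experiment with a ramdisk. A rough sketch, with the device name
and sizes purely illustrative rather than taken from your config:

  Device {
    Name = LTO3-Drive-0                  # illustrative name only
    ...                                  # Archive Device, Media Type, etc. as you already have
    Spool Directory = /var/bacula/spool  # where the spool files are written
    Maximum Spool Size = 100g            # applies per device, so 5 drives at 100g can use up to 500g
    Maximum Job Spool Size = 50g         # optional cap on what any single job may spool on this device
  }

Data spooling itself is enabled per job with "Spool Data = yes" in the Job
resource in bacula-dir.conf (it can also be overridden with SpoolData=yes|no
in a Schedule Run line).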
To see how fast Bacula copied the spool file to tape, which is the critical
thing for avoiding shoe-shining, look in the job log for lines like this:

  Despooling elapsed time = ..., Transfer rate = ... Bytes/second

__Martin

>>>>> On Wed, 6 Feb 2019 11:52:56 -0500, Nate K said:
>
> Indeed it looks like my 2x 1TB mirror is bottlenecking. I looked back at an
> older job I ran when I had data spooling off and it saved 930GB at a rate
> of 67.0 MB/s, and then the same job ran again later with spooling on at a
> rate of 46.0 MB/s. Both these rates are much lower than the theoretical
> 160 MB/s max, so should I assume even the fast server on 10GbE is a
> bottleneck? I guess I will keep the max jobs per client at 1 and look into
> setting up a RAM disk for spooling.
>
> On Wed, Feb 6, 2019 at 10:06 AM Nate K <nate...@gmail.com> wrote:
>
> > Thanks Martin, I will add the max Client jobs directive. That is a good
> > question regarding the mirror throughput; I’ll look into testing it. I
> > wonder if I could spool on a ramdisk (the Bacula server has 32GB), since
> > the other server being backed up is faster (raidz2 of 8x 4TB 7200 rpm
> > drives connected over 10GbE), or change to spool attributes only, or
> > leave spooling off altogether. Is there a way to check if the drives are
> > being bottlenecked and causing “shoe shining”?
> >
> > On Feb 6, 2019, at 9:48 AM, Martin Simmons <mar...@lispworks.com> wrote:
> >
> > >>>>>> On Wed, 6 Feb 2019 00:05:21 -0500, Nate K said:
> > >>
> > >> I've tried to figure this out on my own with searches and going
> > >> through the manual and I need some clarification. I've included the
> > >> relevant section of the bacula-sd.conf file below. I'm confused
> > >> because I think this should work properly, but I am getting the
> > >> message "is waiting on max Client jobs" for all additional jobs that
> > >> run after the first. Every other daemon's config has maxes of 20 jobs.
> > >
> > > You need to increase "Maximum Concurrent Jobs" in the Client resource
> > > in bacula-dir.conf to prevent "is waiting on max Client jobs". It
> > > defaults to 1.
> > >
> > >> I also am confused about the spool directive. The server running
> > >> Bacula has 2x 1TB drives in a mirrored ZFS pool. I wonder how large I
> > >> could make the spool directives. It isn't clear to me, when I set the
> > >> spool directives in each Device section, whether they all share
> > >> "Maximum Spool Size = 100g" or whether each of the 5 drives will
> > >> allocate 100GB, using 500GB total of my disk space. If I want to never
> > >> exceed 80% used space on the zpool, and I also need 150GB for VMs plus
> > >> space for the catalog backing up 12-15TB of files, how high should I
> > >> set the max and job spools?
> > >
> > > The "Maximum Spool Size" is the size per spool file, so you will use up
> > > to 500GB.
> > >
> > > Does your 2-way mirror have enough throughput to feed 5 LTO3 drives
> > > simultaneously (or even 1 drive with 4 other jobs simultaneously
> > > writing to their spool files)?
> > >
> > > __Martin
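P.S. For reference, the Client resource change described in my earlier reply
quoted above would look roughly like this in bacula-dir.conf (the client name
and address are placeholders, and 20 just matches the limit you said the
other daemons already use):

  Client {
    Name = fileserver-fd               # placeholder
    Address = fileserver.example.org   # placeholder
    ...                                # Password, Catalog, etc. as you already have
    Maximum Concurrent Jobs = 20       # defaults to 1, which is what triggers "is waiting on max Client jobs"
  }

Note that the Director, Storage and FileDaemon resources have their own
"Maximum Concurrent Jobs" settings too, so the effective concurrency is the
smallest limit along the path.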