Quoting Antony Mayi <antonym...@yahoo.com>:

>> ________________________________
>> From: Jérôme Blion <jerome.bl...@free.fr>
>> To: bacula-users@lists.sourceforge.net
>> Sent: Wednesday, 20 February 2013, 9:41
>> Subject: Re: [Bacula-users] backup concurrency
>>
>> On 2013-02-20 10:19, Antony Mayi wrote:
>>> Hi community,
>>>
>>> I am running backups of multiple database servers. Each backup job
>>> is defined with a "RunScript" command that dumps the databases on
>>> that particular server, and these dumps are then backed up.
>>>
>>> The dumps take different amounts of time on each server - from
>>> minutes to a couple of hours. Since the dumping doesn't involve the
>>> Bacula storage daemon, I want other jobs to run in parallel so they
>>> can be sending data to storage while the DB servers are still
>>> dumping. The database dumps could also run in parallel instead of
>>> sequentially, since that is a purely local matter.
>>>
>>> I've increased "Maximum Concurrent Jobs" in the "Director" resource,
>>> but I still see only one job running at a time. I don't think I want
>>> to increase it in the "Storage" resource as well, and definitely not
>>> in the "Job" resource.
>>>
>>> What am I missing here?
>>>
>>> ...running bacula 5.2
>>>
>>> thx,
>>> Antony.
>>
>> Hello,
>>
>> What is the state of the other jobs while one is running?
>> How did you set up your Maximum Concurrent Jobs?
>>
>
> Good point.
>
> Running Jobs:
> Console connected at 20-Feb-13 11:02
>  JobId Level   Name                                 Status
> ======================================================================
>     45 Full    DB1Backup.2013-02-19_23.05.00_21     is running
>     46 Full    CatalogBackup.2013-02-19_23.10.00_22 is waiting for higher priority jobs to finish
>     47 Increme Srv1Backup.2013-02-20_11.02.40_26    is waiting on max Storage jobs
>     48 Increme Srv2Backup.2013-02-20_11.02.45_27    is waiting on max Storage jobs
>
> So it seems DB1Backup is blocking the storage even though it isn't
> really using it yet, since it is still busy running the "RunScript" DB
> dump, which in this case takes about 4 hours. That suggests I need to
> increase the concurrency for accessing the Storage, which I wanted to
> avoid so that writes from multiple jobs don't get interleaved. It seems
> quite inefficient to block the storage resource for several hours while
> not really using it. Is there a way around this, or is the only option
> to enable concurrent access to the Storage (which is discouraged from a
> restore-performance point of view)?
>
> thx,
> Antony.
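The "waiting on max Storage jobs" status points at the "Maximum Concurrent
Jobs" limit of the Storage resource in bacula-dir.conf, which defaults to 1,
so raising it only in the Director resource is not enough. As far as I know,
the effective concurrency is the smallest "Maximum Concurrent Jobs" across
the Director, Storage and (if set) Job/Client resources, plus the daemons
themselves. Roughly like this - just a sketch, the names and numbers are
placeholders, not taken from your config:

# bacula-dir.conf -- the smallest of these limits wins
Director {
  Name = backup-dir              # placeholder
  Maximum Concurrent Jobs = 20   # the one you already raised
  # (other Director directives unchanged)
}

Storage {
  Name = TapeStorage             # placeholder
  Maximum Concurrent Jobs = 4    # defaults to 1 -> "waiting on max Storage jobs"
  # Address, Password, Device, Media Type as before
}

Client {
  Name = db1-fd                  # placeholder
  Maximum Concurrent Jobs = 2    # only if you want two jobs on the same client
  # Address, Password etc. as before
}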
If you want to avoid concurrent (interleaved) writes to tape by multiple
jobs, have a look at the data spooling feature. With spooling, each job's
data is first collected on disk in configurable chunks, and each chunk is
then written to tape in one piece, so the jobs' data is not interleaved on
the tape. Your second possibility is to limit the actual tape device to 1
job, but not the storage daemon (not sure whether this works, though).

Regards,
Andreas
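P.S. For illustration only, roughly how I would wire up the spooling option -
untested, abbreviated, and the names and sizes below are placeholders, not
taken from your setup:

# bacula-dir.conf: let the jobs spool instead of writing straight to tape
Job {
  Name = "Srv1Backup"            # placeholder; or put SpoolData in a JobDefs
  SpoolData = yes                # data goes to the SD's spool disk first
  # (plus your usual FileSet/Schedule/Storage/Pool directives)
}

# bacula-sd.conf: let several jobs talk to the SD at once...
Storage {
  Name = backup-sd               # placeholder
  Maximum Concurrent Jobs = 20
}

# ...but despool/write to the drive for one job at a time
Device {
  Name = "LTO-Drive"             # placeholder; Archive Device, Media Type etc. omitted
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200GB     # made-up sizes, tune to your spool disk
  Maximum Job Spool Size = 50GB
  Maximum Concurrent Jobs = 1    # the "second possibility"; check that your 5.2 honours it here
}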