I have about 80 jobs that all start on the same schedule (1 TB filesystems).

They start in the order of their entries in the config file, which means 
that if any problems cause backups to take more than a day, the last 
jobs in the list can consistently fail to run when duplicate jobs are 
disallowed.

A bit of randomisation in the start order would take care of that.

It would also help spread the backups better across multiple 
fileservers, given that the usual practice is to group a given 
machine's filesets together in the config file: this can result in 4-6 
filesets being backed up simultaneously on one machine while others are 
idle.
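
The randomisation step could be as simple as shuffling the job list once per scheduling pass before dispatch. A minimal sketch in Python (the job names and the standalone function are hypothetical illustrations, not Bacula's actual scheduler API):

```python
import random

def randomized_start_order(jobs, seed=None):
    """Return the configured job list in a random start order.

    Re-shuffling each night means no job is permanently stuck at the
    tail of the queue, and filesets from the same machine (grouped
    together in the config file) tend to be spread out rather than
    all starting at once.
    """
    rng = random.Random(seed)   # seed only for reproducible examples
    order = list(jobs)          # copy: leave the configured order untouched
    rng.shuffle(order)          # Fisher-Yates shuffle, O(n)
    return order

# Hypothetical config order: filesets grouped per machine
jobs = ["hostA-fs1", "hostA-fs2", "hostA-fs3",
        "hostB-fs1", "hostB-fs2", "hostC-fs1"]
print(randomized_start_order(jobs, seed=42))
```

The same effect could be had with a weighted or round-robin ordering per fileserver, but a plain shuffle is the smallest change that fixes both the starvation and the clustering problems.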





_______________________________________________
Bacula-devel mailing list
Bacula-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-devel
