On Fri, Mar 05, 2010 at 11:31:41AM -0500, Phil Stracchino wrote:
> Why don't you try this (I'm assuming you're backing up to disk, as you
> didn't specify):
>
> Set up three Storage devices on your Storage daemon, and three Pools,
> each tied to one of the Storage devices. So you have Pool A on Storage
> device A, Pool B on Storage device B, Pool C on Storage device C. (They
> can all point to the same physical disk pool. That's OK.) Assign the
> clients from group A to use pool A, group B to pool B, group C to pool
> C. Then set maximum concurrency to 50 on Storage C, and to 1 on Storage
> A and B. To achieve the same effect, you could use the same basic setup
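For reference, a minimal sketch of the Director-side layout Phil describes (`bacula-dir.conf`); all names, the SD address, and the password here are illustrative placeholders, not taken from anyone's actual configuration:

```
# bacula-dir.conf -- hypothetical three-pool layout (names are examples)

Storage {
  Name = StorageA
  Address = backup-sd.example.org   # assumed SD hostname
  Password = "sd-password"          # placeholder
  Device = FileStorageA             # device as defined in bacula-sd.conf
  Media Type = File
  Maximum Concurrent Jobs = 1       # serialize group A
}

# StorageB would be identical, pointing at its own device
# (FileStorageB), also with Maximum Concurrent Jobs = 1.

Storage {
  Name = StorageC
  Address = backup-sd.example.org
  Password = "sd-password"
  Device = FileStorageC
  Media Type = File
  Maximum Concurrent Jobs = 50      # group C jobs run freely
}

Pool {
  Name = PoolA
  Pool Type = Backup
}

JobDefs {
  Name = GroupA
  Type = Backup
  Pool = PoolA
  Storage = StorageA     # ties group A's jobs to the serialized device
  # ... FileSet, Schedule, Messages as usual
}
```

Group B and C jobs would use analogous JobDefs pointing at PoolB/StorageB and PoolC/StorageC; the per-Storage `Maximum Concurrent Jobs` then enforces the desired concurrency without any per-Job tuning.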
Thanks Phil, that's a great idea! Unfortunately, we're backing up to several LTO drives, and as far as I've seen, the tapes don't like being shared by several storage devices (or can Bacula 5.x now handle that OK?).

At worst, I might dedicate one LTO drive to force serialization of both groups A+B, and leave the maximum concurrency for group C. That would be sub-ideal (as A and B would not run in parallel), but much better than the current group-C starvation.

What would be ideal is the ability to set concurrency per-JobDefs, but it doesn't seem to work that way (it looks like the value is simply copied into each Job that uses it).

--
Matija Nalis
Odjel racunalno-informacijskih sustava i servisa
Hrvatska akademska i istrazivacka mreza - CARNet
Josipa Marohnica 5, 10000 Zagreb
tel. +385 1 6661 616, fax. +385 1 6661 766
www.CARNet.hr