On 10/22/2013 03:59 PM, rgreiner wrote:
> Hi.
>
> I would like to know if it is possible to create different "queues" for
> making backups in Bacula. My problem is that I have 3 different locations
> (datacenters) with servers I want to back up, but one of those datacenters
> has a small link (50Mbps) and a somewhat large data set (about 1TB total).
>
> In short: servers 1-10 are in datacenter 1 (50Mbps), 11-20 in datacenter 2,
> and 21-30 in datacenter 3 (both with 1Gbps links). I would like to start one
> queue with servers 1-10, and simultaneously another queue with servers
> 11-30. Is that possible? I saw the page
> http://www.bacula.org/5.2.x-manuals/en/problems/problems/Tips_Suggestions.html#SECTION003170000000000000000
> with tips about simultaneous backups, but it does not cover my particular
> use case. Could someone point me to where I could find the information I
> need? Simultaneous writing is not an issue, as I'm using disk storage and
> not tapes.
I'm not sure if it'll work, but try this. It's not a final solution, because I haven't done it yet, but I intend to. My need is a little different - I have a pool of virtual machines and I need to back up no more than N of them at a time to limit I/O on the host - but it ends up being the same problem: I need separate queues.

1) Define several devices in your storage daemon (bacula-sd.conf), like so:

   Device {
     Name = Dev1
     Media Type = File-bacula
     Device Type = File
     Archive Device = /bacula
     LabelMedia = yes;        # lets Bacula label unlabeled media
     Random Access = yes;
     AutomaticMount = yes;    # when device opened, read it
     RemovableMedia = no;
     AlwaysOpen = no;
     Maximum Concurrent Jobs = 1
   }

   Device {
     Name = Dev2
     Media Type = File-bacula
     Device Type = File
     Archive Device = /bacula
     LabelMedia = yes;        # lets Bacula label unlabeled media
     Random Access = yes;
     AutomaticMount = yes;    # when device opened, read it
     RemovableMedia = no;
     AlwaysOpen = no;
     Maximum Concurrent Jobs = 1
   }

   They can (and, for a start, should - it's simpler this way) point to the
   same directory with the same media type. Set Maximum Concurrent Jobs on the
   storage daemon itself to something high. Depending on what you want, you
   can set Maximum Concurrent Jobs on each device to a higher value, but that
   will cause data interleaving - spooling (into RAM, for example) would
   probably be useful then. For now I just define more devices (4) to have 4
   parallel jobs running.

2) Define two storage resources in your director, both pointing to the same
   storage daemon but with different names, and set Maximum Concurrent Jobs on
   each to a value less than the number of devices defined in the storage
   daemon. That way neither storage resource can use all of the devices.

3) Create separate pools, each pointing to a different storage resource
   (e.g. pool1->st1 and pool2->st2).

4) Run site #1 jobs into pool1, and site #2 and #3 jobs into pool2.

This way you get something like a queue, where the number of simultaneously running jobs is limited by Maximum Concurrent Jobs on each storage resource.
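A rough sketch of what the director-side pieces of steps 2-4 could look like in bacula-dir.conf. All names here (st1/st2, pool1/pool2, the address, password, client, fileset and schedule) are made up for illustration - substitute your own, and check the directives against your Bacula version's manual:

```conf
# bacula-dir.conf (sketch; resource names are hypothetical)

Storage {
  Name = st1
  Address = sd.example.com        # your storage daemon's host
  SDPort = 9103
  Password = "sd-password"
  Device = Dev1                   # device defined in bacula-sd.conf
  Media Type = File-bacula
  Maximum Concurrent Jobs = 1     # queue width for the slow site
}

Storage {
  Name = st2
  Address = sd.example.com
  SDPort = 9103
  Password = "sd-password"
  Device = Dev2
  Media Type = File-bacula
  Maximum Concurrent Jobs = 1     # one job at a time per device here too
}

Pool {
  Name = pool1
  Pool Type = Backup
  Storage = st1                   # site #1 jobs queue on st1
}

Pool {
  Name = pool2
  Pool Type = Backup
  Storage = st2                   # sites #2 and #3 queue on st2
}

# One example job for a site #1 server; repeat per client as needed.
Job {
  Name = backup-server1
  Type = Backup
  Client = server1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Pool = pool1                    # goes into the 50Mbps "queue"
  Messages = Standard
}
```

Jobs for servers 11-30 would be the same except Pool = pool2, so they drain through the other storage resource independently of the slow link.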
Actually, you can have more pools pointing to each storage resource - concurrency is controlled at the storage resource level anyway. At least that's how I'm going to try to do it. It may be necessary to give each storage resource its own set of devices, but that shouldn't be a problem.

Hope this helps.

Martin

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users