Hi,

I implemented a setup with one LVM disk (2x500GB drives) and an autochanger
with 2 virtual disks (disk-0 and disk-1) following the instructions in the
Vchanger HowTo document (rev. 0.7.4 2006-12-12).

I created the conf file for the vchanger, and I updated the bacula-dir.conf
and bacula-sd.conf files with the appropriate entries. I also set "Maximum
Concurrent Jobs = 20" in the Director, Storage, Client, and Job resources of
the director conf file, and in the Storage resource of the storage daemon
conf file, roughly as sketched below.
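
For reference, here is roughly where I placed the directive (abridged to
the relevant lines; resource names other than my daemon names are
approximate):

  # bacula-dir.conf
  Director {
    Name = coal-dir
    ...
    Maximum Concurrent Jobs = 20
  }
  Storage {
    Name = BLV01
    ...
    Maximum Concurrent Jobs = 20
  }
  # likewise in each Client and Job resource

  # bacula-sd.conf
  Storage {
    Name = coal-sd
    ...
    Maximum Concurrent Jobs = 20
  }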

I can start one job at a time, and things work fine. But if I start more
than one job, there is no concurrency and every subsequent job fails.

Is there some trick to getting concurrency to work in Bacula? I thought it
would just fall into place once things were set up properly. It always
seems to use disk-0 to bind to the volume it is backing up to; it never uses
disk-1, even though it is part of the changer. Have you seen disk-1 used in
similar situations?

Thank you for your help,
Elie



Arno Lehmann wrote:
> 
> Hi,
> 
> 06.11.2007 00:22, Elie Azar wrote:
>> Sorry for the previous reply; I clicked on the wrong "Reply To" button...
> 
> I was almost sure it was something like that :-)
> 
>> So, what you're suggesting is to set up large filesystems using LVM,
>> something like BLV01, BLV02, etc., each with several hard drives.
> 
> One filesystem would be enough - you don't have to plan out the space 
> you need per storage device; all devices share the common available space.
> 
>> Then, in Bacula, set up a vchanger for each of these LVMs,
> 
> ... or directories on the same LVM filesystem.
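> 
> For instance (paths purely illustrative):
> 
>   /backups/blv/changer-a    # volume files for one storage device
>   /backups/blv/changer-b    # a second one, on the same LVM filesystem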
> 
>> and then direct
>> jobs to these storage devices.
> 
> Yes.
> 
>> That way we can have multiple jobs simultaneously backing up to each
>> LVM, right? I think that would work; I'll try it.
> 
> It should work, at least.
> 
> Of course, using one filesystem for several storage devices can limit 
> the overall throughput. If that's limited by network speed, you won't 
> have a problem. If you're backing up several FC-connected drive 
> arrays, this will be more of a problem :-)
> 
> Arno
> 
>> 
>> Thanks,
>> Elie
>> 
>> 
>> 
>> Arno Lehmann wrote:
>>> Hi,
>>>
>>> I suppose this was meant for the list... otherwise, I'd have to charge
>>> you a fee ;-)
>>>
>>> 05.11.2007 23:38, [EMAIL PROTECTED] wrote:
>>>> Hi Arno,
>>>>
>>>> Thanks for the quick reply.
>>>>
>>>> The intention is to have a single job per volume. What I would like
>>>> is for Bacula to pick a new volume for the new job, even if it's
>>>> on the same storage device. I would imagine that it should be able
>>>> to back up multiple jobs to different volumes simultaneously.
>>> It is, but...
>>>
>>>> Unless of course there is an inherent limitation in Bacula that
>>>> prevents that; but I didn't think there was.
>>> Indeed there is.
>>>
>>> Bacula handles all storage devices like tape drives. And a tape drive 
>>> can only access one tape at any time.
>>>
>>>> Do I need to change
>>>> something in my conf files to effect that? I'm not sure.
>>> I'd suggest setting up a "virtual autochanger".
>>>
>>> Basically, you use one set of volume files, with several storage 
>>> devices accessing it.
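>>>
>>> As an untested sketch (names, paths, and the changer command are 
>>> placeholders - adjust them to your vchanger setup), the bacula-sd.conf 
>>> side can look like this:
>>>
>>>   Autochanger {
>>>     Name = vchanger-0
>>>     Device = vdrive-0, vdrive-1
>>>     Changer Device = /etc/bacula/vchanger.conf
>>>     Changer Command = "/usr/local/bin/vchanger %c %o %S %a %d"
>>>   }
>>>   Device {
>>>     Name = vdrive-0
>>>     Drive Index = 0
>>>     Autochanger = yes
>>>     Media Type = File
>>>     Device Type = File
>>>     Archive Device = /var/spool/vchanger/0   # where drive 0's volume appears
>>>     Random Access = yes
>>>     Removable Media = no
>>>   }
>>>   Device {
>>>     Name = vdrive-1
>>>     Drive Index = 1
>>>     Autochanger = yes
>>>     Media Type = File
>>>     Device Type = File
>>>     Archive Device = /var/spool/vchanger/1
>>>     Random Access = yes
>>>     Removable Media = no
>>>   }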
>>>
>>> Bacula will automatically choose distinct volumes for use by different 
>>> devices. There are a number of mails and how-tos available in the list 
>>> archives - search for vchanger (I just noticed you know that already...)
>>>
>>> Alternatively, set up several file-based storage daemons - most 
>>> commonly one per client - and use these in your jobs.
>>>
>>> This requires more configuration, but with a template you process 
>>> with sed or awk, it's easy to create files to include in the main 
>>> configuration.
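>>>
>>> For example (untested; @CLIENT@ is just the token sed would replace):
>>>
>>>   # device-template.conf, instantiated per client with e.g.
>>>   #   sed 's/@CLIENT@/box1/' device-template.conf > box1-device.conf
>>>   Device {
>>>     Name = File-@CLIENT@
>>>     Media Type = File-@CLIENT@
>>>     Device Type = File
>>>     Archive Device = /backups/@CLIENT@
>>>     Random Access = yes
>>>     Removable Media = no
>>>     Automatic Mount = yes
>>>     Label Media = yes
>>>   }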
>>>
>>> Hope this helps,
>>>
>>> Arno
>>>
>>>> Thanks, Elie
>>>>
>>>> Arno Lehmann wrote:
>>>>> Hi,
>>>>>
>>>>> 05.11.2007 23:08, Elie Azar wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I created an LVM disk, made up of 2x500GB hard drives, and I
>>>>>> made the necessary changes in the bacula conf files to be able
>>>>>> to send jobs to that new storage. Here are some of the
>>>>>> configuration changes.
>>>>>>
>>>>>> My problem is that I cannot send multiple jobs to back up
>>>>>> simultaneously. The first job starts, then I get an error on
>>>>>> each subsequent job. I don't know if I'm missing something in
>>>>>> my configuration, or if I still need to do something to get the
>>>>>> LVM disk properly set up in Bacula, or something else... I'm
>>>>>> not sure.
>>>>>>
>>>>>> I have the "Maximum Concurrent Jobs = 20" in the Storage and
>>>>>> the Client directives, and I have it setup to 80 in the
>>>>>> Director directive, in the bacula-dir.conf file; and also in
>>>>>> the Storage directive in the bacula-sd.conf file.
>>>>>>
>>>>>> It seems to be requesting the same volume,
>>>>>> BLVPool13-BLV01-V0003 in this case, which is being used by the
>>>>>> previous job. I would expect it to be getting the next
>>>>>> available volume. Is there something that I'm not seeing...
>>>>> Yup... see below.
>>>>>
>>>>>> Any help would be greatly appreciated.
>>>>>>
>>>>>> Thanks, Elie Azar
>>>>>>
>>>>>>
>>>>> ...
>>>>>> # 13 day BLV pool definition
>>>>>> Pool {
>>>>>>   Name = BLVPool13
>>>>>>   Pool Type = Backup
>>>>>>   Recycle = yes                # Bacula can automatically recycle Volumes
>>>>>>   AutoPrune = yes              # Prune expired volumes
>>>>>>   Volume Retention = 13 days   # 13 days
>>>>>>   Maximum Volume Jobs = 1      # one job per volume
>>>>> You allow only one job per Volume.
>>>>>
>>>>>>   LabelFormat = "${Pool}-${MediaType}-V${NumVols:p/4/0/r}"
>>>>>> }
>>>>> You either need another storage device, or have to allow more
>>>>> than one job per volume.
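>>>>>
>>>>> If you keep a single device, the latter would mean, for instance
>>>>> (0 is the default and means unlimited):
>>>>>
>>>>>   Maximum Volume Jobs = 0   # or simply drop the directive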
>>>>>
>>>>>> *mes
>>>>>> 05-Nov 13:38 coal-dir JobId 13667: Start Backup JobId 13667, Job=Linux2-Test1.2007-11-05_13.38.10
>>>>>> 05-Nov 13:38 coal-dir JobId 13667: There are no more Jobs associated with Volume "BLVPool13-BLV01-V0003". Marking it purged.
>>>>>> 05-Nov 13:38 coal-dir JobId 13667: All records pruned from Volume "BLVPool13-BLV01-V0003"; marking it "Purged"
>>>>>> 05-Nov 13:38 coal-dir JobId 13667: Recycled volume "BLVPool13-BLV01-V0003"
>>>>>> 05-Nov 13:38 coal-dir JobId 13667: Using Device "BLV01"
>>>>>> 05-Nov 13:38 coal-sd JobId 13667: Fatal error: Cannot recycle volume "BLVPool13-BLV01-V0003" on device "BLV01" (/backups/autofs/BLV01/bacula) because it is in use by another job.
>>>>> This error message is quite clear, I think... there was only this
>>>>>  volume that could be used, but it's in use currently, and so
>>>>> can't be recycled.
>>>>>
>>>>> Arno
>>>>>
>>>>> -- 
>>>>> Arno Lehmann
>>>>> IT-Service Lehmann
>>>>> www.its-lehmann.de
>>>>
>>> -- 
>>> Arno Lehmann
>>> IT-Service Lehmann
>>> www.its-lehmann.de
>>>
>> 
> 
> -- 
> Arno Lehmann
> IT-Service Lehmann
> www.its-lehmann.de
> 


