It did work so far; I'll verify this behavior over the next few days.
This will give me a sort of "administrative overhead" concerning schedules...
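For reference, the change I'm testing based on Attila's hint: give the jobs on the two drives the same priority so the scheduler doesn't serialize them. A sketch against my bacula-dir.conf below (only the Priority lines differ from what's posted; other directives omitted):

```conf
# Sketch: in Bacula 1.38, priorities appear to be system-wide, so a job
# with Priority = 9 waits for all Priority = 8 jobs, even on another drive.
# Giving the Tape1 and Tape2 jobs equal priority lets them run in parallel.
Job {
  Name = "No.1"
  JobDefs = "DefaultJob"
  Storage = Tape1
  Pool = Default1
  Priority = 8            # same as "No.4" below
}

Job {
  Name = "No.4"
  JobDefs = "DefaultJob"
  Storage = Tape2
  Pool = Default2
  Priority = 8            # equal priority -> eligible to run concurrently
}
```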
Kind regards
i. A. Christoff Buch
=====================================
[EMAIL PROTECTED]
OneVision Software AG
Dr.-Leo-Ritter-Str. 9
93049 Regensburg
Attila Fülöp <[EMAIL PROTECTED]> wrote on 29.03.2006 20:36:07:
> Christoff Buch wrote:
> > Thanks Attila. I'll give it a try with the priorities; more on that
> > tomorrow.
> > But if priorities really are system-wide, I don't see any point in them,
> > because then I'll have to organize all jobs via schedules.
>
> Yep, same problem here. Well, I found this by trial and error; I don't
> know if this is intended behaviour or just a bug. Maybe someone can
> comment on this? I too would prefer storage-wide priorities.
>
>
> > Attila Fülöp <[EMAIL PROTECTED]> wrote on 29.03.2006 18:33:23:
> >
> >
> >>Christoff Buch wrote:
> >>
> >>>Hi "listers",
> >>>
> >>>I'm trying to do the following:
> >>>
> >>>I have a backup server with Bacula 1.38.5 which controls two built-in
> >>>SLR-100 streamers.
> >>>There is rsynced data that is transferred to the HDD of the backup
> >>>server and backed up from there to the two streamers. It's a single
> >>>rsynced file for each job.
> >>>There are 5 jobs.
> >>>4 of them go to, say, Drive1.
> >>>1 of them (very large, see Job "No. 4" below) goes to Drive2.
> >>>I want this to happen in parallel, because there are two drives, so why
> >>>not use them simultaneously?
> >>>The problem is, they just don't run simultaneously. Any help would be
> >>>highly appreciated!
> >>
> >>You have to give your jobs the same priority. A job with Priority = 9
> >>will wait for the Priority = 8 job to finish, regardless of the storage
> >>assigned. So priorities are system-wide, not storage-wide.
> >>
> >>
> >>
> >>>What I configured is:
> >>>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >>>bacula-fd.conf:
> >>>
> >>>FileDaemon { # this is me
> >>> Name = savery-fd
> >>> FDport = 9102 # where we listen for the director
> >>> WorkingDirectory = /usr/local/bacula/working
> >>> Pid Directory = /var/run
> >>> Maximum Concurrent Jobs = 20
> >>>}
> >>>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >>>
> >>>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >>>bacula-sd.conf: (No data spooling!)
> >>>
> >>>Storage { # definition of myself
> >>> Name = savery-sd
> >>> SDPort = 9103 # Director's port
> >>> WorkingDirectory = "/usr/local/bacula/working"
> >>> Pid Directory = "/var/run"
> >>> Maximum Concurrent Jobs = 20
> >>>
> >>>}
> >>>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >>>
> >>>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >>>bacula-dir.conf: (No data spooling!)
> >>>
> >>>Director { # define myself
> >>> Name = savery-dir
> >>> ....
> >>> Maximum Concurrent Jobs = 2
> >>> ....
> >>>}
> >>>.....
> >>>
> >>>Client {
> >>> Name = savery-fd
> >>> Address = savery
> >>> .....
> >>> Maximum Concurrent Jobs = 2 # All Jobs are synced and taken from
> >>>savery, but should go on Drive1 and Drive2 simultaneously.
> >>>}
> >>>....
> >>>
> >>>Storage {
> >>>  Name = Tape1
> >>>  Address = savery            # N.B. Use a fully qualified name here
> >>>  .....
> >>>  Device = SLR1               # must be same as Device in Storage daemon
> >>>  Media Type = SLR100         # must be same as MediaType in Storage daemon
> >>>  Maximum Concurrent Jobs = 1 # One Job at a time to this Streamer.
> >>>}
> >>>
> >>>Storage {
> >>>  Name = Tape2                # external SLR drive
> >>>  Address = savery            # N.B. Use a fully qualified name here
> >>>  .......
> >>>  Device = SLR2               # must be same as Device in Storage daemon
> >>>  Media Type = SLR100         # must be same as MediaType in Storage daemon
> >>>  Maximum Concurrent Jobs = 1 # One Job at a time to this Streamer.
> >>>}
> >>>....
> >>>
> >>>Job {
> >>> Name = "No.1"
> >>> JobDefs = "DefaultJob"
> >>> Client = savery-fd
> >>> FileSet = "For No.1"
> >>> Storage = Tape1
> >>> Pool = Default1
> >>> Schedule = "DailyCycle"
> >>> Priority = 8
> >>> RunBeforeJob = "/usr/local/bacula/scripts/baculamount.sh"
> >>>}
> >>>
> >>>Job {
> >>> Name = "No.2"
> >>> JobDefs = "DefaultJob"
> >>> Client = savery-fd
> >>> FileSet = "For No.2"
> >>> Storage = Tape1
> >>> Pool = Default1
> >>> Schedule = "DailyCycle"
> >>> Priority = 9
> >>>}
> >>>
> >>>Job {
> >>> Name = "No.3"
> >>> JobDefs = "DefaultJob"
> >>> Client = savery-fd
> >>> FileSet = "For No.3"
> >>> Storage = Tape1
> >>> Pool = Default1
> >>> Schedule = "DailyCycle"
> >>> Priority = 10
> >>>}
> >>>
> >>>
> >>>Job {
> >>> Name = "No.4"
> >>> JobDefs = "DefaultJob"
> >>> Client = savery-fd
> >>> FileSet = "For No.4"
> >>> Storage = Tape2
> >>> Pool = Default2
> >>> Schedule = "DailyCycle_pool2"
> >>> Priority = 8
> >>>}
> >>>
> >>>
> >>>Job {
> >>> Name = "No.5" # This is the backup of the catalog.
> >>> JobDefs = "DefaultJob"
> >>> Level = Full
> >>> FileSet="Catalog"
> >>> Storage = Tape1
> >>> Pool = Default1
> >>> Priority = 11
> >>> Schedule = "WeeklyCycleAfterBackup"
> >>>}
> >>>
> >>>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> >>>
> >>>Kind Regards,
> >>>
> >>>i. A. Christoff Buch
> >>>
> >>>=====================================
> >>>[EMAIL PROTECTED]
> >>>OneVision Software AG
> >>>Dr.-Leo-Ritter-Str. 9
> >>>93049 Regensburg