Hello,
The reason this artificial limit is there is that I saw several
cases of copy/migrate jobs starting on the order of 600 jobs at once,
which caused the systems in question to choke up completely. It was
probably a combination of insufficient hardware for 600 simultaneous
jobs and limits on the maximum number of Storage daemon jobs that
were set much too high. In retrospect, setting the limit to 100 was
probably a bad idea, and I am sorry for the problems you are having.
It would probably be better to have a directive for this, or to add
documentation explaining the downside of starting a huge number of
jobs at the same time, or possibly to use a different algorithm. I
will find a suitable fix in the next version, which will be released
in March. In the meantime, you can change the source code to set the
limit to a larger value and rebuild from source.
The limit is in the file
src/dird/migrate.c
at line 673, where it reads:
int limit = 99; /* limit + 1 is max jobs to start */
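For example, a one-line local change along these lines is enough
(999 below is only an arbitrary illustration; pick a value your
hardware and Storage daemon limits can realistically sustain, since
every started job consumes Director and Storage daemon resources):

-   int limit = 99;    /* limit + 1 is max jobs to start */
+   int limit = 999;   /* limit + 1 is max jobs to start */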
Or you can use a special SQL statement, as a number of users have
suggested.
Best regards,
Kern
On 01/10/2014 11:20 PM, Paul De Audney wrote:
>
> On 9 January 2014 12:54, Steven Hammond
> <shamm...@technicalchemical.com> wrote:
>
> I missed a couple of days (holidays) backing up from disk to tape
> (we back up disk to disk every night), so when I went to run the job
> to copy disk to tape it only grabbed 100 jobs. This seems sort of
> artificial (what if I had more than 100 workstations/servers I was
> backing up?). I was wondering if there is a setting that will
> override that limit (I couldn't find one at a cursory glance). I
> would prefer not to use a special query if possible (we are using
> PoolUncopiedJobs). I know how to write a query and already have one
> that I use to see how many jobs need to be backed up. I'm just
> curious how others are getting around this artificial limit. Thanks.
>
>
> I am currently using a custom SQL query to copy jobs. I do this
> because we have different backup schedules for various systems. I find
> the SQL query does give me more control over what jobs get copied and
> when.
>
> Unfortunately, when I originally configured Bacula, I set up the
> incremental and full backups for all hosts to be written into a
> single pool for disk-based backups. So if I use PoolUncopiedJobs I
> will end up copying all my incremental backups to tape.