On 01/21/2015 06:41 PM, Bill Arlofski wrote:
> Bacula has a hard-coded 6 day limit on a job's run time. 518,401 seconds
> is just over 6 days (6 x 86,400 = 518,400), so it appears that is the
> cause for the watchdog killing the job.
Hard-coded, huh? Nobody's tried backing up that big data I keep hearing
about?
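If the watchdog limit is actually the default of an unset Max Run Time rather
than a truly fixed constant, raising it explicitly in the Job resource might be
worth a try. Max Run Time is a real Bacula Job-resource directive; the job name
and value below are hypothetical, and whether this overrides a compiled-in cap
would need checking against the source:

```
Job {
  Name = "big-client-backup"   # hypothetical job name
  # ... usual Client / FileSet / Storage / Pool lines ...
  Max Run Time = 14 days       # allow more than the ~6-day default
}
```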
On 01/21/2015 05:13 PM, Dimitri Maziuk wrote:
> (Take 2)
>
> I've a client with ~316GB to back up. Currently the backup's been
> running for 5 days and wrote 33GB to the spool file. Previous runs
> failed with
>
>> User specified Job spool size reached: JobSpoolSize=49,807,365,050
>> MaxJobSpoolSize=49,807,360,000
On 01/21/2015 05:12 PM, Heitor Faria wrote:
> Hey Mr. Dimitri: do you have Attribute Spooling on for this job (Job
> resource, Spool Attributes=yes)? It usually improves the performance
> when backing up lots of files, which may be causing this bottleneck.

Yes:

SpoolData = yes
SpoolAttributes = yes
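For reference, the spool-size cap in the error below is set on the storage
daemon side, not in the Job resource. A hedged sketch of the relevant
Device-resource directives (the directive names are real Bacula directives;
the device name and values are hypothetical):

```
Device {
  Name = "LTO-drive"                 # hypothetical device name
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 300 GB        # total spool space for the device
  Maximum Job Spool Size = 300 GB    # per-job cap; a ~49 GB cap forces
                                     # repeated despool cycles mid-job
}
```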
(Take 2)
I've a client with ~316GB to back up. Currently the backup's been
running for 5 days and wrote 33GB to the spool file. Previous runs
failed with
> User specified Job spool size reached: JobSpoolSize=49,807,365,050
> MaxJobSpoolSize=49,807,360,000
> Writing spooled data to Volume. Despooling ...
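For what it's worth, the numbers above suggest the spooling rate, not the
spool size, is the real problem. A quick back-of-the-envelope check
(decimal GB assumed):

```python
# Effective throughput of the job described above: 33 GB spooled in 5 days.
SECONDS_PER_DAY = 86_400

written_bytes = 33e9               # 33 GB spooled so far
elapsed_s = 5 * SECONDS_PER_DAY    # 5 days of wall-clock time

rate_bps = written_bytes / elapsed_s
total_bytes = 316e9                # the full ~316 GB client
projected_days = total_bytes / rate_bps / SECONDS_PER_DAY

print(f"{rate_bps / 1e3:.1f} KB/s")   # ≈ 76.4 KB/s
print(f"{projected_days:.1f} days")   # ≈ 47.9 days
```

At roughly 76 KB/s the full 316 GB would need about 48 days, so even without
the spool-size and six-day watchdog limits this job would never finish in a
reasonable window; the client-to-SD transfer rate is worth investigating
first.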