>
> Job "serverA" runs and auto-labels a new file as
> "/store/serverA-434"... but the job fails because serverA is
> unreachable. So Bacula times out and then proceeds with backup job
> "serverB", but it reuses the file it already created for serverA's
> backup.
>
> Any way around this?
Hi bacula-users,
We have configured disk volumes and auto-labeling in such a way that a
new file is created for every backup job, and the job ID is part of the
filename:
--
Pool {
  Name = daily
  Pool Type = Backup
  Recycle = no
  Use Volume Once = yes
  AutoPrune = yes # Pr
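For reference, a complete Pool resource along these lines might look like the sketch below. The Label Format value and retention period are assumptions for illustration, not the poster's actual settings; all directives shown are standard Pool resource directives.

```
Pool {
  Name = daily
  Pool Type = Backup
  Recycle = no
  Use Volume Once = yes        # older alias for Maximum Volume Jobs = 1
  AutoPrune = yes              # prune expired volumes automatically
  Label Format = "serverA-"    # hypothetical; Bacula appends a number to auto-labeled volumes
  Volume Retention = 7 days    # assumed value
}
```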
I've set up a very basic Ubuntu server to run as a headless file server, to
be controlled only through SSH or Webmin. This is running fine on a local network.
Next job is to get some basic local disk backup working and then add remote
backup.
I was drawn to Bacula because it seems to be highly rec
On Tue, December 14, 2010 11:48 am, Robert Wirth wrote:
> Hi,
>
> strange problem. Here's some hardware where Bacula has been running
> successfully for ca. 5 years. It was release 1.38.11 under Solaris 10x86.
>
> Last month, we had a system disk crash on the backup system. No backup
> datas ha
Hi,
Strange problem here. This is hardware where Bacula has been running
successfully for about 5 years, release 1.38.11 under Solaris 10 x86.
Last month we had a system disk crash on the backup system. No backup
data was lost; we just had to reinstall the backup system.
Since th
On Mon, December 13, 2010 9:18 pm, Dan Langille wrote:
> From time to time, I get a job stuck. But I'm not sure why. Several
> other jobs have already gone through this storage device since this item
> was queued. Confused...
>
> Running Jobs:
> Console connected at 14-Dec-10 02:03
> JobId L
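When a job appears stuck like this, a common first step is to inspect the Director and Storage daemon status from bconsole before cancelling anything. A sketch (the storage name and job ID here are hypothetical):

```
*status dir
*status storage=File
*cancel jobid=1234
```

`status storage` in particular shows whether the device is reserved or blocked by another job, which often explains why a queued job never starts.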
> On Tue, 14 Dec 2010 09:26:40 +0100, Hugo said:
>
> hi list
>
> I had to do a full backup of our archive, which is about 5.5 TB big. I ran
> into
> some problems a few times, like the hardcoded 6 days job running limit, so I
> split the archive, moved one of the folders out of my backup
'Marcello Romani' wrote:
>On 01/12/2010 16:04, Henrik Johansen wrote:
>> Hi folks,
>>
>> I had prepared a paper for this year's "Bacula Konferenz 2010" about doing
>> large-scale, high-performance disk-to-disk backups with Bacula, but
>> unfortunately my workload prevented me from submitting it.
>>
>>
Hi list,
I had to do a full backup of our archive, which is about 5.5 TB in size. I ran into
some problems a few times, like the hard-coded 6-day job run-time limit, so I
split the archive, moved one of the folders out of my backup dir and started
the job. After that, I moved it back, and started th
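For very large fulls like this, later Bacula releases let you raise the per-job time ceiling with the Max Run Time directive in the Job resource rather than splitting the fileset. A hedged sketch (job and JobDefs names are assumptions):

```
Job {
  Name = "archive-full"      # hypothetical job name
  JobDefs = "DefaultJob"     # assumed defaults resource
  Max Run Time = 14 days     # allow long-running full backups to finish
}
```

Note that in old releases the 6-day limit was hard-coded in the watchdog, so this directive only helps on versions where it is honored.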