I have read through the manual and understand that all the delete and purge
commands work against the catalog. However, I have sometimes encountered a
situation where a job to tape will fail (for whatever reason). In these
situations I usually have other backups that have successfully completed and
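For reference, catalog-only cleanup of a failed job like that is done from
bconsole; a minimal sketch, assuming a hypothetical JobId and volume name
(neither comes from this message):

*delete jobid=1234
*purge volume=Daily-0007

Both commands act only on the catalog records; they do not touch the data
already written to tape.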
Jesper Krogh wrote:
> Arno Lehmann wrote:
>
>> Well, but your setup doesn't consider upgraded jobs. Imagine you add a
>> client today. The "Run = Level=Incremental Pool=Daily SpoolData=Yes
>> mon-tue thu-sat at 2:10" line will match and initiate a backup with
>> spooling. This backup will the
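For context, a Run line like the one quoted above normally sits inside a
Schedule resource; a minimal sketch, with the schedule name and the full-level
line assumed rather than taken from this thread:

Schedule {
  Name = "DailyCycle"                                       # assumed name
  Run = Level=Full Pool=Weekly SpoolData=Yes sun at 2:10    # assumed full run
  Run = Level=Incremental Pool=Daily SpoolData=Yes mon-tue thu-sat at 2:10
}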
Mingus Dew wrote:
> I have read through the manual and understand that all the delete and
> purge commands work against the catalog. However, I have sometimes
> encountered a situation where a job to tape will fail (for whatever
> reason). In these situations I usually have other backups that ha
Thanks Dan
On Jan 21, 2008 11:48 AM, Dan Langille <[EMAIL PROTECTED]> wrote:
> Mingus Dew wrote:
> > I have a set of clients which all need the same fileset backed up. Is it
> > possible to specify multiple clients for the same job definition? Like
> so:
> >
> > Job {
> > Name = "Daily_Disk_to_
Mingus Dew wrote:
> I have a set of clients which all need the same fileset backed up. Is it
> possible to specify multiple clients for the same job definition? Like so:
>
> Job {
> Name = "Daily_Disk_to_Disk"
> Type = Backup
> Client = server1, server2, server3
> FileSet = "Linux_FileSys
I have a set of clients which all need the same fileset backed up. Is it
possible to specify multiple clients for the same job definition? Like so:
Job {
Name = "Daily_Disk_to_Disk"
Type = Backup
Client = server1, server2, server3
FileSet = "Linux_FileSystems"
Schedule = "Incr_0500_Sun_F
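As far as I know, a Job resource accepts only a single Client directive, so the
usual pattern is a shared JobDefs resource plus one short Job per client; a
rough sketch reusing the names from the question (the JobDefs and Schedule
names are placeholders, and required directives such as Storage, Pool and
Messages are omitted for brevity):

JobDefs {
  Name = "Daily_Disk_to_Disk_Defs"   # placeholder name
  Type = Backup
  FileSet = "Linux_FileSystems"
  Schedule = "YourSchedule"          # the schedule name above is cut off
}

Job {
  Name = "Daily_Disk_to_Disk_server1"
  Client = server1
  JobDefs = "Daily_Disk_to_Disk_Defs"
}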
Hi,
I have a client (a database server) on which I use a 'Client Run Before
Job' directive to dump the database before the backup. The dump takes about
4 hours to complete, and after that I get an authentication error with the
storage daemon. When I deactivate the 'Client Run Before Job' statement,
everyth
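For reference, the directive in question typically appears in the Job resource
roughly like this; a sketch only, with the job name and script path assumed,
and the Heartbeat Interval shown merely as a commonly suggested setting for
long-running before-jobs, not as a confirmed fix from this thread:

Job {
  Name = "db-backup"                                     # assumed name
  Client Run Before Job = "/usr/local/bin/dump_db.sh"    # assumed script path
  ...
}

# In bacula-fd.conf on the client (assumption, not from this message):
FileDaemon {
  Heartbeat Interval = 60
  ...
}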
Hi Dan,
Thanks for the quick response. I'm running Bacula 2.2.6 on RedHat EL4 64-bit.
The system has been in production for close to two months, backing up 8
clients, and I haven't had a single hitch. Also, I've done batch labeling
multiple times before without ever encountering the mentioned problem. FYI
Win Htin wrote:
> Hi folks,
>
> I loaded 5 brand new tapes into the auto-changer and ran the label
> command from "bconsole" with the following parameters.
>
> *label barcodes Storage=TS3200_1 Pool=Scratch Slots=26-30
>
> Only ONE volume was successfully labeled and the rest failed with
> "ERR
Hi folks,
I loaded 5 brand new tapes into the auto-changer and ran the label command
from "bconsole" with the following parameters.
*label barcodes Storage=TS3200_1 Pool=Scratch Slots=26-30
Only ONE volume was successfully labeled; the rest failed with an "ERR=Child
died from signal 15" message.
Hi all,
I have found a fix for my problem :-)
Because the JobId column in the Job table is auto_increment, I was able to add
an ORDER BY JobId ASC in sql_cmds.c.
Here it is:
diff -u src/cats/sql_cmds.c_old src/cats/sql_cmds.c
--- src/cats/sql_cmds.c_old 2008-01-21 13:26:03.0 +0100
+++
Hi all,
I have a problem with restoring files from a file list stored in a database.
It is best if I explain this with an example:
I use Bacula 2.2.6 on over 110 backup servers and over 1200 clients.
To easily manage restores (6-10 restores every day), I have written a
Perl script to search
Rolland Stockmann wrote:
> Hi,
>
> I am new to this utility; what am I missing? Below are the lines I received
> and the relevant part of the sd.conf file.
> All I want to do is run a disk backup on my server with removable
> hard disks. The directory /media/rd
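For comparison, a file-based Device resource for removable disks in
bacula-sd.conf might look roughly like this; a sketch only, with the name and
mount point assumed since the original path above is cut off:

Device {
  Name = RemovableDisk                 # assumed name
  Media Type = File
  Archive Device = /media/removable    # assumed mount point
  Removable Media = yes
  Random Access = yes
  Automatic Mount = yes
  Label Media = yes
  Always Open = no
}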