When deleting a pool via bconsole, the volumes of that pool still exist
in the filesystem but are no longer known to the director and need to be
deleted manually. When I use the label command:
Connecting to Storage daemon daily_storage at 192.168.2.211:9103 ...
Sending label command for Volume
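As a hedged illustration of the cleanup this implies (the pool name and archive path below are placeholders, not taken from the original post), the leftover volume files have to be removed by hand once the catalog records are gone:

```shell
# After "delete pool" in bconsole the catalog records are removed, but
# the volume files written by the storage daemon stay on disk.
# "OldPool" and /var/bacula/storage are placeholder names.
echo "delete pool=OldPool" | bconsole

# The files are still in the SD's archive directory and must be
# removed manually:
ls /var/bacula/storage/OldPool-*
rm -i /var/bacula/storage/OldPool-*
```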
I have bacula running on a production server where the external backup
storage is accessible via FTP only.
Therefore I want to use curlftpfs to mount the storage into the filesystem.
This works fine on the commandline but introduces problems with bacula.
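For context, a minimal sketch of the kind of mount meant here (host, credentials and mount point are made up). The allow_other and uid options matter because the SD usually runs as the bacula user, not as the user who performed the mount:

```shell
# Mount the FTP server as a local filesystem via FUSE.
# Host, user, password and mount point are placeholders.
curlftpfs -o allow_other,uid=$(id -u bacula) \
    ftp://backupuser:secret@ftp.example.com/backup /mnt/ftpbackup

# Verify that the bacula user can actually write there:
sudo -u bacula touch /mnt/ftpbackup/testfile
```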
The config section used for the SD:
> Device {
>
I just tried to simulate a database crash and disaster recovery and got
to the point of restoring the database with bacula-console and a bootstrap file.
The documentation
http://bacula.org/en/dev-manual/Restore_Command.html#database_restore
shows that it is possible to restore from within the console, so I tried
JobDefs = def_verify_home
Client = trac-test-fd
}
which verifies a job named trac-test-backup_home with fileset "home_dir".
The last job that was run:
galactica-backup_src, which belongs to another client (galactica) and
another fileset (src_dir).
If you need further configuration info pl
Dan Langille schrieb:
> On Feb 19, 2008, at 8:46 AM, Gunnar Thielebein wrote:
>
>> Hi group :-)
>>
>> This is a short one:
>>
>> Is there the possibility to delete job records only for a specific
>> pool/volume? Apart of sql magic ;-)
Hi group :-)
This is a short one:
Is there the possibility to delete job records only for a specific
pool/volume? Apart of sql magic ;-)
Regards,
Gunnar
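One non-SQL route that might cover the question above (an assumption on my part, not something confirmed in the thread): bconsole's purge and delete commands can act on a single volume, so the records for one volume can go without touching the rest of the catalog.

```shell
# Hedged sketch: remove the job/file records tied to one volume only.
# The volume name is a placeholder.
echo "purge volume=Vol0001" | bconsole    # drops catalog records for that volume
echo "delete volume=Vol0001" | bconsole   # optionally remove the volume record too
```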
Hi,
is someone using a script which takes advantage of Postgres' point-in-time
recovery feature? What I wonder is whether this is possible or has already
been approached by someone...
on full backup (the initial one):
- stop the database
- do a full filesystem backup of the database's data directory
- start the database
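The full-backup steps above could look roughly like this (a sketch under assumed paths and a tar-based filesystem copy, not a tested recipe):

```shell
# Stop the database, copy the whole data directory, start it again.
# The PGDATA path and the backup target are placeholders.
PGDATA=/var/lib/postgresql/data
pg_ctl -D "$PGDATA" stop -m fast
tar czf /backup/pgdata-full.tar.gz -C "$PGDATA" .
pg_ctl -D "$PGDATA" start
```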
on incremental:
SELECT p