Frank Sweetser wrote:
>> (Client)RunBefore/AfterJob:
>> sticking commands one after another on one line separated by semicolon
>> seems not to work properly.
>
> That's because treating the semicolon as a command separator is a function of
> the shell. The problem isn't that you're not escapi
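A common workaround (a sketch only; the paths come from the svnadmin example discussed elsewhere in this thread) is to hand the whole line to a shell explicitly, so that the semicolon and any redirection are interpreted by /bin/sh rather than passed to the first program as arguments:

```
ClientRunBeforeJob = "/bin/sh -c 'svnadmin --quiet dump /var/svnrepo > /tmp/svnrepo_backup.svn_dump; gzip -9 /tmp/svnrepo_backup.svn_dump'"
```

Alternatively, put the commands in a small script on the client and name that script in the directive; that also keeps the quoting manageable.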
Hi,
I recently moved up from Bacula 2.2.3 to 2.4.4. It seemed to be working, but
one of my client systems failed to back up with the following error:
16-Feb 01:50 springfield-dir JobId 3650: Fatal error: Can't fill Path table
Query failed: INSERT INTO Path (Path) SELECT a.Path FROM (SELECT DIST
> P.S. Hasn't anybody created a graphical configuration program for bacula
> yet? ^^
I'm working on one using PHP and MySQL, I'm hoping to be able to pull the
configuration straight from MySQL for the Director and SD. The FD doesn't
change so much so I was going to just spit out a file to put on t
> I'm not sure if I quite follow what you are trying to accomplish, but it
> seems to me that maybe rsync would be another option to accomplish
> basically the same goal? Or am I way off base here?
>
For disk volumes this works great. Just rsync the folder with volumes.
John
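A minimal sketch of that approach, assuming the disk volumes live under /backup/volumes and the offsite host is reachable as offsite.example.com (both hypothetical names) — run rsync from cron after the nightly backups have finished:

```
# m h dom mon dow  command   (example crontab entry; host and paths are placeholders)
30 6 * * * rsync -a --delete /backup/volumes/ offsite.example.com:/backup/volumes/
```

The --delete flag mirrors volume deletions to the offsite copy; leave it out if the offsite side should keep volumes that have been removed locally.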
> I tried to get offsite backups by doing a local backup first to the
> local SD and then another backup to the remote SD. This works fine when
> backing up, but when you need to recover data (and you are using
> incrementals or differentials instead of full backups) the SD which you
> told to perfo
> P.S. Hasn't anybody created a graphical configuration program for bacula
> yet? ^^
>
webmin
John
--
Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
-OSBC tackles the biggest issue in open so
Berend Dekens wrote:
> Hi all,
>
> After trying the new betas, which include Copy Job support, I discovered
> that a copy job (just like a migration job) can only transfer data from
> one storage pool to another within *the same* storage daemon. Because I
> wanted to use this for offsite backups this
Hi all,
After trying the new betas, which include Copy Job support, I discovered
that a copy job (just like a migration job) can only transfer data from
one storage pool to another within *the same* storage daemon. Because I
wanted to use this for offsite backups, this won't work for me.
I tried to g
> I know this is turning into a long-running monologue, but this
> performance issues is the last thing standing between me and a Backup
> Exec-free environment, so it's important to me.
>
> I believe I've eliminated the disks as the performance bottleneck.
> Through various tuning knobs (sector si
What's your database setup? Is it on the same filesystem/physical disks as your
tapefiles?
J.
Jean F. Gobin
Network Administrator
Tel: 212.542.3175
Mobile: 917.213.2532
Fax: 212.981.6545
32 Avenue of the Americas, 4th Floor, New
On Fri, Feb 13, 2009 at 4:00 PM, (private) HKS wrote:
> On Thu, Feb 12, 2009 at 6:03 PM, (private) HKS wrote:
>> On Tue, Feb 10, 2009 at 3:56 PM, (private) HKS wrote:
>>> On Tue, Feb 10, 2009 at 1:04 PM, Steve Polyack wrote:
(private) HKS wrote:
>
> My server's network performance
Hi list!
I'm trying to make a rescue CD just to try a bare metal restore.
The problem I have is that the bacula-fd installations were made in
different ways (RPM for Red Hat, .deb, and other RPMs for CentOS), and
when trying to compile the bacula-restore, it needs to have the source.
Is there a wo
Tobias Barth wrote:
>
>
> Feature Request Form
>
> Item n: Migration jobs tape to tape with single drive
> Origin: Tobias Barth
> Date: 11 February 2009
> Status: new
>
> What: Migration jobs from tape to tape should be possible with one
> single tape drive. File Systems (har
Feature Request Form
Item n: Storage Daemon based encryption
Origin: Steve Polyack
Date: 16 February 2009
Status: new
What: The ability to encrypt and decrypt data that moves between the
storage daemon and its storage devices.
Why: Storage daemon based encryption coul
Hi,
You can type "run" in bconsole and select the jobs you want to run.
Greets
Julian
Romain Dolbeau wrote:
> Jason Dixon wrote:
>
> > Can anyone lend a hand here? Is there no one else using Migration in
> > production?
>
> I tried once and it worked :-) , but I'm waiting for 3.0.4 or 3.0.5 to
> get Copy job, for my offsite backups (rather than the current
> duplication
Hi,
I have the following scenario: A full backup is being done and this
process lasts around 2 days with 4 DLT tapes. What happens if my system
stops due to a power outage in my area (and the UPS device cannot
support the machine, of course)? When the server comes back, there is
no more job and
Well, since there has not been a response about my previous scheduling
question, here is another alternative to accomplish the same thing. Could a
Run Before Job script change some of the job settings? Specifically, I am
wondering if a Run Before Job script could be used to change the pool a job
On Monday 16 February 2009 16:37:52 Brian Debelius wrote:
> Sorry, I only have my one reason: setting up a separate pool for every
> day, and also every week, seems a bit much. I have an admin job which
> closes the last tape used at the end of each day's backup run. It
> works. I will continue
I second Alex's request, and it also seems to me that Alex's version is
quite an expansion of what item 6 of the project file requests.
Item 6 just requests that there should be a way to delete a file system
file for pruned/purged volumes. The way I read it, the record in the
catalog would not be
Sorry, I only have my one reason: setting up a separate pool for every
day, and also every week, seems a bit much. I have an admin job which
closes the last tape used at the end of each day's backup run. It
works. I will continue using it.
Kern Sibbald wrote:
> On Wednesday 11 February 2009
Can anyone lend a hand here? Is there no one else using Migration in
production?
Thanks,
Jason
- Original Message -
From: Jason Dixon
To: bacula-users@lists.sourceforge.net
Sent: Thu, 12 Feb 2009 16:34:02 -0500 (EST)
Subject: [Bacula-users] Problems with Migration
On Wednesday 11 February 2009 22:30:04 Jean Gobin wrote:
> I think what he wants is a way to make sure a tape is closed to, say,
> start a week with a fresh tape.
Thanks. That makes more sense.
>
> Pretty easy to do with different pools/schedules/jobs actually.
Yes, I agree.
Unless the author
John Drescher wrote:
>> Ok, I've got everything scripted up except the deletion. What all needs
>> to be done to delete a volume? Right now I am looking at doing a:
>>
>> $ bconsole <<EOF
>> delete media volume=volume_name
>> quit
>> EOF
>>
>> $ rm -f /backup/volume_name
>>
>> Is tha
Alex F wrote:
>
>
> --- On *Mon, 2/16/09, Kern Sibbald* wrote:
>
> On Monday 16 February 2009 14:08:59 Dan Langille wrote:
> > Alex F wrote:
> > > Item 1. Delete volume on purge.
> > > Origin: Alex F, alexxzell at yahoo dot com
> > > Date: 16 February 2009
> > >
> > > Wh
Ralf Brinkmann wrote:
> Foo wrote:
>
>> So I added in the job definition of the machine:
>>
>> ClientRunBeforeJob = "svnadmin --quiet dump /var/svnrepo > /tmp/svnrepo_backup.svn_dump; gzip -9 /tmp/svnrepo_backup.svn_dump"
>> ClientRunAfterJob = "rm -f /tmp/svnrepo_backup.svn_dump.gz"
>>
>> Thi
--- On Mon, 2/16/09, Kern Sibbald wrote:
On Monday 16 February 2009 14:08:59 Dan Langille wrote:
> Alex F wrote:
> > Item 1. Delete volume on purge.
> > Origin: Alex F, alexxzell at yahoo dot com
> > Date: 16 February 2009
> >
> > What: A feature that would permit Bacula to delete a volume from t
On Mon, Feb 16, 2009 at 8:22 AM, Ariel Dorfman wrote:
> Hi all!
>
> I'm new to Bacula.
>
> My problem is this:
>
> I have a Director server (WinXP) which is also running the client.
>
> I want to back up a mapped network drive, like z:, but I receive this error:
>
>
>
> Could not stat z:: ERR=The system
I wrote a WSH script that mounts a network drive, supplying the user name and
password (the user had full privileges to the network drive), and execute it
via the "ClientRunBeforeJob" directive. This is not neat, but seems to work
so far.
To obtain more detailed messages, please follow the instruc
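As an alternative to a WSH script, the mount can often be done with a plain "net use" command in the directive itself. A sketch — the share, drive letter, and credentials below are placeholders, not taken from this thread, and backslashes may need doubling inside Bacula's quoted strings:

```
ClientRunBeforeJob = "net use z: \\\\fileserver\\backupshare /user:DOMAIN\\backupuser secret"
ClientRunAfterJob = "net use z: /delete /yes"
```

Keep in mind the FD usually runs as a service account, so the drive letter must be mapped in that session (hence doing it in the before-job script rather than relying on a mapping from an interactive login).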
On Monday 16 February 2009 14:08:59 Dan Langille wrote:
> Alex F wrote:
> > Item 1. Delete volume on purge.
> > Origin: Alex F, alexxzell at yahoo dot com
> > Date: 16 February 2009
> >
> > What: A feature that would permit Bacula to delete a volume from the
> > hard disk after it has been purged.
Yes, this is all the error that I see.
The files are there.
I want to back up all the files within the remote file server.
Ariel
-Original Message-
From: Dan Langille [mailto:d...@langille.org]
Sent: Monday, 16 February 2009 11:27 a.m.
To: Ariel Dorfman
Cc: bacula-users@lists.sourcefor
Hello,
What is the difference with item 6 of the current project file ?
Bye
On Monday 16 February 2009 11:02:34, Alex F wrote:
> Item 1. Delete volume on purge.
> Origin: Alex F, alexxzell at yahoo dot com
> Date: 16 February 2009
>
> What: A feature that would permit Bacula to delete
Ariel Dorfman wrote:
> Hi all!
>
> I'm new to Bacula.
>
> My problem is this:
>
> I have a Director server (WinXP) which is also running the client.
>
> I want to back up a mapped network drive, like z:, but I receive this error:
>
>
>
> Could not stat z:: ERR=The system cannot find the file speci
Hi all!
I'm new to Bacula.
My problem is this:
I have a Director server (WinXP) which is also running the client.
I want to back up a mapped network drive, like z:, but I receive this error:
Could not stat z:: ERR=The system cannot find the file specified
But when I run a job with a local fil
Alex F wrote:
> Item 1. Delete volume on purge.
> Origin: Alex F, alexxzell at yahoo dot com
> Date: 16 February 2009
>
> What: A feature that would permit Bacula to delete a volume from the
> hard disk after it has been purged.
>
> Why: Useful for users backing up to hard disks. I for instance,
On Mon, 16 Feb 2009 10:10:27 +0100, Ralf Brinkmann wrote:
> (Client)RunBefore/AfterJob:
> sticking commands one after another on one line separated by semicolon
> seems not to work properly.
Looks that way, but sticking them on separate lines doesn't work either,
since redirection is not pr
On Mon, 16 Feb 2009 09:02:20 +0100, Craig Ringer wrote:
> Try setting the high priority job to -c1 (realtime) and the bacula fd to
> -c3 (idle) priority.
Bacula is already at -c3, I'm not supposed to touch the other app,
unfortunately. Thanks for your other suggestions, I'll forward those
Ralf Brinkmann wrote:
> Foo wrote:
>
>> So I added in the job definition of the machine:
>>
>> ClientRunBeforeJob = "svnadmin --quiet dump /var/svnrepo > /tmp/svnrepo_backup.svn_dump; gzip -9 /tmp/svnrepo_backup.svn_dump"
>> ClientRunAfterJob = "rm -f /tmp/svnrepo_backup.svn_dump.gz"
>>
>> This
Item 1. Delete volume on purge.
Origin: Alex F, alexxzell at yahoo dot com
Date: 16 February 2009
What: A feature that would permit Bacula to delete a volume from the hard disk
after it has been purged.
Why: Useful for users backing up to hard disks. I for instance, am doing this,
and have to m
Foo wrote:
> So I added in the job definition of the machine:
>
> ClientRunBeforeJob = "svnadmin --quiet dump /var/svnrepo > /tmp/svnrepo_backup.svn_dump; gzip -9 /tmp/svnrepo_backup.svn_dump"
> ClientRunAfterJob = "rm -f /tmp/svnrepo_backup.svn_dump.gz"
>
> This fails with:
>
> ClientRunBefo
Foo wrote:
> Thanks, that helped, although there is still some packet loss (about a
> quarter of the previous value).
Try setting the high priority job to -c1 (realtime) and the bacula fd to
-c3 (idle) priority.
If that's still not sufficient, you may need to tune your disk subsystem
for low
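The -c1/-c3 flags above correspond to Linux I/O scheduling classes as set by ionice (util-linux). A minimal sketch of running something in the idle class, assuming a Linux kernel whose I/O scheduler honors priorities (class 1, realtime, requires root):

```shell
#!/bin/sh
# Run a command under the idle I/O scheduling class (-c3): it only gets
# disk bandwidth when no other process is competing for it.
ionice -c3 echo "running at idle I/O priority"

# Re-prioritizing an already-running daemon by PID (1234 is a placeholder):
#   ionice -c3 -p 1234
```

For a daemon like bacula-fd, the re-prioritize-by-PID form can be dropped into the init script after the daemon starts.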