On 11/07/2011 14:34, Christian Tardif wrote:
On 11/07/11 12:15 PM, Adrian Reyer wrote:
Never tried it myself, but I have seen some documentation yesterday
about labeling tapes when needed by issuing 'unmount' first. Doesn't this
resolve your 'blocked' status?
->
http://www.bacula.org/5.0.x-manual
On 7/11/2011 4:30 PM, Josh Fisher wrote:
> No, whitespace does not matter, and Bacula would tell you if there were
> a config error. Maybe there were jobs already running when you issued
> the reload command in bconsole to bring in the config changes? Try
> restarting SD and DIR and see if the prob
On Mon, Jul 11, 2011 at 2:41 PM, rlh1533 wrote:
> I've searched everywhere and can't seem to find the solution. My current
> setup is such that I have bacula-dir, bacula-fd, and bacula-sd installed on
> one CentOS 5.5 machine, and just the client (bacula-fd) installed on another
> CentOS 5.5 ma
I've searched everywhere and can't seem to find the solution. My current setup
is such that I have bacula-dir, bacula-fd, and bacula-sd installed on one
CentOS 5.5 machine, and just the client (bacula-fd) installed on another CentOS
5.5 machine. iptables is totally off and is not running on eith
On 2011-07-11 06:13, Martin Simmons wrote:
>> On Sun, 10 Jul 2011 12:17:55 +, Steve Costaras said:
>> Importance: Normal
>> Sensitivity: Normal
>>
>> I am trying a full backup/multi-job to a single client and all was going
>> well until this morning when I received the error below. All
On 7/11/2011 2:28 PM, Mike Hobbs wrote:
> On 07/11/2011 11:52 AM, Josh Fisher wrote:
>
>> My understanding is that if AllowMixedPriority=yes, then the higher
>> priority job should run before any other queued lower priority jobs.
>> Although it will not preempt already running jobs, it should star
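The priority behavior discussed above is driven by two Job directives. A minimal sketch of how they would appear in bacula-dir.conf (job and client names are hypothetical; the directives themselves are standard Bacula 5.x):

```
# bacula-dir.conf -- hypothetical Job resources illustrating priorities
Job {
  Name = "nightly-backup"
  Type = Backup
  Client = someclient-fd        # hypothetical client
  Priority = 10                 # the default priority
  # ... FileSet, Schedule, Storage, Pool omitted for brevity
}
Job {
  Name = "catalog-backup"
  Type = Backup
  Priority = 11                 # normally waits for all priority-10 jobs
  Allow Mixed Priority = yes    # may be queued/run alongside other priorities
  # ...
}
```

With Allow Mixed Priority = yes, the higher-priority (lower number) job is taken from the queue ahead of lower-priority queued jobs, but it does not preempt jobs that are already running.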
On 07/11/2011 02:47 PM, tscollins wrote:
> ERR=Connection refused
Mac firewalls are a huge pain...
Check the firewall settings on the client in the "Preferences->Security"
section, AND check "ipfw list" from the console to see if the rules
there might be blocking access - evidently these are two
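Those two checks can be sketched as a console session on the Mac client (the output shown is illustrative only, not taken from the poster's machine):

```
$ sudo ipfw list
00010 allow tcp from any to any dst-port 9102   # rule the FD needs
65535 deny ip from any to any                   # default deny would block 9102
```

If there is no allow rule covering port 9102 (the FD port), either add one with ipfw or open the bacula-fd application in System Preferences -> Security -> Firewall.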
OK I have simplified my configuration files to just have one Mac OS X client
and here they are:
Mac1 bacula-fd.conf
Director {
Name = dracula-dir
Password = "blah"
}
FileDaemon {
Name = Mac1-fd
FDport = 9102
WorkingDirectory = /private/var/bacula/working
Pid Directory = /var/run
Max
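For the connection to work, the Director side must carry a matching Client resource: its Password must equal the one in the Director block of Mac1's bacula-fd.conf above. A sketch (the Address and Catalog name are placeholders, not taken from the poster's config):

```
# bacula-dir.conf on dracula-dir -- hypothetical Client resource for Mac1
Client {
  Name = Mac1-fd
  Address = mac1.example.com    # placeholder hostname/IP of the Mac
  FDPort = 9102                 # matches FDport in Mac1's bacula-fd.conf
  Catalog = MyCatalog           # hypothetical catalog name
  Password = "blah"             # must match the Director password on the FD side
}
```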
Hello,
I have the bacula-dir and bacula-sd daemons running and configured to use the
default port (9101), and I'm connecting with bconsole to that port and to the
correct host. bconsole neither connects nor returns. Why might this be
happening? I can post sections of the config if needed
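Two quick checks that usually narrow a problem like this down (hostname and output below are illustrative):

```
$ netstat -ltn | grep 9101       # is the Director actually listening?
tcp   0   0 0.0.0.0:9101   0.0.0.0:*   LISTEN
$ telnet backuphost 9101         # can the bconsole machine reach that port?
Trying 192.168.1.10...
Connected to backuphost.
```

If the port is reachable but bconsole still hangs, the usual suspect is a Name/Password mismatch between bconsole.conf and the Director resource in bacula-dir.conf.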
On 11/07/11 12:15 PM, Adrian Reyer wrote:
Never tried it myself, but I have seen some documentation yesterday
about labeling tapes when needed by issuing 'unmount' first. Doesn't this
resolve your 'blocked' status?
->
http://www.bacula.org/5.0.x-manuals/en/main/main/Brief_Tutorial.html#SECTION0016
On 11/07/11 12:29 PM, Konstantin Khomoutov wrote:
Provided I understood your question correctly, you just `umount` the
current tape first, physically change the cartridge, label the new one,
mount it and then the job continues all by itself.
I'm not sure about labeling a new
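The sequence Konstantin describes can be sketched as a bconsole session (the storage, volume, and pool names are hypothetical):

```
*unmount storage=Tape
# physically swap the cartridge in the drive
*label storage=Tape volume=Weekly-0042 pool=WeeklyPool
*mount storage=Tape
# the job that was waiting on the device should now resume by itself
```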
On 07/11/2011 11:52 AM, Josh Fisher wrote:
> Did any of the jobs that ran just after those two write to the same
> volume? What was the status (from 'status dir' command) of the jobs that
> were not running when only two jobs were running?
Sorry, I probably should have cut and pasted that informat
On Mon, 11 Jul 2011 11:12:00 -0400
Christian Tardif wrote:
> This must be a simple task to achieve, but I did not find how to make
> it work.
>
> Let's say, for example, that I did not put the right tape into the
> drive, and that I don't have a tape with the right label (or not a
> tape within
On Mon, Jul 11, 2011 at 11:12:00AM -0400, Christian Tardif wrote:
> I know I can just kill the job, label my tape, and restart the job. But
> there are certain times where I just can't do that. So, how can I
> unblock the device to be able to label my tape and let the job continue
> normally?
Ne
> On Mon, 11 Jul 2011 11:42:35 +0200 (CEST), Pierre Bourgin said:
>
> Hello,
>
> I have installed bacula 5.0.3 on a CentOS 5.4 x86_64 system (RPM x86_64
> rebuilt from source) and it's working great since a year.
>
> After a mistake I made, I need to restore my catalog.
> So I tried to use b
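Assuming the standard Catalog backup job was in place (the one that runs make_catalog_backup.pl and dumps the database to a bacula.sql file in the working directory), restoring the catalog is essentially: recover that dump file from the backup volume, then reload it into a fresh database. A hedged sketch for MySQL (database name, user, and dump path are assumptions, not taken from the poster's setup):

```
$ mysql -u bacula -p -e "DROP DATABASE bacula; CREATE DATABASE bacula;"
$ mysql -u bacula -p bacula < /var/spool/bacula/bacula.sql
```

For PostgreSQL the equivalent would use dropdb/createdb and psql; in either case the Director should be stopped while the catalog is reloaded.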
On 7/10/2011 11:14 AM, Mike Hobbs wrote:
> On 6/29/2011 5:05 PM, Josh Fisher wrote:
>> By default, Bacula will select a volume that is already in a drive in
>> preference to a volume not in a drive. For concurrent jobs writing to
>> the same pool, this means they will always select the same volume
Hello,
I have the following question: I got the error messages below. Could these
indicate that my space ran out, so the backup created a new volume, which
succeeded later on because some space was freed, and now, with the copy
job which transfers the data to tape, it just tells me th
> This must be a simple task to achieve, but I did not find how to make it
> work.
>
> Let's say, for example, that I did not put the right tape into the
> drive, and that I don't have a tape with the right label (or not a tape
> within the right pool). So I need to label a new tape. But bacula the
Hi,
This must be a simple task to achieve, but I did not find how to make it
work.
Let's say, for example, that I did not put the right tape into the
drive, and that I don't have a tape with the right label (or not a tape
within the right pool). So I need to label a new tape. But bacula then
Hello,
I've set up disk-based backups. The backups themselves seem to be running ok,
but restore fails with:
11-Jul 16:45 mission-control-dir JobId 108: Error: Bacula backuphost-dir 5.0.1
(24Feb10): 11-Jul-2011 16:45:31
Build OS: x86_64-pc-linux-gnu ubuntu 10.04
JobId:
I personally found the built-in Priority system to be unsuitable for my
needs. I am instead using the (undocumented) feature that backups
are typically executed in the order that they are placed into the schedule.
So in my case, I have four different machines, and each machine gets a
full backup
Excerpts from Martin Simmons's message of Mon Jul 11 05:24:38 -0400 2011:
Hi Martin,
> > You can override priority for each job that uses the jobdefs on a
> > job-by-job basis, but you'll also need to make multiple jobs per
> > client (one for full, one for daily/differential) so that you can
> >
> On Sun, 10 Jul 2011 12:17:55 +, Steve Costaras said:
> Importance: Normal
> Sensitivity: Normal
>
> I am trying a full backup/multi-job to a single client and all was going well
> until this morning when I received the error below. All other jobs were
> also canceled.
>
> My quest
> On Sun, 10 Jul 2011 20:07:22 -0400, Ben Walton said:
>
> You can override priority for each job that uses the jobdefs on a
> job-by-job basis, but you'll also need to make multiple jobs per
> client (one for full, one for daily/differential) so that you can
> assign different priorities.
Do