An unknown, unreproducible error with the autochanger device left behind
lots of jobs with status code "C" (created, but not running), blocking
all current jobs.
My idea is to add a "RunBeforeJob" script that ensures all such expired jobs
are deleted from the job list before the actual backup starts.
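A sketch of such a RunBeforeJob script. Two assumptions to verify against your own installation: that bconsole lives at the usual path, and that stuck jobs appear in "status dir" output with the phrase "is waiting execution".

```shell
#!/bin/sh
# Sketch of a RunBeforeJob script that cancels jobs left in "Created"
# state before the real backup starts. Path and output format below
# are assumptions; check them against your Director.
BCONSOLE=${BCONSOLE:-/usr/sbin/bconsole}

# Read "status dir" output on stdin and print the JobIds of jobs
# that are queued but not running.
stuck_jobids() {
    awk '/is waiting execution/ { print $1 }'
}

# Only talk to the Director if bconsole is actually installed.
if [ -x "$BCONSOLE" ]; then
    echo "status dir" | "$BCONSOLE" | stuck_jobids |
    while read -r jobid; do
        echo "cancel jobid=$jobid yes" | "$BCONSOLE"
    done
fi
```

Hooked in with `RunBeforeJob = "/etc/bacula/cancel-stuck.sh"` (name is an example), this would clear the queue each time the job fires.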
I cannot find out why I am getting these errors. The only thing odd about the
installation is that MySQL is not in the standard place, so I use
--with-mysql=dir.
Here is a snippet of the error, but it comes up for the SD, FD and Director. It
is complaining about -lssl, but I have no idea what this is.
Hi,
I'm trying to figure out why I can't run concurrent jobs on my
installation. The Director, the File Daemon and the Storage Daemon all have:
Maximum Concurrent Jobs = 20
The whole Bacula setup resides on a single server: two SCSI tape drives, the
data to back up, and the Bacula programs. I have two jobs I would
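For anyone comparing notes on this: concurrency has to be permitted in every resource involved, not just the three daemons. A sketch of the relevant directives (resource names are placeholders); the per-Job limit is a frequent culprit, since it defaults to 1:

```
# bacula-dir.conf
Director {
  ...
  Maximum Concurrent Jobs = 20
}
Storage {
  Name = SCSI-Tape-1          # placeholder name
  ...
  Maximum Concurrent Jobs = 20
}
Job {
  ...
  Maximum Concurrent Jobs = 20   # defaults to 1 if omitted
}

# bacula-sd.conf
Storage {
  ...
  Maximum Concurrent Jobs = 20
}
```

Another common catch with tape: two jobs can only write to the same tape device at the same time if they use the same Pool; a job wanting a different pool will wait for the device instead.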
Hi,
14.08.2009 23:16, K. M. Peterson wrote:
> Hi everyone,
>
> I have some strategy questions.
Actually, I think you had a bunch of very interesting things to tell
us ;-)
> We've been using Bacula for about 18 months; backing up ~3.5TB/week to
> DLT-S4 (Quantum SuperLoader). We are still on
I have reinstalled and upgraded a bunch of packages trying to fix this issue,
and even broke all my virtual Drupal sites for a couple of hours; very scary, heehee.
Anyway, here is some more info:
ldd /usr/sbin/bacula-dir used to say "file not found" for libmysqlclient_r.so.12;
however, now it finds version 1
I have some questions related to the new VirtualFull backup level in
Bacula 3.0. I have upgraded to 3.0 and am looking to use VirtualFull
jobs to synthesize our disk backups onto tape media. This worked fine
the first week, but now I am running into some problems. When I go to
run the ne
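For context, a VirtualFull is normally started from bconsole like this (job name is a placeholder):

```
* run job=DiskBackup level=VirtualFull yes
```

The synthesized full is written to the pool named by the "Next Pool" directive of the job's pool, so for a disk-to-tape setup that directive should point at the tape pool.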
I ran the memtest on our bacula server last night. After 14 hours and 8 passes
it didn't find any problems. I'm at the end of my rope here. I'm trying a new
virtual server to see if that fixes the issue.
Corey Shaw
Technology Specialist
O. 801.491.0705 (x. 157)
F.
Is there a directive to specify the minimum elapsed time between backup jobs?
I.e. I don't need Bacula 3.0.2 running the same job multiple times in a
row after it got hung up waiting for a tape.
I already have the following in my JobDefs:
Allow Duplicate Jobs = no
Cancel Queued Duplicates = yes
Mark Nienberg wrote:
> In the documentation it says:
>
> "Jobs on Volumes will be Migration only if the Volume is marked, Full, Used,
> or
> Error. Volumes that are still marked Append will not be considered for
> migration."
>
> That seems like a good idea. But will Bacula examine the volumes
Hi folks,
I assumed that providing the deleted sub-directory, e.g.
/home/john/deleted_dirname, would restore not only the directory but
all the files underneath it. I found out the hard way that it doesn't. The
end user doesn't remember the names of the files. How can I restore
whatever was under the deleted sub-directory?
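In case it helps others hitting this: in the interactive restore tree, marking the directory itself selects everything beneath it recursively, so the file names are not needed. A hypothetical session (client, path and file count are examples):

```
* restore client=john-fd
...
cwd is: /
$ cd /home/john
$ mark deleted_dirname
152 files marked.
$ done
```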
jaschu wrote:
>
> Now the question: Will old Volumes be deleted automatically, after
> VolumeRetention has passed, or will they remain on disk, in which case I
> would have to delete them manually?
>
Hi,
From the Bacula documentation about volume recycling:
"...when Bacula recycles a Volume, th
I solved that issue by setting "Heartbeat Interval = 60" in the FD config.
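For anyone searching later, the directive belongs in the FileDaemon resource of bacula-fd.conf:

```
FileDaemon {
  ...
  Heartbeat Interval = 60
}
```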
thx
michael
Hi,
I have a fresh Bacula setup (3.0.1), and the following problem:
after a job finishes, the Director doesn't write the batch table
into the database; at least, that is what it looks like to me. The job
just freezes, but some smaller jobs work fine.
Any hints?
I already set in my.cnf:
[mysqld]
wait
Hi dear Bacula'ers!
I have one customer where we will change the server hardware to a new one
(also changing from 32-bit to 64-bit openSUSE Linux).
We now use PostgreSQL for many projects, and MySQL was just kept for
historical reasons
(Bacula running from the 1.3x version to the
> I want to backup to disk, having each Job in a separate Volume.
>
> For this purpose, I have in bacula-dir.conf
> Pool {
> UseVolumeOnce = yes
> VolumeRetention = 30 days
> AutoPrune = yes
> ...
> }
>
> ... and in bacula-sd.conf
> Device {
> LabelMedia = yes
> AutomaticMount = y
Hi all,
I want to backup to disk, having each Job in a separate Volume.
For this purpose, I have in bacula-dir.conf
Pool {
UseVolumeOnce = yes
VolumeRetention = 30 days
AutoPrune = yes
...
}
... and in bacula-sd.conf
Device {
LabelMedia = yes
AutomaticMount = yes
...
}
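One possible catch with this setup, worth checking against the documentation for your version: with "LabelMedia = yes" the SD can only auto-create volumes if the Pool also supplies a naming pattern via "Label Format". A sketch of the Pool with that added (name and pattern are examples):

```
Pool {
  Name = FilePool            # placeholder name
  Pool Type = Backup
  UseVolumeOnce = yes
  VolumeRetention = 30 days
  AutoPrune = yes
  Recycle = yes
  Label Format = "Job-"      # lets the SD auto-label each new volume
}
```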
Allan Black wrote:
> RYAN vAN GINNEKEN wrote:
>
>> sudo apt-get install libmysqlclient15off
>> Reading package lists... Done
>> Building dependency tree
>> Reading state information... Done
>> libmysqlclient15off is already the newest version.
>>
>
>
>>> Starting Bacula Director: /sbin/b
Item 41: compression based on filesize
Origin: Michael Patzer (mpatzer-at-dunkel.de)
Date: 19. August 2009
Status: feature request
What: The ability to define a minimum file size a file
must have to be gzip-compressed.
Why: If you have a system with a lot of s