On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:
> Jason Dixon wrote:
> >
> > I've tried that. But since the scheduled OS backup jobs are already
> > running, the client-initiated transaction log jobs are forced to wait.
> >
> Then you probably still had a Maximum Concurrent Jobs = 1
On Thu, Mar 19, 2009 at 9:03 PM, Doug Forster wrote:
> I have gone into the database and can see that the database is empty for the
> job in question. I think that there is an issue with the insertion of over a
> million entries all at once that is giving bacula a hard time. I have found
> a suppo
Jason Dixon wrote:
> On Thu, Mar 19, 2009 at 03:41:10PM -0700, Kevin Keane wrote:
>
>> If you are using a single tape drive, you will have an issue.
>>
>> One thing you could theoretically do is configure the storage daemon,
>> storage resource, job, and pool all for multiple concurrent jobs. T
I have gone into the database and can see that the database is empty for the
job in question. I think that there is an issue with the insertion of over a
million entries all at once that is giving bacula a hard time. I have found
a supporting post here:
http://www.backupcentral.com/phpBB2/two-way-m
On Thu, Mar 19, 2009 at 03:41:10PM -0700, Kevin Keane wrote:
> If you are using a single tape drive, you will have an issue.
>
> One thing you could theoretically do is configure the storage daemon,
> storage resource, job, and pool all for multiple concurrent jobs. The
> bacula manual explains
Any ideas about this file mismatch? The incremental backed up 2,340 files,
but only 2,297 files were inserted into the tree.
Select the Client (1-50): 27
The defined FileSet resources are:
1: Logs
2: System
Select FileSet resource (1-2): 2
Retention is set on the Job, not on the files. These files should be
getting into the database.
Config:
JobDefs {
  Name = "Critical"
  Type = Backup
  Level = Full
  FileSet = "System"
  Schedule = "Weekly"
  Storage = PV-124T
  RescheduleOnError = yes
  RescheduleInterval = 5
  RescheduleTimes
If you are using a single tape drive, you will have an issue.
One thing you could theoretically do is configure the storage daemon,
storage resource, job, and pool all for multiple concurrent jobs. The
bacula manual explains how to do that. But I'm not sure if that really
gives you the results
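A minimal sketch of the setup Kevin describes, with Maximum Concurrent Jobs raised in every resource that gates concurrency. Resource names and the value 10 are assumptions, not a drop-in config:

```conf
# bacula-dir.conf (sketch)
Director {
  Name = backup-dir                  # hypothetical name
  Maximum Concurrent Jobs = 10
}
Storage {
  Name = PV-124T
  Maximum Concurrent Jobs = 10       # let several jobs share the drive
}
Job {
  Name = "nightly-os"                # hypothetical name
  Maximum Concurrent Jobs = 10
  Spool Data = yes                   # reduces interleaving on a single drive
}

# bacula-sd.conf (sketch)
Storage {
  Name = backup-sd                   # hypothetical name
  Maximum Concurrent Jobs = 10
}
```

With a single tape drive the concurrent jobs' data is still interleaved onto one volume unless spooled, which is presumably the "results" caveat the truncated message was heading toward.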
Rather than continue to beat my head against the wall, let me describe
what I need Bacula to perform and hopefully someone smarter than me can
suggest the best setup.
We have 3 types of jobs in this particular facility:
1) OS backups that run nightly
2) Database backup that runs nightly
3) Databa
I've provided these before, but since a few versions have passed us by I
have updated these to Bacula 2.4.4. These are client binaries only. If
anyone needs director, or storage daemon binaries, let me know and I
should be able to get them built as well.
MacOSX i386 (built on 10.5.2-i686)
ht
Folks,
This is a database question, but I figured some of the bacula users may have
come across this problem so I am posting it here.
Every Monday I run the following commands to check and garbage collect the bacula
database:
dbcheck command
vacuumdb -q -d bacula -z -f
reindexdb
Usually I purge one
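The weekly routine above could be automated with cron; a minimal sketch, where the schedule, dbcheck flags, and paths are assumptions to adjust for your installation:

```conf
# Hypothetical crontab entry: catalog maintenance early every Monday.
# dbcheck runs in batch/fix mode, then a full vacuum+analyze and a reindex.
30 2 * * 1  /usr/sbin/dbcheck -b -f -c /etc/bacula/bacula-dir.conf && vacuumdb -q -d bacula -z -f && reindexdb -d bacula
```

Note that `vacuumdb -f` (full vacuum) takes exclusive locks on the tables, so this should only run when no backup jobs are active.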
On Thursday 19 March 2009 12:56:34 Martin Simmons wrote:
> > On Wed, 18 Mar 2009 19:18:49 +0100, Kern Sibbald said:
> >
> > Hello,
> >
> > We have just released Bacula BETA version 2.5.42-b2 to the bacula
> > download of Source Forge.
>
> Is the uploaded bacula-2.5.42-b2.tar.gz truncated? It i
I need help reviewing my config to see if that is the problem or what
else it might be.
I have Bacula 2.2.8 installed on a CentOS Linux box, kernel 2.6.9-78.0.8.ELsmp.
Disk backups work fine. No problem.
Backing up to tape I will often have the autochanger go 'offline'
somehow after a large backup is f
>> I
>> have not seen a CPU that can do more than 20 MB/s. I know my 2.83 GHz
>> core2 quad is nowhere near as fast as my LTO2 tape drive when it comes to
>> compression.
>
> there is a multi-threaded version of bzip2, but I have no idea whether
> bacula will be able to handle bzip2
>
This is pbzip2, I
K. Lelong wrote:
> Something similar happens when I run a job for the disk: the
> unmounted disk is mounted, but afterwards I need to manually unmount it
> (umount ...) to eject the disk.
> What am I missing?
With your last backup job, include e.g. the following line:
RunAfterJob = "/etc/
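The truncated line above is a RunAfterJob directive; a hedged sketch of the idea, where the script path and its contents are assumptions:

```conf
Job {
  Name = "RevBackup"                           # hypothetical name
  # ... usual Client / FileSet / Storage directives ...
  RunAfterJob = "/usr/local/sbin/eject-rev.sh" # hypothetical script that
                                               # runs: umount /mnt/REV70
}
```

The script runs on the Director's side of the job, so it needs to reach whatever host has the REV drive mounted.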
John Drescher wrote:
>>> I am using compressed,
>>> GZIP backup.
>>>
> I
> have not seen a CPU that can do more than 20 MB/s. I know my 2.83 GHz
> core2 quad is nowhere near as fast as my LTO2 tape drive when it comes to
> compression.
there is a multi-threaded version of bzip2, but I have no
Ignore me, I'm being foolish.
I found what was wrong.
Prashant Ramhit wrote:
Hi All,
I am trying to use a different Pool for Incremental backups, but it's not
doing it.
In the Job Directive I have the following
Job {
  Name = "Test"
  Enabled = no
  Type = Backup
  Client = Fileserver1
On Wed, Mar 18, 2009 at 7:03 PM, terryc wrote:
> (private) HKS wrote:
>
>> I've updated the firmware, cleaned the drive, and tested multiple tapes.
>> Since it seems to be working anyhow, is this something to even be
>> concerned about?
>
> Have you tried it with different tapes?
> Or even just rearran
> What a waste of a joke, straight over everyone's head :-(
>
> I was asking for info on the "RTFM" that was mentioned (note that that was
> all I quoted from the previous message in my reply) :-)
>
> Ah well, once it needs explaining, the joke kind of loses something!
>
I sent you the link for the
Ulrich Leodolter wrote:
> On Wed, 2009-03-18 at 17:49 +, Mike Holden wrote:
>> Ulrich Leodolter wrote:
>> > PS: RTFM might help too
>>
>> That sounds interesting. Where can I download that from?
>>
>
> http://mysqltuner.pl
>
>
> Isn't it a nice url for a perl script?
>
> On Wed, 2009-03-18 at 1
> On Wed, 18 Mar 2009 19:18:49 +0100, Kern Sibbald said:
>
> Hello,
>
> We have just released Bacula BETA version 2.5.42-b2 to the bacula download of
> Source Forge.
Is the uploaded bacula-2.5.42-b2.tar.gz truncated? It is only 688128 bytes
and gzip gives an eof error.
__Martin
Hi All,
I am trying to use a different Pool for Incremental backups, but it's not
doing it.
In the Job Directive I have the following
Job {
  Name = "Test"
  Enabled = no
  Type = Backup
  Client = Fileserver1
  FileSet = "Test"
  Schedule = "Test"
  Write Bootstrap = "/var/lib/bacula/dailybac
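If the goal is a separate pool per backup level, Bacula's Job resource accepts level-specific pool overrides; a sketch, where the pool names are assumptions:

```conf
Job {
  Name = "Test"
  Type = Backup
  Client = Fileserver1
  Pool = Default                       # fallback for any level
  Full Backup Pool = Full-Pool         # hypothetical pool names
  Incremental Backup Pool = Incr-Pool
  Differential Backup Pool = Diff-Pool
}
```

One common gotcha: a `Pool=` override in the Schedule's Run directive takes precedence over the Job-level settings, so check the Schedule too if the wrong pool keeps being chosen.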
Hi,
In bacula-sd I have this:
Device {
  Name = Rev1
  Archive Device = "/mnt/REV70"
  Requires Mount = yes
  Mount Point = "/mnt/REV70"
  Mount Command = "/bin/mount /dev/disk/by-path/pci-:00:1f.5-scsi-0:0:0:0 %m"
  Unmount Command = "/bin/umount %m"
  Media Type = File
  Random Acce
Hello,
How can I limit console messages to a specific client?
If I call bconsole on clientxyz I'd like to see
only messages related to clientxyz-fd.
Thanks
--
Ulrich Leodolter
OBVSG
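One way to scope what a console sees is a restricted Console resource in the Director with ACLs; a sketch, where the names and password are assumptions:

```conf
# bacula-dir.conf (sketch)
Console {
  Name = clientxyz-console           # hypothetical name
  Password = "secret"                # change this
  ClientACL = clientxyz-fd           # only this client's resources
  JobACL = *all*                     # narrow further if desired
  CommandACL = status, messages
}
```

The bconsole.conf on clientxyz then needs a matching Console section with the same Name and Password, so that bconsole authenticates as the restricted console rather than as the Director's default (unrestricted) console.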
Hi!
Does dbcheck even check whether backups referenced in the database are
still there on the hard disk? I am currently backing up to disk and
would like to make sure no orphaned catalog entries remain after some
files were deleted the other day.
/andreas
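As far as I know dbcheck works on catalog consistency, not on the files behind it. One hedged way to spot volumes the catalog references that are gone from disk is to diff two sorted lists. The commented psql/ls lines are assumptions about where your catalog and storage live; the sample data below just demonstrates the comparison itself:

```shell
# In a real setup the two lists would come from the catalog and storage dir:
#   psql -d bacula -Atc "SELECT volumename FROM media ORDER BY 1;" > catalog.txt
#   ls /srv/bacula/storage | sort > disk.txt
# Sample data standing in for those two lists (already sorted, as comm needs):
printf 'Vol-0001\nVol-0002\nVol-0003\n' > catalog.txt
printf 'Vol-0001\nVol-0003\n' > disk.txt

# Lines only in catalog.txt: volumes with no backing file on disk.
comm -23 catalog.txt disk.txt      # prints: Vol-0002
```

Anything this reports can then be inspected and, if truly gone, purged/deleted through bconsole so dbcheck can clean up the now-orphaned records.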