Bacula 2.0.1, backing up to disk exclusively, using one device and the
default pool. I get this about 20% of the time:
12-Feb 23:11 vger-dir: Start Backup JobId 601, Job=vger_u2.2007-02-12_23.05.02
12-Feb 23:11 vger-dir: Recycled volume "Backup-0065"
12-Feb 23:11 vger-sd: vger_u2.2007-02-12_23.0
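For reference, whether a volume like "Backup-0065" gets recycled is driven
by the pool's retention settings. A minimal sketch of a disk pool that
recycles automatically (directive names as documented; the retention window
and volume cap are hypothetical values):

Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes                 # allow purged volumes to be reused
  AutoPrune = yes               # prune expired jobs when a volume is needed
  Volume Retention = 14 days    # hypothetical retention window
  Maximum Volumes = 100         # hypothetical cap for the disk pool
}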
On Tue, 20 Feb 2007, Steve Barnes wrote:
> How about coining a new "word" RTNM (read the nice manual), or RTGM
> (read the good manual) or RTBM (read the big manual). :-)
RTFBM :)
d for minutes
at a stretch with no I/O being performed.
Steve
--------
Steve Thompson E-mail: [EMAIL PROTECTED]
Voyager Software LLC Web: http://www.vgersoft.com
39 Smugglers Path
different FD, it works. Anyone seen
this?
Steve
Using Bacula 2.0.3 with MySQL 4.1.20 on a CentOS 4.5 x86 director. Doing a
full backup of a new file system from a 2.0.3/CentOS 4.5/x86_64 client
gives:
28-Jun 12:33 dante-dir: No prior Full backup Job record found.
28-Jun 12:33 dante-dir: No prior or suitable Full backup found in catalog.
Doin
On Thu, 28 Jun 2007, David Romerstein wrote:
> On Thu, 28 Jun 2007, Steve Thompson wrote:
>
>> 28-Jun 12:33 dante-dir: inca-10_data1.2007-06-28_12.33.36 Fatal error:
>> sql_create.c:753 Create db File record INSERT INTO File
>> (FileIndex,JobId,PathId,File
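A common first diagnostic when INSERTs into the File table start failing,
offered here only as a hedged suggestion, is to check the table for
corruption and confirm the server accepts large packets (the database name
"bacula" is an assumption):

  # check the File table for corruption
  mysqlcheck --check bacula File
  # batched attribute inserts can exceed a small max_allowed_packet
  mysql -e "SHOW VARIABLES LIKE 'max_allowed_packet'"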
Bacula 2.0.3, director is 32-bit CentOS 4.5, clients are all CentOS 4.5,
both 32-bit and 64-bit. Backups are to disk files.
I find that I cannot do any restores:
06-Sep 13:59 dante-sd: RestoreFiles.2007-09-06_13.58.29 Error: block.c:275
Volume data error at 0:899088562! Wanted ID: "BB02"
re-run full backups of
all of my data (about 2 TB on this system), and then restore the whole lot
to see what I get. If I get time I will take a peek at the source.
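Short of reading the source, one way to test whether a volume is readable at
all, independently of any restore job, is to scan it with bls against the SD
configuration; the volume and device names below are hypothetical:

  # list the jobs recorded on a disk volume, reading it directly
  bls -j -V Backup-0065 -c /etc/bacula/bacula-sd.conf FileStorage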
Steve
however, to assist in whatever way I can, given these constraints.
Steve
; telling me?
Steve
On Wed, 26 Sep 2007, Ross Boylan wrote:
> I've been having really slow backups (13 hours) when I back up a large
> mail spool. I've attached a run report. There are about 1.4M files
> with a compressed size of 4G. I get much better throughput (e.g.,
> 2,000KB/s vs 86KB/s for this job!) with othe
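With 1.4M files, per-file catalog inserts are often the bottleneck rather
than data transfer. One commonly suggested mitigation, sketched here under
that assumption, is to spool attributes so the catalog updates are batched
at the end of the job:

Job {
  ...
  Spool Attributes = yes   # batch catalog updates instead of per-file inserts
}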
On Wed, 5 Dec 2007, Martin Simmons wrote:
>>>>>> On Wed, 5 Dec 2007 11:36:33 -0500 (EST), Steve Thompson said:
>> I see this very often as well, and I am using disk exclusively. It also
>> happens about 40% of the time, and has done since I started with bacula at
On Wed, 5 Dec 2007, Martin Simmons wrote:
>>>>>> On Wed, 5 Dec 2007 13:46:26 -0500 (EST), Steve Thompson said:
>> They are all the same at 2.2.4. It happens even in the case where
>> bacula-dir, bacula-fd and bacula-sd are running on the same machine.
>> E
On Wed, 5 Dec 2007, Dan Langille wrote:
> My first idea: different versions of SD and FD, with one trying to use a
> command the other does not recognize.
> What version is each of: bacula-dir, bacula-fd, bacula-sd
They are all the same at 2.2.4. It happens even in the case where
bacula-dir, ba
On Wed, 5 Dec 2007, [EMAIL PROTECTED] wrote:
> I am still experiencing this problem on a regular basis; not every job
> does this, but it seems a good 40% do each night.
> [...]
> 05-Dec 03:33 escabot-fd JobId 8219: Fatal error: job.c:1811 Bad response
> to Append Data command. Wanted 3000 OK dat
On Wed, 5 Dec 2007, Martin Simmons wrote:
> If that is OK, then I suggest running the SD with debug level 200, which
> might give us a clue where the error occurs.
So far I have been unable to get it to fail using -d200, while it does
fail if I don't specify a debug level. Maybe there is a timi
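An alternative that avoids restarting the daemon (and so perturbs the timing
less, though it may still hide a race) is to raise the debug level on the
running SD from bconsole; the storage resource name here is hypothetical:

  * setdebug level=200 storage=File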
On Fri, 27 Mar 2009, John Drescher wrote:
> Have you ever seen bacula die? I mean in 5 years of using bacula on 35
> to 50 machines I do not recall ever seeing bacula die.
Yep; storage daemon (2.4.2) dies on me about once a month. I get a file
daemon failure about once a month too, but of course
fter changing the order?
Steve
On Sun, 16 Dec 2007, David Legg wrote:
> I'm sure this is all 'obvious' to the old hacks but is there a way to
> prevent files being written into the mount point when no drive is
> actually mounted?
Just do your backups to a subdirectory on the drive, one which is not
present below the mount point when the drive is not mounted.
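That is, point the Archive Device at a directory that exists only on the
mounted drive, so the SD fails with an error instead of silently filling the
root filesystem. A sketch with hypothetical names:

  # with the drive mounted at /mnt/backup, create the target on the drive itself
  mkdir /mnt/backup/bacula

Device {
  ...
  Archive Device = /mnt/backup/bacula   # absent whenever nothing is mounted
}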
Here's something interesting. Bacula 2.2.4 on both client (64-bit CentOS
4.5) and director (32-bit CentOS 4.5).
During a backup:
JobId 3516 Job asimov_data7.2007-12-20_23.00.08 is running.
Backup Job started: 20-Dec-07 23:44
Files=111,850 Bytes=1,863,801,574 Bytes/sec=15,977 Errors=0
On Mon, 24 Dec 2007, Martin Simmons wrote:
>>>>>> On Sat, 22 Dec 2007 08:29:54 -0500 (EST), Steve Thompson said:
>> [...]
>> So what is the "Files Examined" count really telling me? The JobFiles
>> count from a 'list job' is correct, howe
In the output of commands such as 'list jobs', is it possible to configure
bacula to display numeric quantities as digits alone (no commas)? I really
find this difficult to read.
-s
-
This SF.net email is sponsored by: Micr
On Fri, 29 Feb 2008, Ryan Novosielski wrote:
> Martin Simmons wrote:
>>>>>>> On Thu, 28 Feb 2008 11:20:26 -0500 (EST), Steve Thompson said:
>>> In the output of commands such as 'list jobs', is it possible to configure
>>> bacula to display nume
CREATE INDEX file_tmp_pathid_idx ON File (PathId);
CREATE INDEX file_tmp_filenameid_idx ON File (FilenameId);
before the dbcheck, which produces an enormous speedup.
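The temporary indexes can then be dropped again once dbcheck completes
(standard MySQL syntax):
DROP INDEX file_tmp_pathid_idx ON File;
DROP INDEX file_tmp_filenameid_idx ON File;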
Steve
On Tue, 4 Nov 2008, John Drescher wrote:
>> Converting it to an XML file would not pose the problems specified in
>> the above wiki, there are lots of tools to create/parse XML files that
>> could be useful.
>>
> I would vote against this if I could. I mean this will make it harder
> for me to e
y, even if there is a fancy tool to
edit them, I will stop using bacula. No doubt there are many that will
find this view unreasonable, but I can't help that.
Steve
Bacula 2.4.2.
I have just added a second pool to an all-disk configuration and have a
question concerning automatic volume numbering. The relevant details are:
Pool {
  Name = Foo_Pool
  Storage = Foo_Storage
  Label Format = "Foo-"
  ...
}
Pool {
  Name = Bar_Pool
On Thu, 4 Dec 2008, Arno Lehmann wrote:
> 04.12.2008 19:41, Steve Thompson wrote:
>> Single catalog. When the second pool was added, there were 3260 volumes in
> Foo_Pool (from Foo-0001 to Foo-3260). Everything works, but the first
>> backup that went to Bar_Pool created a volu
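If per-pool numbering is wanted, the Label Format can carry an explicit
counter rather than the bare prefix; a hedged sketch using Bacula's variable
expansion (the ${NumVols} padding form follows the manual's example and
should be checked against your version):

Pool {
  Name = Bar_Pool
  Label Format = "Bar-${NumVols:p/4/0/r}"   # pad the count to 4 digits with zeros
  ...
}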
On Thu, 18 Dec 2008, Stefan Sorin Nicolin wrote:
> I am about to reconfigure a mid sized Bacula installation. I'd like to
> rename the storage daemon meaning the "Name" directive in the Storage
> { } block. Is this asking for trouble? Right now I am a bit nervous
> because I just learned (the hard
Xeon system, with all backups done
to disk. The SD is about half a mile distant). However, if I do a restore
of a large volume of data, I get 32-35 MB/sec. Seems a little odd that it
is so asymmetrical.
Steve
On Mon, 14 Jan 2013, John Drescher wrote:
> I would say this is a combination of filesystem performance ( remember
> that when you backup there can be a lot of seeks that reduce
> performance) and decompression performance. Decompression is less CPU
> intensive than compression.
Ah yes, you're ri
On Tue, 11 Jun 2013, Leonardo - Mandic wrote:
> On old versions never have this problem, and its same network and same
> servers of old bacula versions.
I have periodically had this problem on all versions of bacula that I have
used back to 1.38, and have never been able to identify a network p
rse a
different question.
Steve
On Mon, 29 Mar 2010, Roland Roberts wrote:
> It was a major upgrade reboot was part of the process. And it's been
> rebooted since then.
What does a "telnet archos.rlent.pnet 9102" give you?
Steve
Bacula 5.0.2. The documentation states that a ClientRunBeforeJob script
that returns a non-zero status causes the job to be cancelled. This is not
what appears to happen, however. Instead a fatal error is declared:
11-Aug 13:30 cbe-dir JobId 686: No prior Full backup Job record found.
11-Aug 13:
On Wed, 11 Aug 2010, John Drescher wrote:
>> Bacula 5.0.2. The documentation states that a ClientRunBeforeJob script
>> that returns a non-zero status causes the job to be cancelled. This is not
>> what appears to happen, however. Instead a fatal error is declared:
>
> Maybe the documentation shou
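For reference, the directive under discussion looks like this in a Job
resource (the script path is hypothetical); a non-zero exit status stops the
job from running, but as noted it is recorded as a fatal error rather than a
cancellation:

Job {
  ...
  ClientRunBeforeJob = "/usr/local/sbin/pre_backup_check.sh"
}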
On Thu, 26 Aug 2010, m...@free-minds.net wrote:
> 1) do we really need to spool? We are not writing to real tapes, we have a
> filesystem as backend (ext3 over glusterfs).
I am one of those that believes spooling to be useful even when writing
backups to disk; it is obviously not in question tha
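For anyone wanting to try it, a minimal sketch of data spooling with disk
storage (directive names as documented; the directory and size are
hypothetical):

Job {
  ...
  Spool Data = yes              # spool locally, then despool to the device
}
Device {
  ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 50 GB    # hypothetical cap
}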
me?
Steve
On Wed, 1 Dec 2010, Henrik Johansen wrote:
> The remaining posts will follow over the next month or so.
Just a minor question from part III. You state that your storage servers
each use three Perc 6/E controllers, allowing the attachment of 9 MD1000
shelves. I believe that you can attach 6 shel
client (about 50). Backups are compressed and TLS is used
(storage is offsite). No other changes were made: backup throughput
performance almost exactly doubled.
Steve
On Mon, 6 Dec 2010, Josh Fisher wrote:
> On 12/5/2010 9:20 AM, Steve Thompson wrote:
>> Bacula 5.0.2. This is not a problem; just an observation.
>>
>> I do backups to disk only, using six RAID arrays for storage, totalling
>> 45TB physical disk. Originally I used six
Bacula 5.0.2, CentOS 5.5, x86_64.
I reported this back in November, to no comment. I have a lot of full
backups that are reporting "Software Compression: None". Software
compression is most definitely turned on. For example, all of my fileset
definitions begin in a similar fashion to:
FileSet
On Tue, 18 Jan 2011, Dan Langille wrote:
> On 1/18/2011 4:16 PM, Steve Thompson wrote:
>> Whether software compression happens or not seems to be random. Anyone
>> know why this is happening?
>
> There was a discussion this week about this. Add Signature to your option
On Thu, 20 Jan 2011, Martin Simmons wrote:
> It reports "None" if there were no files in the backup or if the compression
> saved less than 0.5%, so it doesn't necessarily mean that it wasn't attempted.
I understand that, but I have several file sets that, for a full backup
level, sometimes give
On Thu, 20 Jan 2011, Dan Langille wrote:
> On 1/20/2011 7:24 AM, Steve Thompson wrote:
>> On Thu, 20 Jan 2011, Martin Simmons wrote:
>>
>>> It reports "None" if there were no files in the backup or if the
>>> compression
>>> saved less than
On Thu, 20 Jan 2011, Dan Langille wrote:
> Time for new eyes. Post the job emails.
One full backup completed. Here are the relevant definitions:
Job {
  Name = "bear_data15"
  JobDefs = "defjob"
  Pool = Pool_bear_data15
  Write Bootstrap = "/var/lib/bacula/bear_data15.bsr"
  Client = bear
On Thu, 20 Jan 2011, Martin Simmons wrote:
> This will never compress -- the "default" Options clause needs to be the
> last one, but you have it as the first one.
Yes, of course you are correct; thank you. And I've even read that in the
documentation. And moving the default Options clause to the e
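The underlying rule: Bacula applies the first Options clause whose patterns
match a file, and a clause containing no wild/regex directives matches
everything, so the default clause must come last. A hedged sketch with
hypothetical patterns:

FileSet {
  Name = "example"
  Include {
    Options {
      Wild = "*.gz"        # specific clause first: first match wins
      Signature = MD5      # already compressed, so no Compression here
    }
    Options {              # default clause LAST, or it shadows the rest
      Compression = GZIP
      Signature = MD5
    }
    File = /data
  }
}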
I'm using bacula 5.0.2 on CentOS 5.5. Is there any way (or any other
version of bacula) that allows one to disable generation of the
.bconsole_history file?
-steve
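There appears to be no directive for this in 5.0.x; a shell-level workaround
(an assumption, not a bacula feature) is to point the file at /dev/null:

  # as the user who runs bconsole; history writes then go nowhere
  ln -sf /dev/null ~/.bconsole_history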
On Wed, 11 Jan 2012, Honia A wrote:
> But when I checked the size of the database it's still really large:
>
> root@servername:/var/lib/bacula# ls -l
> -rw------- 1 bacula bacula 208285783 2012-01-10 05:23 bacula.sql
Depending on what you are backing up, that is not really all that big.
Mine is
Bacula 5.0.2. For the following example job:
Job {
  Name = "cbe_home_a"
  JobDefs = "defjob"
  Pool = Pool_cbe_home_a
  Write Bootstrap = "/var/lib/bacula/cbe_home_a.bsr"
  Client = clarke-fd
  FileSet = "cbe_home_a"
  Schedule = "Saturday3"
}
FileSet {
  Name = "cbe_home_a"
  In
On Sat, 24 Mar 2012, James Harper wrote:
>> more than one client is available to back up the (shared) storage. If I change
>> the name of the client in the Job definition, a full backup always occurs the
>> next time a job is run. How do I avoid this?
>
> That's definitely going to confuse Bacula.
ept that it
does not back up any directories (and their contents) in (say)
/mnt/toe/data1/home/foo that have white space in their names. What have I
done wrong?
Steve
On Tue, 17 Apr 2012, Martin Simmons wrote:
> Are you sure it is related to white space? I don't see anything in the above
> FileSet that would cause it. Maybe the missing directories are part of a
> different filesystem mounted on top of the main one?
There's only one file system and no nested
Bacula 5.0.2, CentOS 5.8.
I have this in my job definitions:
Full Max Run Time = 29d
but still they are terminated after 6 days:
14-Jul 20:27 cbe-dir JobId 39969: Fatal error: Network error with FD
during Backup: ERR=Interrupted system call
14-Jul 20:27 cbe-dir JobId 39969: Fata
On Sat, 14 Jul 2012, Joseph Spenner wrote:
> That's insane! :)
Heh :)
> Ok, can you maybe carve it up a little? How big is the backup?
I have already carved it up just about as much as I can. I have to back up
about 6 TB in 28 million files (that change very slowly) to a remote
offsite SD.
On Sat, 14 Jul 2012, Boutin, Stephen wrote:
> Try changing (or adding if you don't have it already) the heartbeat
> interval variable. I have about 160TB I'm currently backing up total &
> some of the boxes are 8-29TB jobs. Heartbeat is must, for large jobs, as
> far as I'm concerned.
Good ide
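The directive exists in the Director, Client, and Storage resources, and the
interval must be shorter than any firewall or NAT idle timeout on the path
to the remote SD; 60 seconds here is a hypothetical value:

Director {
  ...
  Heartbeat Interval = 60   # also set in the matching FD and SD resources
}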
On Sun, 15 Jul 2012, Thomas Lohman wrote:
> This actually is a hardcoded "sanity" check in the code itself. Search
> the mailing lists from the past year. I'm pretty sure I posted where in
> the code this was and what needed to be changed.
Excellent; thank you! I have found your post and the re
On Thu, 19 Jul 2012, Dan Langille wrote:
> On 2012-07-15 13:48, Steve Thompson wrote:
>> On Sun, 15 Jul 2012, Thomas Lohman wrote:
>>
>>> This actually is a hardcoded "sanity" check in the code itself. Search
>>> the mailing lists from the past year.
the directories whose name begins
with "s", but it also backs up st123, which has been excluded. Presumably
I have the Options clauses incorrectly defined?
TIA,
Steve
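If the usual first-match-wins trap is the cause, the clause excluding st123
has to come before the wildcard clause that includes s*; a hedged sketch
with hypothetical paths:

FileSet {
  Name = "example"
  Include {
    Options {
      WildDir = "/data/st123"   # exclusion first: first matching clause wins
      Exclude = yes
    }
    Options {
      WildDir = "/data/s*"      # then the s* directories to back up
    }
    Options {
      Wild = "*"                # finally drop everything else
      Exclude = yes
    }
    File = /data
  }
}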