I am getting errors like this randomly.
14-Feb 19:14 comp0-sd: Migrate_Full.2007-02-14_17.27.23 Error:
../../stored/block.c:275 Volume data error at 0:1360235627! Wanted ID:
"BB02", got "^‹Ûõ". Buffer discarded.
-
Take Sur
When I do a migration I see extra jobs being started, even though I only
started a migration job. For example:
Running Jobs:
JobId Level Name Status
==
59 Full Migrate_Full.2007-02-14_17.27.23 is
Hello,
If I migrate a full backup from disk to tape, the next backup says that
it can't find a full in the catalog and runs a full job again. A thought
I just had while writing this: do I have to wait for an incremental
job to run before I migrate? But that doesn't seem right. I am
> On Tue, 13 Feb 2007 15:12:40 +0100, Andrea Venturoli said:
>
> > If it did, it might be useful to find out what that tape contains by doing
> >
> > bls -c bacula-sd.conf -j -V'*' /dev/nsa0
>
> Here it is:
>
> # bls -c /usr/local/etc/bacula-sd.conf -j -V'*' /dev/nsa0
> bls: match.c:249 add
Hello, folks!
I've found an alternative to the postgres configuration given in the doc.
I've installed an identd on my postgres/bacula-dir server. In
pg_hba.conf I wrote the line:
local bacula bacula ident bacula
This line is right after the administrative entry and before the other
local entri
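For completeness: the fourth field ("bacula") refers to an ident map in
pg_ident.conf. The matching map entry would look something like this
(map name, system user, database user; adjust the names to your setup):

# MAPNAME     IDENT-USERNAME    PG-USERNAME
bacula        bacula            bacula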
On Wed, February 14, 2007 12:31 am, Jesper Krogh wrote:
> Can anyone tell me if this is typical, or where my bottleneck is in this
> system?
My LTO-3 jukebox setup is very similar to yours. My backup speeds, as shown
by "Rate:" in the log entries, range from about 56KB/s to 27600KB/s,
depending
Hi,
I would like to archive some data to tape and keep it around forever.
Would using bacula to do this be the right way? Or would simply
tarring them up to the tape be better?
This is a one time job (and not a regular backup), and I have defined
a job for ad-hoc stuff like this within bacula.
Hello,
I did not want to cross post here & at the bacula-devel list.
Here's a snippet of the trace which shows what happens at the point where
directories are skipped instead of being included in the backup.
win01
> On Tue, 13 Feb 2007 09:32:06 -0600, Jason King said:
>
> I am running the "FILL" command using the btape testing tool. The tape
> has been filled and it wrote something (I guess) to the second tape
> after asking me to mount another tape. After it wrote the second tape it
> asked me to mo
> On Wed, 14 Feb 2007 18:44:44 +0100, Kern Sibbald said:
>
> On Wednesday 14 February 2007 15:40, Gavin Conway wrote:
> > Martin Simmons wrote:
> > >> Hope this helps someone
> > >
> > > That file is generated by configure. Does rerunning configure break it
> > > again? What is on the broken
I am setting up a Disk to Disk to Tape backup and need some help with a
concept or two.
Here's what I'm trying to do:
1) Backup of critical files to 'MyVolume' (/mnt/backup/MyVolume)
2) text dump of catalog to /var/bacula/catalog (see the sketch after this list)
3) Append /var/log/catalog to 'MyVolume'
4) Migration of complete '
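A sketch of what I have in mind for step 2, modelled on the stock catalog
backup job (script path and arguments are the usual sample-config ones, so
treat them as placeholders):

Job {
  Name = "CatalogDump"
  ...
  # dump the catalog to a flat file before the job runs
  RunBeforeJob = "/etc/bacula/make_catalog_backup bacula bacula"
  # clean the dump up once it is safely on the volume
  RunAfterJob  = "/etc/bacula/delete_catalog_backup"
}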
I'm trying to run the bacula-client on my RHEL box. I have it running
correctly on one RHEL box, but on another it doesn't work. When I
try to start the FD I get a simple message "Segmentation Fault". The
system logs say the same thing "Segmentation Fault". There are no other
clues that
I see... what's the syntax of the bscan command? I'm reading the man
pages now but I want to make sure I get the flag usage correct.
My tape does have more than one job on it...but only a few...and it was
for bacula testing anyway. I could purge the whole tape and start it
over but I want to mak
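In case it helps anyone following along, the invocation I'm planning to try
looks like this (volume name, config path and device are mine; I'll re-check
the flags against the man page before running it against a real tape):

bscan -v -s -m -V TestVol01 -c /etc/bacula/bacula-sd.conf /dev/nsa0

where -s stores the file/job records in the catalog, -m updates the media
record, and -V names the volume(s) to scan.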
On Wednesday 14 February 2007 15:40, Gavin Conway wrote:
> Martin Simmons wrote:
> >> Hope this helps someone
> >
> > That file is generated by configure. Does rerunning configure break it
> > again? What is on the broken line?
> >
> > __Martin
>
> Running configure does break it, yes. The line loo
On Wednesday 14 February 2007 18:13, Alan Brown wrote:
> On Wed, 14 Feb 2007, Jason King wrote:
> > Now my tape still shows that I have put 100G of
> > data on it...but the catalog shows that no job actually ran.
>
> Not quite, it shows no job completed - the attributes are despooled to
> tape afte
On Wednesday 14 February 2007 13:22, Gavin Conway wrote:
> Kern Sibbald wrote:
> > The problem is that your Catalog database is not well tuned (missing
> > indexes), or you have a very large database. The performance problem
> > comes from Bacula attempting to find the next volume that will be use
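For reference, the MySQL indexes usually suggested in the catalog maintenance
chapter are along these lines (from memory, so check the column names against
your schema version before adding them):

mysql bacula
  CREATE INDEX file_jobid_idx ON File (JobId);
  CREATE INDEX file_jpf_idx ON File (JobId, PathId, FilenameId);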
On Wed, 14 Feb 2007, Alan Brown wrote:
> I know this is outside the scope of the list, (despite someone having this
> on the wishlist), but I'm looking for a *nix-compatible (pref linux)
> hierarchical storage system.
IBM Tivoli Storage Manager has an HSM component called TSM for Space
Management
Which rpm did you use, though? If 'daemon' is not a command that exists
in RHEL3, the script should probably not try to use it.
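If it comes down to patching the init script, a guard along these lines is
what I would try (purely a sketch; the binary and config paths are just
examples, and on Red Hat 'daemon' is normally a shell function pulled in from
the functions file rather than a separate binary):

# use the Red Hat helper if it exists, otherwise start the FD directly
if [ -f /etc/rc.d/init.d/functions ]; then
    . /etc/rc.d/init.d/functions
    daemon /usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf
else
    /usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf
fi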
Beren wrote:
> Hi... I copied to the list this time... sorry about the last two times :)
>
> I'm using RedHat Advanced
On Wed, 14 Feb 2007, Jason King wrote:
> Now my tape still shows that I have put 100G of
> data on it...but the catalog shows that no job actually ran.
Not quite, it shows no job completed - the attributes are despooled to
tape after the job finishes.
> Question is, can I update the catalog wit
Yea, you have to exclude the dfsroots from the backup.
I haven't played with bscan yet, but that may be what you want.
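Something like this in the FileSet is what I mean (paths are just an example,
point the Exclude at wherever your DFS root actually lives):

FileSet {
  Name = "WinData"
  Include {
    Options { signature = MD5 }
    File = "C:/shares"
  }
  Exclude {
    # keep the DFS root junction out of the job
    File = "C:/shares/DFSRoot"
  }
}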
brian-
Jason King wrote:
> This is kind of strange. I ran a backup job last night (about 100G
> worth). Everything went fine but I got to a DFSRoot folder which is just
> a windo
This is kind of strange. I ran a backup job last night (about 100G
worth). Everything went fine but I got to a DFSRoot folder which is just
a windows virtual DFS folder, so bacula couldn't actually grab it and the
whole job errored out. Now my tape still shows that I have put 100G of
data on it..
> On Wed, 14 Feb 2007 14:40:10 +, Gavin Conway said:
>
> Martin Simmons wrote:
> >>
> >> Hope this helps someone
> >
> > That file is generated by configure. Does rerunning configure break it
> > again?
> > What is on the broken line?
> >
> > __Martin
> >
>
> Running configure does b
On Wed, 14 Feb 2007, Erich Prinz wrote:
> While I can't even begin to venture a guess at the challenge, I find it quite
> fascinating and am anxious to see some input on this thread. Learning is a great
> thing.
There are quite a few "HSM linux" hits on google, including one aborted
project from
On Wed, 14 Feb 2007 10:18:16 +0100, Marco Mandl wrote:
The problem was that the WriteBootstrap is not finished before the RunAfterJob
runs. I solved it by scheduling the umount with at.
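Concretely, the RunAfterJob now just hands the unmount off to atd, roughly
like this (script name, mount point and delay are mine, pick whatever gives
WriteBootstrap enough time):

RunAfterJob = "/etc/bacula/umount-later.sh"

and umount-later.sh is nothing more than:

#!/bin/sh
# defer the umount so the job (and its WriteBootstrap) can finish first
echo "umount /mnt/backup" | at now + 5 minutes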
/m
> Hello,
>
> I want to unmount the target disk after the backup. The problem is that
> the WriteBootstrap
Alan,
While I can't even begin to venture a guess at the challenge, I find
it quite fascinating and am anxious to see some input on this thread.
Learning is a great thing.
Erich
On Feb 14, 2007, at 6:52 AM, Alan Brown wrote:
>
> I know this is outside the scope of the list, (despite someone
Martin Simmons wrote:
>>
>> Hope this helps someone
>
> That file is generated by configure. Does rerunning configure break it again?
> What is on the broken line?
>
> __Martin
>
Running configure does break it, yes. The line looks like this,
from ./src/host.h:
#define HOST_OS "i686-pc-linux-g
> On Wed, 14 Feb 2007 13:22:16 +, Gavin Conway said:
>
> Gavin Conway wrote:
> > Apologies for the long email but I'm at my wits' end on this one. I'm
> > using Ubuntu and have just removed Bacula using aptitude so that I can
> > install the updated version from source.
> >
> > Here's my
Yes, that was the solution. Thank you.
Brian Debelius wrote:
> Looks like tapeinfo can be built for BSD.
>
> http://www.bacula.org/dev-manual/Testing_Your_Tape_Drive.html#SECTION004037000
>
>
>
> mt -f /dev/nsa0 comp enable
>
> Does that do it?
>
> or
>
> http://www.bacula.org/de
On 2/14/07, Andreas Helmcke <[EMAIL PROTECTED]> wrote:
> Having an autochanger with 4 drives I found that the algorithm to find
> the drive to use for a job is somewhat "suboptimal":
> The storagedaemon always uses the first available drive without
> considering the loaded tape.
>
> This means: Wh
On Wed, 14 Feb 2007, Andreas Helmcke wrote:
> And worse:
> Today I had the problem that the job which should run on drive 1 was
> held because of no available volume. So the drive was locked and the
> tape, which was needed for drive 0, was held in drive 1 blocking the job
> for drive 0.
This ha
>Solved!
It appears that a previous uninstallation on the 'new box' failed (maybe not
all admin rights, ..?), so the HKLM\Software\Bacula keys did not get completely
deleted. When I ran the installer it thought it needed to do an
update/reinstall rather than a full fresh install, thus I wasn't ab
Gavin Conway wrote:
> Apologies for the long email but I'm at my wits' end on this one. I'm
> using Ubuntu and have just removed Bacula using aptitude so that I can
> install the updated version from source.
>
> Here's my compile time options;
>
> ./configure --prefix=/opt/bacula/ --enable-smart
On Tuesday 13 February 2007 21:43, Michel Meyers wrote:
> Jason King wrote:
> > I'm seeing this error message every once in a while...what does it mean?
> >
> > 13-Feb 14:20 maint-dir: Warning: Cannot bind port 9101: ERR=Address
> > already in use: Retrying ...
>
> This usually means some other program
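A quick way to see who is holding the port (assuming a Linux box with netstat
or lsof available):

netstat -lnp | grep 9101
lsof -i :9101

Often it turns out to be a leftover bacula-dir that never shut down, but check
what the output actually says.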
Alan Brown wrote, sometime around 14/02/07 12:52:
> I know this is outside the scope of the list, (despite someone having this
> on the wishlist), but I'm looking for a *nix-compatible (pref linux)
> hierarchical storage system.
I don't know if it's of much help, but XFS has an API for Hierarchi
On Wed, 14 Feb 2007, Jesper Krogh wrote:
> The attached tape drive is an LTO-3 (Quantum PX506) which has a reported rate of 80
> MB/s (I haven't tested this). The network is a gigabit network, through which I can
> push around 600 mbit/s using nc at both ends on some junk files.
Is that "native" or "compre
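(Quick arithmetic: 600 mbit/s works out to roughly 75 MB/s of payload, so on
paper the network tops out just below the drive's quoted 80 MB/s native rate.)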
I know this is outside the scope of the list, (despite someone having this
on the wishlist), but I'm looking for a *nix-compatible (pref linux)
hierarchical storage system.
Explanation: An HFS (*) is a virtual filesystem, similar to unionfs.
Files are kept on "slow media" (tape or CD or DVD or
Did you select MySQL in the director configuration page in the installer?
Can you send me a copy of the install.log file from the \Program
Files\Bacula directory?
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:bacula-users-
> [EMAIL PROTECTED] On Behalf Of Marc Levy
> Sent: Wednes
Having an autochanger with 4 drives I found that the algorithm to find
the drive to use for a job is somewhat "suboptimal":
The storagedaemon always uses the first available drive without
considering the loaded tape.
This means: When I did a backup using concurrent jobs to use more than
one drive
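For reference, the one directive I know of in this area is Prefer Mounted
Volumes in the Job resource; whether it influences the initial drive choice
here I can't say, but it would look like:

Job {
  Name = "example-job"
  ...
  # meant to steer the job toward a drive that already has a usable volume
  Prefer Mounted Volumes = yes
}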
Hi!
John Drescher wrote:
> On 2/13/07, Eduardo Júnior <[EMAIL PROTECTED]> wrote:
>
>> Hello,
>>
>> I have a problem.
>>
>> I'm using bacula 1.38 to make backups of two servers at the place
>> where I work.
>> Only now I'm faced with a new situation: one of our servers was placed
Hi!
[EMAIL PROTECTED] wrote:
> I successfully backed up my data last night while doing the tape test. I
> did the 3 backup jobs of the same directory, restarted bacula, then backed
> up the same directory again. That worked great. I tried to restore the
> files to a tmp location and I get this erro
Hi,
I've been testing Bacula 2.0.2 for about a week now with the following config:
- All daemons on various Win32 boxes (2Ksrv, XP pro),
- sqlite for the catalog.
Now that everything is working quite well, I'd like to use a MySQL database for
the catalog instead of the default sqlite.
So I install
Hi... I copied to the list this time... sorry about the last two times :)
I'm using RedHat Advanced Server v3. I think I've got the right RPM..
It installed fine anyway.
I've changed permissions of the bacula directory and its contained
files to owner "bacula" and group "bacula"
/etc/init.d/bacu
Apologies for the long email but I'm at my wits' end on this one. I'm
using Ubuntu and have just removed Bacula using aptitude so that I can
install the updated version from source.
Here's my compile time options;
./configure --prefix=/opt/bacula/ --enable-smartalloc --with-mysql
--with-pid-dir
Hello,
I want to unmount the target disk after the backup. The problem is that
the WriteBootstrap takes quite long and fails if the RunAfterJob has
already unmounted the target disk.
Is there an elegant way to make the RunAfterJob wait until the backup job
is completely finished?
Regards,
Marco
Hi,
> 2) bweb- When adding an autochanger into bweb, it references a Location
> object (nothing else in the Bacula documentation references such an
> object)-
At this time, only bweb is using locations.
> my attempts to create a Location object have failed (although I
> can interact with th
Hi.
We've just upgraded our bacula installation from 1.36 to 2.0 .. That worked
excellently.. I'm very impressed with the smooth transition.
In the old installation we had transfer rates around 30 MB/s (measured using
iptraf when bacula-fd was processing some big files) (never more, often
less).. a