I have done a lot more playing around with bacula and discovered a few
things, made progress, and run into new obstacles...

Tracy R Reed wrote:
> I am running Bacula from cvs as of yesterday on a dual-core AMD64 box
> running FC4. And I am having a lot of problems. Sometimes when I try to
> label a disk I get a strange error:
> 
> *label
> Using default Catalog name=MyCatalog DB=bacula
> Automatically selected Storage: DVD
> Enter new Volume name: 21
> Automatically selected Pool: Default
> Connecting to Storage daemon DVD at home:9103 ...
> Sending label command for Volume "21" Slot 0 ...
> 3000 OK label. Volume=21 Device="DVDWriter" (/dev/hda)
> Catalog record for Volume "21", Slot 0  successfully created.
> Requesting to mount DVDWriter ...
> 3907 Device "DVDWriter" (/dev/hda) cannot be mounted. ERR=mount: wrong
> fs type, bad option, bad superblock on /dev/hda,
>        missing codepage or other error
>        In some cases useful info is found in syslog - try
>        dmesg | tail  or so
> 
> 
> Do not forget to mount the drive!!!

I think this error occurs because Bacula is trying to mount the disc at
the OS level, but the disc is blank and has no filesystem on it yet. I'm
not sure why it attempts the mount at this point. It doesn't help that
"mount" in Bacula (usually meaning a tape) and "mount" to the OS
(meaning a filesystem) are two different things, which adds to the
confusion.
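A quick sanity check, assuming the blkid tool from util-linux is available, is to ask the OS whether the disc actually carries a filesystem before Bacula attempts its OS-level mount (the /dev/hda path is from my setup):

```shell
# Check whether the disc holds a recognizable filesystem. A blank DVD
# makes blkid exit non-zero, which matches the failed mount in the log.
# /dev/hda is my DVD writer; adjust as needed.
if blkid /dev/hda >/dev/null 2>&1; then
    echo "disc carries a filesystem - an OS-level mount should work"
else
    echo "no filesystem found - mount will fail just like in the log"
fi
```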

> I just labelled a second volume and it did not give me that error this
> time. Not sure what the difference would be. Maybe the first media was
> bad or something.

I suspect that there must have been a real fs on this one.

> I notice when trying to purge a volume it says:
> 
> Enter MediaId or Volume name:
>
> But it seems to only accept MediaID. I name my volumes by numbers which
> can cause confusion with this prompt also. I had a volume named 2 and a
> MediaID 2 but they weren't the same physical media.

This still confuses me.

> Quite often in bconsole that if I ctrl-z it the command line becomes
> very confused. And I often find myself stuck at a prompt with no way out
> except to ctrl-z and kill bconsole if I do not wish to proceed. An
> example of this is in the label command.

I've been told that ctrl-z is not supported in bconsole. That's odd,
since nearly every Unix application supports it. I also eventually
noticed that entering "." is how you abort a command; the note
mentioning this is easy to miss amid everything else printed when the
console starts up.

> I have around 40G of data on this machine to backup. I just tried the
> estimate command:
> 
> *estimate
> The defined Job resources are:
>      1: Client1
>      2: home
>      3: BackupCatalog
>      4: RestoreFiles
> Select Job resource (1-4): 2
> Connecting to Client home-fd at home:9102
> 2000 OK estimate files=611104 bytes=1,295,324,281,965
> 
> Am I reading that right? Nearly 1.3 TERABYTES? Something is quite wrong.

I had some sort of filesystem corruption that made one file enormous; I
think it may actually have been a sparse file. So this is no longer an
issue: estimate now reports a reasonable number.
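For anyone hitting a similarly inflated estimate, a rough way to hunt for the culprit is to compare each file's apparent size against the blocks it actually occupies on disk; a huge gap suggests a sparse (or corrupt) file. This sketch uses GNU find's -printf, and /home is just an example path:

```shell
# %s = apparent size in bytes, %b = allocated 512-byte blocks (GNU find).
# Report files over 1 MB whose apparent size is more than double the
# space they actually occupy -- a telltale sign of sparseness.
find /home -xdev -type f -printf '%s %b %p\n' 2>/dev/null |
awk '{ if ($1 > 1048576 && $1 > 2 * $2 * 512) printf "%12d bytes  %s\n", $1, $3 }'
```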

> Also, is there any way to see what files it is backing up in real time?

"status client" does this, though it is not very convenient.

> I just tried to do a backup again. It ran for a while, running growisofs
> judging by top, then it said this:
> 
> 27-Oct 02:58 home-sd: Remaining free space 4,689,526,784 on "DVDWriter"
> (/dev/hda)
> 27-Oct 02:58 home-sd: Recycled volume "22" on device "DVDWriter"
> (/dev/hda), all previous data lost.
> *
> *home-sd: dvd.c:376 dvd.c:375 Error while writing current part to the
> DVD: Running /usr/bin/growisofs -use-the-force-luke=notray -quiet
> -use-the-force-luke=4gms -A 'Bacula Data' -input-charset=default
> -iso-level 3 -pad -p 'dvd-handler / growisofs' -sysid 'BACULADATA' -R -Z
> /dev/hda /home/backup/22
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Error trying to open /dev/hda exclusively ... retrying in 1 second.
> Trying to open without locking.
> FATAL: /dev/hda already carries isofs!

This happened because GNOME was accessing the disc whenever I inserted
it, trying to auto-run, auto-mount, and other such fancy things. I
turned these options off in GNOME.
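For reference, on a GNOME 2-era setup like this FC4 box the settings live in gconf; something like the following turned off the automounting for me. Treat this as a sketch: the exact key names under /desktop/gnome/volume_manager vary between gnome-volume-manager versions.

```shell
# Disable gnome-volume-manager's automounting of inserted media so it
# stops grabbing /dev/hda out from under growisofs.
# Key names are from GNOME 2-era gnome-volume-manager and may differ.
gconftool-2 --type bool --set /desktop/gnome/volume_manager/automount_drives false
gconftool-2 --type bool --set /desktop/gnome/volume_manager/automount_media false
```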

I was also having problems labelling discs. If I deleted a volume from
the database, blanked the disc with growisofs, and then tried to label
it, Bacula would complain that a volume with that label already existed.
Through much trial and error I found that if a file with the same name
as the volume still exists in the backup spool directory, Bacula
considers the volume already labelled. So after deleting a volume from
the database I have to go manually delete the file from the spool
directory. Perhaps this is not something that would happen much in
normal operation, but I have had many failed backup attempts and much
cruft left lying around as I work through all of these problems trying
to get Bacula working.
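The manual step can be wrapped in a tiny helper. This is hypothetical and the paths are from my setup (/home/backup is the DVD spool directory); the catalog deletion itself is done separately in bconsole, e.g. with "delete volume=22":

```shell
# Hypothetical helper: after deleting a volume from the catalog in
# bconsole, remove the stale spool file so that a fresh "label" with
# the same volume name succeeds.
cleanup_volume() {
    vol="$1"
    spool="${2:-/home/backup}"   # my DVD spool directory
    rm -f "$spool/$vol"
}

cleanup_volume 22 /home/backup
```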

I have another backup running now. It is progressing at 353 kB/s, which
is pretty slow. Although by now I suspect I am simply thrashing the disk
badly, since I have just one 250 GB SATA disk that I am both backing up
and spooling to.
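One rough way to confirm the thrashing, without installing anything, is to sample /proc/diskstats twice and look at how many reads and writes each device completes per second (Linux-specific; iostat from the sysstat package gives nicer output if it is installed):

```shell
# Fields in /proc/diskstats: $3 = device name, $4 = reads completed,
# $8 = writes completed. Sample twice and print the per-second deltas;
# heavy simultaneous reads AND writes on one disk suggests thrashing.
interval=2
awk '{ print $3, $4, $8 }' /proc/diskstats > /tmp/ds.before
sleep "$interval"
awk '{ print $3, $4, $8 }' /proc/diskstats > /tmp/ds.after
paste /tmp/ds.before /tmp/ds.after |
awk -v t="$interval" '{ printf "%-12s %8.1f reads/s %8.1f writes/s\n", $1, ($5-$2)/t, ($6-$3)/t }'
```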

-- 
Tracy R Reed
http://copilotconsulting.com


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
