Hello,

for backing up a server behind a firewall I set up an sshfs-based
connection. During the backup sshfs crashes, but Bacula then terminates
without reporting an error. As a result a storage file exists that must
not be used for a restore.

How can Bacula be forced to return with an error?
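For illustration, the kind of check I could imagine wiring into the job
is a simple mount liveness test (a minimal sketch only; the function
name is made up, and "/" is merely the default so the sketch runs
standalone -- in the job it would be the sshfs mountpoint):

```shell
#!/bin/sh
# Sketch: exit non-zero when a mount no longer answers, so the
# backup job can be told it must not be trusted.

check_mount() {
    # "ls" has to traverse the mount; on a dead sshfs endpoint it
    # fails with "Transport endpoint is not connected".
    ls "$1" >/dev/null 2>&1
}

# Default "/" only so this sketch is runnable as-is; the real
# argument would be the sshfs mountpoint, e.g. /tmp/bacula/vtarzan1.
MOUNTPOINT="${1:-/}"
if ! check_mount "$MOUNTPOINT"; then
    echo "ERROR: $MOUNTPOINT not reachable" >&2
    exit 1
fi
echo "mount $MOUNTPOINT is alive"
```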

Now let's look at the details:
The Bacula storage daemon, management node (director) and FD client are
all running version 2.2.8. Both the client and the Bacula server run
under Debian etch.

The job log says:
03-Mär 03:12 rabbit-dir JobId 37833: BeforeJob: run command 
"/bacula/bin/check_diskusage.ksh /mnt/backup/b1/tarzan"
03-Mär 03:12 rabbit-dir JobId 37833: BeforeJob: Check directory 
/mnt/backup/b1/tarzan
03-Mär 03:12 rabbit-dir JobId 37833: BeforeJob: GOOD EXIT
03-Mär 03:12 rabbit-dir JobId 37833: BeforeJob: run command 
"/bacula/bin/sshfs-tunnel.ksh start /mnt/backup/b1/tarzan vtarzan1 22 
rabbit.wtz"

<<<<<< the sshfs mount is set up via a script running as user root
03-Mär 03:12 rabbit-dir JobId 37833: BeforeJob: GOOD EXIT
03-Mär 03:12 rabbit-dir JobId 37833: BeforeJob: Exec '/usr/bin/sshfs 
[EMAIL PROTECTED]:/ /tmp/bacula/vtarzan1 -p 22 -o ro -o ServerAliveInterval=15'
03-Mär 03:12 rabbit-dir JobId 37833: Start Backup JobId 37833, 
Job=Backup_tarzan1.2008-03-03_03.00.55
03-Mär 03:12 rabbit-dir JobId 37833: Recycled volume "tarzan-inc-pool-0170"
03-Mär 03:12 rabbit-dir JobId 37833: Using Device "tarzanFileStorage"

<<<<<<< on the client side an LVM snapshot is taken
03-Mär 03:12 rabbit JobId 37833: ClientRunBeforeJob: run command "ssh 
vtarzan1 /root/bin/bacula-backup.sh || umount /tmp/bacula/vtarzan1"
03-Mär 03:18 rabbit JobId 37833: ClientRunBeforeJob:
03-Mär 03:18 rabbit JobId 37833: ClientRunBeforeJob: SNAP Shot backup
03-Mär 03:18 rabbit JobId 37833: ClientRunBeforeJob: ----------------

<<<<< i.e. the directories are saved ("Sichere Verzeichnis X nach Y" =
<<<<< "saving directory X to Y"; "Loesche" = "deleting")
03-Mär 03:18 rabbit JobId 37833: ClientRunBeforeJob: Sichere Verzeichnis 
/backup/homebackup nach /backup/back1/backup-home.20080303.tar.gz
03-Mär 03:40 rabbit JobId 37833: ClientRunBeforeJob: Sichere Verzeichnis 
/backup/data3backup nach /backup/back1/backup-data3.20080303.tar.gz
03-Mär 03:48 rabbit JobId 37833: ClientRunBeforeJob: Loesche 
/backup/back1/backup-data3.20080302.tar.gz
03-Mär 03:48 rabbit JobId 37833: ClientRunBeforeJob: Sichere Verzeichnis 
/backup/data4backup nach /backup/back1/backup-data4.20080303.tar.gz
03-Mär 05:51 rabbit JobId 37833: ClientRunBeforeJob: Loesche 
/backup/back1/backup-data4.20080302.tar.gz
03-Mär 05:51 rabbit JobId 37833: ClientRunBeforeJob: Sichere Verzeichnis 
/backup/data5backup nach /backup/back1/backup-data5.20080303.tar.gz
03-Mär 05:53 rabbit JobId 37833: ClientRunBeforeJob: Loesche 
/backup/back1/backup-data5.20080302.tar.gz
03-Mär 05:53 rabbit JobId 37833: ClientRunBeforeJob: Sichere Verzeichnis 
/backup/data6backup nach /backup/back1/backup-data6.20080303.tar.gz
03-Mär 06:02 rabbit JobId 37833: ClientRunBeforeJob: Loesche 
/backup/back1/backup-data6.20080302.tar.gz
03-Mär 06:02 rabbit-sd JobId 37833: Labeled new Volume 
"tarzan-inc-pool-0170" on device "tarzanFileStorage" 
(/mnt/backup/b1/tarzan).
03-Mär 06:02 rabbit-sd JobId 37833: Wrote label to prelabeled Volume 
"tarzan-inc-pool-0170" on device "tarzanFileStorage" (/mnt/backup/b1/tarzan)
03-Mär 06:02 rabbit-dir JobId 37833: Max Volume jobs exceeded. Marking 
Volume "tarzan-inc-pool-0170" as Used.

<<<<< somewhere in between the sshfs connection breaks (the reason is
<<<<< not important here), which means the client cannot be reached
<<<<< any more!

03-Mär 06:05 rabbit JobId 37833:      Could not stat 
/tmp/bacula/vtarzan1/usr/share/texmf-tetex/fonts/source/public/cm-bold: 
ERR=Der Socket ist nicht verbunden
...... i.e. "Der Socket ist nicht verbunden" = "the socket is not connected"
03-Mär 06:05 rabbit JobId 37833:      Could not stat 
/tmp/bacula/vtarzan1/usr/share/texmf-tetex/omega: ERR=Der Socket ist 
nicht verbunden
03-Mär 06:05 rabbit JobId 37833:      Could not stat
03-Mär 06:05 rabbit JobId 37833:      Could not stat
/tmp/bacula/vtarzan1/vmlinuz: ERR=Der Socket ist nicht verbunden
03-Mär 06:05 rabbit-sd JobId 37833: Job write elapsed time = 00:02:45, 
Transfer rate = 44.49 K bytes/second
03-Mär 06:05 rabbit-dir JobId 37833: Bacula rabbit-dir 2.2.8 (26Jan08): 
03-Mär-2008 06:05:42
   Build OS:               i686-pc-linux-gnu debian 4.0
   JobId:                  37833
   Job:                    Backup_tarzan1.2008-03-03_03.00.55
   Backup Level:           Incremental, since=2008-03-02 06:37:46
   Client:                 "rabbit" 2.2.8 (26Jan08) 
i686-pc-linux-gnu,debian,4.0
   FileSet:                "FS tarzan1 node" 2008-02-08 13:55:55
   Pool:                   "tarzan-inc-pool" (From Job IncPool override)
   Storage:                "tarzanFile" (From Job resource)
   Scheduled time:         03-Mär-2008 03:00:00
   Start time:             03-Mär-2008 06:02:55
   End time:               03-Mär-2008 06:05:42
   Elapsed time:           2 mins 47 secs
   Priority:               20
   FD Files Written:       86
   SD Files Written:       86
   FD Bytes Written:       7,331,320 (7.331 MB)
   SD Bytes Written:       7,340,952 (7.340 MB)
   Rate:                   43.9 KB/s
   Software Compression:   83.7 %
   VSS:                    no
   Storage Encryption:     no
   Volume name(s):         tarzan-inc-pool-0170
   Volume Session Id:      143
   Volume Session Time:    1204196582
   Last Volume Bytes:      7,356,114 (7.356 MB)
   Non-fatal FD errors:    75
   SD Errors:              0
   FD termination status:  OK
   SD termination status:  OK
   Termination:            Backup OK -- with warnings

<<<<< the backup was not successful, but Bacula does not recognize
<<<<< this situation!
Can Bacula be forced to report that this backup failed?
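If I read the Bacula manual correctly, the RunScript form of the
before/after commands has a FailJobOnError directive, so a liveness
check run on the client should be able to fail the job. A sketch of
what I have in mind (the script path is made up):

```conf
  RunScript {
    RunsWhen       = Before
    RunsOnClient   = yes
    FailJobOnError = yes   # non-zero exit status fails the job
    Command        = "/bacula/bin/check_sshfs_mount.ksh /tmp/bacula/vtarzan1"
  }
```

Whether a corresponding check with RunsWhen = After can still flip the
job status when the mount dies mid-job, as happened here, is something
I would have to test.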

Thank you very much for your help (and time)

Reiner

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
