Hi all,

Here was the situation this evening:
I'm backing up to a 3-drive LTO2 autochanger. Six jobs were
running, filling the maximum per drive (each drive is set to 2
concurrent jobs). All were full backups using a "Fulls" pool.
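
For context, the relevant director-side resources in a setup like this look roughly as follows. This is a minimal sketch, not my actual config; the resource names are illustrative, and I'm assuming the usual Pool and Job directives here:

```
# Two pools, one per backup level
Pool {
  Name = Fulls
  Pool Type = Backup
}
Pool {
  Name = Incrementals
  Pool Type = Backup
}

# Job picks its pool by level
Job {
  Name = NMRL1
  Type = Backup
  Level = Incremental
  Pool = Incrementals        # used for incrementals
  Full Backup Pool = Fulls   # used when the job runs at level Full
  # Client, FileSet, Storage, Schedule omitted
}
```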

Then one of the full backups completed, opening the door for an
incremental that had been waiting about 24 hours to run. When it
finally got its chance, it threw a fatal error because the mounted
volume is in the "Fulls" pool, while the job wants a volume from the
"Incrementals" pool.

What I don't understand is 1) why the incremental doesn't wait for
the other job to finish, rather than dying, and 2) why the job was
started at all if the director knew the wrong tape was mounted for it.

Is this a bug? Expected behavior?

Full job, just completing:
09-Aug 20:59 sbgrid-sd: New volume "000025L2" mounted on device  
"LTO2B" (/dev/nst1) at 09-Aug-2006 20:59.
10-Aug 00:19 sbgrid-dir: Bacula 1.38.11 (28Jun06): 10-Aug-2006 00:19:45
...

Incremental job, just starting:
10-Aug 00:19 sbgrid-dir: Start Backup JobId 308,  
Job=NMRL1.2006-08-09_06.00.00
10-Aug 00:14 nmrl1-fd: DIR and FD clocks differ by -350 seconds, FD  
automatically adjusting.
10-Aug 00:19 sbgrid-sd: NMRL1.2006-08-09_06.00.00 Fatal error:  
acquire.c:263 Wanted Volume "000001L2", but device "LTO2B" (/dev/ 
nst1) is busy writing on "000025L2" .

Thanks for any ideas, and let me know if you need more logs, config  
details, etc.

Ian

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
