Hi,

I recently came across a problem after installing a new single tape
drive for backups. It is an HPE LTO-8 Ultrium drive connected to a
Red Hat Linux box: Linux swlx1.rdg.ac.uk 3.10.0-862.9.1.el7.x86_64

The problem occurred while performing a backup consisting of several
million files, several TB in total. The backup stopped after writing
~1.5TB, with the Director reporting that the volume was full and asking
for a new labelled volume. LTO-8 should hold at least 12TB (native), so
this was a surprise, but I thought it might be a tape problem. I
unmounted the tape and tried to load a new blank tape so I could label
and mount it and continue, but the new tape would not load: the drive
kept trying to load it without success, and I had to keep pressing the
eject button to extract it.
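
For context, the sequence I was attempting from bconsole was roughly
this (quoted from memory rather than a saved session; the storage and
volume names here are only illustrative of my setup):

  * unmount storage=LTO-8
    (physically swap in the new, blank tape)
  * label storage=LTO-8 volume=Full-0002 pool=Full
  * mount storage=LTO-8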

Thinking this might be a hardware problem, I stopped the backup, shut
down the Bacula daemons, and ran all the vendor tests, which reported
no errors. After restarting the Bacula daemons I found I could load
tapes again and restart the backup.

So my question is: if this is not a hardware or tape problem, what
prevents me from loading and labelling a new tape during an ongoing
backup job? Is there some way to pause the backup so that a new tape
can be labelled?

My storage config is:

Device {
  Name = LTO-8
  Media Type = LTO-8
  Archive Device = /dev/nst0
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = no
  Spool Directory = /opt/bacula/working2
  Maximum Spool Size = 100GB
  Maximum Job Spool Size = 50GB
}

Should AutomaticMount be set to 'no' to stop Bacula from trying to
automatically mount any new tape, even one that is not yet labelled?
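
If so, I assume (as I understand the directive) the change would just
be this one line in the Device resource above, with everything else
left unchanged:

  AutomaticMount = no;   # do not automatically mount/read a tape when the device is opened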

The issue of the volume being marked full well before the tape's
native capacity is still a mystery.
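
One thing I still intend to rule out on my side is a Pool limit. As I
understand it, a directive like the following in the Director's Pool
resource would cause Bacula to mark a volume Full long before the
physical end of tape (this is just an illustration, not my actual Pool
config):

Pool {
  Name = Full
  Pool Type = Backup
  # if set, the volume is marked Full once this many bytes have been
  # written, regardless of the tape's physical capacity
  Maximum Volume Bytes = 1500G
}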

Thanks for any help

Kevin


