On Tuesday 18 July 2006 20:04, Alan Brown wrote:
> On Tue, 18 Jul 2006, Kern Sibbald wrote:
> > The only way I can correct it (if there is really a problem) is to write
> > a regression script that demonstrates the problem, preferably using the
> > virtual disk autochanger script, otherwise, I don't have enough
> > information, and I don't know how to repeat it.
>
> Does this help?
>
> I am manually running 5 jobs each from 2 clients,
> using spooling to /var/bacula/spool/drive[0|1]
> There are 2 pools involved
> Each of the backup sets is approx 1000Gb
> Each of the clients has 2-3 jobs for each pool
>
> (I appear to be I/O bound on the spool disk, but this shouldn't be
>   affecting anything other than generating some shoeshining.)
>
> Observations seem to indicate that when a job is using drive0, the storage
> director is fine, however when it's using drive1 it blocks access to
> drive0 and also blocks most status queries.

In looking over what you wrote and your conf file, I have the following 
comments:
1. It might be simpler to point everything at /var/bacula/spool, unless you 
are mounting the spooling subdirectories on separate filesystems.  Bacula 
automatically creates a unique spool filename for each job, so multiple drives 
and multiple jobs can share the same spool directory.
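For example, here is a minimal sketch that keeps your device names and spool 
sizes and changes only the Spool Directory directives so both drives share one 
directory (paths are illustrative):

   Device {
      Name = MSL6000-0
      ...
      Spool Directory = /var/bacula/spool
      Maximum Spool Size     = 200000000000 # 200 GB
      Maximum Job Spool Size = 10000000000  # 10 GB
   }
   Device {
      Name = MSL6000-1
      ...
      Spool Directory = /var/bacula/spool   # same directory is fine
      Maximum Spool Size     = 200000000000 # 200 GB
      Maximum Job Spool Size = 10000000000  # 10 GB
   }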

2. If you are running on an older FreeBSD system (before 5.0, if I remember 
correctly), then the most likely source of the problems you describe is that 
pthreads were implemented entirely in "user-land" rather than as kernel 
threads. As a consequence, the threads do not run concurrently and do not even 
"timeshare" correctly, and you will see *exactly* the symptoms you describe 
-- i.e. one thread getting control, keeping it, and "locking out" all the 
other threads, including the console. Under Linux (all versions) and under 
FreeBSD 5.0 or later, threads are truly concurrent and timeshare correctly.
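If you want a quick way to check whether your threads library preempts a 
compute-bound thread, a small stand-alone test along these lines (purely 
hypothetical, nothing to do with Bacula itself) will show the symptom:

   /* thread-test.c -- build with:  cc -o thread-test thread-test.c -pthread */
   #include <pthread.h>
   #include <stdio.h>
   #include <unistd.h>

   /* Compute-bound thread: never blocks and never yields, much like a job
    * that is busy despooling. */
   static void *spinner(void *arg)
   {
       volatile unsigned long n = 0;
       (void)arg;
       for (;;)
           n++;                       /* no system calls at all */
       return NULL;
   }

   /* "Console" thread: tries to report once a second, like a status query. */
   static void *reporter(void *arg)
   {
       int i;
       (void)arg;
       for (i = 0; i < 10; i++) {
           printf("tick %d\n", i);
           fflush(stdout);
           sleep(1);
       }
       return NULL;
   }

   int main(void)
   {
       pthread_t spin, rep;

       pthread_create(&spin, NULL, spinner, NULL);
       pthread_create(&rep,  NULL, reporter, NULL);

       /* With kernel threads the reporter prints all ten ticks on schedule.
        * With a non-preemptive user-land library the spinner can keep the
        * CPU and starve the reporter -- the "locking out" symptom above.  */
       pthread_join(rep, NULL);
       return 0;
   }

With kernel threads you should see all ten ticks arrive roughly once per 
second even though the spinner never blocks; if the ticks stall, the threads 
library is not timesharing correctly.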

I now have an update that will add additional information to the status 
output. I'll email it to you and Sebastian separately.  

>
> Configuration is as follows:
>
> Storage {                             # definition of myself
>    Name = Storage-sd
>    SDPort = 9103                  # Director's port
>    WorkingDirectory = "/var/bacula/working"
>    Pid Directory = "/var/run"
>    Maximum Concurrent Jobs = 100
>    Heartbeat Interval = 61s
> }
>
> Director {
>    Name = Director-dir
>    Password = "Gibberish"
> }
>
> Device {
>    Name = FileStorage
>    Media Type = File
>    Archive Device = /var/bacula/Filestorage
>    LabelMedia = yes;                   # lets Bacula label unlabeled media
>    Random Access = Yes;
>    AutomaticMount = yes;               # when device opened, read it
>    RemovableMedia = no;
>    AlwaysOpen = no;
> }
>
> Autochanger {
>    Name = MSL6000-changer
>    Device = MSL6000-0
>    Device = MSL6000-1
>    Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
>    Changer Device = /dev/sg16
> }
>
> Device {
>    Name = MSL6000-0                      #
>    Drive Index = 0
>    Media Type = LTO-2
>    AutoChanger = yes;
>    Changer Device = /dev/sg16
>    Archive Device = /dev/nst0
>    AutomaticMount = yes;               # when device opened, read it
>    AlwaysOpen = yes;
>    LabelMedia = yes;                   # lets Bacula label unlabeled media
>    RemovableMedia = yes;
>    RandomAccess = no;
>    Volume Poll Interval = 7200
>    # Enable the Alert command only if you have the mtx package loaded
>    Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
>    Spool Directory = /var/bacula/spool/MSL6000-0
>    Maximum Spool Size     = 200000000000 # 200Gb
>    Maximum Job Spool Size = 10000000000 # 10Gb
> }
> Device {
>    Name = MSL6000-1                      #
>    Drive Index = 1
>    Device Type = Tape
>    Media Type = LTO-2
>    AutoChanger = yes;
>    Changer Device = /dev/sg16
>    Archive Device = /dev/nst1
>    AutomaticMount = yes;               # when device opened, read it
>    AlwaysOpen = yes;
>    LabelMedia = yes;                   # lets Bacula label unlabeled media
>    RemovableMedia = yes;
>    RandomAccess = no;
>    Volume Poll Interval = 7200
>    # Enable the Alert command only if you have the mtx package loaded
>    Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
>    Spool Directory = /var/bacula/spool/MSL6000-1
>    Maximum Spool Size     = 200000000000 # 200Gb
>    Maximum Job Spool Size = 10000000000 # 10Gb
> }
>
> # A DVD device
> #
> Device {
>    Name = "DVD-Writer"
>    Media Type = DVD
>    Archive Device = /dev/hda
>    LabelMedia = yes;                   # lets Bacula label unlabeled media
>    Random Access = Yes;
>    AutomaticMount = yes;               # when device opened, read it
>    RemovableMedia = yes;
>    AlwaysOpen = no;
>    MaximumPartSize = 800M;
>    RequiresMount = yes;
>    MountPoint = /media/cdrecorder
>    MountCommand = "/bin/mount -t iso9660 -o ro %a %m";
>    UnmountCommand = "/bin/umount %m";
>    Spool Directory = /var/bacula/spool/DVD
>    Maximum Spool Size     = 200000000000 # 200Gb
>    Maximum Job Spool Size = 10000000000 # 10Gb
>    WritePartCommand = "/etc/bacula/dvd-handler %a write %e %v"
>    FreeSpaceCommand = "/etc/bacula/dvd-handler %a free"
> }
>
> Messages {
>    Name = Standard
>    director = Director-dir = all
> }
>
> >>> On 18.07.2006, at 16:04, Alan Brown wrote:
> >>>> I'm not sure about this yet....
> >>>>
> >>>> It appears that when running spooling and concurrent jobs on an
> >>>> autochanger with multiple tape drives, that the tape drives are being
> >>>> locked on a per-changer basis and not on a per-drive one.
> >>>>
> >>>>
> >>>> IE: Full spool files are only being flushed to one drive at a time,
> >>>> even when there are spool files ready for both tape drives.
> >>>>
> >>>>
> >>>> Can anyone else confirm?
> >>>>
> >>>> Kern?
> >>>>
> >>>> AB
> >>>>
> >>>>
