Hi Jesper,

12.02.2008 09:56, Jesper Krogh wrote:
> Hi.
> 
> I'm running 2 concurrent jobs, and that works without any problems (after
> getting the 2 LTO3 drives on their own controller).
> 
> Is it "by design" or a misconfiguration that the two concurrent jobs hit
> the spooldata size at the same time?

That's by design: the spool size limit is per storage device. You can 
limit the spool size per job, too.

> It would be nice if the Spool-limit
> was per job or something.

It's in the manual :-)

At 
http://www.bacula.org/manuals/en/install/install/Storage_Daemon_Configuratio.html#DeviceResource
 
there is "Maximum Job Spool Size = bytes".
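For illustration only (the device name, paths, and sizes below are made
up; adjust them to your own setup), the Device resource in bacula-sd.conf
could look roughly like this:

    Device {
      Name = LTO3-0                    # example drive name
      Spool Directory = /var/spool/bacula
      Maximum Spool Size = 200G        # shared limit for the device
      Maximum Job Spool Size = 50G     # per-job limit; each job despools
                                       # once it has spooled this much
      ...
    }

With a per-job limit, one large job can no longer force all concurrent
jobs on the device to despool at the same moment.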

> Now, even I've got spooling, I'll actually put
> the 2 jobs to the same tape at the same time (this is a situation where
> the jobs go to the same pool)
> 
> 12-Feb 09:49 bacula-sd JobId 13174: User specified spool size reached.
> 12-Feb 09:49 bacula-sd JobId 13181: User specified spool size reached.
> 12-Feb 09:49 bacula-sd JobId 13181: Writing spooled data to Volume.
> Despooling 18,198,928,610 bytes ...
> 12-Feb 09:49 bacula-sd JobId 13174: Writing spooled data to Volume.
> Despooling 281,801,145,301 bytes ...

Actually, the data is written for one job first, then for the other. If 
you look at the 'status sd' output, you should find one job in state 
"despooling" while the other is in "despool_wait". At least that's what 
I see all the time.

So, in short - nothing to worry about.

Arno

> Thanks.
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
