It’s a single tape drive, and jobs used to fail until I limited the drive to
one concurrent job.

Device {
  Name = T-LTO4
  Autochanger = no
  Drive Index = 0
  Media Type = LTO4
  Archive Device = /dev/nst0
  Device Type = Tape
  Maximum File Size = 20000000000        # 20 GB
  Spool Directory = /mnt/spool/Q-LTO4
  Maximum Job Spool Size = 80000000000
  Maximum Spool Size = 160000000000
  Drive Crypto Enabled = Yes
  Query Crypto Status = yes
  Maximum Concurrent Jobs = 1
}

This was set up long ago, so maybe things have changed, but having multiple
jobs share the same storage device via spooling is something that always
caught my attention for low-bandwidth, long-running jobs that just trickle.
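For anyone else hitting this, my reading of the docs is that concurrent
spooling to one drive needs both a device-side concurrency limit above 1 and
spooling enabled on the jobs. A sketch of what I mean (untested here; names
and sizes are placeholders, and despooling to the drive is still serialized,
one job at a time):

Device {
  Name = T-LTO4
  Archive Device = /dev/nst0
  Media Type = LTO4
  Device Type = Tape
  Maximum Concurrent Jobs = 2            # let a second job reserve the drive and spool
  Spool Directory = /mnt/spool/Q-LTO4
  Maximum Spool Size = 160000000000      # shared ceiling for all spool files on this device
  Maximum Job Spool Size = 80000000000   # per-job cap, so two jobs fit under the shared ceiling
}

# Director side, per job (placeholder name; other required Job directives omitted):
Job {
  Name = roadwarrior-incr
  SpoolData = yes                        # FD data lands in the spool file before tape
}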




Brock Palen
[email protected]
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting



> On Jan 6, 2025, at 3:43 AM, Sebastian Sura <[email protected]> wrote:
> 
> Hi Brock,
> 
> Just In Time Reservation only delays the (first) reservation until the first 
> byte of data is sent from the client to the storage daemon.
> Afterwards everything proceeds as normal.
> 
> Regarding your despooling issues: What exactly does the log say for the job 
> that did not start ? Do you have `MaximumConcurrentJobs (SD->Device)` set ?
> It's hard to say what went wrong without more information.
> 
> Kind Regards and a happy new year!
> 
> Sebastian Sura
> 
> Am 03.01.25 um 22:07 schrieb Brock Palen:
>> I’m trying to understand Just-In-Time device scheduling and how it 
>> interacts with spooling.
>> 
>> I have always used spooling but the document says:
>> https://docs.bareos.org/TasksAndConcepts/DataSpooling.html
>> 
>> "This means that you can spool multiple simultaneous jobs to disk"
>> 
>> This never worked for me: if I let more than one job run against a pool 
>> with a single tape drive, even with spooling, the second job would report 
>> that no device was available and not start.
>> 
>> Does Just-In-Time address this?  Given spools are attached to devices, it’s 
>> not clear how it would know which spool file to use or how to limit its size.
>> 
>> A few of my clients are road warriors and their home upload is very 
>> slow; a 3 GB incremental might take 5 hours. This ties devices up for a 
>> long time doing nothing, often causing Consolidation and Copy jobs to 
>> wait.
>> 
>> How do Just-In-Time and Spool work together to address this?  In a 
>> perfect world the FD writes to spool files, in parallel up to the allowed 
>> concurrent jobs, but despooling is serialized so only one job at a time 
>> writes to the tape drive, with despools interleaved.  That is, for a job 
>> that needs multiple despool passes, while its second spool is filling, the 
>> device is freed to despool something else rather than being tied up the 
>> whole time.
>> 
>> Everything I found about Just-In-Time says it keeps tape drives busy, but it 
>> says ‘until ready to write’, and almost all our jobs start writing something 
>> quickly; they just might be slow because of the network.
>> 
>> Thanks.  I guess there just isn’t much in the documentation that fully 
>> explains the expected behavior.
>> 
>> Brock Palen
>> [email protected]
>> www.mlds-networks.com
>> Websites, Linux, Hosting, Joomla, Consulting
>> 
>> 
>> 
> -- 
> Sebastian Sura                  [email protected]
> Bareos GmbH & Co. KG            Phone: +49 221 630693-0
> https://www.bareos.com
> Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
> Komplementär: Bareos Verwaltungs-GmbH
> Geschäftsführer: Stephan Dühr, Jörg Steffens, Philipp Storz
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "bareos-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion visit 
> https://groups.google.com/d/msgid/bareos-users/b5fe6aa6-3647-4776-a1b2-0817b1320a28%40bareos.com.

