Hello! On that day, Josh Fisher via Bacula-users wrote:
> Another way that might be better for your case is to leave
> MaximumSpoolSize = 0 (unlimited) and specify a MaximumJobSpoolSize in
> the Device resource instead. The difference is that when one job reaches
> the MaximumJobSpoolSize [...]
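For orientation, a minimal sketch of how these two directives could sit in the Device resource of bacula-sd.conf; the device name, path and sizes below are illustrative, not taken from this thread:

    Device {
      Name = LTO9-Drive                 # illustrative name
      Media Type = LTO-9
      Archive Device = /dev/nst0
      Spool Directory = /var/spool/bacula
      Maximum Spool Size = 0            # 0 = unlimited, bounded only by the spool filesystem
      Maximum Job Spool Size = 300 GB   # per-job cap; only the job that hits it despools
    }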
On 6/24/24 11:04, Marco Gaiarin wrote:
Hello! On that day, Josh Fisher via Bacula-users wrote:
Except when the MaximumSpoolSize for the Device resource is reached or
the Spool Directory becomes full. When there is no more storage space
for data spool files, all jobs writing to that device are paused and the [...]
On 6/20/24 18:58, Bill Arlofski via Bacula-users wrote:
On 6/20/24 8:58 AM, Marco Gaiarin wrote:
Once that is hit, the spool files are written to tape, during which active
jobs have to wait because the spool is full.
There's no way to get around this behaviour, right?! A single SD process
cannot [...]
On 6/20/24 8:58 AM, Marco Gaiarin wrote:
But now a question: does this mean that data gets interleaved in the spool too?
How is it interleaved? File by file? Block by block? What block size?
No. When you have jobs running, take a look into the SpoolDirectory. You will
see a 'data' *.spool file and [...]
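A quick way to watch this while jobs are running; the path is only an example and must match whatever Spool Directory points at in your Device resource:

    # list the per-job data spool files and watch them grow while jobs spool,
    # then shrink/disappear as each job despools to tape
    watch -n 30 'ls -lh /var/spool/bacula/*.spool'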
Hello! On that day, Gary R. Schmidt wrote:
>> jobs involved (in the same pool, I think) start to write their spool down to tape.
> MaximumSpoolSize is the total space used in the spool area, by all jobs.
After posting, I've looked more carefully at the log and understood that.
Sorry for the misunderstanding [...]
On 17/06/2024 17:45, Marco Gaiarin wrote:
[SNIP]
>> So, literally, if one of the jobs fills the 'MaximumSpoolSize' buffer, *ALL*
>> jobs involved (in the same pool, I think) start to write their spool down to tape.
MaximumSpoolSize is the total space used in the spool area, by all jobs.
Once that is hit, [...]
Hello! On that day, Bill Arlofski via Bacula-users wrote:
> With DataSpooling enabled in all jobs, the only "interleaving" that you will
> have on your tapes is one big block of Job 1's de-spooled data, then maybe
> another Job 1 block, or a Job 2 block, or a Job 3 block, and so on, [...]
Hello! On that day, Bill Arlofski via Bacula-users wrote:
> Hope this helps!
Thanks to all for the hints and the explanations; Bacula is really quite a
beast... there's always room for improvement! ;-)
On 13/06/2024 20:12, Stefan G. Weichinger wrote:
interested as well, I need to speed up my weekly/monthly FULL runs
(with LTO6, though: way slower anyway).
Shouldn't the file daemon do multiple jobs in parallel?
To tape you can only write ONE stream of data.
To the spooling disk there could be more than one stream.
[SNIP]
Yes, that seems wrong: [...]
On 6/11/24 10:45 AM, Marco Gaiarin wrote:
Sorry, I really don't understand and I need feedback...
I've read many times that tapes are best handled as what they are, sequential
media; so on the Storage they need:
Maximum Concurrent Jobs = 1
Hello Marco,
If you are using DataSpooling for all [...]
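The gist of the replies in this thread is that, with data spooling enabled, several jobs can run against the tape storage at once and the tape still receives large contiguous chunks. A minimal sketch of the directives involved, with illustrative resource names and values (the storage daemon's own Storage resource also has a Maximum Concurrent Jobs limit that must allow the same parallelism):

    # bacula-dir.conf (illustrative values, other required directives omitted)
    Storage {
      Name = LTO9-Library
      Address = sd.example.org
      Password = "secret"
      Device = LTO9-Drive
      Media Type = LTO-9
      Maximum Concurrent Jobs = 4     # let several jobs spool at the same time
    }

    JobDefs {
      Name = TapeDefaults
      Spool Data = yes                # spool to disk first, despool to tape in big chunks
      Spool Attributes = yes
    }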
>> Not for a single job. When the storage daemon is writing a job's spooled
>> data to tape, the client must wait. However, if multiple jobs are
>> running in parallel, then the other jobs will continue to spool their
>> data while one job is despooling to tape.
>
> I come back on this. I've [...]
Hello! On that day, Gary R. Schmidt wrote:
> And a sensible amount of RAM - millions of files on ZFS should not be a
> problem - unless you're doing it on a system with 32G of RAM or the like.
root@bpbkplom:~# free -h
              total        used        free      shared  buff/cache  [...]
Hello! On that day, Heitor Faria wrote:
> Is the ZFS local?
Yep.
> Does it have ZFS compression or dedup enabled?
Damn. Dedup no, but compression IS enabled... right! I had never thought about
that... I've created a different mountpoint with compression disabled; I'll
provide feedback.
Thanks [...]
> Damn. Dedup no, but compression IS enabled... right! I had never thought about
> that... I've created a different mountpoint with compression disabled; I'll
> provide feedback.
OK, as expected, disabling ZFS compression provides some performance
improvement, but only a small one, nothing dramatic.
Still [...]
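For anyone repeating the experiment, a dedicated spool dataset without compression can be set up roughly like this; the pool and dataset names are made up, and note that zfs set compression=off on an existing dataset only affects newly written data:

    # inspect what the current spool dataset is doing (names are examples)
    zfs get compression,dedup tank/bacula-spool

    # or create a separate, uncompressed dataset just for the spool area
    zfs create -o compression=off -o mountpoint=/var/spool/bacula tank/bacula-spool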
Hello Marco,
> Anyway, I've hit another problem. It seems that creating the spool file takes
> an insane amount of time: the sources to back up are complex directories, with
> millions of files. The filesystem is ZFS.
Is the ZFS local? Does it have ZFS compression or dedup enabled? I wouldn't use
those options for data [...]
I'm still fiddling with LTO9 and backup performance; I've finally managed to
test a shiny new server with an LTO9 tape (a library, actually, but...) and
with 'btape test' I can reach 300 MB/s, which is pretty cool, even if the IBM
specification says the tape could do 400 MB/s.
Also, following suggestions [...]
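For reference, the raw throughput figures quoted here come from btape, which runs against the storage daemon configuration; the config path and device below are only examples, and the drive must not be in use by the SD while btape runs:

    # point btape at the SD configuration and the tape drive
    btape -c /opt/bacula/etc/bacula-sd.conf /dev/nst0

    # then, at the btape prompt:
    #   test   - the general tape test referred to above
    #   speed  - raw write-speed test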