> I suspect that the 'job counter' gets reset if and only if all jobs in a
> volume get purged; this leads me to think that my configuration simply does
> not work in a real situation, because sooner or later jobs get 'scattered'
> between volumes and the virtual consolidation job stops working, so job
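(As an aside, a hedged sketch: how jobs are actually spread across volumes can be
checked from bconsole; the pool name and jobid below are only placeholders.)

  *list volumes pool=Consolidate-Pool
  *list jobmedia jobid=1234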
Hello, Josh Fisher via Bacula-users!
On that day, you wrote...
> Not for a single job. When the storage daemon is writing a job's spooled
> data to tape, the client must wait. However, if multiple jobs are
> running in parallel, then the other jobs will continue to spool their
> data while on
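(For context, a hedged sketch of the spooling directives involved, with placeholder
names, paths and sizes; data spooling is enabled per Job and bounded in the SD
Device resource:)

  # bacula-dir.conf, Job resource (other required directives omitted)
  Job {
    Name = "backup-placeholder"
    Spool Data = yes
    Maximum Concurrent Jobs = 4
  }

  # bacula-sd.conf, Device resource (other required directives omitted)
  Device {
    Name = "LTO-Drive"
    Spool Directory = /var/spool/bacula
    Maximum Spool Size = 200GB
    Maximum Job Spool Size = 50GB
  }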
Hello, Gary R. Schmidt!
On that day, you wrote...
> And a sensible amount of RAM - millions of files on ZFS should not be a
> problem - unless you're doing it on a system with 32G of RAM or the like.
root@bpbkplom:~# free -h
              total        used        free      shared  buff/cache
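(Note that on Linux the OpenZFS ARC is not counted under buff/cache but under used
memory, so it is worth checking it separately; a hedged example, field names as in
current OpenZFS:)

  root@bpbkplom:~# grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats
  # 'size' = current ARC size in bytes, 'c_max' = configured ARC maximum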
Hello, Heitor Faria!
On that day, you wrote...
> Is the ZFS local?
Yep.
> Does it have ZFS compression or dedup enabled?
Damn. Dedup no, but compression IS enabled... right! I never thought about
that... I've created a different mountpoint with compression disabled, I'll
provide feedback.
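(For reference, a hedged sketch with placeholder pool/dataset names; the properties
can be checked and an uncompressed dataset created like this. Toggling compression
on an existing dataset only affects newly written blocks, so a fresh dataset gives
a cleaner test.)

  root@bpbkplom:~# zfs get compression,dedup tank/bacula
  root@bpbkplom:~# zfs create -o compression=off tank/bacula-nocomp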
Thanks.
Hello, Heitor Faria!
On that day, you wrote...
> You can use the btape speed test to verify the best values.
I've just done some btape tests with block size and file size, using block
sizes of 512K, 1M and 2M, but found that there's little or no difference above
1M, so I can confirm the 'paper' y
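(A hedged sketch of such a test; device name, config path and sizes are
placeholders. btape's 'speed' command writes test files through the SD
configuration, and the chosen block/file sizes are then set in the Device
resource:)

  root@bpbkplom:~# btape -c /opt/bacula/etc/bacula-sd.conf LTO-Drive
  *speed file_size=3 nb_file=3

  # bacula-sd.conf, Device resource
  Maximum Block Size = 1048576
  Maximum File Size = 16GB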
> Damn. Dedup no, but compression IS enabled... right! I never thought about
> that... I've created a different mountpoint with compression disabled, I'll
> provide feedback.
OK, as expected, disabling ZFS compression provides some performance
improvement, but a small one, nothing dramatic.
Still