Hello, B. Smith!
  On that day, the discussion went...

> I have a ZFS pool as a dedicated Bacula spool.

To be clear: is the ZFS pool used only for the Bacula spool? Or is 'spool'
meant loosely, eg it contains the data 'spooled' from other servers that
then has to be written to LTO?

I'm also fighting with this, because ZFS is a tricky beast and suffers from
'write amplification'...

Starting from:
        
https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/

I've done:

1) (I cannot do this here, so it is untested) destroy the pool and rebuild
 it with a higher ashift (sector size); a sketch follows.
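 A minimal sketch, assuming a pool named rpool-backup built from two
 mirrored disks (the device names are hypothetical; adjust the vdev layout
 to your setup). Note that ashift cannot be changed on an existing pool:
  zpool destroy rpool-backup
  # ashift=12 means 4 KiB sectors, the usual choice for modern disks
  zpool create -o ashift=12 rpool-backup mirror /dev/sda /dev/sdb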

2) add an SSD cache disk to the pool (L2ARC); this is probably needed only
 if your pool is also the data repository. For example:
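 A quick sketch, assuming the SSD shows up as /dev/sdc (a hypothetical
 device name):
  zpool add rpool-backup cache /dev/sdc
  zpool status rpool-backup    # the SSD should now appear under 'cache'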

3) create and mount a dedicated ZFS filesystem for the Bacula spool, with
 compression disabled and a larger recordsize:
  mkdir -p /rpool-backup/bacula/spool
  zfs create -o mountpoint=/rpool-backup/bacula/spool -o compression=off \
    -o recordsize=1M rpool-backup/bacula-spool
  chown -R bacula:tape /rpool-backup/bacula
  chmod 770 /rpool-backup/bacula /rpool-backup/bacula/spool
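 To double-check that the properties took effect:
  zfs get compression,recordsize rpool-backup/bacula-spool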

4) even if your data is local, enable Bacula data spooling; see the
 configuration sketch below.
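 A minimal sketch of the relevant directives (resource names are
 hypothetical, and other mandatory directives are omitted):
  # bacula-sd.conf
  Device {
    Name = "lto-drive"                 # hypothetical drive name
    Spool Directory = /rpool-backup/bacula/spool
    Maximum Spool Size = 200G          # keep it below the pool size
    # Archive Device, Media Type, etc. omitted
  }
  # bacula-dir.conf
  Job {
    Name = "backup-data"               # hypothetical job name
    Spool Data = yes                   # spool to disk, then write to tape
    # Type, Client, FileSet, Storage, Pool, etc. omitted
  }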

5) split your job into several smaller ones, so different jobs can spool
 concurrently and tape writing is 'interleaved' (eg: while one job is
 writing to the tape, the others are building their spool); a sketch
 follows.
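 A rough sketch, one job per top-level directory (all names and paths are
 hypothetical; running jobs in parallel also requires raising Maximum
 Concurrent Jobs in the Director and Storage resources):
  # bacula-dir.conf
  FileSet {
    Name = "fs-home"
    Include { Options { signature = MD5 } File = /data/home }
  }
  FileSet {
    Name = "fs-mail"
    Include { Options { signature = MD5 } File = /data/mail }
  }
  Job {
    Name = "backup-home"
    FileSet = "fs-home"
    Spool Data = yes
    # Type, Client, Storage, Pool, etc. omitted
  }
  Job {
    Name = "backup-mail"
    FileSet = "fs-mail"
    Spool Data = yes
    # Type, Client, Storage, Pool, etc. omitted
  }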

6) (still to be done) snapshot the data repository, mount the snapshot
 read-only, and do the backup from that; a sketch follows.
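 A minimal sketch, assuming the data lives in rpool-backup/data (a
 hypothetical dataset) mounted at /rpool-backup/data:
  zfs snapshot rpool-backup/data@bacula
  # snapshots are exposed read-only under the hidden .zfs directory,
  # so the job's FileSet can point at:
  #   /rpool-backup/data/.zfs/snapshot/bacula
  zfs destroy rpool-backup/data@bacula    # clean up after the backup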


I hope this was clear and useful...
