Hello fellow users,

I am new to Bacula and currently having a hard time wrapping my head around all 
the concepts that Bacula offers to implement my backup strategy.

I intend to use Bacula Community Edition for my private machines; this is not 
for professional purposes and there is no customer involved. I am the admin and 
the customer at the same time.

So far I have only set up various client backup solutions (TimeMachine, 
HyperBackup, ghettoVCB), and Bacula is my first attempt to implement a 
centralized multi-tiered backup solution for a diverse set of clients.

I will try to explain what I intend to do and where I am currently stuck, 
hoping someone is willing to point me in the right direction and share their 
advice.

I will only use disk storage and S3 buckets, I do not own any tape drives.

There are up to 10 FDs (Linux and macOS). One SD runs on the same machine as 
the Director, and it is this SD that is going to write to the disks and to the 
S3 bucket(s). The remote S3 bucket will also be implemented by myself using 
MinIO.

The Linux FD running on the same host as the Dir and SD is going to have jobs 
for:
- onsite SMB shares of Synology servers in a DMZ, so that I do not need to run 
an FD in the DMZ which would have to reach into my LAN (see the sketch after 
this list)
- various non-Bacula backups from other systems, such as ESXi ghettoVCB, macOS 
TimeMachine, and onsite MinIO S3 buckets used by Synology DSM HyperBackup
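
To make the first item concrete, this is roughly what I have in mind for 
backing up a DMZ Synology via an SMB share mounted on the Dir/SD host. All 
names, paths and the schedule are placeholders I made up, and I have not tested 
any of this yet:

  FileSet {
    Name = "Synology1-SMB-FS"
    Include {
      Options {
        signature = MD5
        compression = GZIP
      }
      # SMB share of the DMZ Synology, mounted locally on the Dir/SD host
      File = /mnt/dmz/synology1
    }
  }

  Job {
    Name = "Backup-Synology1-SMB"
    Type = Backup
    Level = Incremental
    Client = bacula-host-fd          # the local FD on the Dir/SD machine
    FileSet = "Synology1-SMB-FS"
    Schedule = "TieredCycle"         # see the schedule sketch further down
    Storage = Tier1Storage
    Pool = Tier1Pool
    Messages = Standard
    Priority = 10
  }

Does it make sense to structure it like that, or is there a better pattern for 
backing up SMB mounts through a local FD?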

Each backup job is intended to create 3 redundant “copies” (not meaning the 
Copy concept of Bacula) for safety reasons (a rough pool-per-tier sketch 
follows the list):
- Tier 1 (fastest): 1 to n internal SATA hard disk drives (ideally no manual 
intervention when individual disks 1, ..., n-1 fill up)
- Tier 2 (medium speed): 1 to m external USB hard disk drives (ideally no 
manual intervention when individual disks 1, ..., m-1 fill up)
- Tier 3 (slowest): 1 or more offsite S3 buckets implemented using a MinIO 
Docker container on Synology DSM
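
For the pools behind these tiers, I am currently picturing something along 
these lines (again untested, all names and sizes are placeholders):

  # bacula-dir.conf -- one pool per tier
  Pool {
    Name = Tier1Pool
    Pool Type = Backup
    Storage = Tier1Storage           # internal SATA disk device(s)
    Label Format = "Tier1-"
    Maximum Volume Bytes = 50G       # keep volumes well below a single disk's size
    Volume Retention = 2 months
    Recycle = yes
    AutoPrune = yes
    Next Pool = Tier2Pool            # intended target for Copy jobs, see questions below
  }

  # Tier2Pool (external USB disks, Storage = Tier2Storage) and Tier3Pool
  # (offsite S3 via MinIO, Storage = Tier3Storage) would be defined analogously.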

For restore, tier 1 backups are preferred. If tier 1 volumes are corrupted, the 
redundant copies in tier 2 should be used. If the tier 2 copy is also corrupted 
or lost (e.g. in an onsite fire or flood event), then tier 3 should be used.

The schedule and retention scheme would be nothing fancy, something along the 
lines of (a Schedule sketch follows the list):
- Full once per month
- Differential once per week
- Incremental once per day
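
In Bacula terms I assume that maps to a Schedule resource roughly like the 
classic example from the manual (the times are arbitrary):

  Schedule {
    Name = "TieredCycle"
    Run = Level=Full 1st sun at 23:05
    Run = Level=Differential 2nd-5th sun at 23:05
    Run = Level=Incremental mon-sat at 23:05
  }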

I am currently trying to understand how to configure storage devices and pools 
to implement storage tiers 1 and 2.
I found examples in the documentation, but so far all of them only use a single 
disk drive, assuming that all jobs fit on that drive.
As I have the luxury of more than one drive per tier, I would like to 
understand whether it is possible to define a pool on top of more than one disk 
device (e.g. using an Autochanger with disk devices instead of tape devices?), 
so that the first device is used until it is full and further volumes are then 
automatically written to the next disk device, without the need for manual 
intervention.
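
From the Autochanger chapter I came up with the following sketch for tier 1 
(two internal disks grouped behind one virtual changer), but I am not sure 
whether Bacula will actually move on to the second device automatically once 
the first disk is full, or whether I have misread the documentation:

  # bacula-sd.conf -- two file devices grouped behind one autochanger
  Autochanger {
    Name = Tier1Changer
    Device = Tier1Disk1, Tier1Disk2
    Changer Command = ""             # no real changer hardware
    Changer Device = /dev/null
  }

  Device {
    Name = Tier1Disk1
    Device Type = File
    Media Type = File1
    Archive Device = /mnt/tier1/disk1   # mount point of the first SATA disk
    Autochanger = yes
    LabelMedia = yes
    Random Access = yes
    AutomaticMount = yes
    RemovableMedia = no
    AlwaysOpen = no
  }

  # Tier1Disk2 would be identical except for its Name and
  # Archive Device = /mnt/tier1/disk2

  # bacula-dir.conf -- the Director then talks to the changer, not the devices
  Storage {
    Name = Tier1Storage
    Address = bacula.example.lan     # placeholder
    SDPort = 9103
    Password = "sd-password"         # placeholder
    Device = Tier1Changer
    Media Type = File1
    Autochanger = yes
    Maximum Concurrent Jobs = 4
  }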

The other thing not yet clear to me is what the best matching Bacula concepts 
are for writing efficiently to all 3 tiers.
Is it possible to make a job write each file it is backing up to more than one 
pool, so that it creates that many redundant copies? Or is that not possible, 
meaning the tier 1 Backup jobs must run first, followed by Copy jobs for tier 2 
and then Copy jobs for tier 3? Would my intended restore priorities as 
described above (prefer tier 1 because it restores the fastest, then tier 2, 
then tier 3 as a last resort) be possible with volume copies created by Copy 
jobs?
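
If Copy jobs are indeed the intended mechanism, I assume the skeleton would 
look roughly like this, with the destination taken from the source pool's 
Next Pool (untested; the Client/FileSet lines are only there because I believe 
the Job resource requires them syntactically, even though copies do not use 
the FD):

  # bacula-dir.conf -- copy everything in Tier1Pool that has not been copied yet
  Job {
    Name = "Copy-Tier1-to-Tier2"
    Type = Copy
    Selection Type = PoolUncopiedJobs
    Pool = Tier1Pool                 # source pool; destination = its Next Pool
    Client = bacula-host-fd
    FileSet = "Synology1-SMB-FS"
    Schedule = "CopyCycle"           # e.g. nightly, after the backup jobs
    Messages = Standard
    Priority = 20
  }

  # Presumably a second, analogous Copy job would feed tier 3, but I am not
  # sure whether it should copy tier 1 -> tier 3 or tier 2 -> tier 3, or how
  # Next Pool is best overridden for that.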

I would be grateful if someone knows of a similar setup and can give me a 
pointer to publicly available example configs for it, or is willing to share 
excerpts from their own similar configs.

Concrete suggestions for improving my strategy, or problems you see with it, 
are also welcome feedback!

Thanks for your consideration and your time!
 JC
