Hello Stefan!

I have a server that I can only access via SMB, which I back up with
bacula. It is a high-end NAS appliance running gentoo linux, with
absolutely no shell access for anyone but the vendor who provides it. The
appliance does its job well, but I have no ability to run an FD on it, nor
should I, since it is a highly tuned and customized environment. As such, I
back it up via SMB to LTO-8. This works fairly well.

I will emphasize that best practice as I understand it is to use a local
bacula FD whenever possible. So the windows FD you located may be a good
choice. However, you'll have to experiment with it as I have no experience
with windows FD software.

I have not found any issue backing up multiple shares in one job/fileset
definition. Ultimately I found that putting all shares in one job was best
for my case. For ongoing backups, keep in mind that if some data is static
and rarely changes while other data changes very frequently, you might want
to back those datasets up in separate jobs. That way the large static body
of data isn't written to tape again and again as part of full jobs; instead
you can run rare incrementals against the large static data, and frequent
fulls, differentials, and incrementals against the frequently changing
data. That might not apply to your case, but in my view the reasons to
split share data among different jobs are scheduling and retention concerns
like these, not bacula limitations. Bacula can back up the shares as part
of a single fileset just fine.
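If you ever do want to split things up, here's a rough sketch of what I
mean (untested; the "Static"/"Active" names and the MonthlyCycle schedule
are made up, only Server_S01 and WeeklyCycle come from your config). Both
jobs can reuse your existing JobDefs and just override the fileset and
schedule:

Job {
   Name = "S01_Static"
   JobDefs = "Server_S01"
   Fileset = "S01_Static_Set"
   Schedule = "MonthlyCycle"
}

Job {
   Name = "S01_Active"
   JobDefs = "Server_S01"
   Fileset = "S01_Active_Set"
   Schedule = "WeeklyCycle"
}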

I did find that backing up the "large static dataset that doesn't change
often" separately from the "small dataset that changes often" was slightly
more inconvenient for general data restores, because I needed to restore
data from both jobs/filesets/pools; the files from the small active dataset
weren't included in the large static dataset's backups. I wound up adding
the shares for the small dataset to the fileset for the large dataset, but
didn't increase the backup frequency of the large dataset's job. That alone
wouldn't be suitable for routine backups of the small dataset, but because
it is so small it cost me nothing to back it up twice: routinely and often
in the job/fileset/pool for the small active dataset, and occasionally
along with the large, mostly unchanging dataset. Small tweak, but
convenient.

Good job using a mountpoint script! Without that, bacula will probably
happily back up /mnt/bacula/s01/sharename even when nothing is mounted
there, report "nothing to back up! mission accomplished!", and exit 0. This
sort of thing is one point in favor of running a local windows FD (if they
work well; I don't know any details about them).
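One small extra you could consider, as an untested sketch based on your
cifs_mount_s01.sh (verify before relying on it): re-check the mountpoint
after attempting the mount, so that a failed mount makes the before-script
exit nonzero. As I understand it, Fail Job On Error defaults to yes for a
RunScript, so bacula would then fail the job instead of backing up an empty
directory:

#!/bin/sh
# mount the share if it isn't mounted yet (same as your script)
/usr/bin/mountpoint -q /mnt/bacula/s01/c_dollar || /usr/sbin/mount.cifs \
   //192.168.x.y/C$ /mnt/bacula/s01/c_dollar \
   -o credentials=/var/lib/bacula/.smbcreds_s01
# then insist that it really is mounted; a nonzero exit here fails the
# "RunsWhen = Before" script and, by default, the whole job
/usr/bin/mountpoint -q /mnt/bacula/s01/c_dollar || exit 1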

Regarding fileset elegance, for the includes you could simply use a single
line: /mnt/bacula/s01/
After all, this folder contains all the mountpoints and should be
sufficient. Bacula will see file paths that all start with
/mnt/bacula/s01/foo/bar, so specifying individual shares when every share
under /mnt/bacula/s01/ is a backup target isn't really necessary. This only
gains you two lines. The excludes are much longer, and I'm not sure there's
a way to make them more elegant; you need to exclude those items, after
all.
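A sketch of what that could look like (untested, and one caveat from
memory: bacula's fileset option onefs defaults to yes, meaning it won't
cross mount points, so if the shares are mounted under /mnt/bacula/s01/ you
would want OneFS = no in the Options; please verify against the manual):

Fileset {
   Name = "S01_Set1"
   Include {
     File = "/mnt/bacula/s01/"
     Options {
       Signature = "Md5"
       OneFS = "no"   # don't stop at the CIFS mount points under s01
     }
   }
   Exclude {
     # keep your existing exclude lines here, unchanged, e.g.
     File = "/mnt/bacula/s01/c_dollar/pagefile.sys"
   }
}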

You should be aware that File, Job, and Volume catalog records all have
default retention periods, even if you never define any. If you back data
up and don't define those periods, a retention period will be enforced for
you. If the job records for a volume are pruned, the volume will be
automatically pruned as well. So be aware, and define those retention
periods as you deem appropriate.
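For example, you can set them explicitly on the client (the periods below
are only placeholders to show the syntax; pick values that suit your tape
rotation):

Client {
   Name = "debian1-fd"
   # ...your existing Address/Password/etc. stay as they are
   File Retention = 3 months
   Job Retention = 6 months
   AutoPrune = yes
}

Volume Retention lives in the Pool resource; more on that below.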

Practice restoring your bacula catalog now. In my experience the restore
process is fairly straightforward. You'll need to restore the dump file
written by the catalog backup job that the default install defines. That
gives you a bacula.sql file. Assuming you're using postgresql, you'll have
to run something like 'psql bacula < bacula.sql'. My command syntax, or
even the command used, could be inaccurate; VERIFY EVERYTHING, I'm only
typing this from memory. I do recall that the bacula.sql file appeared to
contain everything needed to drop the existing postgres tables, create new
ones, and import all the relevant data.
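Roughly, the sequence looks something like this. Again, this is a sketch
from memory: the service name and the path to the restored bacula.sql are
just examples and may differ on your Debian box, so check the manual and
your own paths first:

# stop the director so nothing writes to the catalog while you replace it
systemctl stop bacula-director

# import the dump as a user with rights on the bacula database;
# the dump (as I recall) already drops and recreates the tables
su - postgres -c 'psql bacula < /tmp/bacula-restores/bacula.sql'

systemctl start bacula-director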

Know that the bconsole purge and delete commands are DANGEROUS. They tell
you that; what they don't tell you is that there isn't much in the way of
confirmation before they go ahead and delete all your records in response
to a poorly formatted or misunderstood command. The purge command, at
minimum, wasn't designed with the same care as the backup and restore
workflows. I expected some level of confirmation before it did its
business, but two levels in, it happily announced "ok! I deleted all the
records associated with the FD you selected!" I was shocked. I was also
unharmed in the end, because I knew how to restore my frequently backed up
bacula database.

The purge command is dangerous not just because of what it does (remove
catalog records without regard to predefined retention periods), but also
because, in my opinion, it wasn't carefully designed to prevent unintended
harm. I can kind of understand why it has sharp edges; it isn't used often.
Nonetheless, you now have my rant on the subject.

You won't need purge or delete commands with properly set up file, job, and
volume retention. I advise using proper retention periods wherever
possible.

Know that file, job, and volume retention values given in the Pool resource
override any such directives set elsewhere in bacula-dir.conf.
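So a pool can carry its own policy, something like this (the periods are
placeholders, not recommendations; the usual guidance, as I recall it, is
file retention <= job retention <= volume retention):

Pool {
   Name = "Default"
   # ...your existing pool settings stay as they are
   Volume Retention = 1 year
   # if set here, these two override the Client values for jobs in this pool:
   Job Retention = 6 months
   File Retention = 3 months
}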

I do recommend a read-only user, or shares mounted read-only by linux. My
practice is to mount the smb shares I back up via bacula read-only by
default. I started out with a read-only user, but the reality is that the
linux box was very useful for administration and I eventually needed RW
access to the shares, so I adjusted my practice: a user capable of rw
access, with the shares usually mounted RO. I do recommend following the
principle of least access where possible. Imagine the complaints: "YOUR
BACKUP USER DELETED THE SERVER!" "How could it possibly have done that
without rw access?" "...Oh." ;)
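Your fstab line already mounts C$ read-only, which is exactly what I mean.
When I need rw for admin work I just remount by hand for the duration,
roughly like this (illustrative only, reusing the credentials file from
your fstab):

# remount the share read-write only while doing admin work
umount /mnt/bacula/s01/c_dollar
mount.cifs //192.168.x.y/C$ /mnt/bacula/s01/c_dollar \
   -o rw,credentials=/var/lib/bacula/.smbcreds_s01

# and put it back the way the backups expect it
umount /mnt/bacula/s01/c_dollar
mount /mnt/bacula/s01/c_dollar    # picks the ro options back up from fstab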

The bacula developers have historically ensured that an FD can be any
version <= the bacula-dir/bacula-sd version, but recent posts to this list
indicate that some error-reporting changes have broken that compatibility
for bacula 13.x. With bacula 13.x I recommend an FD that is at least 13.x
and no newer than bacula-sd/bacula-dir. As always, bacula-dir and bacula-sd
must be the same version. I don't think this is the cause of your windows
FD issue, but I thought you should know.
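You can check which versions are actually talking to each other from
bconsole (the exact output wording varies by release):

# in bconsole: "version" reports the director's version...
version
# ...and a client status includes the version of that FD
status client=debian1-fd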

You should also be aware that extant ransomware now targets tape drives and
attempts to overwrite/erase tapes prior to demanding the ransom. I
recommend that you make it your practice to rotate tapes out of your
library routinely to protect against this threat (and rotate some tapes
offsite anyway as part of standard good backup practice).

Bacularis is a friendly fork of baculum. It's quite similar, is being
actively developed, and the dev (Marcin) is on this mailing list. You
certainly don't *have* to switch, but now you know it exists in case you
want to check it out. Bacularis is what I use and I'm happy with it.
https://bacularis.app/

The bacula-web project provides detailed reporting for bacula. It is also
under active development, and the dev (Davide) is on this list as well.
https://www.bacula-web.org

Welcome to my Friday evening bacula infodump. I hope some of this was
useful! Have fun!

Regards,
Robert Gerber
402-237-8692
r...@craeon.net


On Fri, Feb 23, 2024 at 2:51 AM Stefan G. Weichinger <li...@xunil.at> wrote:

>
> I am still learning my way to use bacula and could need some explanations.
>
> One goal of my customer is to backup an old Windows Server VM with ~15
> shares.
>
> My bacula-server is a Debian-VM with bacula-13.0.3, and baculum-11.0.6
>
> I have a config running, writing to a HP changer with 8 tapes etc
>
> My current approach:
>
> I have a JobDef for that server, with pre/post-scripts to mount the
> share "C$" (kind of a catchall-approach for the start):
>
>
> JobDefs {
>    Name = "Server_S01"
>    Type = "Backup"
>    Level = "Incremental"
>    Messages = "Standard"
>    Storage = "loader1"
>    Pool = "Default"
>    Client = "debian1-fd"
>    Fileset = "S01_Set1"
>    Schedule = "WeeklyCycle"
>    WriteBootstrap = "/var/lib/bacula/%c.bsr"
>    SpoolAttributes = yes
>    Runscript {
>      RunsWhen = "Before"
>      RunsOnClient = no
>      Command = "/etc/bacula/scripts/cifs_mount_s01.sh"
>    }
>    Runscript {
>      RunsWhen = "After"
>      RunsOnClient = no
>      Command = "/usr/bin/umount /mnt/bacula/s01/c_dollar"
>    }
>    Priority = 10
> }
>
> # fstab
>
> //192.168.0.11/C$  /mnt/bacula/s01/c_dollar cifs
> ro,_netdev,users,noauto,credentials=/var/lib/bacula/.smbcreds_s01 0 0
>
> # scripts/cifs_mount_s01.sh
>
> /usr/bin/mountpoint -q /mnt/bacula/s01/c_dollar || /usr/sbin/mount.cifs
> //192.168.x.y/C$ /mnt/bacula/s01/c_dollar  -o
> credentials=/var/lib/bacula/.smbcreds_s01
>
>
> A Fileset, that doesn't look very elegant to me. I edited it for privacy
> .. you get the picture:
>
> Fileset {
>    Name = "S01_Set1"
>    Include {
>      File = "/mnt/bacula/s01/c_dollar/A"
>      File = "/mnt/bacula/s01/c_dollar/B"
>      File = "/mnt/bacula/s01/c_dollar/C"
>      Options {
>        Signature = "Md5"
>      }
>    }
>    Exclude {
>      File = "/mnt/bacula/s01/c_dollar/Backu*"
>      File = "/mnt/bacula/s01/c_dollar/Dokumente*"
>      File = "/mnt/bacula/s01/c_dollar/pagefile.sys"
>      File = "/mnt/bacula/s01/c_dollar/Prog*"
>      File = "/mnt/bacula/s01/c_dollar/Reco*"
>      File = "/mnt/bacula/s01/c_dollar/System*"
>      File = "/mnt/bacula/s01/c_dollar/Windows"
>      File = "/mnt/bacula/s01/c_dollar/WSUS"
>    }
> }
>
> --
>
> Is that OK or is there a more elegant way to do this?
>
> The Job runs right now, and copies files, OK
>
> My CIFS-user should have admin rights, but for example I seem not to
> have read permissions when doing this:
>
> # ls /mnt/bacula/s01/c_dollar/A
>
> I let the job finish and check contents of backups later in the GUI.
>
> Sure, that's more of a Samba/CIFS-question -> permissions of users.
> Maybe we should add my user to the various shares as a read-user via
> ACLs or so.
>
> Being member of admins seems not enough.
>
> -
>
> I'd appreciate suggestions how to backup multiple CIFS-shares.
>
> One job per share? I would need pre/post-scripts for each of them?
>
> thanks in advance! Stefan
>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
