If you are using disk volumes, you probably want auto-labeling so that new
volumes are created as needed.
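Concretely, auto-labeling is the combination of a Label Format on the pool
(Director side) and LabelMedia = yes on the device (SD side); your config
further down already has both, shown together here for clarity:

```
# Director side: the pool hands out volume names automatically
Pool {
  Name = Incremental
  Pool Type = Backup
  Label Format = "Incremental-"   # new volumes become Incremental-0001, ...
}

# SD side: the device is allowed to label new media itself
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /data1/bareos/FileStorage
  LabelMedia = yes
}
```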
Note that Bareos tries to preserve as much data as possible, and with disk
volumes I find it likes to fill the disk and eventually fail.
I run an ‘admin’ job that just runs a script on a schedule (you could use cron
etc.) that checks volumes so they correctly get pruned and then truncated (made
zero size) to free disk space. You will need to modify it for your pool names:
#!/bin/bash
# Prune every volume in the pool; expired volumes become Purged
for x in $(echo "list volumes pool=AI-Consolidated" | bconsole \
           | grep -v "list volumes" | grep AI-Consolid | awk -F\| '{print $3}')
do
    echo "prune volume=$x yes"
done | bconsole

# actually free up the disk space held by purged volumes
echo "truncate volstatus=Purged pool=AI-Consolidated yes" | bconsole
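The ‘admin’ job itself can be a minimal sketch like the following; the script
path, schedule name, and JobDefs here are placeholders, so adjust to your setup:

```
Job {
  Name = prune-truncate
  Type = Admin
  JobDefs = DefaultJob             # Admin jobs still need the usual resources
  Schedule = nightly-maintenance   # placeholder schedule name
  RunScript {
    RunsWhen = Before
    RunsOnClient = No              # run on the Director host, not a client
    Command = "/usr/local/bin/prune-truncate.sh"   # the script above
  }
}
```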
As for the very large backup, a few ideas:
* Use globs and break it into multiple jobs (this won’t impact restores).
* The number of files, rather than the size of the data, will dictate scan
time for incrementals.
Test scan time with estimate: estimate job=<jobname> accurate=yes
level=Incremental
* Fulls are dominated by bandwidth.
** Compression will cause CPU to peak and limit performance if you are not
IO/network bound.
** If using compression, look at the low-CPU compression trade-off options.
** Or don’t compress on the client at all, but use a migrate job with a
compress filter to compress everything on the backup server.
* FD compression is single-threaded; if you break the backup into multiple
jobs with globs you can run several at a time.
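One way to sketch the glob split, assuming home directories can be partitioned
alphabetically (the [a-m]/[n-z] split here is made up); each half gets its own
FileSet and Job so they can run concurrently:

```
FileSet {
  Name = home-a-m
  Include {
    Options { WildDir = "/home/[n-z]*"; Exclude = yes }  # skip the other half
    Options { Signature = MD5 }
    File = /home
  }
}

FileSet {
  Name = home-n-z
  Include {
    Options { WildDir = "/home/[a-m]*"; Exclude = yes }
    Options { Signature = MD5 }
    File = /home
  }
}
```

Two Job resources, one per FileSet, can then run in parallel subject to
Maximum Concurrent Jobs.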
You're going to want to benchmark along your whole system; I like dstat over
iostat/top etc. for monitoring. But a 90 TB single-volume backup will take some
time for a full. If you have the space on your server, maybe look at Always
Incremental, so you never actually take a full backup of that volume again,
though you will copy 90 TB of data on your SD every few days depending on
settings, just like multiple fulls.
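If you try Always Incremental, the relevant job directives look roughly like
this (job names and retention numbers are placeholders; a Consolidate job is
what does the periodic merging on the SD):

```
Job {
  Name = big-fileserver-ai
  JobDefs = linux-daily-inc             # reusing the JobDefs from your config
  Accurate = yes                        # required for Always Incremental
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 7
  Full Backup Pool = AI-Consolidated    # pool the consolidated full lands in
}

Job {
  Name = Consolidate
  Type = Consolidate
  Accurate = yes
  Max Full Consolidations = 1
  # plus the usual Client/FileSet/Storage boilerplate
}
```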
Myself, I have ‘archive’ jobs where I take a VirtualFull of each of my jobs to
a different volume on different media monthly, for safety. That bailed me out
a lot when I was learning.
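An archive job like that is just a VirtualFull run into a separate pool; a
minimal sketch, assuming a pool named Archive that the source pool's Next Pool
directive points at:

```
# In the source pool:
#   Next Pool = Archive
Schedule {
  Name = monthly-archive
  Run = Level=VirtualFull 1st sun at 02:00
}

# Or trigger it by hand from bconsole:
#   run job=dr001-Daily-inc level=VirtualFull yes
```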
Brock Palen
1 (989) 277-6075
[email protected]
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting
> On Jun 24, 2020, at 4:05 PM, Waanie Simon <[email protected]> wrote:
>
> Hi all
>
> I am working on building a backup solution with Bareos. The system will be
> used to replace Arkeia, which is no longer supported. Arkeia backups were
> quite easy to install and monitor.
>
> I have installed bareos-director and bareos-sd on the same server. They are
> running on the same network interface.
>
> My initial backups went fine, but as time went on the backups would queue up,
> including the catalog backup.
> I sometimes see that the system is asking for labeling to happen, but it
> doesn't always fix the problem.
>
> The clients are mostly Linux VMs and Proxmox physical servers.
>
> One of the harder ones to back up is our file server with a home volume of
> 90 TB. This is something that we could never back up, and it has become a
> source of many problems.
>
> How would you back up such a volume? Should it be split? Since there is a lot
> of static content in there, should it be archived?
>
> I am not sure if the config files are set correctly.
>
> I have incremental backups happening daily and differentials happening weekly
> but I often have full backups happening multiple times during a month.
>
> Currently we have no tape library in place so all is running on disk.
>
> I thought I could add some code snippets to give an idea of what I have going
> on here.
>
>
> client
>
>
> Client {
> Name = dr001-fd
> Address = 10.60.100.12
> Password = <passwd>
> }
>
> Job
>
> Job {
> Name = dr001-Daily-inc
> JobDefs = linux-daily-inc
> Type = backup
> Messages = Standard
> Client = dr001-fd
> }
>
>
> JobDefs {
> Name = linux-daily-inc
> Type = Backup
> Level = Incremental
> Storage = File
> Pool = Incremental
> FileSet = LinuxAll
> }
>
> Each job has its own schedule. Is this necessary?
>
> Schedule {
> Name = dr001-daily-inc
> Description = Incremental Daily for dr001-fd
> Run = daily at 11:00
> }
>
>
> The storage config under the bareos-dir.d folder, in the File.conf file:
>
> Storage {
> Name = File
> Address = ctbackup.cape.saao.ac.za # N.B. Use a fully qualified name here (do not use "localhost" here).
> Password = <password>
> Device = FileStorage
> Media Type = File
> Maximum Concurrent Jobs = 5
> }
>
>
> The Pools are configured as
>
> Pool {
> Name = Incremental
> Pool Type = Backup
> Recycle = yes # Bareos can automatically recycle Volumes
> AutoPrune = yes # Prune expired volumes
> Volume Retention = 30 days # How long should the Incremental Backups be kept? (#12)
> Maximum Volume Bytes = 150G # Limit Volume size to something reasonable
> Maximum Volumes = 30 # Limit number of Volumes in Pool
> Label Format = "Incremental-" # Volumes will be labeled "Incremental-<volume-id>"
> }
>
> Pool {
> Name = Differential
> Pool Type = Backup
> Recycle = yes # Bareos can automatically recycle Volumes
> AutoPrune = yes # Prune expired volumes
> Volume Retention = 90 days # How long should the Differential Backups be kept? (#09)
> Maximum Volume Bytes = 100G # Limit Volume size to something reasonable
> Maximum Volumes = 60 # Limit number of Volumes in Pool
> Label Format = "Differential-" # Volumes will be labeled "Differential-<volume-id>"
> }
>
> Pool {
> Name = Full
> Pool Type = Backup
> Recycle = yes # Bareos can automatically recycle Volumes
> AutoPrune = yes # Prune expired volumes
> Volume Retention = 365 days # How long should the Full Backups be kept? (#06)
> Maximum Volume Bytes = 350G # Limit Volume size to something reasonable
> Maximum Volumes = 100 # Limit number of Volumes in Pool
> Label Format = "Full-" # Volumes will be labeled "Full-<volume-id>"
> }
>
>
> Unfortunately, the numbers here have been a bit of a thumb-suck.
>
> The storage config looks like this
>
>
> Devices
>
> Device {
> Name = FileStorage
> Media Type = File
> Archive Device = /data1/bareos/FileStorage
> LabelMedia = yes; # lets Bareos label unlabeled media
> Random Access = yes;
> AutomaticMount = yes; # when device opened, read it
> RemovableMedia = no;
> Collect Statistics = yes
> AlwaysOpen = no;
> Description = "File device. A connecting Director must have the same Name and MediaType."
> }
>
>
> bareos-sd.conf
>
> Storage {
> Name = bareos-sd
> Maximum Concurrent Jobs = 20
>
> # remove comment from "Plugin Directory" to load plugins from the specified directory.
> # if "Plugin Names" is defined, only the specified plugins will be loaded,
> # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
> #
> # Plugin Directory = "/usr/lib/bareos/plugins"
> # Plugin Names = ""
> Collect Device Statistics = yes
> Collect Job Statistics = yes
> #Statistics Collect Interval = 60
> }
>
>
>
> I know that my largest backups will probably not work, since the capacity I
> can write to is only about 70 TB.
>
> I have the web GUI working, but there is no easy way to see progress on a job.
>
> Any improvements would be appreciated
>
> Regards
> Waanie
>
--
You received this message because you are subscribed to the Google Groups
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/bareos-users/AB660171-824C-423C-A166-9A0C21DC460B%40mlds-networks.com.