Hi again

*I ran the script against the Full pool:*

The current Volume retention period is: 3 months
prune volume=Full-0200 yes
The current Volume retention period is: 3 months
prune volume=Full-0201 yes
The current Volume retention period is: 3 months
prune volume=Full-0202 yes
The current Volume retention period is: 3 months
prune volume=Full-0203 yes
The current Volume retention period is: 3 months
prune volume=Full-0204 yes
The current Volume retention period is: 3 months
prune volume=Full-0205 yes
The current Volume retention period is: 3 months
prune volume=Full-0206 yes
The current Volume retention period is: 3 months
prune volume=Full-0207 yes
The current Volume retention period is: 3 months
prune volume=Full-0208 yes
The current Volume retention period is: 3 months
prune volume=Full-0209 yes

*Then I ran the second script, but the truncate didn't seem to do anything:*

root@ctbackup:/home/saaoit# ./bareos-purge.sh 
Connecting to Director localhost:9101
 Encryption: TLS_CHACHA20_POLY1305_SHA256
1000 OK: bareos-dir Version: 19.2.7 (16 April 2020)
bareos.org build binary
bareos.org binaries are UNSUPPORTED by bareos.com.
Get official binaries and vendor support on https://www.bareos.com
You are connected using the default console

Enter a period (.) to cancel a command.
truncate volstatus=Purged pool=Full yes
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
No results to list.
You have messages.
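
I suspect the truncate found nothing because no volume in the Full pool was
actually in Purged status at that point. A quick way to check (a sketch,
assuming bconsole is on the PATH):

#!/bin/bash
# print the volumename and volstatus columns for every volume in the pool;
# "truncate volstatus=Purged" only touches rows whose status reads Purged
echo "list volumes pool=Full" | bconsole | awk -F'|' 'NF > 3 {print $3, $4}'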


When I run

list volumes

I get:

+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+---------+
| mediaid | volumename | volstatus | enabled |        volbytes | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         | storage |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+---------+
|     291 | Full-0291  | Full      |       1 |  53,687,079,733 |       12 |    7,776,000 |       1 |    0 |         0 | File      | 2020-05-13 15:00:02 | File    |
|     292 | Full-0292  | Full      |       1 |  53,687,079,794 |       12 |    7,776,000 |       1 |    0 |         0 | File      | 2020-05-13 15:46:29 | File    |
|     293 | Full-0293  | Full      |       1 |  53,687,079,621 |       12 |    7,776,000 |       1 |    0 |         0 | File      | 2020-05-13 16:47:56 | File    |
|     294 | Full-0294  | Full      |       1 |  53,687,079,588 |       12 |    7,776,000 |       1 |    0 |         0 | File      | 2020-05-13 17:28:40 | File    |
|     295 | Full-0295  | Full      |       1 |  53,687,079,946 |       12 |    7,776,000 |       1 |    0 |         0 | File      | 2020-05-13 18:09:53 | File    |
|     296 | Full-0296  | Full      |       1 |  53,687,080,019 |       12 |    7,776,000 |       1 |    0 |         0 | File      | 2020-05-13 18:53:11 | File    |
|     297 | Full-0297  | Full      |       1 |  53,687,079,318 |       12 |    7,776,000 |       1 |    0 |         0 | File      | 2020-05-13 22:02:23 | File    |
|     298 | Full-0298  | Full      |       1 | 161,061,238,585 |       37 |    7,776,000 |       1 |    0 |         0 | File      | 2020-06-15 03:33:06 | File    |
|     299 | Full-0299  | Full      |       1 | 161,061,236,559 |       37 |    7,776,000 |       1 |    0 |         0 | File      | 2020-06-12 05:35:31 | File    |
|     318 | Full-0318  | Full      |       1 | 268,435,397,520 |       62 |   31,536,000 |       1 |    0 |         0 | File      | 2020-06-15 06:49:30 | File    |
|     319 | Full-0319  | Full      |       1 | 225,485,696,126 |       52 |   31,536,000 |       1 |    0 |         0 | File      | 2020-06-15 09:31:01 | File    |
|     320 | Full-0320  | Error     |       1 |               0 |        0 |   31,536,000 |       1 |    0 |         0 | File      | NULL                | File    |
|     321 | Full-0321  | Error     |       1 |               0 |        0 |   31,536,000 |       1 |    0 |         0 | File      | NULL                | File    |
|     322 | Full-0322  | Error     |       1 |               0 |        0 |   31,536,000 |       1 |    0 |         0 | File      | NULL                | File    |
|     323 | Full-0323  | Error     |       1 |               0 |        0 |   31,536,000 |       1 |    0 |         0 | File      | NULL                | File    |
|     324 | Full-0324  | Error     |       1 |               0 |        0 |   31,536,000 |       1 |    0 |         0 | File      | NULL                | File    |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+---------+
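
For reference, the volretention column converts as: 7,776,000 seconds is 90
days (the "3 months" the prune run reported) and 31,536,000 seconds is 365
days. None of these volumes, last written in May/June 2020, had expired by
26 June, which would explain why pruning removed nothing and nothing became
Purged. Quick arithmetic:

# volretention in seconds -> days
echo $(( 7776000 / 86400 ))     # 90  (about 3 months)
echo $(( 31536000 / 86400 ))    # 365 (1 year)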

And when I look at the messages I get:

26-Jun 21:17 bareos-dir JobId 0: Volume "" still contains jobs after pruning.
26-Jun 21:17 bareos-dir JobId 0: Volume "Full-0298" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned
26-Jun 21:17 bareos-dir JobId 0: Pruning volume Full-0298: 0 Jobs have expired and can be pruned.
26-Jun 21:17 bareos-dir JobId 0: Volume "" still contains jobs after pruning.
26-Jun 21:17 bareos-dir JobId 0: Volume "Full-0299" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned
26-Jun 21:17 bareos-dir JobId 0: Pruning volume Full-0299: 0 Jobs have expired and can be pruned.
26-Jun 21:17 bareos-dir JobId 0: Volume "" still contains jobs after pruning.
26-Jun 21:17 bareos-dir JobId 0: Volume "Full-0318" has Volume Retention of 31536000 sec. and has 0 jobs that will be pruned
26-Jun 21:17 bareos-dir JobId 0: Pruning volume Full-0318: 0 Jobs have expired and can be pruned.
26-Jun 21:17 bareos-dir JobId 0: Volume "" still contains jobs after pruning.
26-Jun 21:17 bareos-dir JobId 0: Volume "Full-0319" has Volume Retention of 31536000 sec. and has 0 jobs that will be pruned
26-Jun 21:17 bareos-dir JobId 0: Pruning volume Full-0319: 0 Jobs have expired and can be pruned.
26-Jun 21:17 bareos-dir JobId 0: Volume "" still contains jobs after pruning.
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incre-9012" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incremental-0014" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incremental-0019" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incremental-0020" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incremental-0300" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incremental-0301" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incremental-0302" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
26-Jun 21:19 bareos-dir JobId 6059: Volume "Incremental-0303" has Volume Retention of 2592000 sec. and has 0 jobs that will be pruned
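
As I understand it, prune respects retention while purge ignores it, so if I
really wanted to reclaim this space before retention expires, something like
the following should mark a volume Purged and then free its disk space (a
sketch only; purge drops the catalog records for the jobs on the volume, so
they can no longer be restored through the catalog):

#!/bin/bash
# purge ignores retention and marks the volume Purged (dangerous: the
# catalog forgets the jobs on it); if bconsole prompts, answer interactively
echo "purge volume=Full-0298 yes" | bconsole
# truncate then zeroes out Purged volumes in the pool to free disk space
echo "truncate volstatus=Purged pool=Full yes" | bconsole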



These are all snippets; I hope to get some assistance.

Thanks for the help so far



On Thursday, June 25, 2020 at 7:07:02 PM UTC+2, Brock Palen wrote:
>
> Bareos will keep labeling volumes into the future if you don’t force 
> recycling of them. That’s at least my experience: even though I have auto 
> prune on, I have to run prune on all volumes to get them to auto purge, and 
> then truncate to get them recycled (truncate is probably not required but 
> helped in my case). Compression will save space, but it could slow things 
> down and won’t solve the issue of Bareos eating space until it’s all used. 
> YMMV how you handle this. Go about checking how the older volumes are 
> treated with 
>
> list volumes pool=<poolname> 
> prune volume=<volume> 
>
> etc. If your setup is like mine, volumes will not get pruned automatically, 
> hence the need for my admin job to force it. FYI I don’t think this is the 
> way bareos is supposed to work, but it works that way for me and probably 
> does for others also. 
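>
> (A rough sketch of how such an admin job can be wired up; the job name, 
> schedule name, and script path are made up, and depending on your Bareos 
> version the usual required Job directives may still need to be present 
> even though an Admin job doesn't use them:) 
>
> Job { 
>   Name = "prune-and-truncate"     # hypothetical name 
>   Type = Admin 
>   Schedule = "WeeklyCycle"        # hypothetical schedule 
>   RunScript { 
>     RunsWhen = Before 
>     RunsOnClient = no             # run on the Director host 
>     Command = "/usr/local/bin/bareos-prune.sh"   # hypothetical path 
>   } 
> } 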
>
>
> As for “globs” I was thinking classic unix globs, 
>
> eg  ls abc*.txt 
>
> In the FileSet config Include you can use wild cards 
>
> https://docs.bareos.org/Configuration/Director.html#config-Dir_Fileset_Include_Options_Wilddir
>  
>
> You can set up jobs that use these wild options; that way, if you add top 
> level directories, you don’t miss them. 
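>
> (For illustration, a sketch of a FileSet using that pattern; the FileSet 
> name and wildcards are made up. The first Options block selects matching 
> directories, the second excludes the other directories under /home; worth 
> verifying the result with estimate listing before relying on it:) 
>
> FileSet { 
>   Name = "home-slice-1"           # hypothetical 
>   Include { 
>     Options { 
>       Signature = MD5 
>       WildDir = "/home/[a-h]*"    # descend only into matching dirs 
>     } 
>     Options { 
>       WildDir = "/home/*"         # every other dir under /home... 
>       Exclude = yes               # ...is excluded 
>     } 
>     File = /home 
>   } 
> } 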
>
> It will be a huge list if your $HOME is anything like ours, but you can use 
>
> echo "estimate job=<jobname> level=<Full|Incremental> listing" | bconsole > listing.txt 
>
> to have Bareos build the list of what it would back up for that job, to 
> make sure nothing is missed. 
>
> Note if you ever change the Job definition it will trigger a new full 
> backup.  So plan for growth. 
>
>
>
> Brock Palen 
> 1 (989) 277-6075 
> [email protected] 
> www.mlds-networks.com 
> Websites, Linux, Hosting, Joomla, Consulting 
>
>
>
> > On Jun 25, 2020, at 12:21 PM, Waanie Simon <[email protected]> wrote: 
> > 
> > Hi Brock 
> > 
> > Thanks for the quick response 
> > 
> > The filling up of disk space seems to be the cause of my jobs failing. I 
> > will apply compression to save on disk space. 
> > 
> > I have a question about the globs you mention. I can't say I know what 
> > they are. How would you apply this in a config file? 
> > 
> > The large volume I picked is the home folder for users. So I thought of 
> > creating a jobdef that includes a certain number of folders, and a second, 
> > and even a third, then creating jobs for each of them separately. 
> > 
> > What is your opinion about this? 
> > 
> > Regards 
> > Waanie 
> > 
> > 
> > 
> > On Thursday, June 25, 2020 at 4:31:15 PM UTC+2, Brock Palen wrote: 
> > If you are using disk volumes, you probably want auto label to get new 
> > volumes created as needed. 
> > Note bareos tries to preserve as much data as possible, and with disk 
> > volumes I find it likes to fill the disk and eventually fail. 
> > 
> > I run an ‘admin’ job that just runs a script on a schedule (you could 
> > use cron etc.) that checks volumes so they correctly get pruned and then 
> > truncated (made zero size) to free disk space; you will need to modify it 
> > for your pool names: 
> > 
> > #!/bin/bash 
> > # prune every volume in the pool; the greps strip the echoed command and 
> > # non-matching rows, and awk field 3 is the volume name column 
> > for x in `echo "list volumes pool=AI-Consolidated" | bconsole | grep -v "list volumes" | grep AI-Consolid | awk -F\| '{print $3}'` 
> > do 
> >   echo "prune volume=$x yes" 
> > done | bconsole 
> > 
> > # actually free up disk space 
> > echo "truncate volstatus=Purged pool=AI-Consolidated yes" | bconsole 
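> > 
> > (A variant of the same idea looped over several pools; a sketch, with 
> > your pool names assumed:) 
> > 
> > #!/bin/bash 
> > for pool in Full Differential Incremental; do 
> >   # field 3 of the table output is the volume name; strip spaces and 
> >   # drop the header row, then prune each volume 
> >   for vol in $(echo "list volumes pool=$pool" | bconsole \ 
> >                | awk -F'|' 'NF > 3 {gsub(/ /, "", $3); print $3}' \ 
> >                | grep -vi volumename); do 
> >     echo "prune volume=$vol yes" 
> >   done | bconsole 
> >   echo "truncate volstatus=Purged pool=$pool yes" | bconsole 
> > done 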
> > 
> > 
> > As for the very large backup, a few ideas: 
> > 
> > * use globs and break it into multiple jobs (this won’t impact restores) 
> > * number of files will dictate scan time for incrementals, rather than 
> >   size of data. Test scan time with estimate: 
> >       estimate job=<jobname> accurate=yes level=Incremental 
> > * Fulls are dominated by bandwidth 
> >   ** Compression will cause CPU to peak and limit performance if not 
> >      IO/network bound 
> >   ** If using compression, look at the low-CPU compression trade-off 
> >      options 
> >   ** Maybe don’t compress your backup, but use a migrate job with a 
> >      compress filter to compress everything on the backup server 
> > * fd compression is single-threaded; if you break it into multiple jobs 
> >   with globs you can run several at a time 
> > 
> > You're going to want to benchmark all along your system; I like dstat 
> > over iostat/top etc. for monitoring. But a 90 TB single-volume backup will 
> > take some time for a full. If you have the space on your server, maybe 
> > look at Always Incremental, so you never actually take a full backup of 
> > that volume again, though you will copy 90 TB of data on your SD every few 
> > days depending on settings, just like multiple fulls. 
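> > 
> > (Roughly what the job side of Always Incremental looks like, following 
> > the docs; a sketch only, the names and retention are placeholders:) 
> > 
> > Job { 
> >   Name = "home-ai"                         # hypothetical 
> >   JobDefs = "DefaultJob"                   # hypothetical 
> >   AlwaysIncremental = yes 
> >   AlwaysIncrementalJobRetention = 7 days 
> >   Accurate = yes                           # required for Always Incremental 
> >   Pool = AI-Incremental                    # hypothetical pool 
> >   Full Backup Pool = AI-Consolidated 
> > } 
> > 
> > Job { 
> >   Name = "Consolidate" 
> >   Type = "Consolidate"                     # merges the incrementals on the SD 
> >   Accurate = "yes" 
> >   JobDefs = "DefaultJob" 
> > } 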
> > 
> > Myself, I have ‘archive’ jobs where I take a VirtualFull of each of my 
> > jobs to a different volume on different media monthly, for safety. That 
> > bailed me out a lot when I was learning. 
> > 
> > Brock Palen 
> > 1 (989) 277-6075 
> > [email protected] 
> > www.mlds-networks.com 
> > Websites, Linux, Hosting, Joomla, Consulting 
> > 
> > 
> > 
> > > On Jun 24, 2020, at 4:05 PM, Waanie Simon <[email protected]> wrote: 
> > > 
> > > Hi all 
> > > 
> > > I am working on building a backup solution with bareos. The system 
> > > will be used to replace Arkeia, which is no longer supported. Arkeia 
> > > backups were quite easy to install and monitor. 
> > > 
> > > I have installed bareos-director and bareos-sd on the same server. They 
> > > are running on the same network interface. 
> > > 
> > > My initial backups went fine, but as time went on the backups would 
> > > queue up, including the catalog backup. 
> > > I sometimes see that the system is asking for a volume to be labeled, 
> > > but that doesn't always fix the problem. 
> > > 
> > > The clients are mostly Linux VMs and Proxmox physical servers. 
> > > 
> > > One of the harder things to back up is our file server, with a home 
> > > volume of 90 TB. This is something that we could never back up, and it 
> > > has become a source of many problems. 
> > > 
> > > How would you back up such a volume? Should it be split? Since there is 
> > > a lot of static content in there, should it be archived? 
> > > 
> > > I am not sure if the config files are set correctly. 
> > > 
> > > I have incremental backups happening daily and differentials happening 
> > > weekly, but I often have full backups happening multiple times during a 
> > > month. 
> > > 
> > > Currently we have no tape library in place so all is running on disk. 
> > > 
> > > I thought I could add some config snippets to give an idea of what I 
> > > have going on here. 
> > > 
> > > 
> > > client 
> > > 
> > > 
> > > Client { 
> > >   Name = dr001-fd 
> > >   Address = 10.60.100.12 
> > >   Password = <passwd> 
> > > } 
> > > 
> > > Job 
> > > 
> > > Job { 
> > >   Name = dr001-Daily-inc 
> > >   JobDefs = linux-daily-inc 
> > >   Type = backup 
> > >   Messages = Standard 
> > >   Client = dr001-fd 
> > > } 
> > > 
> > > 
> > > JobDefs { 
> > >   Name = linux-daily-inc 
> > >   Type = Backup 
> > >   Level = Incremental 
> > >   Storage = File 
> > >   Pool = Incremental 
> > >   FileSet = LinuxAll 
> > > } 
> > > 
> > > Each job has its own schedule. Is this necessary? 
> > > 
> > > Schedule { 
> > >   Name = dr001-daily-inc 
> > >   Description = Incremental Daily for dr001-fd 
> > >   Run = daily at 11:00 
> > > } 
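> > > 
> > > (The alternative I am wondering about, sketched: one Schedule hung off 
> > > the JobDefs so every job using it inherits the schedule, instead of a 
> > > Schedule per job; the schedule name here is made up:) 
> > > 
> > > JobDefs { 
> > >   Name = linux-daily-inc 
> > >   Type = Backup 
> > >   Level = Incremental 
> > >   Storage = File 
> > >   Pool = Incremental 
> > >   FileSet = LinuxAll 
> > >   Schedule = daily-at-11      # shared by every job using this JobDefs 
> > > } 
> > > 
> > > Schedule { 
> > >   Name = daily-at-11 
> > >   Run = daily at 11:00 
> > > } 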
> > > 
> > > 
> > > The storage resource under the bareos-dir.d folder, File.conf: 
> > > 
> > > Storage { 
> > >   Name = File 
> > >   Address = ctbackup.cape.saao.ac.za  # N.B. use a fully qualified name here (not "localhost") 
> > >   Password = <password> 
> > >   Device = FileStorage 
> > >   Media Type = File 
> > >   Maximum Concurrent Jobs = 5 
> > > } 
> > > 
> > > 
> > > The Pools are configured as 
> > > 
> > > Pool { 
> > >   Name = Incremental 
> > >   Pool Type = Backup 
> > >   Recycle = yes                       # Bareos can automatically recycle Volumes 
> > >   AutoPrune = yes                     # Prune expired volumes 
> > >   Volume Retention = 30 days          # How long should the Incremental Backups be kept? (#12) 
> > >   Maximum Volume Bytes = 150G         # Limit Volume size to something reasonable 
> > >   Maximum Volumes = 30                # Limit number of Volumes in Pool 
> > >   Label Format = "Incremental-"       # Volumes will be labeled "Incremental-<volume-id>" 
> > > } 
> > > 
> > > Pool { 
> > >   Name = Differential 
> > >   Pool Type = Backup 
> > >   Recycle = yes                       # Bareos can automatically recycle Volumes 
> > >   AutoPrune = yes                     # Prune expired volumes 
> > >   Volume Retention = 90 days          # How long should the Differential Backups be kept? (#09) 
> > >   Maximum Volume Bytes = 100G         # Limit Volume size to something reasonable 
> > >   Maximum Volumes = 60                # Limit number of Volumes in Pool 
> > >   Label Format = "Differential-"      # Volumes will be labeled "Differential-<volume-id>" 
> > > } 
> > > 
> > > Pool { 
> > >   Name = Full 
> > >   Pool Type = Backup 
> > >   Recycle = yes                       # Bareos can automatically recycle Volumes 
> > >   AutoPrune = yes                     # Prune expired volumes 
> > >   Volume Retention = 365 days         # How long should the Full Backups be kept? (#06) 
> > >   Maximum Volume Bytes = 350G         # Limit Volume size to something reasonable 
> > >   Maximum Volumes = 100               # Limit number of Volumes in Pool 
> > >   Label Format = "Full-"              # Volumes will be labeled "Full-<volume-id>" 
> > > } 
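> > > 
> > > (Quick arithmetic on the worst case these limits allow, Maximum Volumes 
> > > times Maximum Volume Bytes per pool:) 
> > > 
> > > echo $(( 30 * 150 ))G     # Incremental:   4500G 
> > > echo $(( 60 * 100 ))G     # Differential:  6000G 
> > > echo $(( 100 * 350 ))G    # Full:         35000G  (about 45.5T combined) 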
> > > 
> > > 
> > > Unfortunately, there has been a bit of a thumb-suck regarding the 
> > > numbers here. 
> > > 
> > > The storage config looks like this 
> > > 
> > > 
> > > Devices 
> > > 
> > > Device { 
> > >   Name = FileStorage 
> > >   Media Type = File 
> > >   Archive Device = /data1/bareos/FileStorage 
> > >   LabelMedia = yes;                   # lets Bareos label unlabeled media 
> > >   Random Access = yes; 
> > >   AutomaticMount = yes;               # when device opened, read it 
> > >   RemovableMedia = no; 
> > >   Collect Statistics = yes 
> > >   AlwaysOpen = no; 
> > >   Description = "File device. A connecting Director must have the same Name and MediaType." 
> > > } 
> > > 
> > > 
> > > bareos-sd.conf 
> > > 
> > > Storage { 
> > >   Name = bareos-sd 
> > >   Maximum Concurrent Jobs = 20 
> > > 
> > >   # Remove the comment from "Plugin Directory" to load plugins from the 
> > >   # specified directory. If "Plugin Names" is defined, only the specified 
> > >   # plugins will be loaded; otherwise all storage plugins (*-sd.so) from 
> > >   # the "Plugin Directory" are loaded. 
> > >   # 
> > >   # Plugin Directory = "/usr/lib/bareos/plugins" 
> > >   # Plugin Names = "" 
> > >   Collect Device Statistics = yes 
> > >   Collect Job Statistics = yes 
> > >   #Statistics Collect Interval = 60 
> > > } 
> > > 
> > > 
> > > 
> > > I know that my largest backups will probably not work, since the 
> > > capacity I can write to is only about 70 TB. 
> > > 
> > > I have the web GUI working, but there is no easy way to see progress 
> > > on a job. 
> > > 
> > > Any improvements would be appreciated 
> > > 
> > > Regards 
> > > Waanie 
> > > 
