Yes Martin, I do see these messages. Less often than every other message it 
tells me that it has deleted jobs, mostly just 1 job, sometimes 3, very rarely 
4 jobs. It doesn't tell me which jobs it has deleted, though.
At the same time it tells me it has upgraded the same number of Copy jobs to 
Backup jobs, and it does tell me which those are. Having looked them up, all 
of them were Copies of catalog backup jobs, and the corresponding Backup jobs 
are missing, so I assume the pruning first deleted the catalog backup job and 
then upgraded the corresponding Copy of that backup back to a Backup job.
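
In case it helps to confirm that assumption: a quick check from bconsole 
(just a sketch, using the job name from my configuration below and a 
placeholder JobId) would be to compare the Type column of the affected jobs 
before and after the nightly run:

   *list jobs job=catalog-tier1
   *llist jobid=NNN

If a job that used to show Type C (Copy) now shows Type B (Backup) and the 
original Backup JobId is gone from the catalog, that would fit the 
delete-then-upgrade behaviour I am assuming.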

So yes, the pruning seems to do something for the catalog backup jobs, but 
nothing for jobs of any other type (Verify, Copy), for any other client, or 
for any other backup job from the same client.

The question is: why does it only work for the catalog Backup jobs?
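
One thing that might narrow it down (again only a sketch; it assumes my 
bconsole accepts the pool= keyword on prune, and uses the client and pool 
names from the configuration quoted below) is to run the prune manually and 
watch what it reports:

   *prune jobs client=filer-fd pool=tier1-long yes
   *prune jobs client=filer-fd pool=tier1-short yes

If those also prune nothing for the ordinary backup jobs, then the retention 
values themselves are probably not the problem.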

> On 16. Oct 2024, at 18:47, Martin Simmons <mar...@lispworks.com> wrote:
> 
> Messages like "Begin pruning Jobs older than ..." should appear after the
> termination status of the job.  Do you see any messages like that?
> 
> __Martin
> 
> 
>>>>>> On Wed, 16 Oct 2024 14:15:24 +0200, Justin Case said:
>> 
>> I am wondering why I am seeing in my catalog lots of jobs older than the 
>> JobRetention defined in the pools, and also older than the default 
>> JobRetention assumed for the clients.
>> The volume recycling seems to work fine adhering to the VolumeRetention in 
>> the pools.
>> 
>> To me it is a mystery, probably because I am overlooking some dependency I 
>> am not aware of.
>> Can someone please help me understand this?
>> 
>> I hope I have provided all the necessary resources, including the Pruning job.
>> 
>> Cheers,
>> J/C
>> 
>> 
>> Job {
>>  Name = "Pruning"
>>  Description = "Prune all expired volumes"
>>  Type = "Admin"
>>  Schedule = "EveryNight"
>>  JobDefs = "default-tier1"
>>  PruneJobs = yes
>>  PruneFiles = yes
>>  PruneVolumes = yes
>>  Enabled = yes
>>  Runscript {
>>    RunsWhen = "Before"
>>    RunsOnClient = no
>>    Console = "prune expired volume yes"
>>  }
>>  Priority = 15
>>  AllowDuplicateJobs = no
>> }
>> 
>> JobDefs {
>>  Name = "default-tier1"
>>  Description = "Default backup job for Tier 1"
>>  Type = "Backup"
>>  Level = "Full"
>>  Messages = "Standard"
>>  Pool = "tier1-long"
>>  FullBackupPool = "tier1-long"
>>  IncrementalBackupPool = "tier1-short"
>>  Client = "filer-fd"
>>  Fileset = "EmptyFileset"
>>  WriteBootstrap = "/disaster-recovery/bootstrap/%c_%n.bsr"
>>  MaxFullInterval = 2678400
>>  SpoolAttributes = no
>>  Priority = 20
>>  AllowMixedPriority = yes
>>  AllowIncompleteJobs = no
>>  Accurate = yes
>>  AllowDuplicateJobs = no
>> }
>> 
>> Schedule {
>>  Name = "EveryNight"
>>  Run = at 22:00
>> }
>> 
>> Job {
>>  Name = "catalog-tier1"
>>  Description = "Backup Bacula MyCatalog to Tier 1 storage"
>>  Pool = "tier1-long"
>>  FullBackupPool = "tier1-long"
>>  Fileset = "Catalog"
>>  Schedule = "EveryNight-Full"
>>  JobDefs = "various-tier1"
>>  Enabled = yes
>>  Runscript {
>>    RunsWhen = "Before"
>>    RunsOnClient = no
>>    Command = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
>>  }
>>  Runscript {
>>    RunsWhen = "After"
>>    FailJobOnError = no
>>    RunsOnClient = no
>>    Command = "/opt/bacula/scripts/delete_catalog_backup"
>>  }
>>  Runscript {
>>    RunsWhen = "After"
>>    FailJobOnError = no
>>    RunsOnClient = no
>>    Console = "purge volume action=all allpools storage=unraid-tier1-storage"
>>  }
>>  Priority = 50
>>  AllowIncompleteJobs = no
>>  AllowDuplicateJobs = no
>> }
>> 
>> JobDefs {
>>  Name = "various-tier1"
>>  Type = "Backup"
>>  Level = "Full"
>>  Messages = "Standard"
>>  Pool = "tier1-long"
>>  FullBackupPool = "tier1-long"
>>  IncrementalBackupPool = "tier1-short"
>>  Client = "filer-fd"
>>  Fileset = "EmptyFileset"
>>  Schedule = "Third-Sat-Full_Even-Incr"
>>  WriteBootstrap = "/disaster-recovery/bootstrap/%c_%n.bsr"
>>  Priority = 20
>>  AllowMixedPriority = yes
>>  Accurate = yes
>> }
>> 
>> Pool {
>>  Name = "tier1-short"
>>  PoolType = "Backup"
>>  LabelFormat = "tier1-short-vol-"
>>  ActionOnPurge = "Truncate"
>>  MaximumVolumes = 500
>>  MaximumVolumeBytes = 20000000000
>>  VolumeRetention = 3456000
>>  NextPool = "tier2-short"
>>  Storage = "tier1-storage"
>>  ScratchPool = "Scratch"
>>  Catalog = "MyCatalog"
>>  FileRetention = 2592000
>>  JobRetention = 2592000
>> }
>> 
>> Pool {
>>  Name = "tier1-long"
>>  PoolType = "Backup"
>>  LabelFormat = "tier1-long-vol-"
>>  ActionOnPurge = "Truncate"
>>  MaximumVolumes = 500
>>  MaximumVolumeBytes = 20000000000
>>  VolumeRetention = 8640000
>>  NextPool = "tier2-long"
>>  Storage = "tier1-storage"
>>  ScratchPool = "Scratch"
>>  Catalog = "MyCatalog"
>>  FileRetention = 7776000
>>  JobRetention = 7776000
>> }
>> 
>> Client {
>>  Name = "filer-fd"
>>  Address = "127.0.0.1"
>>  FdPort = 9102
>>  Password = "redacted"
>>  Catalog = "MyCatalog"
>>  AutoPrune = yes
>>  MaximumConcurrentJobs = 5
>> }
>> 
>> Client defaults are AutoPrune yes, JobRetention 180d, FileRetention 60d
>> 
>> 
>> 
>> 
>> 
>> 
> 



_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
