You can get it here.

https://pastebin.com/XvQqfzNX

It will need adapting to your environment. I have one cloud+cache SD
(Backblaze B2) and two local NAS SDs (local CIFS mounts), defined as
variables at the top of the script.

I run this manually to recover space whenever I need to; it's not scheduled
at all. My usual process is to list what I have (-s), purge/truncate expired
volumes (-p), delete defunct volumes from the catalog (-Dp), and delete files
(directories in the case of cloud) that are not in the catalog from disc
(-Do).

This last step is probably not necessary since truncated volumes are small
anyway. It just looks tidier :).
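
For reference, a typical run at my end looks roughly like this. The script
name below is only a placeholder for whatever you save the pastebin file as;
the flags are the ones described above:

    ./bacula-volume-gc.sh -s    # list what's in the catalog and on disc/cloud
    ./bacula-volume-gc.sh -p    # purge/truncate expired volumes
    ./bacula-volume-gc.sh -Dp   # delete defunct volumes from the catalog
    ./bacula-volume-gc.sh -Do   # delete files not in the catalog from disc/cloud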

Chris-


On Wed, 29 Nov 2023, 16:55 mdear, <md...@mpwrware.ca> wrote:

> That sounds really useful; I'd be quite happy to take it and adapt it
> (perhaps on GitHub where others can see it?).
>
> Removal of artifacts generated from historical or defunct jobs was one of
> my feature requests to Acronis before I moved on to Bacula; they just
> couldn't manage my NAS space the way I needed them to. Garbage collection
> was certainly one of those bullet points.
>
>
>
> On Wed, Nov 29, 2023 at 7:03 AM, Chris Wilkinson <winstonia...@gmail.com>
> wrote:
>
> I have a script that deletes physical disc and cloud volumes that are not
> present in the catalog, perhaps because a job was manually deleted or
> migrated. Is that what you want to achieve?
>
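> The core of it is nothing magic: ask the catalog which volumes it knows
> about, list what is actually sitting in the storage directory, and remove
> the difference. A rough sketch of that idea (the path is a placeholder and
> the parsing of the bconsole output is cruder than what the script does):
>
>     #!/bin/bash
>     STORE=/mnt/nas1/bacula   # placeholder for your SD's Archive Device
>
>     # volume names the catalog knows about
>     echo "list volumes" | bconsole \
>       | awk -F'|' '/^\| +[0-9]/ {gsub(/ /,"",$3); print $3}' \
>       | sort > /tmp/catalog-vols
>
>     # volume files actually on disc
>     ls -1 "$STORE" | sort > /tmp/disc-vols
>
>     # candidates for deletion: on disc but not in the catalog
>     comm -13 /tmp/catalog-vols /tmp/disc-vols
>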
> It's part of a collection of Bacula scripts that I use. It's too big to
> post here but I'd be happy to share it. It's somewhat customised to my
> setup so you'd likely need to modify it for your own purposes.
>
> -Chris-
>
> On Wed, 29 Nov 2023, 11:46 Lionel PLASSE, <pla...@cofiem.fr> wrote:
>
>> Hello,
>>
>> I have a question regarding migration jobs and volume cleaning:
>>
>> For a migration job, the old jobs that were migrated from the source
>> volume to the next pool's volume are deleted from the catalog, but the
>> migrated volume file still physically contains the data (I use File
>> volumes on disk), so the amount of data is doubled. (The catalog itself
>> is cleaned correctly.)
>> The volume might be cleaned by a future scheduled job if it passes from
>> "used" to "append" once its retention periods expire.
>>
>> Is there a simple way to delete that data when the volume was used only
>> once or contains only the migrated job's data? After the migration there
>> is no longer any catalogued job for this volume, but the volume still
>> contains the data physically.
>> Is it possible to clean the migrated volume (as a backup job does before
>> the backup operation when a volume passes from "used" to "append"), but
>> at the end of the migration, so that there is not twice as much physical
>> data?
>>
>> Should I use a bconsole script in a run-after-job script?
>>
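>> For example, something along these lines, called from a RunScript block
>> on the migration job, is what I have in mind (the volume and storage
>> names are only placeholders, and selecting the right volume automatically
>> is the part I am unsure about):
>>
>>     #!/bin/bash
>>     # called from RunScript { RunsWhen = After; RunsOnClient = No }
>>     VOL=Vol-0001       # placeholder: the source volume that was migrated
>>     STORAGE=NASFile    # placeholder: the storage that volume lives on
>>
>>     # drop any remaining catalog records and mark the volume Purged
>>     echo "purge volume=$VOL" | bconsole
>>
>>     # reclaim the disk space (truncate only acts on purged volumes)
>>     echo "truncate volstatus=Purged storage=$STORAGE volume=$VOL" | bconsole
>>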
>>
>>
>> In a similar way, when a job ends in a fatal error for whatever reason,
>> the volume's job count (VolJobs) has already been incremented, so when
>> the job is rescheduled (or manually re-run) Max Volume Jobs can be
>> reached and future scheduled backups can be blocked for the sake of one
>> lost job slot. How can I decrease the job count with a bconsole or bash
>> script, so that the volume does not end up in the "used" state when a
>> fatal error occurs? I don't want to increase Max Volume Jobs as I do now,
>> because then I have to remember that I did so and decrease it again some
>> days later.
>>
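>> Roughly what I do today looks like this (the volume name and numbers are
>> placeholders):
>>
>>     # raise the limit one notch so the rescheduled job still fits
>>     echo "update volume=Vol-0001 MaxVolJobs=11" | bconsole
>>
>>     # ...and some days later, remember to put it back
>>     echo "update volume=Vol-0001 MaxVolJobs=10" | bconsole
>>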
>> I hope someone understands what I mean.
>>
>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
