It appears I'm out of luck.
Apparently, curlftpfs no longer supports the open+read+write operation
that Bacula requires, so it looks like I'll have to move to a different
storage provider.
https://sourceforge.net/p/curlftpfs/discussion/542750/thread/53e47b71/
Thanks for all the help so far, muc
You can get it here.
https://pastebin.com/XvQqfzNX
It will need adapting to your environment. I have a cloud+cache SD
(Backblaze B2) and two local NAS SDs (local CIFS mounts) that are defined
as variables at the top of the script.
I run this manually to recover space whenever I need. It's not schedu
That sounds really useful, I’d be quite happy to take and adapt it (perhaps on
GitHub where others can see?).
Removal of artifacts generated from historical or defunct jobs was one of my
feature requests to Acronis before I moved on to Bacula; they just
couldn’t manage my NAS space the
On 11/29/23 05:47, MylesDearBusiness via Bacula-users wrote:
Hello, Bacula experts.
Due to message length limitations of this mailing list, I have been
unable to post the majority of necessary details, which is why I was
using my GitHub gist system to store them; apologies for the confusion or
i
I notice in your config you have:
/mnt/my_backup/backup/bacula/archive
But the logs keep showing:
/mnt/khapbackup/backup/bacula/archive
Have you made a change to the sd conf and not restarted the SD?
Or how about creating a symlink from /mnt/khapbackup to /mnt/my_backup ?
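Michel's symlink suggestion, written out as commands (paths taken from the thread; run as root, and treat this as a sketch rather than something tested against your setup):

```shell
# Make the path the SD is still using resolve to the real mount point:
ln -s /mnt/my_backup /mnt/khapbackup
# Verify the link target:
readlink /mnt/khapbackup   # prints /mnt/my_backup
```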
- Michel
From: MylesDea
OK, I meant recreating the volume, because I am not using auto-labeling; I will
activate it to be sure an append volume is ready for further jobs.
But that is not the most important thing. The main goal is not to overuse disk
space for backups, so deleting expired volumes works for me. (y)
Just have
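For reference, auto-labeling is enabled in the Director's Pool resource. A minimal sketch, assuming a disk-based pool named File (the names, sizes, and retention values are illustrative, not from this thread):

```conf
Pool {
  Name = File
  Pool Type = Backup
  Label Format = "Vol-"       # auto-label new volumes, e.g. Vol-0001
  Maximum Volume Bytes = 50G  # roll to a new volume at 50 GB
  AutoPrune = yes             # prune expired jobs from the catalog automatically
  Volume Retention = 30 days
  Recycle = yes               # purged volumes may be reused
}
```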
The script will purge expired volumes if that's what you mean by clean.
It doesn't recreate any volumes. It simply deletes those volumes that are
no longer needed because the catalog entry has gone, for whatever reason.
-Chris-
On Wed, 29 Nov 2023, 13:13 Lionel PLASSE, wrote:
> Yes, it's this
Yes, it's this kind of thing I'm looking for, as the first part of the
optimization I want to achieve. I would have preferred something that cleans
the volume (a RAZ, i.e. reset to zero) instead of removing it, but that could be a solution.
Are you using auto-labeling to recreate the deleted volume on deman
I have a script that deletes physical disc and cloud volumes that are not
present in the catalog, perhaps because a job was manually deleted or
migrated. Is that what you want to achieve?
It's part of a collection of Bacula scripts that I use. It's too big to
post here but I'd be happy to share it
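In the meantime, the idea can be sketched in a few lines of shell: list the volume names the catalog knows about, list the volume files on disk, and treat anything on disk that the catalog has forgotten as an orphan. This is a hedged sketch, not Chris's actual script — the archive path is taken from the thread, and the awk field position for VolumeName in `list volumes` output may differ between Bacula versions:

```shell
#!/bin/sh
# Sketch: find File volumes on disk that are no longer in the Bacula catalog.
ARCHIVE=/mnt/my_backup/backup/bacula/archive

# Volume names known to the catalog ($3 is typically VolumeName in the
# pipe-separated "list volumes" table; adjust for your Bacula version):
echo "list volumes" | bconsole |
    awk -F'|' '/\|/ {gsub(/ /,"",$3); print $3}' |
    grep -v '^VolumeName$' | sort > /tmp/catalog_vols

# Volume files actually present on disk:
ls "$ARCHIVE" | sort > /tmp/disk_vols

# Files on disk but absent from the catalog are orphans; review before deleting:
comm -13 /tmp/catalog_vols /tmp/disk_vols | while read -r vol; do
    echo "orphan: $ARCHIVE/$vol"
    # rm -f "$ARCHIVE/$vol"   # uncomment only once you trust the list
done
```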
Hello,
A question regarding migration jobs and volume cleaning:
For a migration job, the old jobs migrated from a volume to the next pool's
volume are deleted from the catalog, but the migrated volume file still
contains the data (I use File volumes on disk), so the amount of data is
doubled. (The catalog is we
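One way to reclaim that disk space, sketched as a bconsole session (the volume and storage names here are illustrative, and the truncate command requires a reasonably recent Bacula release):

```
* purge volume=Vol-0042 yes
* truncate volstatus=Purged storage=FileStorage yes
```

Purging removes the remaining catalog records for the volume; truncate then shrinks the purged File volume on disk so the space is actually freed rather than just marked reusable.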
Hello, Bacula experts.
Due to message length limitations of this mailing list, I have been unable to
post the majority of necessary details, which is why I was using my GitHub gist
system to store them; apologies for the confusion or inconvenience this caused.
I just thought it would be more confusi