OK, I actually meant recreating the volume, since I am not using autolabel. I will activate it to be sure an Append volume is always ready for further jobs.
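Roughly, this is what I have in mind; an untested sketch, where the pool name "Migrate-Pool" and the storage name "FileStorage" are placeholders for my real configuration:

#!/bin/sh
# Untested sketch: prune expired volumes of one pool in the catalog,
# then truncate the purged ones so the disk space is really freed.
# "Migrate-Pool" and "FileStorage" are placeholders for my config.
bconsole <<EOF
prune expired volume pool=Migrate-Pool yes
purge volume action=truncate pool=Migrate-Pool storage=FileStorage
quit
EOF

If "prune expired volume" turns out not to be available in my Bacula version, I suppose I can loop over "prune volume=... yes" for each volume of the pool instead.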
But that is not the most important point. The main goal is not to waste disk space on backups, so deleting expired volumes: yes, that works for me. (y) I just have to specify which pool the script should look at, because the INCR/DIFF and FULL pools are properly rotated by the schedule, and even when expired their volumes should not be deleted, since they may still be needed for restores. Only the migrated volumes use space for nothing, because the data now lives, de facto, in the next pool's volume. In fact I was not sure bconsole had a direct way to do this, since "PurgeMigrationJob" purges the catalog but not the data on disk.

Well, I see how you work with your script, so I think I am able to write mine 🤔. But if you find time to send me yours it would be appreciated and would save me time :)

From: Chris Wilkinson <winstonia...@gmail.com>
Sent: Wednesday 29 November 2023 15:40
To: Lionel PLASSE <pla...@cofiem.fr>
Cc: bacula-users <bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Migration Job - Volume data deletion

The script will purge expired volumes, if that's what you mean by clean. It doesn't recreate any volumes. It simply deletes those volumes that are no longer needed because the catalog entry has gone, for whatever reason.

-Chris-

On Wed, 29 Nov 2023, 13:13 Lionel PLASSE, <pla...@cofiem.fr> wrote:

Yes, it's this kind of thing I'm looking for. For the first part of the optimization I want to achieve, I would have preferred something that empties the volume (a reset to zero) instead of removing it, but removal could be a solution too.

Are you using auto-labeling to recreate the deleted volume on demand, or does your script do that for you? Is it a script of bconsole commands, or a bash script? I had been asking myself whether I should query the database with SQL and then delete volumes accordingly, or whether bconsole can do it with built-in commands alone. Either way I'd be happy to adapt it to my configuration. My mail address is open for sharing if you want (zip, gz or even WeTransfer).

Great thanks,
Lionel

From: Chris Wilkinson <winstonia...@gmail.com>
Sent: Wednesday 29 November 2023 13:03
To: Lionel PLASSE <pla...@cofiem.fr>
Cc: bacula-users <bacula-users@lists.sourceforge.net>
Subject: Re: [Bacula-users] Migration Job - Volume data deletion

I have a script that deletes physical disc and cloud volumes that are not present in the catalog, perhaps because a job was manually deleted or migrated. Is that what you want to achieve?

It's part of a collection of Bacula scripts that I use. It's too big to post here but I'd be happy to share it. It's somewhat customised to my setup so you'd likely need to modify it for your own purposes.
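In outline the approach is simply to compare what's on disc against the Media table; something like this sketch (not the actual script, which has more safety checks; the volume directory and catalog credentials are examples):

#!/bin/bash
# Simplified outline: report volume files on disk that have no
# Media record left in the catalog. Paths/credentials are examples.
voldir=/srv/bacula/volumes
known=$(psql -At -U bacula -d bacula -c "SELECT volumename FROM media;")
cd "$voldir" || exit 1
for f in *; do
    if ! printf '%s\n' "$known" | grep -qx -- "$f"; then
        echo "orphan volume (not in catalog): $f"
        # rm -f -- "$f"   # enable once you trust the output
    fi
done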
-Chris-

On Wed, 29 Nov 2023, 11:46 Lionel PLASSE, <pla...@cofiem.fr> wrote:

Hello,

A question regarding migration jobs and volume cleaning.

With a migration job, the old jobs are moved from the migrated volume to the next pool's volume and deleted from the catalog, but the migrated volume file still physically contains the data (I use File volumes on disk), so the amount of data on disk is doubled. (The catalog itself is cleaned correctly.) The volume might be recycled by a future scheduled job if it passes from "Used" to "Append" according to the retention periods. But is there a simple way to delete that data as soon as the volume was used only once, or contains only the migrated jobs' data? After the migration there is no catalogued job left on the volume, yet the volume still holds the data physically.

Is it possible to clean the migrated volume at the end of the migration (the way a backup job does before writing, when a volume passes from "Used" to "Append"), so that the physical data is not stored twice? Should I use a bconsole script in a run-after script?
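Something like this is what I imagine in the Migration job definition; an untested sketch, where the job name and the script path are placeholders:

# Untested sketch: run a cleanup script on the Director once the
# migration has finished. Name and path are placeholders.
Job {
  Name = "Migrate-to-NextPool"
  Type = Migrate
  # ... Pool, Selection Type, etc. ...
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Command = "/etc/bacula/scripts/truncate-migrated.sh"
  }
}

The script itself could then do the prune/truncate of the source pool, as above.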
In a similar way, when a job ends in a fatal error for whatever reason, "Vol. Jobs" has already been incremented, so when the job is rescheduled (or manually re-run) Max Volume Jobs can be reached, which can then block the future backup schedule for the sake of one missing volume job. How can I decrease that job count, with a bconsole or bash script, so the volume does not end up in "Used" state when a fatal error occurs? I don't want to increase Max Volume Jobs like I do now, because then I have to remember to decrease it again some days later. I hope someone understands what I mean.
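For that part I imagine something like this; again an untested sketch that goes straight to the catalog (PostgreSQL here), with the volume name passed as an argument:

#!/bin/sh
# Untested sketch: decrement the job count of the volume the failed
# job wrote to, then force its status back to Append.
vol="$1"
psql -U bacula -d bacula -c \
  "UPDATE media SET voljobs = voljobs - 1 WHERE volumename = '$vol';"
echo "update volume=$vol volstatus=Append" | bconsole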
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users