Hello,

I have a question regarding migration jobs and volume cleaning:

For a migration job, the old jobs migrated from the source volume to the next pool's volume are deleted from the catalog, but the migrated volume's file still contains the data (I use File volumes on disk), so the amount of data on disk is doubled. (The catalog itself is cleaned correctly.)
The volume might be cleaned by some future scheduled job, once the retention periods let it pass from "Used" back to "Append".

Is there a simple way to delete that data when the volume has been used only once, or contains only the migrated jobs' data? After the migration there are no more catalogued jobs for this volume, but the volume still physically contains the data.
Is it possible to clean the migrated volume at the end of the migration (the way a backup job does before the backup operation, when a volume passes from "Used" to "Append"), so that the physical data is not stored twice?

Should I use a bconsole script in a RunScript that runs after the migration job?
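I was thinking of something along these lines (an untested sketch; it assumes Bacula 7.0 or later for the "truncate" command, and the volume, pool and storage names are placeholders for my resources):

    Job {
      Name = "migrate-file-to-tape"
      Type = Migrate
      ...
      RunScript {
        RunsWhen = After
        RunsOnClient = No
        Command = "/etc/bacula/scripts/truncate-migrated.sh"
      }
    }

with a script like:

    #!/bin/sh
    # Untested sketch: force-purge a source volume that no longer holds
    # any catalogued jobs after the migration, then truncate the purged
    # File volumes on disk to reclaim the space.
    # Volume, pool and storage names are placeholders.
    bconsole <<'EOF'
    purge volume=File-Vol-0001
    truncate volstatus=Purged pool=File-Pool storage=File-Storage
    EOF

What I am not sure about is how the script can know which volume(s) the migration actually read from.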



In a similar way, when a job ends in a fatal error for whatever reason, "VolJobs" has already been incremented, so when the job is rescheduled (or manually re-run) "Maximum Volume Jobs" can be reached, and future scheduled backups can then be blocked for the sake of one missing volume job. How can I decrease the job count when a fatal error occurs, with a bconsole or bash script, so that the volume does not end up in the "Used" state? I don't want to increase "Maximum Volume Jobs" as I do now, because then I have to remember what I changed and decrease it again days later.
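For example (again an untested sketch; I believe RunsOnFailure makes the script run only when the job fails, and the update goes straight at the Media table through bconsole's sqlquery mode, so treat it with care; the volume name is a placeholder):

    RunScript {
      RunsWhen = After
      RunsOnSuccess = No
      RunsOnFailure = Yes
      RunsOnClient = No
      Command = "/etc/bacula/scripts/decrement-voljobs.sh"
    }

    #!/bin/sh
    # Untested sketch: take back the VolJobs increment left behind by a
    # job that ended in a fatal error, so the volume is not pushed into
    # the "Used" state one job early.
    # The volume name is a placeholder; ideally it would be passed in
    # from the job's substitution variables.
    VOLUME="File-Vol-0001"
    bconsole <<EOF
    sqlquery
    UPDATE Media SET VolJobs = VolJobs - 1 WHERE VolumeName = '${VOLUME}' AND VolJobs > 0;

    EOF

(the blank line before EOF should terminate sqlquery mode).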

I hope someone understands what I am trying to say.
