Are you using "accurate" backups? It might be useful to see what these queries print:

select jobid,count(*) from file where jobid in
(181371,181679,182032,182350,182668,182718,183093,183408,183724,184039,184354,184669,184984,185615)
and fileindex = 0 group by jobid;

select jobid,count(*) from file where jobid in
(181371,181679,182032,182350,182668,182718,183093,183408,183724,184039,184354,184669,184984,185615)
and fileindex > 0 group by jobid;
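(The reason for splitting on fileindex: if I remember correctly, when a job runs with Accurate = yes, Bacula can insert catalog-only marker records, e.g. for files deleted since the previous backup, with FileIndex = 0, so a job can have rows in the File table even though it backed up nothing. As a further cross-check, and assuming a stock Bacula catalog schema with JobFiles, Level and JobStatus columns in the Job table, you could also compare what the director recorded for those jobs:

-- assumes the standard Bacula catalog Job table (JobFiles, Level, JobStatus)
select jobid, level, jobfiles, jobstatus
  from job
 where jobid in
(181371,181679,182032,182350,182668,182718,183093,183408,183724,184039,184354,184669,184984,185615);

If JobFiles there disagrees with the File-table counts, that would also be interesting to know.)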
__Martin

>>>>> On Fri, 4 Oct 2019 08:58:20 -0600, Lloyd Brown said:
>
> Martin,
>
> (Sorry for the delay; I was traveling.)
>
> I suspect you're right. Here's the output of that query for reference:
>
> > select jobid,count(*) from File where jobid in
> > (181371,181679,182032,182350,182668,182718,183093,183408,183724,184039,184354,184669,184984,185615)
> > group by jobid;
> > +--------+----------+
> > | jobid  | count(*) |
> > +--------+----------+
> > | 181371 |    11703 |
> > | 184354 |        4 |
> > | 185615 |    11703 |
> > +--------+----------+
> > 3 rows in set (0.01 sec)
>
> There is something weird with 185615 that I can't explain right now. But other
> than that, 181371 is the initial full, and 184354 indeed has 4 files in it.
> The rest do indeed have zero files in them.
>
> I'm still not sure what's going on. But I've manually run VFs (not progressive
> VFs) for all the jobs in question, and modified the mounting script to
> guarantee that at least one file will be changed in every incremental. It'll
> take a while to build up the history to start the PVFs again, but hopefully
> this workaround will fix it.
>
> Lloyd
>
> On 10/1/19 8:08 AM, Martin Simmons wrote:
> > I think the fatal error is that it can't find records in the File table for
> > any of the JobIds that it is consolidating.
> >
> > Have the file records for those jobs been pruned? You could check with an
> > SQL query like this:
> >
> > select jobid,count(*) from file where jobid in
> > (181371,181679,182032,182350,182668,182718,183093,183408,183724,184039,184354,184669,184984,185615)
> > group by jobid;
> >
> > __Martin
> >
> >>>>>> On Mon, 30 Sep 2019 13:09:54 -0600, Lloyd Brown said:
> >> Hi, all.
> >>
> >> I may be misunderstanding (or misconfiguring) something again. I'm hoping
> >> someone can explain it to me.
> >>
> >> I'm trying to set up a progressive virtual full scheme. But in several
> >> instances, the incrementals that would be consolidated by the VF are empty,
> >> meaning that the VF is failing with errors like this:
> >>
> >>> 30-Sep 08:25 backup-dir JobId 221077: Start Virtual Backup JobId 221077, Job=zhome_sw.2019-09-30_08.25.06_37
> >>> 30-Sep 08:25 backup-dir JobId 221077: Consolidating JobIds=181371,181679,182032,182350,182668,182718,183093,183408,183724,184039,184354,184669,184984,185615
> >>> 30-Sep 08:25 backup-dir JobId 221077: No files found to read. No bootstrap file written.
> >>> 30-Sep 08:25 backup-dir JobId 221077: Found 0 files to consolidate into Virtual Full.
> >>> 30-Sep 08:25 backup-dir JobId 221077: Fatal error: Could not get or create the FileSet record.
> >>
> >> Now, this is correct. While there are files in the initial full, there are
> >> indeed zero files in those subsequent incrementals. But why is that a fatal
> >> condition?
> >>
> >> The result of this being a fatal error is that my existing full remains at
> >> its original date/time. Therefore, it will eventually reach the retention
> >> period age and be pruned. Then I have to start over with a complete,
> >> non-Virtual Full again, and lose a few months of backup history that would
> >> otherwise be usable.
> >>
> >> If there's a way of changing this behavior via config file, that would be
> >> great (current config example attached).
> >>
> >> I do also have a workaround in mind. I'm just trying to understand whether
> >> I've done something wrong, and if not, what the reasoning was for this
> >> being a fatal error. It was certainly unexpected.
> >>
> >> This is running on Bacula 9.4.3.
> >>
> >> Thanks,
> >>
> >> Lloyd
> >>
> >> --
> >> Lloyd Brown
> >> HPC Systems Administrator
> >> Office of Research Computing
> >> Brigham Young University
> >> http://marylou.byu.edu

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users