Sorry for the slow reply,
> On Dec 16, 2020, at 10:01 AM, J. Echter <[email protected]> wrote:
>
> Hi,
>
> have you set
> https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_AlwaysIncrementalKeepNumber
> in your config?

Yes, all AI clients have the same setting.

Our disk volume isn't big enough, so we migrate jobs from the AI-Consolidated pool to an LTO5 drive. This works in most cases and lets us run AI jobs with only a single tape drive, since consolidations are always written back to the AI-Consolidated pool and then migrated off.

# Always Incremental settings (in our JobDefs)
Always Incremental = yes
Always Incremental Job Retention = 3 weeks
Always Incremental Keep Number = 7
Always Incremental Max Full Age = 28 days
Pool = AI-Incremental
Full Backup Pool = AI-Consolidated

# Pool settings:
Pool {
  Name = AI-Consolidated
  Pool Type = Backup
  Recycle = yes                 # Bareos can automatically recycle Volumes
  Auto Prune = yes              # Prune expired volumes
  Volume Retention = 1 months   # How long should jobs be kept?
  Maximum Volume Bytes = 50G    # Limit Volume size to something reasonable
  Label Format = "AI-Consolidated-"
  Volume Use Duration = 2 days
  Storage = File
  Next Pool = Longterm          # copy jobs write to this pool
  Action On Purge = Truncate
  Migration Time = 3 days
  Migration High Bytes = 500G
  Migration Low Bytes = 300G
}

Pool {
  Name = Offsite
  Pool Type = Backup
  Recycle = yes
  Recycle Pool = Scratch
  Auto Prune = yes
  Volume Retention = 6 months
  Volume Use Duration = 1 months
  Storage = Tand-LTO5-Lib
  Next Pool = pi-incremental
}

# migration job to long-term tape
Job {
  Name = "Migrate-To-Offsite-AI-Consolidated"
  Client = myth-fd
  Type = Migrate
  Purge Migration Job = yes
  Pool = AI-Consolidated
  Level = Full
  Next Pool = Offsite
  Schedule = WeeklyCycleAfterBackup
  Allow Duplicate Jobs = no
  Priority = 4                  # before catalog dump
  Messages = Standard
  Selection Type = PoolTime     # 3 days
  Spool Data = No
  Selection Pattern = "."
  RunAfterJob = "/usr/local/bin/prune.sh"
}

Job {
  Name = "Migrate-To-Offsite-AI-Consolidated-size"
  Client = myth-fd
  Type = Migrate
  Purge Migration Job = yes
  Pool = AI-Consolidated
  Level = Full
  Next Pool = Offsite
  Schedule = WeeklyCycleAfterBackup
  Allow Duplicate Jobs = no
  Priority = 4                  # before catalog dump
  Messages = Standard
  Selection Type = PoolOccupancy
  Spool Data = No
  Selection Pattern = "."
  RunAfterJob = "/usr/local/bin/prune.sh"
}

> On 16.12.20 at 03:40, Brock Palen wrote:
>> I have had this happen with this same client several times. This time, though, I noticed that all the jobs to be consolidated were incrementals with no files or bytes listed in the catalog. The client backs up a photo archive that has no new content on many days, so those incremental jobs have no entries.
>>
>> This causes consolidation to fail. The only way to get it working again is to delete the jobs it wants to consolidate; after that it works again for a while.
>>
>> 15-Dec 21:36 myth-dir JobId 36763: Start Virtual Backup JobId 36763, Job=mills-feldman-Photos.2020-12-15_21.36.48_28
>> 15-Dec 21:36 myth-dir JobId 36763: Consolidating JobIds 35877,35879
>> 15-Dec 21:36 myth-dir JobId 36763: Unable to get Job record. ERR=cats/sql_get.cc:273 No Job found for JobId 0
>>
>> 15-Dec 21:36 myth-dir JobId 36763: No files found to read. No bootstrap file written.
>> 15-Dec 21:36 myth-dir JobId 36763: Fatal error: Could not create bootstrap file
>> 15-Dec 21:36 myth-dir JobId 36763: Error: Bareos myth-dir 19.2.7 (16Apr20):
>>   Build OS:               Linux-3.10.0-1062.18.1.el7.x86_64 ubuntu Ubuntu 16.04.6 LTS
>>   JobId:                  36763
>>   Job:                    mills-feldman-Photos.2020-12-15_21.36.48_28
>>   Backup Level:           Virtual Full
>>   Client:                 "mills-feldman-fd" 18.2.5 (30Jan19) Microsoft Windows 8 (build 9200), 64-bit,Cross-compile,Win64
>>   FileSet:                "Windows Mills Feldman Photos" 2019-08-24 18:03:47
>>   Pool:                   "AI-Consolidated" (From Job Pool's NextPool resource)
>>   Catalog:                "myth_catalog" (From Client resource)
>>   Storage:                "File" (From Storage from Pool's NextPool resource)
>>   Scheduled time:         15-Dec-2020 21:36:48
>>   Start time:             24-Nov-2020 14:42:22
>>   End time:               24-Nov-2020 14:43:46
>>   Elapsed time:           1 min 24 secs
>>   Priority:               4
>>   SD Files Written:       0
>>   SD Bytes Written:       0 (0 B)
>>   Rate:                   0.0 KB/s
>>   Volume name(s):
>>   Volume Session Id:      0
>>   Volume Session Time:    0
>>   Last Volume Bytes:      0 (0 B)
>>   SD Errors:              0
>>   SD termination status:
>>   Accurate:               yes
>>   Bareos binary info:     bareos.org build: Get official binaries and vendor support on bareos.com
>>   Termination:            *** Backup Error ***
>>
>> Brock Palen
>> [email protected]
>> www.mlds-networks.com
>> Websites, Linux, Hosting, Joomla, Consulting
>
> --
> You received this message because you are subscribed to the Google Groups "bareos-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
> To view this discussion on the web visit https://groups.google.com/d/msgid/bareos-users/5be054b0-69b8-bcec-6e00-b4a2e0f45649%40echter-kuechen-elektro.de.

--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/bareos-users/3027D7B0-7EC8-4068-906E-1EE9F87A1E3A%40mlds-networks.com.
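For what it's worth, the PoolOccupancy migration in the config above is driven by the Migration High Bytes / Migration Low Bytes watermarks: once the pool passes the high mark, volumes are migrated until occupancy would fall below the low mark. Here is a minimal Python sketch of that watermark behaviour, assuming oldest-volume-first selection; the volume names and numbers are made up for illustration, and this is not Bareos' actual selection code:

```python
def select_volumes_to_migrate(volumes, high_bytes, low_bytes):
    """Watermark sketch: if total pool occupancy exceeds high_bytes,
    pick volumes oldest-first until occupancy would drop below low_bytes."""
    total = sum(v["bytes"] for v in volumes)
    if total <= high_bytes:
        return []  # below the high watermark: nothing to migrate
    picked = []
    for v in sorted(volumes, key=lambda v: v["written"]):  # oldest first
        if total <= low_bytes:
            break
        picked.append(v)
        total -= v["bytes"]
    return picked

G = 1024 ** 3
vols = [
    {"name": "AI-Consolidated-0001", "bytes": 200 * G, "written": 1},
    {"name": "AI-Consolidated-0002", "bytes": 200 * G, "written": 2},
    {"name": "AI-Consolidated-0003", "bytes": 150 * G, "written": 3},
]
# Pool holds 550G; high mark 500G, low mark 300G (as in the config above).
picked = select_volumes_to_migrate(vols, high_bytes=500 * G, low_bytes=300 * G)
print([v["name"] for v in picked])
# -> ['AI-Consolidated-0001', 'AI-Consolidated-0002']
```

The two oldest volumes are selected, bringing occupancy from 550G down to 150G, below the low mark.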

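To illustrate the consolidation failure in miniature: the consolidation gathers the incrementals that have aged past Always Incremental Job Retention, and a virtual full has nothing to build a bootstrap file from when every gathered job recorded zero files. A small Python sketch of that selection logic, using the JobIds from the log above; the field names, dates, and retention check are illustrative assumptions, not Bareos internals:

```python
from datetime import datetime, timedelta

def consolidation_candidates(jobs, now, retention_days=21):
    """Incrementals older than the retention window (3 weeks here,
    matching Always Incremental Job Retention above) get consolidated."""
    cutoff = now - timedelta(days=retention_days)
    return [j for j in jobs if j["level"] == "I" and j["start"] < cutoff]

def can_consolidate(candidates):
    """A virtual full needs at least one cataloged file to write a
    bootstrap file; all-zero-file candidates make it fail."""
    return sum(j["files"] for j in candidates) > 0

jobs = [
    {"jobid": 35877, "level": "I", "files": 0, "start": datetime(2020, 11, 20)},
    {"jobid": 35879, "level": "I", "files": 0, "start": datetime(2020, 11, 21)},
]
cands = consolidation_candidates(jobs, now=datetime(2020, 12, 15))
print([j["jobid"] for j in cands])  # -> [35877, 35879]
print(can_consolidate(cands))       # -> False: no files, no bootstrap
```

Both photo-archive incrementals fall inside the consolidation window but carry zero files, which is exactly the state that makes the virtual full abort with "No files found to read."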