I'm getting far too tired to follow this one... :)

I noticed that a Copy to Tape job was putting my incremental backups into a full pool. That is, the original job was level = Incremental, but the copy to tape was run at level = Full.
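(As an aside, a catalog query roughly like the one below should list where each copy ended up. This is only a sketch: it assumes the stock Job/Pool schema, and it uses Type = 'C' for the copy records, the same convention my selection query further down relies on.)

  -- sketch only: list copy job records and the pool they were written to
  -- (assumes the standard Bacula catalog schema)
  SELECT J.JobId, J.Job, J.Level, P.Name AS Pool
    FROM Job J, Pool P
   WHERE P.PoolId = J.PoolId
     AND J.Type = 'C'
   ORDER BY J.StartTime DESC;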
I wasn't sure why, but I think I've found it.

This is the original job output:

  JobId:                  52873
  Job:                    nyi_maildir.2011-02-17_08.00.00_27
  Backup Level:           Incremental, since=2011-02-17 04:00:09
  Client:                 "nyi-fd" 5.0.3 (04Aug10) i386-portbld-freebsd7.3,freebsd,7.3-STABLE
  FileSet:                "nyi mymaildir" 2008-05-31 19:08:23
  Pool:                   "IncrFile" (From Job IncPool override)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "MegaFile" (From Pool resource)

This is the copy to tape:

  Prev Backup JobId:      52873
  Prev Backup Job:        nyi_maildir.2011-02-17_08.00.00_27
  New Backup JobId:       52932
  Current JobId:          52875
  Current Job:            CopyToTape-Inc.2011-02-17_08.32.00_29
  Backup Level:           Full
  Client:                 kraken-fd
  FileSet:                "Full Set" 2007-03-18 13:48:18
  Read Pool:              "FullFile" (From Job FullPool override)
  Read Storage:           "MegaFile" (From Pool resource)
  Write Pool:             "Fulls" (From Job Pool's NextPool resource)
  Write Storage:          "DigitalTapeLibrary" (From Storage from Pool's NextPool resource)
  Catalog:                "MyCatalog" (From Client resource)
  Start time:             17-Feb-2011 09:56:43

This is the original job:

Job {
  Name = "nyi maildir"
  JobDefs = "DefaultJobRemote"
  Schedule = "Maildir"
  Pool = IncrFile
  Client = "nyi-fd"
  FileSet = "nyi mymaildir"
  Write Bootstrap = "/home/bacula/working/nyi-fd-maildir.bsr"
}

And its JobDefs:

JobDefs {
  Name = "DefaultJobRemote"
  Type = Backup
  Level = Incremental
  Client = polo-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = MegaFile
  Messages = Standard
  Pool = FullFile                       # required parameter for all Jobs
  Full Backup Pool = FullFile
  Differential Backup Pool = DiffFile
  Incremental Backup Pool = IncrFile
  Priority = 20
  Spool Data = no
  Spool Attributes = yes
}

This is the copy to tape job:

Job {
  Name = "CopyToTape-Inc"
  Type = Copy
  Level = Incremental
  Pool = IncrFile
  JobDefs = "DefaultJobCopyDiskToTape"
  Priority = 410
  Selection Type = SQL Query
  Selection Pattern = "
    SELECT DISTINCT J.JobId, J.StartTime
      FROM Job J, Pool P
     WHERE P.Name = 'IncrFile'
       AND P.PoolId = J.PoolId
       AND J.Type = 'B'
       AND J.JobStatus IN ('T','W')
       AND J.jobBytes > 0
       AND J.JobId NOT IN
           (SELECT PriorJobId
              FROM Job
             WHERE Type IN ('B','C')
               AND Job.JobStatus IN ('T','W')
               AND PriorJobId != 0)
     ORDER BY J.StartTime "
}

These are the JobDefs for that job:

JobDefs {
  Name = "DefaultJobCopyDiskToTape"
  Type = Backup
  Level = Incremental
  Client = kraken-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycleForCopyingToTape"
  Storage = DigitalTapeLibrary
  Messages = Standard
  Pool = FullFile                       # required parameter for all Jobs

  #
  # Since this JobDef is meant to be used with a Copy Job,
  # these Pools are the source for the Copy... not the destination.
  # The destination is determined by the Next Pool directive in
  # the respective Pools.
  #
  Full Backup Pool = FullFile
  Differential Backup Pool = DiffFile
  Incremental Backup Pool = IncrFile

  Priority = 400

  # don't spool data when backing up to tape from local disk
  Spool Data = no
  Spool Attributes = yes

  RunAfterJob = "/home/dan/bin/dlt-stats-kraken"

  # no sense spooling local data
  Spool Data = no
  Spool Attributes = yes

  Maximum Concurrent Jobs = 6
}

And here is the schedule. I think this is why the copy job went into the full pool:

Schedule {
  Name = "WeeklyCycleForCopyingToTape"
  Run = Level=Full at 8:32
}
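If that Level=Full on the Run line really is what bumps the copy job to Full (and from there into the FullPool override and FullFile's Next Pool), then a schedule with no level override at all ought to let the copy job keep its own Level = Incremental. Just a sketch of the idea, untested:

Schedule {
  Name = "WeeklyCycleForCopyingToTape"
  # No Level= override here, so the Level = Incremental set in the
  # CopyToTape-Inc Job should apply (an explicit "Level=Incremental"
  # on the Run line would be the other way to spell it).
  Run = at 8:32
}

Whether that actually changes the read/write pool selection is exactly what I'm unsure about, given the next example.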
But then, how do we explain this job, which is pretty much the same as the original job?

  JobId:                  52869
  Job:                    supernews_basic.2011-02-17_05.55.03_23
  Backup Level:           Incremental, since=2011-02-16 07:51:40
  Client:                 "supernews-fd" 5.0.3 (04Aug10) amd64-portbld-freebsd8.1,freebsd,8.1-STABLE
  FileSet:                "basic backup" 2010-09-09 02:42:37
  Pool:                   "IncrFile" (From Job IncPool override)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "MegaFile" (From Pool resource)

The copy job for this one has the right level, Incremental, and goes into the right pool, Incrementals:

  Prev Backup JobId:      52869
  Prev Backup Job:        supernews_basic.2011-02-17_05.55.03_23
  New Backup JobId:       52927
  Current JobId:          52926
  Current Job:            CopyToTape-Inc.2011-02-17_08.32.04_20
  Backup Level:           Incremental
  Client:                 kraken-fd
  FileSet:                "Full Set" 2007-03-18 13:48:18
  Read Pool:              "IncrFile" (From Job IncPool override)
  Read Storage:           "MegaFile" (From Pool resource)
  Write Pool:             "Incrementals" (From Job Pool's NextPool resource)
  Write Storage:          "DigitalTapeLibrary" (From Storage from Pool's NextPool resource)
  Catalog:                "MyCatalog" (From Client resource)
  Start time:             17-Feb-2011 09:54:15
  End time:               17-Feb-2011 09:54:46

-- 
Dan Langille - http://langille.org/