> 
> On 06/28/11 09:37, jon pounder wrote:
> > On 06/28/2011 09:32 AM, Phil Stracchino wrote:
> >> On 06/28/11 08:40, John Drescher wrote:
> >>> On Tue, Jun 28, 2011 at 8:27 AM, Phil Stracchino <ala...@metrocast.net> wrote:
> >>>> Adding a data limit to a Bacula job really won't do a lot to work
> >>>> around the unreliability of the link, it'll just make the job
> >>>> terminate early if you COULD have completed it in one shot.  I'm
> >>>> not sure this idea makes sense.
> >>>>
> >>> I think the idea is to terminate early without error, so that the
> >>> next incremental ... can pick up where the full left off.
> >> Oh, I totally get that, yes.  I just think it's the wrong way to
> >> solve the problem.  It's a Band-Aid approach.
> >>
> >> The better way to solve the problem would be to come up with some
> >> kind of resume-job-from-checkpoint functionality, but that of course
> >> would be a fairly major project.
> >>
> >>
> >
> > Doesn't that kind of mess up backup integrity if both incrementals
> > don't cover the same time period?  By definition, wouldn't the second
> > incremental have to start over again?
> 
> Well, up to a point, yes.  It would require careful thought as to how
> to implement it.  I envision the logic as something like this:
> 
> INCLUDE FILE IF
>     [file in inclusion list and not in exclusion list]
> AND
>     [file modified since last job]
>     OR
>         [last job was incomplete]
>         AND
>         [file modified since previous job]
>         AND
>         [file not successfully backed up in last job]
> 
> But then you get into "What if the previous job was ALSO incomplete?"
> You could potentially wind up with a chain of recursive checks.
> 
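Just to make that walk-back concrete, here's a toy version in Python.
Everything in it is invented for illustration (the Job record, the
path-set filesets, the helper parameters); the real catalog and fileset
machinery look nothing like this:

    from dataclasses import dataclass, field

    @dataclass
    class Job:
        start_time: float             # when the job started (epoch secs)
        complete: bool                # did it run to completion?
        stored: set = field(default_factory=set)  # paths acknowledged

    def should_include(path, mtime, include, exclude, jobs):
        # 'jobs' is this client's history, newest job first.  Filesets
        # are plain path sets here; real filesets are pattern-based.
        if path not in include or path in exclude:
            return False
        for job in jobs:
            if mtime > job.start_time:
                return True           # modified since this job started
            if job.complete or path in job.stored:
                return False          # this job already covered it
            # Incomplete job that missed the file: fall through and
            # test the job before it.  That is the "what if that one
            # was ALSO incomplete?" chain, bounded by history length.
        return True                   # no job has ever covered the file

Written that way the recursion is just a loop over the job history, so
it terminates as soon as it reaches a completed job or one that did
store the file.
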
> Another approach - although possibly one that would require extensive
> changes - might be to have the file daemon start each job by scanning
> the filesystem and deciding what it's going to back up, then save that
> list of files to be backed up in that job.  For each file successfully
> sent and acknowledged (does the FD get a "file successfully stored"
> acknowledgement back?), it would then "check that off" its list.  If
> the job ends up incomplete, then the FD could begin the next job by
> scanning as normal and *appending* any files due for backup *that are
> not already on the remaining list* to the end of the list, then start
> sending files from the beginning of the list.  Perhaps to speed things
> up, there could be an option to define an "assumed consistency"
> window, so that if a new job was begun within [defined consistency
> window] of a previous interrupted job, the FD would not take time to
> rescan the filesystem at all, but just resume sending files from the
> already-saved list from the point of interruption.
> 
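
If it helps, here's roughly how I'd picture that FD-side bookkeeping,
again as a Python sketch rather than anything resembling real FD code.
The state-file path, the JSON format, and the four-hour window are all
made up:

    import json, os, time

    STATE_FILE = "/var/lib/bacula/fd-pending.json"  # invented path
    CONSISTENCY_WINDOW = 4 * 3600                   # assumed: 4 hours

    def load_checkpoint():
        if not os.path.exists(STATE_FILE):
            return [], 0.0
        with open(STATE_FILE) as f:
            saved = json.load(f)
        return saved["pending"], saved["saved_at"]

    def save_checkpoint(pending):
        with open(STATE_FILE, "w") as f:
            json.dump({"pending": pending, "saved_at": time.time()}, f)

    def build_list(scan_filesystem):
        # Resume any leftover list; rescan and append new candidates
        # only if we're outside the assumed-consistency window.
        pending, saved_at = load_checkpoint()
        if time.time() - saved_at > CONSISTENCY_WINDOW:
            seen = set(pending)
            pending += [p for p in scan_filesystem() if p not in seen]
        return pending

    def run_job(pending, send_file):
        # send_file() returning True stands in for the "file
        # successfully stored" acknowledgement; only then is the file
        # checked off.
        save_checkpoint(pending)
        while pending:
            if not send_file(pending[0]):
                return False          # interrupted; checkpoint survives
            pending.pop(0)
            save_checkpoint(pending)  # check it off durably
        os.remove(STATE_FILE)         # clean completion
        return True

Checkpointing after every single file is obviously too slow for real
use; batching the writes, or journalling acknowledgements instead of
rewriting the whole list, would be the obvious next step.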

I think "Accurate" already solves all of this - it is quite happy to
notice new files that weren't in the previous job even if they have
last-modified dates that predate the previous job.  It's way past my
bedtime, though, so it's possible I've missed an edge case somewhere.
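
For anyone who wants to test that, as far as I remember it's just a
matter of turning on the directive in the Job resource in
bacula-dir.conf (from memory, so check the manual; the job name here is
made up and the rest of the resource is elided):

    Job {
      Name = "remote-site-backup"
      ...
      Accurate = yes
    }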

James
