> On 06/28/11 08:40, John Drescher wrote:
> > On Tue, Jun 28, 2011 at 8:27 AM, Phil Stracchino <ala...@metrocast.net> wrote:
> >> Adding a data limit to a Bacula job really won't do a lot to work around
> >> the unreliability of the link, it'll just make the job terminate early
> >> if you COULD have completed it in one shot. I'm not sure this idea
> >> makes sense.
> >
> > I think the idea is to terminate early without error. So that the next
> > incremental ... can pickup where the full left off.
>
> Oh, I totally get that, yes. I just think it's the wrong way to solve
> the problem. It's a Band-Aid approach.
>
> The better way to solve the problem would be to come up with some kind
> of resume-job-from-checkpoint functionality, but that of course would be
> a fairly major project.
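To make the checkpoint idea a bit more concrete: at minimum the daemons would have to persist and reload some per-job resume state. A very rough sketch of the sort of thing involved (all of these names are made up for illustration, nothing like this exists in Bacula today):

/* Rough sketch only -- not actual Bacula code.  Illustrates the kind of
 * per-job state a resume-from-checkpoint feature would have to persist. */
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

struct JobCheckpoint {          /* hypothetical structure */
   uint32_t    job_id;          /* JobId of the interrupted job */
   std::string last_file;       /* last file fully written to the volume */
   uint64_t    byte_offset;     /* offset within that file, if partially sent */
   uint64_t    volume_addr;     /* position on the volume to append from */
};

/* Persist the checkpoint so a resumed job can pick up from here. */
static void save_checkpoint(const JobCheckpoint &cp, const std::string &path)
{
   std::ofstream out(path, std::ios::trunc);
   out << cp.job_id << '\n' << cp.last_file << '\n'
       << cp.byte_offset << '\n' << cp.volume_addr << '\n';
}

/* Reload it at resume time; returns false if no checkpoint exists. */
static bool load_checkpoint(JobCheckpoint &cp, const std::string &path)
{
   std::ifstream in(path);
   if (!in) {
      return false;
   }
   in >> cp.job_id;
   in.ignore();                          /* skip newline before getline */
   std::getline(in, cp.last_file);
   in >> cp.byte_offset >> cp.volume_addr;
   return static_cast<bool>(in);
}

int main()
{
   JobCheckpoint cp{42, "/home/user/big.iso", 1048576, 2097152};
   save_checkpoint(cp, "job42.checkpoint");

   JobCheckpoint resumed;
   if (load_checkpoint(resumed, "job42.checkpoint")) {
      std::cout << "resume JobId " << resumed.job_id << " at "
                << resumed.last_file << " offset "
                << resumed.byte_offset << '\n';
   }
   return 0;
}

Persisting that is the easy bit; the hard part would be getting the FD to re-walk the FileSet to exactly that point and the SD to append at the right place on the volume.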
Resume-from-checkpoint would be pretty cool, but it would also be really hard and a little restrictive (e.g. resuming a backup of a pipe plugin would not be possible).

Another approach would be to allow a reconnect after an interruption, which covers short-lived DSL outages and the like, but not users turning off laptops. That would be resuming the same job, though, which is different from starting a new job at the point where the interrupted job stopped.

James
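P.S. For what it's worth, the reconnect idea is essentially just a bounded retry/backoff loop around re-establishing the FD<->SD data connection before giving up on the job. Something along these lines (again only a sketch with made-up names, not real Bacula code):

/* Rough sketch only -- not actual Bacula code.  The sort of bounded
 * retry/backoff loop the FD<->SD link would need in order to ride out a
 * short outage without failing the whole job. */
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

/* try_connect stands in for re-opening the data connection and
 * re-authenticating the running job; it returns true on success. */
static bool reconnect_with_backoff(const std::function<bool()> &try_connect,
                                   int max_attempts,
                                   std::chrono::seconds delay)
{
   for (int attempt = 1; attempt <= max_attempts; ++attempt) {
      if (try_connect()) {
         return true;                    /* link is back, job carries on */
      }
      std::cerr << "reconnect attempt " << attempt
                << " failed, retrying in " << delay.count() << "s\n";
      std::this_thread::sleep_for(delay);
      delay *= 2;                        /* exponential backoff */
   }
   return false;                         /* outage outlasted the window */
}

int main()
{
   int calls = 0;
   /* Simulated link that comes back on the third attempt. */
   auto fake_connect = [&calls]() { return ++calls >= 3; };

   bool ok = reconnect_with_backoff(fake_connect, 5, std::chrono::seconds(1));
   std::cout << (ok ? "reconnected, resuming the same job\n"
                    : "giving up, job fails as before\n");
   return 0;
}

With a retry window of a few minutes that would ride out a typical DSL blip, while a laptop that has been switched off would still exhaust the attempts and the job would fail as it does now.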