Hi,

04.11.2007 19:15, Kern Sibbald wrote:
> On Sunday 04 November 2007 18:31, David Boyes wrote:
>>> 1.  Allow Duplicate Jobs  = Yes | No | Higher   (Yes)
>> Looks OK. One question: if a lower level job is running, and a higher
>> level job attempts to start, does this imply that the lower level job is
>> cancelled, and the higher level one is run instead? I think this is
>> desirable behavior if possible (or maybe it might require an additional
>> option of Supersede for this directive to enable this behavior).
> 
> I hadn't planned to cancel the lower priority job, but I had thought 
> about the possibility.  However, now that you mention it, I think we 
> need some keyword to do this.  Any suggestions?  -- CancelLower, 
> HigherWithCancel?

Your project sounds good to me, and David's suggestion is what I'd 
prefer. The "CancelLower" keyword sounds right to me - it describes 
what will happen.
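
To make this concrete, a Job resource using the proposed directive 
together with the suggested CancelLower value might look like the 
sketch below. The syntax and values are purely hypothetical, of 
course, since none of this is implemented yet:

  Job {
    Name = "nightly-full"
    Type = Backup
    Level = Full
    Client = client1-fd
    FileSet = "Full Set"
    Schedule = "WeeklyCycle"
    Storage = File
    Pool = Default
    Messages = Standard
    # hypothetical: if a higher level duplicate tries to start,
    # cancel the running lower level job and run the new one
    Allow Duplicate Jobs = CancelLower
  }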

> By the way, if a lower priority job is scheduled, but not running, I was 
> planning to cancel it.  A new keyword would certainly clarify what should 
> happen.
> 
>>> 2. Duplicate Job Interval = <time-interval>   (0)
>> The semantics look fine, but a usability suggestion:
>>
>> Since you're really setting a proximity guard interval, it might be
>> easier to understand for the end user if you used Duplicate Job
>> Proximity rather than Interval. That would give the implication of "no
>> closer than" that I think you want, and it might translate better.
> 
> Yes, I was thinking of calling it Duplicate Job Delay, but that was 
> confusing.  I think "Duplicate Job Proximity" will be the clearest.  
> Thanks.

I agree.
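
As a sketch (made-up name and value, same caveat as above - nothing 
of this exists yet):

  Job {
    Name = "hourly-incremental"
    # (Client, FileSet, Schedule, etc. as usual)
    # hypothetical: a second instance starting within one hour
    # of this job is treated as a duplicate and refused
    Allow Duplicate Jobs = No
    Duplicate Job Proximity = 1 hour
  }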

> Another point is that I could do a lot of this in the scheduler -- i.e. just 
> not run jobs if one was already running, but I think I will actually "start" 
> the job, print a message, then cancel it if it is of equal or lower priority 
> (i.e. not supposed to run).  That way, there will be some record.   We might 
> even want to invent a new message class (for example: AutoCancelled) so that 
> the job output from such jobs could be sent elsewhere (i.e. down the bit 
> bucket if one wants).

The scheduler would be the logical place for this, I think (assuming 
that manually started jobs are fed through the scheduler, i.e. they 
also affect the suggested behaviour, and are affected by it).
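
If such a message class existed, it would presumably be routed like 
any other class in a Messages resource. A sketch, with a made-up 
class name, address and path:

  Messages {
    Name = Standard
    mail = admin@example.com = all, !skipped
    # hypothetical "autocancelled" class: keep a record in a
    # separate log; leaving it unrouted would send the output
    # down the bit bucket
    append = "/var/bacula/log/autocancelled.log" = autocancelled
  }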

>>> PS: I think the default for Allow Duplicate Jobs should be Higher, 
>>> but that would change current behavior ...
>> I agree with you. It's dumb to do an incremental followed immediately by
>> a full backup if they're going to dump the same data in roughly the same
>> timeframe.
> 
> Yes, but I always hesitate to make non-compatible changes ...

I'd also prefer to keep the defaults in a way that reproduces the 
current behaviour.
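
That is, the shipped default would stay equivalent to today's 
behaviour, and sites that want the new handling would opt in per Job, 
for example:

  # default, matching current behaviour: duplicates may run
  Allow Duplicate Jobs = Yes

  # explicit opt-in to the stricter handling
  Allow Duplicate Jobs = Higher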

Arno

> Thanks for the ideas,
> 
> Kern

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de
