On Monday 25 February 2008 10.45:16 Bastian Friedrich wrote:
> Hi,
>
> I'll keep my answer on this subject short, as you are obviously currently
> working on the topic in a larger scope; I'll add some comments on your new
> thread on bacula-devel.
>
> On Friday 22 February 2008, Kern Sibbald wrote:
> > On Friday 22 February 2008 15.44:56 Bastian Friedrich wrote:
> > > On Wednesday 20 February 2008, Bastian Friedrich wrote:
> > > > When configuring multiple "Run"s in a schedule that occur at the
> > > > same time, they are run sequentially:
> > > >
> > > >   Run = Full Pool = PoolSpecial w01 mon at 8:00
> > > >   Run = Full Pool = Pool mon at 8:00
> > > >
> > > > results in two executions of the job that refers to the schedule.
> > >
> > > [...]
> > >
> > > > I'd have a need for exclusiveness in this case: run the job only if
> > > > it is not referenced before. Would you also regard this as a
> > > > sensible modification?
>
> [...]
>
> > > What do you think about this idea/code?
> >
> > Well, I like the idea (the feature), but I'm a little concerned about
> > the solution you propose, from several standpoints:
> >
> > 1. I found it hard to understand what you wanted to do until I read the
> > code, so I am concerned that this concept could be hard to document
> > correctly.
>
> Full ACK. During implementation, I came up with at least three different
> names for the topic. Finding a good term to express the restriction of
> duplicate job queuing is not simple... Your argument is exactly the
> reason why I did not supply a documentation patch in the first place.
>
> If I rewrote these few lines, I'd rather name the flag
> "PreventDuplicateQueuing".
>
> > 2. The current list of Run directives is essentially ANDed. That is,
> > Bacula will walk down the list and schedule all that apply.
>
> Minor problem here: in fact, Bacula "walks up" the list, in a way, at
> least compared to the sequence of statements in the config file; in
> other words, the list is read "bottom up".
Ah, that is quite typical of my implementations (add to the head of a singly linked list) when it "doesn't matter".

> Should we consider reversing the logic in store_run(), i.e. append new
> Run statements to the end of the list instead of prepending them?

Yes, that is rather trivial to do -- no problem.

> I'd prefer a semantics of "else", rather than "overwriting" existing
> Runs.
>
> > I think what you really are trying to do is to set up two Run
> > directives to be ORed, and perhaps that could be handled by a slightly
> > different syntax such as:
> >
> >   Or {
> >     Run = Level=Full Pool = PoolSpecial w01 mon at 8:00
> >     Run = Level=Full Pool = Pool mon at 8:00
> >   }
> >
> > or perhaps some other keyword such as OneOf ... wouldn't that be much
> > clearer? There would be a bit more work to implement this (not really
> > hard), but it seems to me it would be much clearer to the user.
>
> The "Or" (or OneOf, or whatever) resource would then live inside a
> Schedule statement? Interesting idea; it would then be possible to do
> something like
>
>   Schedule {
>     Name = foo
>     Or {
>       Run = ...
>       Run = ...
>     }
>     Or {
>       Run = ...
>       Run = ...
>     }
>   }
>
> On the other hand, I frankly do not have any requirement for this added
> complexity, and one new keyword/flag would be sufficient for me.

Please give us an exact example of the Schedule directives with jobs you want ANDed and ones that you want ORed. Then I will have a better idea of what you are proposing. I am all for something simpler, as long as the syntax and semantics are clear.

> > 3. It is interesting that this comes just at this moment, because just
> > tonight I was starting to work on the new "Duplicate Jobs" directive
> > group for Jobs. That is a fairly comprehensive set of directives that
> > tell Bacula how to deal with duplicate jobs.
>
> So your "Duplicate Jobs" directive is meant to deal with duplicate
> executions of jobs, rather than with duplicate queuings, i.e.
> it would rather not prevent the jobs from being queued in the first
> place?

In Bacula language, we usually speak of "scheduling" rather than queuing. My Duplicate Jobs is meant to deal with avoiding duplicate scheduling, but also with killing running jobs if another, more important job is scheduled.

> In that case, it might in fact be sensible to have both solutions.

Yes, it could solve both problems, but the Duplicate Jobs code is a bit kludgy for dealing with the kind of problem you have -- it seems to me that it is much better handled at the scheduling level. Otherwise, one job may start and actually begin execution before the second job starts, even if both are scheduled at the same time, and then you are in a situation of possibly having to kill a running job. It would be much better if Bacula knew that only one of the two (or more) jobs should actually be scheduled.

So, I think we should continue discussing possible modifications to the Schedule resource ...

Regards,

Kern

> Best
> Bastian

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users