On 7 February 2011 02:05, David Gilbert <david.gilb...@linaro.org> wrote:

> On 4 February 2011 21:53, Paul Larson <paul.lar...@linaro.org> wrote:
> >
> > Hi Mirsad, I'm looking at the recent edits to
> > https://wiki.linaro.org/Platform/Validation/Specs/ValidationScheduler and
> > wanted to start a thread to discuss.  Would love to hear thoughts from
> > others as well.
> >
> > We could probably use some more in the way of implementation details, but
> > this is starting to take shape pretty well, good work.  I have a few
> > comments below:
> >
> >> Admin users can also cancel any scheduled jobs.
> > Job submitters should be allowed to cancel their own jobs too, right?
> >
> > I think in general, the user stories need tweaking.  Many of them center
> > around automatic scheduling of jobs based on some event (adding a
> > machine, adding a test, etc.).  Based on the updated design, this kind
> > of logic would be in the piece we were referring to as the driver.  The
> > scheduler shouldn't be making those decisions on its own, but it should
> > provide an interface both for humans to schedule jobs (web, cli) and an
> > API for machines (driver) to do this.
>
> I'd like to add as user stories:
>   Dave wants to rerun a test on a particular machine to see if a
> failure is machine specific.
>

An initial idea we had was to run jobs based on machine type, e.g.
BeagleBoard, not on a particular machine, e.g. BeagleBoard_ID001. The
dispatcher would then choose which particular machine to run on, depending
on availability. I understand your point that running on a particular
machine is sometimes desirable, but maybe this feature should be enabled
only for admins trying to track down a deviating piece of hardware? Or maybe
this is a user story for the dashboard: a feature comparing and presenting
results from all machines of the same type, or even more broadly for
chosen/all machine types we support?
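
To make the distinction concrete, here is a minimal sketch in Python of how
a job submission could either name a machine type and let the dispatcher
pick any free board, or pin the job to one specific board. All field and
function names here are invented for illustration; nothing below is the
actual job format:

    # Hypothetical job definitions (field names are illustrative): by
    # default the scheduler picks any free board of the requested type;
    # an optional "target" pins the job to one specific board.
    job_by_type = {
        "job_name": "ltp-smoke",
        "machine_type": "beagleboard",   # any available BeagleBoard
    }

    job_by_board = {
        "job_name": "ltp-smoke",
        "machine_type": "beagleboard",
        "target": "beagleboard_id001",   # pin to one board (admin-only?)
    }

    def pick_machine(job, available):
        """Pick the board to dispatch to, or None if none is free.

        `available` maps free machine names to their type, e.g.
        {"beagleboard_id001": "beagleboard"}.
        """
        target = job.get("target")
        if target is not None:
            return target if target in available else None
        # No pinned target: take any free board of the requested type.
        for name, machine_type in available.items():
            if machine_type == job["machine_type"]:
                return name
        return None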

>   Dave wants to run the same test on a set of machines to compare the
> results.
>

This is almost the same as the first. Maybe the better solution, as I wrote
above, is to go to the dashboard and compare all the existing results there
instead? This assumes, of course, that there are already results reported
for the wanted hardware, which I think would be the case when looking at
weekly execution intervals, but probably not daily. What do you think, is
this reasonable enough or am I missing something important?
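
For the comparison itself, I imagine something like the following on the
dashboard side (again only a sketch; the record layout is made up): group
the already-reported results per machine so a deviating board stands out:

    from collections import defaultdict

    # Made-up dashboard records: (machine, test, outcome) tuples as they
    # might come back from a dashboard query.
    results = [
        ("beagleboard_id001", "ltp-smoke", "pass"),
        ("beagleboard_id002", "ltp-smoke", "pass"),
        ("beagleboard_id003", "ltp-smoke", "fail"),
    ]

    # Tally outcomes per machine; a consistently failing board shows up
    # immediately in the per-board counts.
    per_machine = defaultdict(lambda: {"pass": 0, "fail": 0})
    for machine, _test, outcome in results:
        per_machine[machine][outcome] += 1

    for machine, tally in sorted(per_machine.items()):
        print(machine, tally)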


> I'd also like for there to be history available for each machine stuff
> has run on; e.g. knowing
> that a machine has just been reinstalled or been updated might help
> you understand a failure.
>

Exactly, I agree. I think this will be solved by the dispatcher when it
reports test results to the dashboard. The results in the dashboard should
include that information, and even keep history, so I guess it is only a
matter of presenting the information in the desired format.
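
Something like this is what I have in mind for what the dispatcher reports
(a sketch only; the key names are invented): the machine's current state
travels with every test run, so the dashboard can build the per-board
history you are asking for:

    # Invented result-bundle layout: the dispatcher attaches the machine's
    # current state to each test run it reports, so the dashboard can show
    # "this board was reinstalled on ..." next to a failure.
    bundle = {
        "machine": "beagleboard_id001",
        "machine_state": {
            "image": "linaro-headless-2011.01",
            "last_reinstalled": "2011-02-01",
            "last_updated": "2011-02-05",
        },
        "test_run": {
            "test": "ltp-smoke",
            "result": "fail",
        },
    }

    print(bundle["machine"], bundle["machine_state"]["last_reinstalled"])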


> Dave
>

Thanks for your comments, Dave!
