On 10 February 2011 04:30, David Gilbert <david.gilb...@linaro.org> wrote:

> On 10 February 2011 12:19, Mirsad Vojnikovic
> <mirsad.vojniko...@linaro.org> wrote:
> <snip>
> > That I wrote:
>
> >> I'd like to add as user stories:
> >>   Dave wants to rerun a test on a particular machine to see if a
> >> failure is machine specific.
> >
> > An initial idea we had was to run jobs based on machine type, i.e.
> > BeagleBoard, not on a particular machine, i.e. BeagleBoard_ID001. The
> > dispatcher would choose which particular machine to run on, depending
> > on availability. I understand your point that running on a particular
> > machine is desirable, but maybe this feature should be enabled only for
> > admins trying to track down deviating hardware? Or maybe this is a
> > user story for the dashboard: a feature comparing and presenting
> > results from all machines of the same type, or even more broadly for
> > chosen/all machine types we support?
>
> I'm talking here about the case where the user has run a set of tests,
> one is showing up as bad, and they are trying to work out why. Let's
> say they run the test again and it works on a different machine; they
> might reasonably want to see if the original machine still fails.
> The second subcase is that we've identified that a particular machine
> always fails a particular test but no one can explain why; you've been
> given the job of debugging the test and figuring out why it always
> fails on that machine. This might not be a hardware/admin issue - it
> might be something really subtle.
>

I understand what you're aiming at. The question is then whether to allow
users to submit jobs to particular machine(s). I have no particular problem
with allowing that, and we can include it in our solution. We could support
both choices: run on particular machine(s), or let the system choose one or
more machines from the given machine type(s) - see the sketch below. Anyone
else, comments on this?
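
To make the two submission modes concrete, here is a minimal sketch of how a
job request and the dispatcher's board selection could look. The field names
("device_type", "target") and the pick_targets() helper are illustrative
assumptions on my part, not the actual scheduler or dispatcher API:

# Illustrative sketch only: the job fields and helper below are assumptions,
# not the real scheduler/dispatcher interface.

AVAILABLE_BOARDS = {
    "beagleboard": ["BeagleBoard_ID001", "BeagleBoard_ID002", "BeagleBoard_ID003"],
}

def pick_targets(job):
    """Resolve a job request to concrete boards.

    A job may pin explicit boards ("target") or just name a machine type
    ("device_type"), in which case the dispatcher chooses from the pool.
    """
    if "target" in job:
        return list(job["target"])          # user asked for specific machine(s)
    pool = AVAILABLE_BOARDS[job["device_type"]]
    return pool[: job.get("count", 1)]      # dispatcher picks available boards

# Run on whichever BeagleBoard is free:
generic_job = {"device_type": "beagleboard", "count": 1, "actions": ["boot", "run_ltp"]}

# Re-run on the exact board that showed the failure:
pinned_job = {"target": ["BeagleBoard_ID001"], "actions": ["boot", "run_ltp"]}

print(pick_targets(generic_job))   # e.g. ['BeagleBoard_ID001']
print(pick_targets(pinned_job))    # ['BeagleBoard_ID001']

Either way the dispatcher ends up with a concrete board list, so pinning a
machine is just a special case of the normal selection path.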


>
> >>   Dave wants to run the same test on a set of machines to compare the
> >> results.
> >
> > This is almost the same as the first. Maybe the better solution, as I
> > wrote above, is to go to the dashboard and compare all the existing
> > results there instead? This assumes of course that results are already
> > reported for the wanted hardware, which I think would be the case with
> > weekly execution intervals, but probably not daily. What do you think,
> > is this reasonable enough or am I missing something important?
>
> OK, there were a few cases I was thinking of here:
>  1) A batch of new machines arrives in the data centre; they are
>     apparently identical - you want to run a benchmark on them all and
>     make sure the variance between them is within the expected range.
>  2) Some upgrade (e.g. a new kernel or new linaro release) has been
>     rolled out to a set of machines - do they all still behave as
>     expected?
>  3) You've got a test whose results seem to vary wildly from run to
>     run - is it consistent across machines in the farm?
>

OK, I understand better now. For me this is still at the test-result level,
i.e. the dashboard (launch-control) should produce that kind of report. I
cannot see where this fits at the scheduler level. Once we give users the
possibility to run jobs on specific boards, it should be easy to retrieve
all the needed test reports from the dashboard and compare them there - see
the sketch below.
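
As a rough illustration of the dashboard-side comparison for case 1 above,
here is a minimal Python sketch. The fetch_results() function and the
per-board score layout are hypothetical stand-ins for a dashboard query, not
the launch-control API:

# Illustrative sketch only: fetch_results() and the result layout are
# hypothetical stand-ins for a dashboard query, not the launch-control API.
import statistics

def fetch_results(test_name, boards):
    # Hypothetical dashboard query: one benchmark score per board.
    return {"BeagleBoard_ID001": 101.2,
            "BeagleBoard_ID002": 99.8,
            "BeagleBoard_ID003": 100.5}

def variance_report(test_name, boards, tolerance=0.05):
    scores = fetch_results(test_name, boards)
    mean = statistics.mean(scores.values())
    for board, score in sorted(scores.items()):
        deviation = abs(score - mean) / mean
        status = "OK" if deviation <= tolerance else "OUTLIER"
        print(f"{board}: {score:.1f} ({deviation:.1%} from mean) {status}")

variance_report("stream-benchmark",
                ["BeagleBoard_ID001", "BeagleBoard_ID002", "BeagleBoard_ID003"])

The point is only that once results are in the dashboard per board, flagging
an outlier machine is a reporting problem rather than a scheduler problem.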


>
> Note that this set of requirements comes from using a similar testing farm.
>

> Dave
>