On Tue, May 19, 2020 at 1:20 PM Gregory Nutt <spudan...@gmail.com> wrote:
> Or don't develop custom hardware.  Use COTS (Commercial Off-The-Shelf) only.

That's the easiest (and possibly best) option.

Custom boards, if we ever get there, would serve multiple purposes:
one being testing, another being a reference platform, and a third
(no less important than the other two) being publicity and
advertising.

> > (3) When #2 is complete, add software-only automated test suite under
> > simulation.
>
> What would be the relationship to PR checks?  In the past, Xiao Xiang
> has proposed this as a step in validating PRs.

I think none, at least not in the near term. These kinds of tests would
probably take a *long* time to run. I don't think we want to wait
hours on end for a PR pre-check (which I see as more of a sanity
check). So, I would make this part of the nightly tests. If a problem
is found, it's limited to commits from the previous day, so we know
where to look for it.
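
For concreteness, a nightly runner could be as simple as the Python
sketch below. Everything in it is an assumption for illustration (the
config name, the pass/fail check, the timeout), not a proposal for the
actual tool:

#!/usr/bin/env python3
# Rough sketch of a nightly simulation test run (illustrative only).
import subprocess
import sys

CONFIG = "sim:ostest"  # assumed simulator config; substitute the real one

def run(cmd, **kwargs):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, **kwargs)

def main():
    # Configure and build the simulator (run from the nuttx source root).
    run(["./tools/configure.sh", CONFIG])
    run(["make", "-j4"])
    # Run the sim binary and scan its output.  The criterion here
    # (no "FAIL" in the output) is a placeholder, not a proposal.
    result = subprocess.run(["./nuttx"], capture_output=True,
                            text=True, timeout=3600)
    if "FAIL" in result.stdout + result.stderr:
        sys.exit("nightly simulation tests failed")
    print("nightly simulation tests passed")

if __name__ == "__main__":
    main()

A cron job (or a scheduled CI workflow) would invoke something like
this once a day, so a failure points at that day's commits rather than
blocking PRs.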

> My interest would be in setting up a custom standalone test harness
> that I could use in my office, independent of the GitHub tests.  I don't
> think those should be mutually exclusive, and I don't think your steps
> apply to the latter.

Agreed.

> I don't think we need a rigid sequence of steps.  I see nothing wrong
> with skipping directly to 3, skipping 1 and 2.  I see nothing wrong with
> some people doing 3 while others are doing 4 concurrently.  The sequence
> is not useful.
>
> What would be useful would be:
>
> 1. Selection of a common tool,
> 2. Determination of the requirements for a test case, and
> 3. A repository for retaining shareable test cases.
>
> If we have those central coordinating resources, then the rest can be a
> happy anarchy, like everything else done here.

Yes, it can be an anarchy; my suggestion for steps was not in terms of
when to implement the different parts, but rather, when to make it a
*requirement* that new changes must pass those tests. (Or when to make
it a requirement that a release must pass all those tests.) We still
have nightly build tests failing almost every night, so it doesn't
make sense to add even more tests to our *requirements* until we get
that sorted out, but nothing stops us from designing and implementing
those tests.
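
On Greg's point about a repository of shareable test cases: the
"requirements for a test case" might boil down to a small amount of
metadata. As a sketch (the fields are only a guess at what we would
actually need):

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str       # e.g. "ostest"
    config: str     # board:config to build, e.g. "sim:ostest"
    command: str    # what to run on the target or simulator
    expect: str     # substring that must appear in the output to pass
    timeout_s: int  # give up and fail after this many seconds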

Hope that clears up what I was trying to say earlier.

Nathan
