> -----Original Message-----
> From: Gregory Nutt <spudan...@gmail.com>
> Sent: Wednesday, May 20, 2020 1:20 AM
> To: dev@nuttx.apache.org
> Subject: Re: Automated Testing
>
> >> Adding hardware raises complexities of design, manufacturing,
> >> distribution, and support. Not insurmountable, but not simple either.
> > This is why I suggested that we "grow" into it.
> Or don't develop custom hardware. Use COTS (Commercial Off-The-Shelf) only.
> >
> > (1) Perfect the current coding standard and build test.
> >
> > (2) When #1 is complete, add static analysis.
> >
> > (3) When #2 is complete, add software-only automated test suite under
> > simulation.
>
> What would be the relationship to the PR checks? In the past, Xiao Xiang has
> proposed this as a step in validating PRs.
>
The PR check could run a smoke test to ensure basic quality. The full test
suite would have to run in the nightly build.
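As a rough sketch of that split (the trigger names and the
testlist/smoke.dat file name are illustrative assumptions; only
testlist/all.dat is mentioned in this thread), a driver script could pick
the test list from the kind of build being run:

```shell
#!/bin/sh
# Hypothetical sketch: choose a test list based on the trigger.
# "pr" runs a small smoke list; anything else (e.g. the nightly
# build) runs the full list. testlist/smoke.dat is an assumed name.
pick_testlist() {
  if [ "$1" = "pr" ]; then
    echo "testlist/smoke.dat"   # fast subset for PR checks
  else
    echo "testlist/all.dat"     # full coverage for the nightly build
  fi
}

pick_testlist "pr"       # -> testlist/smoke.dat
pick_testlist "nightly"  # -> testlist/all.dat
```

The same script can then serve both jobs, with only the trigger argument
differing between the PR check and the nightly run.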
> My interest would be in setting up a custom standalone test harness that I
> could use in my office, independent of the GitHub tests. I
> don't think those should be mutually exclusive, and I don't think your steps
> apply to the latter.
>
Yes, this is the goal for the QEMU and SIM targets at least. In fact, the PR
check can be run locally with one command:
cibuild.sh -i testlist/all.dat
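To illustrate what such a local driver does conceptually, here is a small
sketch that walks a test list and reports each entry it would build and run.
The one-entry-per-line format with `#` comments is an assumption for
illustration only; the real testlist/*.dat schema and the actual behavior of
cibuild.sh may differ.

```shell
#!/bin/sh
# Illustrative sketch of a test-list driver: read a list file and
# report each configuration it would build and test. The file format
# (one config per line, '#' for comments) is assumed, not the real
# .dat schema used by cibuild.sh.
run_list() {
  while IFS= read -r cfg; do
    case "$cfg" in
      ''|'#'*) continue ;;   # skip blank lines and comments
    esac
    echo "would build and test: $cfg"
  done < "$1"
}

# Example usage with a throwaway list file:
printf '%s\n' 'sim:nsh' '# a comment' 'qemu:nsh' > /tmp/mini.dat
run_list /tmp/mini.dat
```

The point is only that the whole PR check reduces to one locally runnable
command over a shared, version-controlled list of configurations.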
> > (4) When #3 is complete, add hardware testing.
>
> I don't think we need a rigid sequence of steps. I see nothing wrong with
> skipping directly to 3, bypassing 1 and 2. I see nothing wrong
> with some people doing 3 while others are doing 4 concurrently. This
> sequence is not useful.
>
> What would be useful would be:
>
> 1. Selection of a common tool,
> 2. Determination of the requirements for a test case, and
> 3. A repository for retaining share-able test cases.
>
> If we have those central coordinating resources then the rest can be a happy
> anarchy like everything else done here.
>
> Greg
>