On Mon, May 18, 2020 at 12:03 AM Gregory Nutt <spudan...@gmail.com> wrote:
> Build testing is important but does not address functionality. Things
> can be completely broken and still build perfectly.

Correct.

> Currently our functional testing strategy is simply to ask people to
> check releases.  But no problems have ever been reported from that so I
> think it is not working.  The related strategy is just to wait awhile
> and if no one complains it must be okay.  My experience is that bugs
> fester in the repository for weeks or months before they are detected and
> reported.

At the end of the day, no test suite can ever exercise every single
possible codepath, particularly when dealing with concurrency issues.
This is not necessarily a "problem"; it's simply the reality of
software testing. That is why we depend on downstream users and developers to
participate in the project by sharing their experiences with us on the
mailing lists. Also, other users out there will put the code through
its paces by using it differently than the NuttX devs might have
considered. So feedback from users is crucial. I think that as a
project, we need to push for more participation by downstream
stakeholders to test our release candidates during the soak period in
their own applications and communicate their findings with the
project. I think this can also help increase participation in general.

Having said all of this:

> Is this something we want to invest in?  Understanding that it would be
> a long term investment.

If we want to be a serious contender in the RTOS space, I think the
answer is yes.

Yes, it would be a long-term effort. It won't happen overnight. So
before anyone starts banging out testing code, I think we need to have
a discussion as a community and come up with a realistic strategy to
gradually build up a test suite, along with deciding how that test
suite should work.

On that point:

> Xiao Xiang suggested some automated testing based on the simulator.
> At other times in the past we have talked about developing a test suite
> around some reference hardware board. Those discussions did not go anywhere.

We may need to have several tiers of testing.

* The lowest tier could be coding standard and build testing. (Which
  we're already doing.)

* The next tier could be static analysis of the code.

  There are free analyzers, as well as cloud-based commercial ones
  that are free for open source projects to use.

* The next tier could be an automated test suite that runs in a
  simulator.

  This would be accessible to the greatest number of developers and
  contributors, because there's nothing to buy.

* The highest tier could be testing on a reference hardware board.

We could begin by perfecting the lowest tier and then gradually climb
the ladder by adding static analysis, etc.
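To make the simulator tier a little more concrete, here is a rough
sketch (in Python, with hypothetical names; this is not actual NuttX
tooling, just an illustration of the idea) of how an automated check
might drive a simulator binary and scan its output for expected
markers:

```python
# Hypothetical sketch of an automated simulator-based smoke test.
# Assumes a simulator binary that reads commands on stdin and prints
# results on stdout; all names here are illustrative, not real NuttX
# tooling.
import subprocess


def output_passes(captured, expected_markers):
    """Return True if every expected marker string appears in the output."""
    return all(marker in captured for marker in expected_markers)


def run_sim_test(sim_binary, commands, expected_markers, timeout=30.0):
    """Feed commands to the simulator and check its output for markers."""
    try:
        result = subprocess.run(
            [sim_binary],
            input=commands,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        # A hung simulator counts as a test failure.
        return False
    return output_passes(result.stdout, expected_markers)


# Example (illustrative): run a test application and look for a clean exit.
# run_sim_test("./nuttx", "ostest\n", ["Exiting with status 0"])
```

Something like this could start very small, a handful of smoke tests,
and grow alongside the tiers above.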

As for discussions not going anywhere, that's bound to happen in any
project run by volunteers. But as long as we're watchful and revive
those discussions from time to time, something will come of it.

So, let's talk about it!

Nathan
