tmedicci commented on issue #15730: URL: https://github.com/apache/nuttx/issues/15730#issuecomment-2679095609
Just some additional thoughts about our CI organization. Although keeping the distributed build farm is recommended, we should prevent as many bad commits as possible from being merged upstream. To do that, we need to test every single PR. This is expensive, so we need to make it more efficient. _How?_ By splitting the CI into more workflows that fail fast (and stop the subsequent jobs). First, we build the most complete `defconfig` for each chip (or board). Then we run runtime tests on it. After that, we can continue and build all the other `defconfig`s (and, eventually, test some of those configs on QEMU and/or real HW). Let's use our GH runners to build the firmware and run the QEMU testing. If QEMU testing is successful, we can even use self-hosted runners to test on real HW (the security concerns here are mitigated, as the HW would only be tested after QEMU).

I created a simple diagram of what I think our optimal CI should look like in the future. It doesn't matter that much if the complete CI takes 3 or more hours to run, as long as it fails as soon as it detects a failure. Is this possible? I don't know; we have to take it step by step.

My current proposal is to start implementing this staged approach (see the sketch below) and evaluate how much GH runner usage we'd save by doing that...
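To make the fail-fast idea concrete, here is a minimal, hypothetical GitHub Actions sketch of how the stages could be chained with `needs:` so that a failure in an earlier stage prevents the later, more expensive jobs from running. The workflow name, job names, runner label, and commands are placeholders of my own, not the actual NuttX CI scripts.

```yaml
# Hypothetical sketch only: job names, runner labels, and commands are
# placeholders, not the real NuttX CI scripts.
name: staged-ci-sketch

on: [pull_request]

jobs:
  build-primary-defconfig:
    # Stage 1: build the most complete defconfig for each chip/board.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the most complete defconfig (placeholder command)
        run: echo "invoke the existing CI build script for the primary config here"

  qemu-runtime-test:
    # Stage 2: runtime-test the primary build on QEMU; skipped if Stage 1 fails.
    needs: build-primary-defconfig
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run QEMU runtime tests (placeholder command)
        run: echo "boot the Stage 1 image on QEMU and run the test suite here"

  build-remaining-defconfigs:
    # Stage 3: build all remaining defconfigs only after runtime tests pass.
    needs: qemu-runtime-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the other defconfigs (placeholder command)
        run: echo "build the remaining defconfigs here"

  hardware-test:
    # Stage 4: real-HW testing on self-hosted runners, gated behind QEMU
    # success to mitigate the security concerns mentioned above.
    # "nuttx-hw" is a hypothetical runner label.
    needs: build-remaining-defconfigs
    runs-on: [self-hosted, nuttx-hw]
    steps:
      - uses: actions/checkout@v4
      - name: Flash and test on real hardware (placeholder command)
        run: echo "flash the firmware and run on-device tests here"
```

The point of the `needs:` chain is that the cheap, high-signal stage runs first; everything downstream is only charged against GH runner minutes (or self-hosted HW time) when the earlier stages already passed.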