tmedicci commented on issue #15730:
URL: https://github.com/apache/nuttx/issues/15730#issuecomment-2679095609

   Just some additional thoughts about our CI organization. Although it's 
recommended to keep the distributed build farm, we should prevent as many bad 
commits as possible from being merged upstream. To do that, we need to test 
every single PR. This costs a lot, so we need to make it more efficient. _How?_ 
By splitting the CI into more workflows that can fail early (and stop the 
subsequent jobs).
   
   First, we build the most complete `defconfig` for each chip (or board). 
Then, we test it (runtime testing). After that, we can continue and build all 
the other `defconfigs` (and, eventually, test some of these configs on QEMU 
and/or real HW).
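   
   As a rough illustration of this staging, here is a minimal GitHub Actions 
sketch (job names, `defconfig` targets and build scripts are placeholders, not 
the actual NuttX CI tooling), where each stage only runs if the previous one 
succeeded:

```yaml
# Sketch only: job names, targets and scripts below are placeholders.
name: staged-ci

on: [pull_request]

jobs:
  build-full-defconfig:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the most complete defconfig per chip/board
        run: ./build.sh <board>:full        # placeholder command/target

  qemu-test:
    needs: build-full-defconfig             # stops here if the build failed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Runtime testing on QEMU
        run: ./run-qemu-tests.sh            # placeholder script

  build-other-defconfigs:
    needs: qemu-test                        # skipped if QEMU testing failed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build all the remaining defconfigs
        run: ./build.sh --remaining-configs # placeholder command
```

   The `needs:` dependencies are what make the pipeline fail fast: a broken 
build never consumes the QEMU or bulk-build minutes.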
   
   Let's use our GH runners to build the firmware and run the QEMU testing. If 
QEMU testing is successful, we can even use self-hosted runners to test the HW 
(the security concerns are mitigated here, as the hardware would only be tested 
after QEMU passes).
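   
   For the hardware stage, a hypothetical `hw-test` job in the same workflow 
could be gated behind the QEMU stage (the runner label and test script below 
are made up for illustration):

```yaml
  # Sketch only: extends the 'jobs:' section of the workflow sketched above.
  hw-test:
    needs: qemu-test                  # only runs after QEMU testing passes
    runs-on: [self-hosted, nuttx-hw]  # hypothetical self-hosted runner label
    steps:
      - uses: actions/checkout@v4
      - name: Flash and test on real hardware
        run: ./hw-test.sh             # placeholder for the board test script
```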
   
   I created a simple diagram of what I think our optimal CI should look like 
in the future:
   
   
![Image](https://github.com/user-attachments/assets/e440cedd-f60a-4e06-8199-f108882e514e)
   
   It doesn't matter that much if it takes 3 or more hours to run the complete 
CI as long as it fails as soon as it detects a failure. Is this possible? I 
don't know; we have to get there step by step. My current proposal is to 
implement the following:
   
   
![Image](https://github.com/user-attachments/assets/1e11c85a-d6f9-4c2a-92b6-21a8802746cb)
   
   And evaluate how much GH runner usage we'd save by doing that...

