"Jonathan Frederickson" <jonat...@terracrypt.net> writes: > I frequently end up with Guix attempting to build packages on my > lower-powered machines when there are no substitutes > available. However, a common reason that substitutes aren't available > for a package is that the package failed to build in CI! And I usually > discover this when the package fails to build locally, usually for the > same reason, and usually after a relatively long build process.
I am also annoyed by this artifact of Nix-based systems. Some systems are physically incapable of building their own binaries; for example, the kernel of a microcomputer is absolutely necessary, yet the device does not have enough memory to build it. This is why I believe that a clean solution is to guarantee substitute availability for the systems that require it.

> Would it make sense to have some mechanism for substitute servers to
> be able to provide a sort of "non-existence proof" for a given
> package? Something that the CI system could publish to indicate that
> its build attempt for that package failed, and that clients could use
> to optionally abort without attempting a local build?

I have carried the following idea around for a long time, intending to implement it myself before sharing it (the "if you want something done, do it yourself" mentality). But seeing others' frustration with this problem, I can at least share it now. Here it is:

The proof of availability is in the workflow itself. The project committers NEVER commit anything to the master branch; only the CI system does. Instead, the committers push to a "pre-main" branch, and the CI system picks the commits up one by one and attempts to build them as usual.

IMPORTANT POINT: *if* a commit builds correctly, CI pushes it to the master branch, so the substitutes are already available by the time the commit lands there. *If* the commit does not build, it is rejected and never reaches master. (A rough sketch of such a gatekeeper loop follows at the end of this message.)

I do not currently know enough about Git to confidently propose how to handle reordering the queued work after a build failure, but I have a feeling it is not that hard to solve.

One could argue that this process delays the availability of software updates, but I believe this is the correct price to pay. The CI latency would still be negligible compared to the latency of the developers doing the real work of software maintenance.

There is also the issue of software bugs that cause problems at runtime, but that is an independent problem which should be managed by other QA processes. The art of good engineering is to find the simplest mechanisms that achieve the task well, and that requires breaking problems down into atomic parts.
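To make the gatekeeper idea a bit more concrete, here is a minimal sketch of the loop I have in mind. Everything in it is an assumption for illustration: the branch names ("pre-main", "master"), the ./ci-build.sh entry point standing in for whatever actually builds a commit, and the sequential single-worker design. A real implementation would plug into the project's existing CI rather than a standalone script.

#!/usr/bin/env python3
"""Sketch of a CI gatekeeper: build each queued commit, and only push the
ones that build to the protected branch."""

import subprocess

QUEUE_BRANCH = "pre-main"          # where committers push (assumption)
PROTECTED_BRANCH = "master"        # only CI pushes here (assumption)
BUILD_COMMAND = ["./ci-build.sh"]  # hypothetical build entry point


def git(*args):
    """Run a git command and return its stdout, raising on failure."""
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout.strip()


def pending_commits():
    """Commits on the queue branch not yet on the protected branch,
    oldest first, so they are built in the order they were pushed."""
    git("fetch", "origin", QUEUE_BRANCH, PROTECTED_BRANCH)
    out = git("rev-list", "--reverse",
              f"origin/{PROTECTED_BRANCH}..origin/{QUEUE_BRANCH}")
    return out.splitlines() if out else []


def build_succeeds(commit):
    """Check out the commit and run the (hypothetical) build command."""
    git("checkout", "--detach", commit)
    return subprocess.run(BUILD_COMMAND).returncode == 0


def main():
    for commit in pending_commits():
        if build_succeeds(commit):
            # The commit built, so its substitutes already exist; only
            # now does master advance to include it.
            git("push", "origin",
                f"{commit}:refs/heads/{PROTECTED_BRANCH}")
        else:
            # The commit is rejected and never reaches master.  How the
            # remaining queued commits get rebased or reordered after a
            # failure is the open question mentioned above.
            print(f"build failed for {commit}; stopping here")
            break


if __name__ == "__main__":
    main()

The important property is not the script itself but the invariant it enforces: nothing becomes reachable from master until the build that produces its substitutes has already succeeded, so "commit is on master" implies "substitutes exist".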