Simon Josefsson wrote:
>Philip Hands <p...@hands.com> writes:
>
>> Let's say that we could make that image bit-for-bit reproducible with an
>> image that was produced by taking the normal with-nonfree-firmware
>> image, and filtering it somehow (e.g. overwriting the non-free bits with
>> zeros, say).
>>
>> Would you object if the normal way of generating the image was to apply
>> the filter, rather than build it in parallel?
>>
>> If that would be OK, then one could have something that applied the
>> filter to our normal images to provide people with the images you want,
>> while not requiring duplication of building and storage.
>>
>> People could always confirm that they were getting the same result as
>> building without the nonfree firmware by doing it themselves, and
>> checking things matched.
>>
>> Is that something that would work for you?

Yeesh. That gets messy - live images include firmware in the squashfs,
for example. Simply replacing things with zeroes is not *quite* enough
here.
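
For the sake of argument, here's a minimal sketch of what the naive
filter-and-verify step could look like.  Everything in it is assumed:
the "offset length" manifest format and all the names are hypothetical,
and as noted above this kind of byte-level zeroing wouldn't handle
firmware embedded inside a squashfs.

    import hashlib, shutil, sys

    def zero_fill(image, manifest, output):
        # Copy the with-firmware image, then overwrite each
        # "offset length" range listed in the manifest with zeros.
        shutil.copyfile(image, output)
        with open(manifest) as m, open(output, "r+b") as out:
            for line in m:
                offset, length = (int(f) for f in line.split())
                out.seek(offset)
                out.write(b"\x00" * length)

    def sha256sum(path):
        # Stream the file so multi-GB images don't need to fit in RAM.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(1 << 20):
                h.update(chunk)
        return h.hexdigest()

    nonfree_iso, manifest, free_iso, filtered = sys.argv[1:5]
    zero_fill(nonfree_iso, manifest, filtered)
    # Bit-for-bit check: the filtered image should hash the same as one
    # built without the non-free firmware in the first place.
    print("match" if sha256sum(filtered) == sha256sum(free_iso) else "differ")

Anyone who wanted to could run the equivalent themselves and check that
the hashes match, which covers the verification part of the proposal.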

...

>I don't think the above fully resolves my concerns though.  The mere
>presence of official documented hooks to load non-free software is
>problematic from a freedom perspective.  They are the enabler of the
>slippery slope that leads to including non-free software by default.

Sigh. That's the same argument for removing the option to even load
firmware. We must be *so* sure of purity that we can't even
acknowledge that users might need to use/run anything that we don't
consider pure enough. Let's stop them!

>Meanwhile I looked into the debian-cd project to experiment with
>building images myself.  Why aren't the images built in a Salsa
>pipeline?  Yes I understand that 20 years ago the resources required to
>build the images were large.  But today people build large projects in
>GitHub Actions.  What artifact size are we talking about?  Looking at
>the summary of official images at https://www.debian.org/CD/http-ftp/ it
>seems like around 50GB?

Haha, no. Using the last bookworm release as a guide, we created a
grand total of 284 images (mix of d-i and live) totalling ~1.7T.
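
If you want a feel for the averages (this is just the implied mean,
reading ~1.7T as TiB -- real sizes vary a lot between a netinst and a
full DVD/BD set):

    images, total_tib = 284, 1.7
    # Implied mean only; individual images range from small netinsts
    # up to multi-GB DVD/BD sets.
    print(f"~{total_tib * 1024 / images:.1f} GiB per image on average")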

>What is the build time on a powerful machine?

That build took ~4h end-to-end on casulana.d.o, which I believe is the
project's biggest server box. The build process needs a complete local
mirror and a *lot* of I/O and CPU.

>I would start to worry about the design feasibility of running this in a
>pipeline when the total size of artifacts from a single build is larger
>than 4TB or if a single serialized job would have to run for more than a
>couple of days.  I'm probably missing some combinatorial explosion of
>variants that increases the total size, but there is also no requirement
>that all images are built like this.  I would be satisfied if the
>"netinst" variant for all architectures could be reproduced from purely
>free software, in a Salsa pipeline, and that seems to be a 5-10GB
>artifact unless my math is off.
>
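
FWIW, the netinst arithmetic looks about right.  A rough check,
assuming the 9 release architectures and a typical netinst ISO
somewhere in the 400-700MB range (both assumptions, not numbers from
a real build):

    arches, low_mb, high_mb = 9, 400, 700
    # ~3.6-6.3 GB total, consistent with the 5-10GB guess above.
    print(f"{arches * low_mb / 1000:.1f}-{arches * high_mb / 1000:.1f} GB")
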
>I worry that the builds require other non-reproducible/non-free steps
>though.  A signed boot shim where the private key is not replaceable by
>a user-controlled key is just as problematic as non-free firmware.
>Trisquel and Guix avoid these, and I recall seeing stuff like that in
>Debian -- https://tracker.debian.org/pkg/shim-signed -- but it is good
>to know that we have more things to work on.

Sigh.

-- 
Steve McIntyre, Cambridge, UK.                                st...@einval.com
Can't keep my eyes from the circling sky,
Tongue-tied & twisted, Just an earth-bound misfit, I...
