Marc,

> I would very much be in favor of people providing hosting services where this
> does not exist, and to have an actual FAQ of things to tell upstream so that
> they prepare actual properly tagged releases on platforms such as github.

I think an FAQ page to explain to porters what's going on and what to
do would be good, in addition to an FAQ page to link upstream to,
explaining why it is important and how they can proceed.

I have some experience with technical writing, and would be willing to
spend some time writing these pages. If that sounds good to you or
someone else here, reach out to me via email and we can discuss in more
detail. I'm also unfamiliar with the process for contributing to the
FAQ; is there an FAQ about updating the FAQ? 

-----

Antoine,

> I'm slightly worried because each time we add pieces to bsd.port.mk, it
> becomes slightly harder to mainain.
> 
> Just, if things break later (bad checksums, dpb support needed), I expect
> some help!
> 
> Specifically, if we end up requiring extra support for fetch-time dependencies
> to create verifiable tarballs or whatever, don't expect me to do the
> heavy-weight pulling.

I agree that adding a bunch of extra dependencies just to be able to
fetch ports is not ideal.

> If that happens and the distfiles prove unstable we will need to do
> *something* though .. and we won't be the only ones, any packagers that
> check distfiles (either by hashes or by pgp signatures as is more common
> on Linux) will need stable files to do that.

What do the Debian people do? I know they are pretty big on
reproducible builds, but I was not able to figure out from the
Reproducible Builds initiative's about page how they validate
distfiles. Maybe we could reach out to the Debian mailing lists about
this - they surely have the same problem and have either solved it, or
are working on solving it.
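For reference, the check every packager ultimately performs reduces to
comparing a recorded digest against the downloaded bytes, roughly what
our distinfo handling does. A minimal sketch (function name and
arguments are mine, not any project's API):

```python
import hashlib

def verify_distfile(path, expected_sha256):
    """Compare a downloaded distfile against a recorded SHA256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large distfiles don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

The scheme only works if upstream's file never changes after the digest
is recorded - which is exactly the stability problem being discussed.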

-----

Stuart,

> > At a certain point, we either need to start mirroring all of these
> > releases somewhere, or find a solution to work around the problems of
> > this style of software release.

> The alternative is to mirror stable distfiles.

Mirror them where? Someone will have to pony up the developer time and
infrastructure (hardware, disk space, bandwidth) to make that happen at
any kind of scale.

> Those are simply to make the URLs and directory names easier to work with
> and don't do anything to increase distfile stability.

I am aware. The point I was getting at, which in retrospect I didn't
explain very clearly, is that this appears to be something the GitLab
folks care about to some extent. Raising the issue with them might be
worthwhile.

> Github at least has uploadable release assets which allow projects to
> provide stable distfiles without looking for alternative hosting.
> (And guess what, that doesn't use the GH_* support because it's just
> a simple download and most upstreams have sensible directory names
> etc). The best you can do with Gitlab is the dirty hack of committing
> distfiles themselves to a repo.

I believe the public GitLab instance also has a GitHub Pages
work-alike. In principle one could have a CI script upload release
artifacts there. That's a pretty gross solution though.

> On-the-fly tarball generation is simply not compatible with checking
> distfile hashes to ensure that the downloaded files are not corrupted
> or backdoored.

Admittedly I'm not deeply familiar with the tar binary format, but I
don't see why GitHub (et al.) couldn't make on-the-fly generation
produce tarballs with consistent hashes if they wanted to. Given the
same set of files, generating the archive ought to yield the same
bytes, provided the generator pins everything else tar and gzip
record: entry order, timestamps, ownership, and compression settings.
This doesn't prevent upstreams from misbehaving by rebasing over an
existing tag, but then again they could also delete and re-upload an
artifact with the same name - in either case a checksum failure
*should* result.
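To illustrate that this is feasible rather than to claim it's what any
host actually does: the sketch below builds a tar.gz whose bytes depend
only on the file names and contents, by normalizing every other field
tar and gzip would otherwise record.

```python
import gzip
import io
import tarfile

def deterministic_targz(paths):
    """Build a tar.gz whose bytes depend only on entry names and contents.

    Normalizes everything else the formats record: entry order, mtimes,
    ownership, and the gzip header timestamp.
    """
    buf = io.BytesIO()
    # mtime=0 keeps the current time out of the gzip header.
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=0) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            for path in sorted(paths):       # fixed entry order
                info = tar.gettarinfo(path)
                info.mtime = 0               # drop filesystem timestamps
                info.uid = info.gid = 0      # drop ownership
                info.uname = info.gname = ""
                with open(path, "rb") as f:
                    tar.addfile(info, f)
    return buf.getvalue()
```

Running this twice over the same tree produces byte-identical output,
so a recorded checksum would keep matching across regenerations.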

~ Charles

