Alright, fair point.
Still, more automated testing/packaging isn't a bad thing. What exactly
does the CI do right now? I looked in .buildbot in the main repo, and I
guess it just tries to build and install GNUnet from source on whatever
OS hosts Buildbot? I couldn't see much in the way of automated testing
or packaging.
I'll say again that not having GNUnet running on Debian's CI is a big
missed opportunity. Being able to deploy and test on Debian Unstable
automatically would surely make it easier to keep the Debian package up
to date.
I'm not sure of the exact process, but from what I've read it could be
as simple as cutting a new release, which would trigger building the
Debian packages; those would then land in Debian Unstable and be
exercised by autopkgtest on ci.debian.net.
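For concreteness, here's a rough sketch of what that hook-up could look
like (the test name and commands below are just my guesses, not
anything the Debian maintainers have signed off on). The source package
would carry a debian/tests/control file naming one or more test
commands, plus the test scripts themselves:

    # debian/tests/control (hypothetical)
    Tests: smoke
    Depends: @
    Restrictions: allow-stderr

    # debian/tests/smoke (hypothetical; runs against the installed packages)
    #!/bin/sh
    set -e
    # Bare-bones check that the installed binaries at least start up.
    gnunet-config --version
    gnunet-arm --version

As far as I understand it, once something like that is in the package
in Unstable, ci.debian.net re-runs it automatically whenever the
package or any of its dependencies changes, which is exactly the early
warning we're missing right now.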
Best wishes,
Willow
On 03/06/2022 20:44, Christian Grothoff wrote:
Having many packages doesn't usually make it easier for packagers; it
just means that now they have to deal with even more sources and create
more package specifications. Moreover, build times go up, as you now
need to run configure many times. Worse, you then need to figure out in
which order to build things and what the dependencies are. It basically
makes things worse in every respect.
Another big issue is that right now I at least notice if I break the
build of an application and can fix it. The same goes when I run
analysis tools: they at least get to see the entire codebase and can
warn us if something breaks. If we move those applications out of tree,
they'll be even more neglected. What we can do (and already do) is mark
really badly broken applications as 'experimental' and require
--with-experimental to build them. That's IMO better than moving stuff
out of tree.
Also, you probably don't want to split things as you proposed: GNS
depends on VPN and SETU! SET is supposed to become obsolete, but
consensus still needs it until SETU is extended to match SET's
capabilities.
Finally, as for build times, have you even tried 'make -j 16' or
something like that? Multicore rules ;-).
Happy hacking!
Christian
On 6/2/22 17:29, Willow Liquorice wrote:
Right. Perhaps the onus is on the developers (i.e. us) to make things
a bit easier, then?
To be honest, I barely understand how the GNUnet project is put
together at the source-code level, let alone how packaging is done. One
of the things I'm going to do with the Sphinx docs is provide a
high-level overview of how the main repo is structured.
On the subject of complexity, I attempted to disentangle that awful
internal dependency graph a while ago, to get a better idea of how
GNUnet works. I noticed that it's possible to divide the subsystems up
into closely-related groups:
* a "backbone" (CADET, DHT, CORE, and friends),
* a VPN suite,
* a GNS suite,
* and a set operations suite (SET, SETI, SETU).
A bunch of smaller "application layer" things (psyc+social+secushare,
conversation, fs, secretsharing+consensus+voting) then rest on top of
one or more of those suites.
I seem to recall that breaking up the main repo has been discussed
before, and I think it got nowhere because no agreement was reached on
where the breaks should be made. My position is that those
"applications" (which, IIRC, are in various states of "barely
maintained") should be moved to their own repos, and the main repo be
broken up into those four software suites.
As Maxime says, GNUnet takes a long time to compile (when it actually
does - I'm having problems with that right now), and presumably quite
a while to test too. The obvious way to reduce those times is to
simply *reduce the amount of code being compiled and tested*. Breaking
up the big repo would achieve that quite nicely.
More specifically related to packaging, would it be a good idea to
look into CD (Continuous Delivery) to complement our current CI setup?
It could make things easier on package maintainers. It looks like
Debian has a CI system we might be able to make use of, and all we'd
need to do is point out the test suite in the package that goes to the
Debian archive.