Re: How to build Rust packages

2024-12-08 Thread indieterminacy
I should point out that I am packaging Scryer-Prolog, which uses Rust 
under the hood.


As it stands, the divergences in crate versioning mean that my package 
definition is nearly 60k LOC and I've had to validate over 1.1k packages 
already.
Naturally, some of these are duplicates of existing Guix package 
definitions, as well as updates.
Nevertheless, I will have a considerable bounty of packages to 
contribute once this initiative reaches maturity.
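
To give a flavour of why the definitions balloon: each crate ends up as 
its own package in the current #:cargo-inputs style, roughly like this 
(a hypothetical crate "foo" with a placeholder hash and placeholder 
dependencies, assuming the usual crates module context):

  (define-public rust-foo-1
    (package
      (name "rust-foo")
      (version "1.2.3")
      (source
       (origin
         (method url-fetch)
         (uri (crate-uri "foo" version))
         (file-name (string-append name "-" version ".tar.gz"))
         (sha256
          (base32 "0000000000000000000000000000000000000000000000000000"))))
      (build-system cargo-build-system)
      (arguments
       ;; Every dependency is pinned to a specific packaged version here
       ;; rather than listed as a regular input; rust-bar-0.8 and
       ;; rust-baz-2 are placeholders for illustration.
       `(#:cargo-inputs (("rust-bar" ,rust-bar-0.8)
                         ("rust-baz" ,rust-baz-2))))
      (home-page "https://example.org/foo")
      (synopsis "Hypothetical example crate")
      (description "Hypothetical example crate, shown for illustration.")
      (license license:expat)))

Multiply that by every version divergence in the dependency graph and 
the line count adds up quickly.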


It will be a job in itself to prepare the actual patches and push them 
to you, so there will be some lag (weeks, even).

I should brush up on the patch workflow in Emacs/Magit to keep things 
flowing.

Naturally, there are some edge cases in what I'm packaging, but I've 
been trying not to give them too much attention.
Once I hit a wall, should I ask on the Guix-Help ML, or, for such a 
large package set, should I use this ML?


I suppose a link to an .scm file on a git forge (with a commit 
reference) is more apt than attaching a file?


Oh, and my TXR parsing of Guix packages is ticking along while I work 
on this project!
I reckon it can be adapted nicely into a comparative method between 
different config files.




Re: How to build Rust packages

2024-12-08 Thread Efraim Flashner
On Thu, Dec 05, 2024 at 11:13:07AM +0100, Ludovic Courtès wrote:
> Hello,
> 
> Efraim Flashner  skribis:
> 
> > I still have a copy of the code on my machine but unfortunately it no
> > longer builds due to the constant churn of rust packages.
> >
> > One thing I remember explicitly about it was that building end packages
> > was faster than the current method, and that was before taking into
> > account reusing build artifacts.
> >
> > https://notabug.org/maximed/cargoless-rust-experiments
> 
> Neat.
> 
> > Another idea which I'm not in love with is what Debian does. They grab
> > all of the sources into one build environment and then build everything.
> > It simplifies the dependency management of the sources but for us it
> > would make it so that we can't touch anything in rust without causing a
> > full rebuild of everything.
> 
> I believe this is also what Nixpkgs does, as discussed in this thread:
> 
>   https://toot.aquilenet.fr/@civodul/113532478383900515

I'm pretty sure they parse the Cargo.lock file and download the crates
at build time.

> I’m not a fan either.  But I think one of the main criteria here should
> be long-term maintainability, which is influenced by internal design
> issues and by how we design our relation with the external packaging
> tool.
> 
> By internal issues I mean things like #:cargo-inputs instead of regular
> inputs, which makes the whole thing hard to maintain and causes
> friction.  (See .)
> 
> As for the relation with Cargo and crates.io, the question is should we
> map packages one-to-one?  Is it worth it?  If the answer is yes, do we
> have the tools to maintain it in the long run.

As it stands now, the package name is effectively the crate name with
'rust-' prepended and any underscores switched to dashes.  Most of the
actual packaging work
is making sure the cargo-inputs from patches correctly match the
versions in Cargo.toml, checking the metadata (license, home-page,
synopsis/description), and seeing if any code needs to be removed (such
as from *-sys packages).  If there are any "real" packages then they
normally don't have the rust- prefix.
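
Roughly speaking, that name mapping is just the following (a quick
sketch for illustration, not the actual importer code; the procedure
name is made up):

  (use-modules (ice-9 string-fun))   ; string-replace-substring

  ;; Sketch of the crate-name -> Guix-name mapping described above.
  (define (crate-name->guix-name name)
    (string-append "rust-" (string-replace-substring name "_" "-")))

  ;; (crate-name->guix-name "serde_json") => "rust-serde-json"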

I don't want to go and parse Cargo.lock, automagically generate packages
based on that, and then download those as cargo-inputs for packages. Not
only does that potentially pull in old versions of libraries that may be
missing necessary updates or patches, it also doesn't check them for
license data or vendored C libraries.

I also don't want to keep a collection of "difficult" crates that need a
human touch and have everything else be autogenerated at package build
time.

I am jealous of the cran updater and all the work Rekado has put into
making it work well, and I know I need to actually fix a bunch of stuff
with the crates: an updater, and also the etc/committer.scm file.  There
are too many crates to package them all, so automatically packaging
all of them wouldn't be workable.

I have a script that goes through the crates and lists how many
dependencies there are per file, and I have used it in the past to
remove unused crates.  I have also come back and added them back in when
something else needed them.
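
The idea is roughly the following (only a sketch of the
reference-counting idea, not the script itself; it assumes being run
from the top of a guix checkout, and crudely uses variable names as
regexps):

  (use-modules (ice-9 ftw)
               (ice-9 regex)
               (ice-9 textual-ports)
               (srfi srfi-1))

  ;; Count references to each rust- package defined in
  ;; gnu/packages/crates-*.scm, so that crates referenced nowhere else
  ;; stand out as candidates for removal.
  (define files
    (map (lambda (f) (string-append "gnu/packages/" f))
         (scandir "gnu/packages"
                  (lambda (f) (string-prefix? "crates-" f)))))

  (define contents
    (map (lambda (f) (call-with-input-file f get-string-all)) files))

  (define defined-crates
    (append-map (lambda (text)
                  (map (lambda (m) (match:substring m 1))
                       (list-matches "define-public +(rust-[^ )\n]+)" text)))
                contents))

  (for-each
   (lambda (name)
     (let ((uses (apply + (map (lambda (text)
                                 (length (list-matches name text)))
                               contents))))
       ;; The definition itself accounts for one occurrence.
       (when (<= uses 1)
         (format #t "~a looks unused~%" name))))
   defined-crates)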

My workflow is I work on 20-50 crates at once, and when they all build
correctly I then break them into the appropriate number of commits.

I'm not sure where to go from here.  I don't even remember if the
antioxidant build system correctly shows the dependency path between
crates, which IMO is one of the big things missing now.

-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: Regarding the vertical alignment in the record definitions

2024-12-08 Thread Maxim Cournoyer
Hello,

indieterminacy  writes:

> Hello,
>
> On 2024-12-05 07:13, Maxim Cournoyer wrote:
>> Hi Tomas,
>> ...
>> I agree it's a bit tedious, both manually and also in diffs.  My
>> personal preference is to leave just one space between the field name
>> and the value, that also holds for variable bounds in lets, etc., to
>> avoid the problem (at the cost of some visual clarity, I guess).
>> One day maybe we'll have a general tool like 'scheme-fmt' to run on a
>> file save hook that'd fix the format question for good, like Python has
>> with 'black' or 'ruff', etc.  I have in mind to work on such a tool, but
>> it's low on the list.
>
> FWIW, I'm slowly cobbling together some parsing expression grammars for
> Guix package definitions.
> I'm doing it in the Lisp TXR at the moment, but I'm going to create a
> hybrid approach with Prolog to make it more general.
>
> In addition to outputting a unified formatting, I'm planning on
> altering descriptions so that that content is output in Texinfo
> format.
>
> Please don't expect anything soon, but should anybody start any other
> initiative, I may have some useful assets or details to offer by then.

Sounds neat/useful!

-- 
Thanks,
Maxim