Re: Adding Substitute Mirrors page to installer

2021-12-01 Thread zimoun
Hi,

On Wed, 1 Dec 2021 at 07:13, raid5atemyhomework
 wrote:

> Any chance of this getting reviewed and merged within the next five years?

I understand your frustration.  Could you please point me to the patch number?

Cheers,
simon



Re: 01/03: gnu: freecad: Switch to vtk-9.

2021-12-01 Thread Mathieu Othacehe


Hey Thiago!

> This duplicates the line for ‘freedink-engine-fix-sdl-hints.patch’.

Thanks for the heads up, fixed with:
854120d01ffe8b209fa8d897ba0fcf39ce2cdf32.

Mathieu



Software Heritage fifth anniversary event

2021-12-01 Thread Ludovic Courtès
Hello Guix!

I had the pleasure to attend the Software Heritage fifth anniversary
event yesterday at the UNESCO headquarters (fancy!) and at Inria in
Paris.

I learned about things others are doing with SWH (notably in the
cultural and scientific fields) and had discussions with hackers (people
who work on Subversion, CVS, Mercurial, and Bazaar “loaders”, for
instance).  I gave a 10–15 minute talk on how Guix uses SWH, what Disarchive
is, what the current status of the “preservation of Guix” is, and what
remains to be done:

  
https://git.savannah.gnu.org/cgit/guix/maintenance.git/plain/talks/swh-unesco-2021/talk.20211130.pdf

(There was a great talk about Maneage¹ right before mine.)

I chatted with the SWH tech team; they’re obviously very busy solving
all sorts of scalability challenges :-) but they’re also truly
interested in what we’re doing and in supporting our use case.  Off the
top of my head, here are some of the topics discussed:

  • ingesting past revisions: if we can give them ‘sources.json’ for
past revisions, they’re happy to ingest them;

  • rate limit: we can find an arrangement to raise it for the purposes
of statistics gathering like Simon and Timothy have been doing (we
can discuss the details off-list);

  • Disarchive: they’d like to better understand the “unknowns” in the
PoG plots (I wasn’t sure if it was non-tar.gz tarballs or what) and
to work on the definitely-missing origins that show up there;
they’re not opposed to the idea of eventually hosting or maintaining
the Disarchive database (in fact one of the developers thought we
were hosting it in Git and that as such they were already archiving
it—maybe we could go back to Git?);

  • bit-for-bit archival: there’s a tension between making SWH a
“canonical” representation of VCS repos and making it a faithful,
bit-for-bit identical copy of the original, and there are different
opinions in the team here; our use case pretty much requires
bit-for-bit copies, and fortunately this is what SWH is giving us in
practice for Git repos, so checkout authentication (for example)
should work even when fetching Guix from SWH.

There were other discussions about Guix and Nix and I was pleased to see
people were enthusiastic about functional package management and about
our whole endeavor.

Anyway I think we can take this as an opportunity to increase bandwidth
with the SWH developers!

Thanks,
Ludo’.

¹ https://maneage.org/



How to compute SWHID? (with Guix/Disarchive)

2021-12-01 Thread zimoun
Hi,

Taking a look at Disarchive, I found out how to compute the Git-based
serialization hash.  Somehow the serialization methods of "guix hash"
need some cleaning: '--recursive' really means 'nar' serialization,
and 'nar' would be a better name.  Anyway, see [1]. :-)

I would like to add an SWH-based serialization hash, but I cannot find
whether a function already does the hard work.  Any pointer?
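
For the simplest case, the "content" identifier, here is a minimal
sketch of the idea: the SWHID spec defines it as the Git blob hash of
the file, i.e. the SHA-1 of "blob <size>\0<bytes>".  It assumes
guile-gcrypt's (gcrypt hash) module for 'sha1' and only illustrates
the serialization; it is not the missing function:

  ;; Sketch, assuming guile-gcrypt: SWHID "content" identifiers are
  ;; Git blob hashes.
  (use-modules (gcrypt hash)
               (rnrs bytevectors)
               (ice-9 binary-ports)
               (ice-9 format))

  (define (swhid-content file)
    "Return the 'swh:1:cnt:...' identifier of FILE."
    (let* ((bytes  (call-with-input-file file get-bytevector-all
                     #:binary #t))
           (header (string->utf8
                    (string-append "blob "
                                   (number->string
                                    (bytevector-length bytes))
                                   (string #\nul))))
           (digest (sha1 (bytevector-append header bytes))))
      (string-append "swh:1:cnt:"
                     (string-concatenate
                      (map (lambda (byte) (format #f "~2,'0x" byte))
                           (bytevector->u8-list digest))))))

Directory and revision identifiers need the full Git tree/commit
serialization, which is where a ready-made function would help.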

Moreover, I would like to add* a new export format to "guix show"
using the BibTeX format proposed by SWH.  It would help when writing
papers. ;-)

*add: this was discussed a long time ago, but since I was not writing a
paper using tools deployed by Guix, I was not bitten enough to complete
it. ;-)

1: 


Cheers,
simon



Re: 04/05: gnu: petsc: Update to 3.16.1.

2021-12-01 Thread Mathieu Othacehe


Hey,

> * gnu/packages/maths.scm (petsc): Update to 3.16.1.
> [native-inputs]: Use PYTHON instead of PYTHON-2.  Add WHICH.
> [arguments]: Rewrite using gexps.  Pass '--with-openblas-dir'.  In
> 'configure' phase, modify "config/example_template.py".

I noticed that petsc-* packages are now part of every evaluation of
c-u-f in Cuirass. Any chance this is related to this patch?

Thanks,

Mathieu



[core-updates-frozen] Tryton broken

2021-12-01 Thread zimoun
Hi,

The branch core-updates-frozen will be merged soon.  Among the
breakage here and there, one block of broken packages revolves around
'tryton' [1]: the points are sorted alphabetically, hovering over a
red point shows the name of the package, and clicking leads to the
evaluation; from there you can access the list of dependencies, find
the broken tryton one (for instance trytond-country), click again, and
finally reach the log [2].

Well, this dashboard allows one to quickly spot the broken blocks.  To
find trytond-country directly, it is easy to go via:

https://ci.guix.gnu.org/search?query=spec%3Acore-updates-frozen+system%3Ax86_64-linux+trytond-country

The issue with 'trytond-*' is the new `sanity-check' phase of
python-build-system.
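
For local debugging, a quick workaround (only a sketch, not a real
fix) is to delete that phase in an affected package using the usual
substitute-keyword-arguments idiom:

  ;; Sketch only: drop the `sanity-check' phase of trytond-country
  ;; while investigating; the real fix is to repair whatever the
  ;; sanity check complains about.
  (package
    (inherit trytond-country)
    (arguments
     (substitute-keyword-arguments (package-arguments trytond-country)
       ((#:phases phases '%standard-phases)
        `(modify-phases ,phases
           (delete 'sanity-check))))))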

Any chance that someone gives it a look before the merge?  Otherwise,
these broken packages could land in master, which would be sad.

Cheers,
simon

1: 
2: 



Re: Update on bordeaux.guix.gnu.org

2021-12-01 Thread Ludovic Courtès
Hi,

Ricardo Wurmus  skribis:

> Ludovic Courtès  writes:
>
>>> The disk space usage trend is pretty much
>>> linear, so if things continue without any change, I think it will
>>> be
>>> necessary to pause the agents within a month, to avoid filling up
>>> bayfront entirely.
>>
>> Ah, bummer.  I hope we can find a solution one way or another.
>> Certainly we could replicate nars on another machine with more disk,
>> possibly buying the necessary hardware with the project funds.
>
> Remember that I’ve got three 256G SSDs here that I could send to
> wherever bayfront now sits.  With LVM or a RAID configuration
> these could just be added to the storage pool — if bayfront has
> sufficient slots for three more disks.

Good to know.  In that case we’d need to come up with (1) an updated
Guix System config with LVM, and (2) a way to copy the existing store
over to the new storage, which sounds tricky if the existing disk is to
be kept.  (Also I think we’re down to 1.5 person who could go on
site. :-/)
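
For (1), the relevant operating-system fields might look like the
sketch below, assuming a recent enough Guix for ‘lvm-device-mapping’;
the volume group and volume names are made up:

  ;; Hypothetical layout: logical volume "store" in volume group
  ;; "vg0", holding /gnu.
  (define bayfront-mapped-devices
    (list (mapped-device
           (source "vg0")
           (targets (list "vg0-store"))
           (type lvm-device-mapping))))

  ;; ...then, in the operating-system declaration:
  (mapped-devices bayfront-mapped-devices)
  (file-systems (cons (file-system
                        (device "/dev/mapper/vg0-store")
                        (mount-point "/gnu")
                        (type "ext4")
                        (dependencies bayfront-mapped-devices))
                      %base-file-systems))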

Ludo’.



Re: Flag day for simplified package inputs

2021-12-01 Thread Ludovic Courtès
Hi,

Jelle Licht  skribis:

> I will work on that. Do we already have a suitable 'bulk change' in the
> repo? Or should we first run `guix style', and subsequently use that
> commit as the first entry in the .git-blame-ignore-revs file?

The latter I guess.

Thanks,
Ludo’.



Re: Desktops on non-x86_64 systems

2021-12-01 Thread Ludovic Courtès
Hi!

Maxim Cournoyer  skribis:

> I've updated the branch wip-cross-built-rust; it seems to build and run
> OK (although running the binary produced by compiling hello.rs with the
> cross-built i686-linux rustc in a 32 bit VM took 47 sec (!?)),
> apparently hanging on something before outputting correctly the message
> and exiting with 0.
>
> I'd now like to figure out the top-level plumbing required to get this
> rust-i686-linux x86-64 package accepted in the realm of i686-linux
> packages (cross the architecture boundary).  Is this even possible in
> Guix?
>
> In other words, I'd like the i686 architecture to be able to use this
> rust-i686-linux cross built from x86_64 as if it was a *native* package.

It’s not possible as it would imply that i686 is able to run x86_64
code.

What we’d need to do is “cut the dependency graph” at the architecture
boundary, similar to what’s described in
.

Concretely, we’d cross-build Rust for i686 once; we’d put it in a
tarball, store it at ftp.gnu.org, and make the rust 1.54 package (or
whatever that is) be equal to that tarball, unpacked, when the current
system is i686.  (Similar to the ‘guile-bootstrap’ package.)
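
In sketch form, with hypothetical names (‘rust-binary-seed’ and
‘rust-from-source’ don’t exist; the former would unpack the ftp.gnu.org
tarball much like ‘guile-bootstrap’ does):

  ;; Pick the pre-built seed on i686-linux, the regular source
  ;; bootstrap everywhere else.
  (define-public rust
    (if (string=? (or (%current-target-system) (%current-system))
                  "i686-linux")
        rust-binary-seed     ;pre-built, cross-compiled from x86_64
        rust-from-source))   ;the usual bootstrap chain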

It does mean that the cross-built Rust must be statically linked.

To reduce the risks associated with binary blobs, the Rust build should
ideally be reproducible, so that anyone can verify that the thing we put
at ftp.gnu.org is indeed Rust as cross-compiled from x86_64.

How long is the road ahead in your opinion?

Thanks for working on it!

Ludo’.



Re: Software Heritage fifth anniversary event

2021-12-01 Thread Timothy Sample
Ludovic Courtès  writes:

> I gave a 10–15 minute talk on how Guix uses SWH, what Disarchive is, what
> the current status of the “preservation of Guix” is, and what remains
> to be done:
>
>   
> https://git.savannah.gnu.org/cgit/guix/maintenance.git/plain/talks/swh-unesco-2021/talk.20211130.pdf

Wow – great work!

> I chatted with the SWH tech team; they’re obviously very busy solving
> all sorts of scalability challenges :-) but they’re also truly
> interested in what we’re doing and in supporting our use case.  Off the
> top of my head, here are some of the topics discussed:
>
>   • ingesting past revisions: if we can give them ‘sources.json’ for
> past revisions, they’re happy to ingest them;

This is something I can probably coax out of the Preservation of Guix
database.  That might be the cheapest way to do it.  Alternatively, when
we get “sources.json” built with Cuirass, we could tell Cuirass to build
out a sample of previous commits to get pretty good coverage.  (Side
note: eventually we could verify the coverage of the sampling approach
using the Data Service, which has processed a very exhaustive list of
commits.)

>   • rate limit: we can find an arrangement to raise it for the purposes
> of statistics gathering like Simon and Timothy have been doing (we
> can discuss the details off-list);

Cool!  So far it hasn’t been a concern for me, but it would help in the
future if we want to try and track down Git repositories that have gone
missing.

>   • Disarchive: they’d like to better understand the “unknowns” in the
> PoG plots (I wasn’t sure if it was non-tar.gz tarballs or what) and
> to work on the definitely-missing origins that show up there;

Many of the unknowns are there for me to track Disarchive progress.
It’s not really the clearest reporting, but it tracks more what Guix can
handle automatically than what we could theoretically know about.
Basically something is “known” if it can be downloaded from upstream,
and either: it’s a non-recursive Git reference; or it’s something
Disarchive can handle.  Hence, we know nothing about other version
control systems and, say, “.tar.bz2” archives.  Also, all these things
are based on heuristics.  :)  As we get closer to 100% known, we can
start analyzing everything more closely.
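
In pseudo-Scheme, the heuristic is roughly this (the predicates are
made-up names, not actual procedures from the PoG tooling):

  (define (known? source)
    (and (downloadable-from-upstream? source)
         (or (non-recursive-git-reference? source)
             (disarchive-can-handle? source))))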

> they’re not opposed to the idea of eventually hosting or maintaining
> the Disarchive database (in fact one of the developers thought we
> were hosting it in Git and that as such they were already archiving
> it—maybe we could go back to Git?);

It’s a possibility, but right now I’m hopeful that the database will be
in the care of SWH directly before too long.  I’d rather wait and see at
this point.  I’m sure we could manage it, but the uncompressed size of
the Disarchive specification of a Chromium tarball is 366M.  Storing all
the XZ specifications uncompressed is over 20G.  It would be a big Git
repo!

>   • bit-for-bit archival: there’s a tension between making SWH a
> “canonical” representation of VCS repos and making it a faithful,
> bit-for-bit identical copy of the original, and there are different
> opinions in the team here; our use case pretty much requires
> bit-for-bit copies, and fortunately this is what SWH is giving us in
> practice for Git repos, so checkout authentication (for example)
> should work even when fetching Guix from SWH.

That’s interesting.  I’m sure most of us in the Guix camp are on team
bit-for-bit, but I’m sure we can all agree that it’s not easy to get
there.

> There were other discussions about Guix and Nix and I was pleased to see
> people were enthusiastic about functional package management and about
> our whole endeavor.
>
> Anyway I think we can take this as an opportunity to increase bandwidth
> with the SWH developers!

Good idea.  It’s nice when our efforts and experience produce something
useful to the broader free software community.  :)


-- Tim



Preservation of Guix Report 2021-11-30

2021-12-01 Thread Timothy Sample
Hi Guix!

Here’s a new version of the Preservation of Guix Report:



I actually made one a month ago but my message about it never made it to
the list somehow.  The most important part of that message was to
highlight how well we are doing for Git sources.

Here’s what I wrote:

>  This version has a breakdown by different origin types.  The good
>  news is that Git origins are doing very well.  We’ve confirmed that
>  97.2% of the 9,272 Git origins that we’re tracking are in the SWH
>  archive.  Most of the progress there is due to zimoun wading through
>  the missing packages and telling SWH to store them – thanks, zimoun!

That’s still basically true this month, but we have a few more missing
Git sources.  Actually, we are starting to lose sources!  If you look at
the graph of commits, you can see a sharp increase in missing sources
for recent commits.  It looks like a problem on the SWH side.  Visiting
[1] and selecting “Show all visits”, you can see that the nixguix loader
has been having trouble loading our “sources.json” recently.

[1] 


I will try and get in touch with SWH about this.  While it’s troubling,
it certainly is a good confirmation that doing some basic monitoring is
important!

That’s the bad news.  The good news is I’ve added XZ support to
Disarchive (to be officially released soon).  That means
that we have information about 4K more sources.  We now know the status
of 80% of our sources.  Unfortunately, 40% of the XZ sources are
missing!  Most of them are old, as can be seen in this (secret) graph:



(The filename format is “{tar-gz,tar-xz,git}-{rel,abs}-hist.svg” if you
want to see all the secret graphs.)

Lastly, if you scroll to the bottom of the report and select “View
Schema”, I’ve added some example queries that generate lists of
interesting sources.  For example – if you’re so inclined – you could
look at the 128 “unknown”, non-recursive Git sources that we should know
about and figure out what’s going on.  ;)


-- Tim



Re: Desktops on non-x86_64 systems

2021-12-01 Thread Maxim Cournoyer
Hi Ludovic,

Ludovic Courtès  writes:

> Hi!
>
> Maxim Cournoyer  skribis:
>
>> I've updated the branch wip-cross-built-rust; it seems to build and run
>> OK (although running the binary produced by compiling hello.rs with the
>> cross-built i686-linux rustc in a 32 bit VM took 47 sec (!?)),
>> apparently hanging on something before outputting correctly the message
>> and exiting with 0.
>>
>> I'd now like to figure out the top-level plumbing required to get this
>> rust-i686-linux x86-64 package accepted in the realm of i686-linux
>> packages (cross the architecture boundary).  Is this even possible in
>> Guix?
>>
>> In other words, I'd like the i686 architecture to be able to use this
>> rust-i686-linux cross built from x86_64 as if it was a *native* package.
>
> It’s not possible as it would imply that i686 is able to run x86_64
> code.
>
> What we’d need to do is “cut the dependency graph” at the architecture
> boundary, similar to what’s described in
> .
>
> Concretely, we’d cross-build Rust for i686 once; we’d put it in a
> tarball, store it at ftp.gnu.org, and make the rust 1.54 package (or
> whatever that is) be equal to that tarball, unpacked, when the current
> system is i686.  (Similar to the ‘guile-bootstrap’ package.)

OK!  Good to know that it's been done before!  Thanks for the pointer.

> It does mean that the cross-built Rust must be statically linked.

OK.  That's probably not too difficult, given the cozy relationship Rust
enjoys with static linking.  Where does this requirement come from,
though?  And would we need to use something other than glibc, since
IIUC it cannot be completely statically linked into the produced
binaries?

> To reduce the risks associated with binary blobs, the Rust build should
> ideally be reproducible, so that anyone can verify that the thing we put
> at ftp.gnu.org is indeed Rust as cross-compiled from x86_64.
>
> How long is the road ahead in your opinion?

I currently have a runtime problem with the build: the correctly
compiled hello.rs program below takes ages to run:

--8<---cut here---start->8---
$ cat hello.rs
// This is a comment, and is ignored by the compiler
// You can test this code by clicking the "Run" button over there ->
// or if you prefer to use your keyboard, you can use the "Ctrl + Enter" shortcut

// This code is editable, feel free to hack it!
// You can always return to the original code by clicking the "Reset" button ->

// This is the main function
fn main() {
    // Statements here are executed when the compiled binary is called

    // Print text to the console
    println!("Hello World!");
}

$ time rustc hello.rs

real    0m3.465s
user    0m1.113s
sys     0m1.217s

$ file hello
hello: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically
linked, interpreter /lib/ld-linux.so.2,
BuildID[sha1]=5458fb195357d02ff6de3d429201d69c16f03e1b, for GNU/Linux 2.6.32,
with debug_info, not stripped

$ time ./hello
Hello World!

real    0m41.361s
user    0m41.319s
sys     0m0.028s
--8<---cut here---end--->8---

41 s to print hello world, eh.

The problem seems to lie somewhere in the cross-compiled glibc, which
spends lots of time initializing libpthread and acquiring mutexes:

--8<---cut here---start->8---
# perf record --call-graph dwarf /path/to/hello_world
# perf report --no-inline
Samples: 12K of event 'cycles', Event count (approx.): 85948101927
  Children  Self  Command  Shared Object   Symbol
+   88.62%     0.00%  hello    libpthread-2.33.so  [.] _init
+   88.62%    11.37%  hello    libpthread-2.33.so  [.] __pthread_initialize_minimal_internal
+   41.84%    34.58%  hello    libpthread-2.33.so  [.] __pthread_mutex_lock_full
+   35.37%    35.18%  hello    libpthread-2.33.so  [.] __pthread_mutex_lock
+   11.19%    11.16%  hello    libpthread-2.33.so  [.] __x86.get_pc_thunk.di
+    7.10%     7.02%  hello    libpthread-2.33.so  [.] __x86.get_pc_thunk.si
     0.59%     0.14%  hello    [kernel.kallsyms]   [k] apic_timer_interrupt
     0.45%     0.00%  hello    [kernel.kallsyms]   [k] smp_apic_timer_interrupt
     0.35%     0.00%  hello    [kernel.kallsyms]   [k] hrtimer_interrupt
     0.28%     0.02%  hello    [kernel.kallsyms]   [k] __hrtimer_run_queues
     0.25%     0.00%  hello    [kernel.kallsyms]   [k] tick_sched_timer
     0.19%     0.00%  hello    [kernel.kallsyms]   [k] tick_sched_handle
     0.19%     0.01%  hello    [kernel.kallsyms]   [k] update_process_times
     0.16%     0.00%  hello    [unknown]           [k] 0xf4a15ff8
     0.13%     0.01%  hello    [kernel.kallsyms]   [k] scheduler_tick
     0.05%     0.01%  hello    [kernel.kallsyms]   [k] irq_exit
     0.05%     0.00%  hello    [kernel.kallsyms]   [k] tick_sched_do_timer

Re: Update on bordeaux.guix.gnu.org

2021-12-01 Thread Ricardo Wurmus

Hi,

[space is running out on bayfront, so I wrote:]


> Remember that I’ve got three 256G SSDs here that I could send to
> wherever bayfront now sits.  With LVM or a RAID configuration
> these could just be added to the storage pool — if bayfront has
> sufficient slots for three more disks.


You wrote in response:

> Good to know.  In that case we’d need to come up with (1) an updated
> Guix System config with LVM, and (2) a way to copy the existing store
> over to the new storage, which sounds tricky if the existing disk is
> to be kept.


We could first install Guix System with the adjusted bayfront config
on a separate machine (e.g. on a build node at the MDC), onto a volume
with LVM (using as many of the SSDs as needed).  Copy signing keys
etc. from bayfront.  Then we’d pretty much export/import the bayfront
store over the network.  Once everything has been copied, we turn off
bayfront, swap the disks, and boot it up again.  If everything works
all right we add the original disk (and any unused left-over disks) to
the LVM volume to extend the storage pool.

The trickiest bit is to minimize the time between finishing the sync
and swapping the disks.



> (Also I think we’re down to 1.5 person who could go on
> site. :-/)


Not great :-/

--
Ricardo



Re: 04/05: gnu: petsc: Update to 3.16.1.

2021-12-01 Thread Ludovic Courtès
Hi,

Mathieu Othacehe  skribis:

> Hey,
>
>> * gnu/packages/maths.scm (petsc): Update to 3.16.1.
>> [native-inputs]: Use PYTHON instead of PYTHON-2.  Add WHICH.
>> [arguments]: Rewrite using gexps.  Pass '--with-openblas-dir'.  In
>> 'configure' phase, modify "config/example_template.py".
>
> I noticed that petsc-* packages are now part of every evaluation of
> c-u-f in Cuirass. Any chance this is related to this patch?

Oops indeed; fixed in 36f18626a9f8e9ba287e0fd3f1d0400345ca5ee7.

It’s not ‘petsc’ that was affected but rather its variants; they’d lead
to a different derivation every time (!), one that fails to build:

  $ ./pre-inst-env guix build -d petsc-openmpi
  /gnu/store/lszhaiiyrkbdlldp42hqfhmaxcvqpfq7-petsc-openmpi-3.16.1.drv
  $ ./pre-inst-env guix build -d petsc-openmpi
  /gnu/store/r2gam9iwv67qvx8hq9sj9rj430qwa31c-petsc-openmpi-3.16.1.drv
  $ ./pre-inst-env guix build -d petsc-openmpi
  /gnu/store/iw7jkd9dqsfxiz8qij53wdhqg9xzgi5g-petsc-openmpi-3.16.1.drv

Fun.  :-)
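
To illustrate the class of problem (this is not the actual petsc bug,
just a contrived sketch): any host-side value that varies between
evaluations and gets spliced into a gexp changes the derivation:

  ;; (random 1000) runs when the package expression is evaluated, so
  ;; each evaluation embeds a different number in the builder script
  ;; and thus yields a different derivation.
  (arguments
   (list #:configure-flags
         #~(list (string-append "--seed="
                                #$(number->string (random 1000))))))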

Ludo’.



Re: Adding Substitute Mirrors page to installer

2021-12-01 Thread raid5atemyhomework
Hi zimoun,


> > Any chance of this getting reviewed and merged within the next five years?
>
> I understand your frustration.  Could you please point me to the patch number?


From 41b174da1e38b71563405f1be48331fbe0e5700d Mon Sep 17 00:00:00 2001
From: raid5atemyhomework 
Date: Tue, 16 Mar 2021 23:45:37 +0800
Subject: [PATCH] gnu: Add substitute mirrors page to installer.

* gnu/installer/services.scm (system-service) [snippet-type]: New field.
(%system-services): Add substitute mirrors.
(service-list-service?): New procedure.
(modify-services-service?): New procedure.
(system-services->configuration): Add support for services with
`'modify-services` snippets.
* gnu/installer/newt/services.scm (run-substitute-mirror-page): New
procedure.
(run-services-page): Call `run-substitute-mirror-page`.
* gnu/services/base.scm (guix-shepherd-service)[start]: Accept second
argument, a space-separated list of substitute URLs.
* gnu/installer/final.scm (%user-module): New variable.
(read-operating-system): New procedure.
(install-system): Read the installation configuration file and extract
substitute URLs to pass to `guix-daemon` start action.
* gnu/installer/tests.scm: Add new page in testing.
---
 gnu/installer/final.scm | 37 +++-
 gnu/installer/newt/services.scm | 26 +-
 gnu/installer/services.scm  | 62 -
 gnu/installer/tests.scm | 12 +--
 gnu/services/base.scm   | 15 ++--
 5 files changed, 136 insertions(+), 16 deletions(-)

diff --git a/gnu/installer/final.scm b/gnu/installer/final.scm
index fc0b7803fa..2324c960f2 100644
--- a/gnu/installer/final.scm
+++ b/gnu/installer/final.scm
@@ -22,9 +22,13 @@
   #:use-module (gnu installer steps)
   #:use-module (gnu installer utils)
   #:use-module (gnu installer user)
+  #:use-module (gnu services)
+  #:use-module (gnu services base)
   #:use-module (gnu services herd)
+  #:use-module (gnu system)
   #:use-module (guix build syscalls)
   #:use-module (guix build utils)
+  #:use-module (guix ui)
   #:use-module (gnu build accounts)
   #:use-module (gnu build install)
   #:use-module (gnu build linux-container)
@@ -38,6 +42,20 @@
   #:use-module (ice-9 rdelim)
   #:export (install-system))

+;; XXX duplicated from guix/scripts/system.scm, but that pulls in
+;; (guix store database), which requires guile-sqlite which is not
+;; available in the installation environment.
+(define %user-module
+  ;; Module in which the machine description file is loaded.
+  (make-user-module '((gnu system)
+                      (gnu services)
+                      (gnu system shadow))))
+
+(define (read-operating-system file)
+  "Read the operating-system declaration from FILE and return it."
+  (load* file %user-module))
+;; XXX
+
 (define %seed
   (seed->random-state
    (logxor (getpid) (car (gettimeofday)))))
@@ -174,6 +192,16 @@ or #f.  Return #t on success and #f on failure."
   options
   (list (%installer-configuration-file)
 (%installer-target-dir
+         ;; Extract the substitute URLs of the user configuration.
+         (os (read-operating-system (%installer-configuration-file)))
+         (substitute-urls (and (operating-system? os)
+                               (and=> (find (lambda (service)
+                                              (eq? guix-service-type
+                                                   (service-kind service)))
+                                            (operating-system-services os))
+                                      (compose guix-configuration-substitute-urls
+                                               service-value))))
  (database-dir    "/var/guix/db")
  (database-file   (string-append database-dir "/db.sqlite"))
  (saved-database  (string-append database-dir "/db.save"))
@@ -206,8 +234,15 @@ or #f.  Return #t on success and #f on failure."
(lambda ()
  ;; We need to drag the guix-daemon to the container MNT
  ;; namespace, so that it can operate on the cow-store.
+ ;; Also we need to change the substitute URLs to whatever
+ ;; the user selected during setup, so that the mirrors are
+ ;; used during install, not just after install.
  (stop-service 'guix-daemon)
- (start-service 'guix-daemon (list (number->string (getpid))))
+ (start-service 'guix-daemon
+                `(,(number->string (getpid))
+                  ,@(if substitute-urls
+                        `(,(string-join substitute-urls))
+                        '())))

  (setvbuf (current-output-port) 'none)
  (setvbuf (current-error-port) 'none)
diff --git a/gnu/installer/newt/services.scm b/gnu/installer/newt/services.scm
index 74f28

Re: Desktops on non-x86_64 systems

2021-12-01 Thread Maxim Cournoyer
Hi again, Ludovic et al!

I'm trying another direction in my reply here based on recent findings;

Ludovic Courtès  writes:

> Hi!
>
> Maxim Cournoyer  skribis:
>
>> I've updated the branch wip-cross-built-rust; it seems to build and run
>> OK (although running the binary produced by compiling hello.rs with the
>> cross-built i686-linux rustc in a 32 bit VM took 47 sec (!?)),
>> apparently hanging on something before outputting correctly the message
>> and exiting with 0.
>>
>> I'd now like to figure out the top-level plumbing required to get this
>> rust-i686-linux x86-64 package accepted in the realm of i686-linux
>> packages (cross the architecture boundary).  Is this even possible in
>> Guix?
>>
>> In other words, I'd like the i686 architecture to be able to use this
>> rust-i686-linux cross built from x86_64 as if it was a *native* package.
>
> It’s not possible as it would imply that i686 is able to run x86_64
> code.

Does it?  Since the package was cross-compiled, the resulting binary is
executable on i686 (and dynamically linked to other cross-compiled
shared libraries which are executable there as well) -- it seems natural
that a cross-compiled binary for architecture X should be allowed to
become a part in the dependency graph of a package on that architecture.
I understand that an i686-linux machine wouldn't be able to fully
bootstrap itself -- it would rely on a x86_64-linux machine (either via
offloading or pre-built substitutes) to provide the cross-compiled
rustc; inconvenient, but preferable to some arbitrary binary blob
fetched from the internet (and not that different from using a bootstrap
binary from ftp.gnu.org).

> What we’d need to do is “cut the dependency graph” at the architecture
> boundary, similar to what’s described in
> .
>
> Concretely, we’d cross-build Rust for i686 once; we’d put it in a
> tarball, store it at ftp.gnu.org, and make the rust 1.54 package (or
> whatever that is) be equal to that tarball, unpacked, when the current
> system is i686.  (Similar to the ‘guile-bootstrap’ package.)
>
> It does mean that the cross-built Rust must be statically linked.

The above is a show stopper for rustc, I just learned.  Rust has this
feature called proc macros (procedural macros) that are implemented as
dynamic libraries, and it's a rather core feature, used by the main
serialization/deserialization facilities in Rust (and needed by rustc to
bootstrap itself).  So a statically linked rustc appears near-useless.

> To reduce the risks associated with binary blobs, the Rust build should
> ideally be reproducible, so that anyone can verify that the thing we put
> at ftp.gnu.org is indeed Rust as cross-compiled from x86_64.

Reproducibility should not be an issue; our rust bootstrap chain is
reproducible, except perhaps for the first mrustc-produced 1.40 rust.

Thanks,

Maxim