Re: Desktops on non-x86_64 systems

2021-11-28 Thread Ricardo Wurmus



Tobias Platen  writes:


On Sat, 2021-11-27 at 19:43 -0800, John Soo wrote:

Hi Guix,

I had the same thought as Maxim.  In my quest for arm support for
ghc, I thought about using a cross-compiled version.  Is this
possible or even desirable?  I think for rust and ghc it would be
very helpful - if somewhat less principled than a bootstrap all
the way up (on the same computer).

I'm curious what the consensus is here.

Kindly,

John

 
I have a Talos II, on which I have rust running.  But that one is
not in Debian or Guix; it uses its own installer.


How was it bootstrapped?

--
Ricardo



Re: 13/30: daemon: Print the line whence we expect an integer.

2021-11-28 Thread Ludovic Courtès
Hi Tobias!

guix-comm...@gnu.org skribis:

> --- a/nix/libstore/local-store.cc
> +++ b/nix/libstore/local-store.cc
> @@ -839,7 +839,8 @@ template T 
> LocalStore::getIntLineFromSubstituter(Agent & run)
>  {
>  string s = getLineFromSubstituter(run);
>  T res;
> -if (!string2Int(s, res)) throw Error("integer expected from stream");
> +if (!string2Int(s, res))
> +throw Error(format("integer expected from stream: %1%") % s);

I would not print ‘s’ as-is because it could contain garbage, which in
turn could have undesirable side effects.

My 2¢,
Ludo’.



Re: Update on bordeaux.guix.gnu.org

2021-11-28 Thread Ludovic Courtès
Hello,

Christopher Baines  skribis:

> I've been doing some performance tuning, submitting builds is now more
> parallelised, a source of slowness when fetching builds has been
> addressed, and one of the long queries involved in allocating builds has
> been removed, which also improved handling of the WAL (Sqlite write
> ahead log).
>
> There's also a few new features. Agents can be deactivated which means
> they won't get any builds allocated. The coordinator now checks the
> hashes of outputs which are submitted, a safeguard which I added because
> the coordinator now also supports resuming the uploads of outputs. This
> is particularly important when trying to upload large (> 1GiB) outputs
> over slow connections.
>
> I also added a new x86_64 build machine. It's a 4 core Intel NUC that I
> had sitting around, but I cleaned it up and got it building things. This
> was particularly useful as I was able to use it to retry building
> guile@3.0.7, which is extremely hard to build [2]. This was blocking
> building the channel instance derivations for x86_64-linux.
>
> 2: 
> https://data.guix.gnu.org/gnu/store/7k6s13bzbz5fd72ha1gx9rf6rrywhxzz-guile-3.0.7.drv

Neat!  (Though I wouldn’t say building Guile is “extremely hard”,
especially on x86_64.  :-))  The ability to keep retrying is much
welcome.

> On the related subject of data.guix.gnu.org (which is the source of
> derivations for bordeaux.guix.gnu.org, as well as a recipient of build
> information), there have been a couple of changes. There was some web
> crawler activity that was slowing data.guix.gnu.org down significantly,
> NGinx now has some rate limiting configuration to prevent crawlers
> abusing the service. The other change is that substitutes for the latest
> processed revision of master will be queried on a regular basis, so this
> page [3] should be roughly up to date, including for ci.guix.gnu.org.
>
> 3: 
> https://data.guix.gnu.org/repository/1/branch/master/latest-processed-revision/package-substitute-availability

That’s good news.  That also means that things like

should be more up-to-date, which is really cool!  This can have a
drastic impact on how we monitor and address reproducibility issues.

> Now for some not so good things:
>
> Submitting builds wasn't working quite right for around a month, one of
> the changes I made to speed things up led to some builds being
> missed. This is now fixed, and all the missed builds have been
> submitted, but this was more than 50,000 builds. This, along with all
> the channel instance derivation builds that can now proceed mean that
> there's a very large backlog of x86 and ARM builds which will probably
> take at least another week to clear. While this backlog exists,
> substitute availability for x86_64-linux will be lower than usual.

At least it’s nice to have a clear picture of which builds are missing,
how much of a backlog we have, and what needs to be rebuilt.

> Space is running out on bayfront, the machine that runs the coordinator,
> stores all the nars and build logs, and serves the substitutes. I knew
> this was probably going to be an issue, bayfront didn't have much space
> to begin with, but I had hoped I'd be further forward in developing some
> way to allow moving the nars around between multiple machines, to remove
> the need to store all of them on bayfront. I have got a plan, there's
> some ideas I mentioned back in February [4], but I haven't got around to
> implementing anything yet. The disk space usage trend is pretty much
> linear, so if things continue without any change, I think it will be
> necessary to pause the agents within a month, to avoid filling up
> bayfront entirely.

Ah, bummer.  I hope we can find a solution one way or another.
Certainly we could replicate nars on another machine with more disk,
possibly buying the necessary hardware with the project funds.

Thanks for the update!

Ludo’.



Re: Help with package AppImage support

2021-11-28 Thread Ludovic Courtès
Hi,

The log reads:

--8<---cut here---start->8---
-- Found PkgConfig: 
/gnu/store/krpyb0zi700dcrg9cc8932w4v0qivdg9-pkg-config-0.29.2/bin/pkg-config 
(found version "0.29.2") 
-- Importing target libfuse via pkg-config (fuse, shared)
-- Checking for module 'fuse'
--   Found fuse, version 2.9.9
-- Importing target libssl via pkg-config (openssl, shared)
-- Checking for module 'openssl'
--   Found openssl, version 1.1.1j
-- Using system mksquashfs
CMake Error at src/build-runtime.cmake:19 (message):
  TARGET NOT found libsquashfuse
Call Stack (most recent call first):
  src/CMakeLists.txt:16 (include)


-- Configuring incomplete, errors occurred!
See also 
"/tmp/guix-build-appimagekit-13.drv-0/build/CMakeFiles/CMakeOutput.log".
command "cmake" "../source" "-DCMAKE_BUILD_TYPE=RelWithDebInfo" 
"-DCMAKE_INSTALL_PREFIX=/gnu/store/9z0n0kia0kp63vdlvpdlm2qcky67x00y-appimagekit-13"
 "-DCMAKE_INSTALL_LIBDIR=lib" "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE" 
"-DCMAKE_INSTALL_RPATH=/gnu/store/9z0n0kia0kp63vdlvpdlm2qcky67x00y-appimagekit-13/lib"
 "-DCMAKE_VERBOSE_MAKEFILE=ON" "-DXZ_EXTERNAL=On" "-DUSE_SYSTEM_MKSQUASHFS=On" 
failed with status 1
--8<---cut here---end--->8---

Maybe “TARGET NOT found” is CMake’s original way of saying that you’re
missing a dependency (libsquashfuse), no?

BTW, since your ultimate goal is to have ‘guix pack -f appimage’, you
could look at the AppImage spec.  It might be that you can do the heavy
lifting of creating a file in that format without resorting to
AppImageKit.

HTH,
Ludo’.



Re: build system option to allow CPU optimizations?

2021-11-28 Thread Ludovic Courtès
Hi,

zimoun  skribis:

> On Wed, 24 Nov 2021 at 13:10, Ricardo Wurmus  wrote:
>
>> The build phases that patch out these features would have to check 
>> for that build system option, much like they check the TESTS? 
>> option before attempting to run tests.
>
> Then it could be a transformation.   The idea sounds good to me.

I’ve been working on it last week with my HPC hat on.

To be clear, I think in many cases, passing ‘-march’ like you suggest is
the wrong approach; instead software should use (and usually does use)
function multi-versioning:

  https://hpc.guix.info/blog/2018/01/pre-built-binaries-vs-performance/

I found one case though where this is not possible: C++ header-only
libraries such as Eigen contain hand-optimized vectorized routines,
selected at build time, but we end up compiling Eigen users as the
x86_64/AArch64 baseline, which is a waste.  (If you do know of other
problematic cases, I’m interested in taking a look!)

My solution to that is “package multi-versioning” via a transformation
option.  Hopefully I’ll submit preliminary patches within a week or so!

Thanks,
Ludo’.



Re: Derivations differ between computers?

2021-11-28 Thread Ludovic Courtès
Hi!

zimoun  skribis:

> Oh, indeed!  Nothing weird in fact. :-) The derivations are different
> (the way to compute) but the outputs are the same; recursively.

Lesson: always compare the output file names in the .drv files before
digging further.  (I made that mistake a number of times!)

Ludo’.



Re: Help with package AppImage support

2021-11-28 Thread Ekaitz Zarraga
> Hi,
>
> The log reads:
>
> --8<---cut here---start->8---
> -- Found PkgConfig: 
> /gnu/store/krpyb0zi700dcrg9cc8932w4v0qivdg9-pkg-config-0.29.2/bin/pkg-config 
> (found version "0.29.2")
> -- Importing target libfuse via pkg-config (fuse, shared)
> -- Checking for module 'fuse'
> --   Found fuse, version 2.9.9
> -- Importing target libssl via pkg-config (openssl, shared)
> -- Checking for module 'openssl'
> --   Found openssl, version 1.1.1j
> -- Using system mksquashfs
> CMake Error at src/build-runtime.cmake:19 (message):
>   TARGET NOT found libsquashfuse
> Call Stack (most recent call first):
>   src/CMakeLists.txt:16 (include)
>
>
> -- Configuring incomplete, errors occurred!
> See also 
> "/tmp/guix-build-appimagekit-13.drv-0/build/CMakeFiles/CMakeOutput.log".
> command "cmake" "../source" "-DCMAKE_BUILD_TYPE=RelWithDebInfo" 
> "-DCMAKE_INSTALL_PREFIX=/gnu/store/9z0n0kia0kp63vdlvpdlm2qcky67x00y-appimagekit-13"
>  "-DCMAKE_INSTALL_LIBDIR=lib" "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE" 
> "-DCMAKE_INSTALL_RPATH=/gnu/store/9z0n0kia0kp63vdlvpdlm2qcky67x00y-appimagekit-13/lib"
>  "-DCMAKE_VERBOSE_MAKEFILE=ON" "-DXZ_EXTERNAL=On" 
> "-DUSE_SYSTEM_MKSQUASHFS=On" failed with status 1
> --8<---cut here---end--->8---
>
> Maybe “TARGET NOT found” is CMake’s original way of saying that you’re
> missing a dependency (libsquashfuse), no?

I mean, I already reached that conclusion, but if you check the packages in
the original message, the dependency is included.  I can't find out why it
isn't finding the library.

I need some help here making CMake find the dependency I already included
and packaged.

> BTW, since your ultimate goal is to have ‘guix pack -f appimage’, you
> could look at the AppImage spec.  It might be that you can do the heavy
> lifting of creating a file in that format without resorting to
> AppImageKit.

There are some possible options for this.

An AppImage is basically an ELF with its contents in a SquashFS image
inside.  When it runs, it decompresses itself and runs the contents.
AppImageKit is basically one possible runtime for this behavior, which is
just a simple shell-like launcher.

I just wanted to add this package too, for those who want to generate
AppImages by themselves; later I'll decide whether to use it in the actual
implementation.

> HTH,
> Ludo’.

Thanks!
Ekaitz




Re: Desktops on non-x86_64 systems

2021-11-28 Thread Ludovic Courtès
Hi,

Maxim Cournoyer  skribis:

> I'd like to revise my position, as I got confirmation that it ought to
> be possible to cross-build rustc for other architectures from our
> (cleanly bootstrapped) x86_64 rustc!

[...]

> I haven't yet done any reading, but if Mutabah (the author of mrustc)
> says it's possible, I believe it!

I suspect there’s a difference between “it’s possible” and “we’ve
successfully cross-compiled Rust.”  :-)

We could try that, but IMO we first need a solution within days—we just
cannot reasonably let this branch go on for longer than that.  The
librsvg 2.40 hack would give us Xfce (maybe GNOME?) on i686 today.

Perhaps we can address all this in several steps:

  1. apply the librsvg 2.40 hack now so we can merge
 ‘core-updates-frozen’ this week for real;

  2. later on, introduce some Rust binary for non-x86_64; that would
 lead to rebuilds only on those architectures;

  3. eventually, update mrustc (and have it call gcc with -O0 to reduce
 its memory footprint), or use GCC-Rust instead if that’s viable.

WDYT?

I think we agree on the strategy and just need to agree on tactics.
:-)

Thanks,
Ludo’.



Re: Desktops on non-x86_64 systems

2021-11-28 Thread Ricardo Wurmus



Ludovic Courtès  writes:

We could try that, but IMO we first need a solution within days—we
just cannot reasonably let this branch go on for longer than that.
The librsvg 2.40 hack would give us Xfce (maybe GNOME?) on i686
today.


Perhaps we can address all this in several steps:

  1. apply the librsvg 2.40 hack now so we can merge
 ‘core-updates-frozen’ this week for real;

  2. later on, introduce some Rust binary for non-x86_64; that
 would lead to rebuilds only on those architectures;

  3. eventually, update mrustc (and have it call gcc with -O0 to
 reduce its memory footprint), or use GCC-Rust instead if
 that’s viable.


WDYT?


This sounds sensible.  Merging core-updates-frozen does *not* mean
that it needs to be ready for release.  It’s been delayed for too
long, and further delays just serve to taint our morale and drain
our energy, applying fixes again and again with no end in sight.


These ongoing delays have made core-updates-frozen grow so much in
scope that we cannot afford to delay a merge any longer.  Let’s
merge asap, even if that means using an older librsvg right now.
Then add rust for non-x86_64 — either by cross-building it
ourselves or getting an existing binary — to restore feature
parity.  Then work on a long-term solution.



--
Ricardo



Re: build system option to allow CPU optimizations?

2021-11-28 Thread Ricardo Wurmus



Ludovic Courtès  writes:


Hi,

zimoun  skribis:

On Wed, 24 Nov 2021 at 13:10, Ricardo Wurmus  wrote:

The build phases that patch out these features would have to
check for that build system option, much like they check the
TESTS? option before attempting to run tests.

Then it could be a transformation.  The idea sounds good to me.

I’ve been working on it last week with my HPC hat on.

To be clear, I think in many cases, passing ‘-march’ like you
suggest is the wrong approach; instead software should use (and
usually does use) function multi-versioning:

  https://hpc.guix.info/blog/2018/01/pre-built-binaries-vs-performance/


It may very well be the wrong approach in principle, but I also
think that it’s a neat escape hatch for specific use cases.
Separating reproducibility patching makes the package
transformation mechanism more powerful and appealing.  Much like
respecting TESTS? makes it easy for users of modified packages to
bypass a failing test suite, making the patching of Makefiles to
remove CPU tuning conditional would make for much less complex
custom package definitions.


I found one case though where this is not possible: C++
header-only libraries such as Eigen contain hand-optimized
vectorized routines, selected at build time, but we end up
compiling Eigen users as the x86_64/AArch64 baseline, which is a
waste.  (If you do know of other problematic cases, I’m
interested in taking a look!)

My solution to that is “package multi-versioning” via a
transformation option.  Hopefully I’ll submit preliminary patches
within a week or so!


Oh, exciting!

--
Ricardo



Re: Help with package AppImage support

2021-11-28 Thread Ricardo Wurmus



Ekaitz Zarraga  writes:

Maybe “TARGET NOT found” is CMake’s original way of saying that
you’re missing a dependency (libsquashfuse), no?


I mean, I already reached that conclusion, but if you check the
packages in the original message, the dependency is included.  I
can't find out why it isn't finding the library.

I need some help here making CMake find the dependency I already
included and packaged.


Does the CMakeLists.txt or the files under cmake/ mention
libsquashfuse?  There should either be a Find* macro that
describes the tests CMake will perform to determine certain
variables for using libsquashfuse, or it will use a conventional
way to do that: via pkg-config or using .cmake files shipped by
libsquashfuse.


So there could be different problems here: libsquashfuse doesn’t
install the expected cmake files or installs them in the wrong
place; or this package tells CMake to search using pkg-config but
you don’t have pkg-config among the inputs; or all of this is in
place but pkg-config fails because a library isn’t propagated
when it should be, etc.


The first step should be to figure out if CMake uses one of these
Find* macros or some other way.  If it’s a Find* macro, determine
whether it is provided by libsquashfuse or appimagekit.


--
Ricardo



Re: Update on bordeaux.guix.gnu.org

2021-11-28 Thread Ricardo Wurmus



Ludovic Courtès  writes:


The disk space usage trend is pretty much linear, so if things
continue without any change, I think it will be necessary to
pause the agents within a month, to avoid filling up bayfront
entirely.


Ah, bummer.  I hope we can find a solution one way or another.
Certainly we could replicate nars on another machine with more
disk, possibly buying the necessary hardware with the project
funds.


Remember that I’ve got three 256G SSDs here that I could send to
wherever bayfront now sits.  With LVM or a RAID configuration
these could just be added to the storage pool — if bayfront has
sufficient slots for three more disks.


--
Ricardo