Re: question regarding substitute* and #t
Mark H Weaver writes: > After we switch to using 'invoke' everywhere, or more precisely, after > we arrange to never return #false from any phase or snippet, then > there should be one more step before removing the vestigial #true > returns: we should change the code that calls phases or snippets to > ignore the value(s) returned by those procedures. When that is done, > then the #t's will truly be vestigial. Does that make sense? I think we should start removing the vestigial #true right away. Why wait until we can make the code that calls phases ignore the values returned by those phases? As it stands, that code errors out only when a phase returns #false, not when it returns any other value (even unspecified). WDYT? The #true is already vestigial. In fact, #true being vestigial is what annoyed me and made me start the original thread discussing ways to get rid of it. https://lists.gnu.org/archive/html/guix-devel/2017-12/msg00235.html And, it so happened that we concluded the best way to go forward was to deprecate boolean results altogether and transition to an exception based system.
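To make the two conventions concrete, here is a rough sketch (an illustration, not code from this thread): `invoke' comes from (guix build utils), and the phase bodies are only examples.

--8<---cut here---start->8---
(use-modules (guix build utils))   ;for `invoke'

;; Old convention: the phase's boolean return value signals success,
;; so a failing command must yield #f.
(define (install-phase-old . args)
  (zero? (system* "make" "install")))

;; With `invoke', a failing command raises an exception instead of
;; returning #f; the trailing #t is only there to satisfy the current
;; calling convention, i.e. the "vestigial" value under discussion.
(define (install-phase-new . args)
  (invoke "make" "install")
  #t)
--8<---cut here---end--->8---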
Re: Cuirass news
Hey Ludo, > And! This brings a whole set of new bugs that I’m hunting notably on > berlin (which may thus lag behind…). Overall I think it’ll make Cuirass > easier to work with and more “introspectable”. I went through your recent Cuirass commits and they look awesome. I'll try to update Cuirass on my build server and give you some feedback. Thanks, Mathieu
Re: Cuirass news
BTW, when I try to access berlin's Cuirass HTTP API, I have the following error:

--8<---cut here---start->8---
curl -i -H "Accept: application/json" http://berlin.guixsd.org/api/latestbuilds?nr=10
HTTP/1.1 502 Bad Gateway
Server: nginx/1.12.2
Date: Thu, 25 Jan 2018 12:55:20 GMT
Content-Type: text/html
Content-Length: 173
Connection: keep-alive

502 Bad Gateway
502 Bad Gateway
nginx/1.12.2
--8<---cut here---end--->8---

Mathieu
Re: Cuirass news
Mathieu Othacehe skribis: > BTW, when I try to access berlin's Cuirass HTTP API, I have the following > error: Yeah, a few things are still brittle and I’m starting/stopping Cuirass on berlin quite frequently. Don’t upgrade your server yet. :-) Ludo’.
aarch64 machines donated by ARM!
Hello Guix! In December, Richard Henwood of ARM Holdings kindly donated two SoftIron OverDrive 1000: https://softiron.com/development-tools/overdrive-1000/ These are 4-core, pretty fast machines. Both are currently at my place and I recently added one to the berlin.guixsd.org build farm. It started building packages from the ‘core-updates’ branch, though it’s not working at full speed yet due to the Cuirass developments going on. The second machine needs a replacement of its power supply unit. Unfortunately, SoftIron stopped answering my messages after initially offering to provide a replacement. I started looking for a compatible PSU in on-line shops but the form factor is quite unusual (160x65x65mm). If you know where to get that, I’m all ears! Anyway, we’ll now be able to continuously provide binaries for aarch64, and that’s really great news. I suppose we’ll need to increase the build capacity for aarch64 eventually so we can keep up with the change rate, but that’s a great start. Thanks a lot to ARM and to Richard for this donation! Ludo’.
Re: Errors encountered in building guix from source.
Hello Fis, Fis Trivial writes: [...] > * Add --pure option to `guix environment` This is what I do even on GuixSD for Guix's Git repository, too. > Then I tried again the added --pure option to `guix environment`: > $ guix environment guix --ad-hoc help2man git strace --pure > > During the process, following questions were emitted by command-not-found > facility provided by Fedora: > > Install package 'cargo' to provide command 'cargo'? [N/y] n [...] > Even if I answer 'y', those packages won't be successfully installed by Fedora > since I already have them. Installing a missing package by guessing from a non-existent command is a Fedora “feature” of Bash. I believe this is the reason for the following failures. You could probably avoid this by starting a Bash process with bash --noprofile [...] > If I ignore the failure and then try: > $ sudo ./pre-inst-env guix-daemon --build-users-group=guixbuild > > I will be told that sudo command is not availabile. > Adding sudo as a dependency in environment will not work, due to this error: > > sudo: /gnu/store/p1fgwswygbw0fgbnpajdhxb0ylmqa20i-profile/bin/sudo must be > owned by uid 0 and have the setuid bit set Please run 'sudo' outside of 'guix environment'. (Press Ctrl+D to exit from a 'guix environment'). [...] Oleg.
Re: Guix Workflow Language ?
Dear Roel, Thank you for your comments. I was imagining your point 2. And the software comes from Guix. The added benefit was: a controlled and reproducible environment. In other words, the added benefit came from the GuixWorkflow (the engine of workflow), and not from the Language (lisp EDSL). But maybe it is the wrong way. From my experience, the classical strategy of writing pipelines is to adapt an already existing workflow to another particular question. We fetch bits here and there, do some ugly and dirty hacks to have some results; then depending on them, a cleaner pipeline is written (or not! :-) or other pieces are tested. Again from my experience, there are (at least) 3 issues: the number of tools to learn and know well enough to be able to adapt; the bits/pieces already available; the environment/dependencies and how they are managed. In this context, since 'lispy' syntax is not mainstream (and will never be), it appears to me as a hard position. That's why I asked if a Guix-backend workflow engine for CWL specs is doable: run CWL workflows on top of the GWL engine. However, I got your point, I guess. You mean: it is a lot of work with unclear benefits over existing engines. Therefore, your point 1. reverses "my issue". Once the pipeline is well-established, write it with GWL! :-) Next, if it is possible to convert this GWL pipeline to a CWL one [+ Docker] (with software coming from Guix), then we can enjoy the CWL-world engine capabilities. The benefit of that is twofold: run the pipeline with different engines, and produce a clean Docker image. So, instead of working on improving the GWL engine (adding features about efficiency, Grid, Amazon, etc.), which is a very tough task, the doable plan would be to add an "exporter". Right? Another question: do you think it is doable to write "importers"? I am not sure that the metaphor is good enough, but do you think it is a feasible goal for the existing GWL to go towards a kind of `Pandoc of workflows`, also packaging the software? And a start should be: - write a parser for a (subset of) CWL YAML file and obtain the GWL representation of the workflow - write an exporter to CWL + Docker image What do you think? About the parser, I haven't yet found an easy-to-use Guile lib for parsing YAML-like files. Any pointers? Adapt some Racket ones? Thank you for your insights. All the best, simon
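On the parser question, one possible starting point, sketched below, is the guile-json library; this assumes the CWL document is written in JSON syntax (which the CWL specification allows) rather than full YAML, and `read-cwl' and `cwl-class' are hypothetical helpers for illustration only.

--8<---cut here---start->8---
(use-modules (json)                 ;guile-json
             (ice-9 textual-ports))

;; Read a CWL file written in JSON syntax; guile-json 1.x represents
;; JSON objects as hash tables.
(define (read-cwl file)
  (json-string->scm
   (call-with-input-file file get-string-all)))

;; Illustrative accessor: the top-level "class" field of a CWL
;; document is "Workflow", "CommandLineTool", and so on.
(define (cwl-class doc)
  (hash-ref doc "class"))
--8<---cut here---end--->8---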
[PATCH] Add SELinux policy for guix-daemon.
Hi Guix, attached is a patch that adds an SELinux policy for the guix-daemon. The policy defines the guix_daemon_t domain and specifies what labels may be accessed and how by processes running in that domain. These file labels are defined: * guix_daemon_conf_t for Guix configuration files (in localstatedir and sysconfdir) * guix_daemon_exec_t for executables spawned by the daemon (which are allowed to run in the guix_daemon_t domain) * guix_daemon_socket_t for the daemon socket file * guix_profiles_t for the contents of the profiles directory The “filecon” statements near the bottom of the file specify which labels are to be used for what file names. I tested this with “guix build --no-grafts --check hello”, “guix build samtools”, “guix gc -C 1k”, and “guix package -p ~/foo -i hello”; no operations were blocked by SELinux. If you want to test this on Fedora, set SELinux to permissive, and make sure to configure Guix properly (i.e. set localstatedir, prefix, and sysconfdir). Then install the policy with “sudo semodule -i etc/guix-daemon.cil”. Then relabel the filesystem (at least /gnu, $localstatedir, $sysconfdir, and $prefix) with something like this: sudo restorecon -R /gnu $localstatedir $sysconfdir $prefix This will take a very long time (a couple of hours). Restart the daemon. Check that it now runs in the guix_daemon_t context: ps -Zax | grep /bin/guix-daemon This should return something like this system_u:system_r:guix_daemon.guix_daemon_t:s0 14886 ? Ss 0:00 /root/.guix-profile/bin/guix-daemon --build-users-group=guix-builder Check the audit log for violations: sudo tail -f /var/log/audit/audit.log | grep x-daemon And then use Guix: guix build --no-grafts --check hello The audit log shouldn’t show you any complaints. At this point you could probably switch to enforcing mode, but I haven’t tested this myself for no particular reason. Open issues: * guix_daemon_socket_t isn’t actually used. All of the socket operations that I observed involve contexts that don’t have anything to do with guix_daemon_socket_t. It doesn’t hurt to have this unused label, but I would have preferred to define socket rules for only this label. Oh well. * “guix gc” cannot access arbitrary links to profiles. By design, the file label of the destination of a symlink is independent of the file label of the link itself. Although all profiles under $localstatedir are labelled, the links to these profiles inherit the label of the directory they are in. For links in the user’s home directory this will be “user_home_t” (for which I’ve added a rule). But for links from root’s home directory, or /tmp, or the HTTP server’s working directory … this won’t work. “guix gc” would be prevented from reading and following these links. * I don’t know if the daemon’s TCP listen feature still works. I didn’t test it and assume that it would require extra rules, because SELinux treats network sockets differently from files. * Is this all correct? I don’t know! I only just learned about the SELinux Common Intermediate Language (CIL), and the documentation is very sparse, so I have no idea if I did something stupid. It seems fine to me, but I must admit that I find it a bit uncomfortable to see so many access types in the rules. * I allowed type transitions from init_t to guix_daemon_t via guix_daemon_exec_t, but also from guix_store_content_t to guix_daemon_t via guix_daemon_exec_t. Type transitions are necessary to get from an allowed entry point to a domain. 
On Fedora “init_t” is the domain in which processes spawned by the init system run. With the first type transition I permit these processes to transition to the guix_daemon_t domain when the executables are labeled as guix_daemon_exec_t (such as the daemon executable itself, and all the helpers it spawns). This much is obvious. But the second type transition is less obvious. It is needed to make sure that we can enter the guix_daemon_t domain even when running the daemon from an executable in the store (which will be running in the “guix_store_content_t” domain). Thinking of this, I wonder if maybe that’s actually a mistake and shouldn’t be permitted. * A possible problem is that I assign all files with a name matching “/gnu/store/.+-(guix-.+|profile)/bin/guix-daemon” the label “guix_daemon_exec_t”; this means that *any* file with that name in any profile would be permitted to run in the guix_daemon_t domain. This is not ideal. An attacker could build a package that provides this executable and convince a user to install and run it, which lifts it into the guix_daemon_t domain. At that point SELinux could not prevent it from accessing files that are allowed for processes in that domain (such as the actual daemon). This makes me wonder if we could do better by generating a much more restrictive policy at installation time, so that only the *exact* file name of the currently installed guix-daemon executable would be labelled with guix_daemon_exec_t, instead of using a regular expression like that. This means that root would have to install/upgrade the policy at installation time whenever the Guix package that provides the effectively running guix-daemon executable is upgraded. Food for thought.
Re: aarch64 machines donated by ARM!
Replying to self: If I had waited another minute, I would have found: https://hydra.gnu.org/jobset/gnu/master This looks like it is testing x86_64 only? Are other architectures available somewhere else? best regards, Richard On Thu, 2018-01-25 at 09:41 -0600, Richard Henwood wrote: > Hi Ludo', > > Thanks for this update and your efforts to get Guix building on > AArch64! Do you perform any automated testing continuously or on > releases? I am interested to see if anything is failing on different > architectures. > > I haven't forgotten that you are down one machine. It sounds like > you'll be perfectly happy just swapping out the power supply, so I'll > ask colleagues what they suggest, or figure out an alternative. > > best regards, > Richard > > On Thu, 2018-01-25 at 14:23 +0100, Ludovic Courtès wrote: > > > > Hello Guix! > > > > In December, Richard Henwood of ARM Holdings kindly donated two > > SoftIron > > OverDrive 1000: > > > > https://softiron.com/development-tools/overdrive-1000/ > > > > These are 4-core, pretty fast machines. Both are currently at my > > place > > and I recently added one to the berlin.guixsd.org build farm. It > > started building packages from the ‘core-updates’ branch, though > > it’s > > not working at full speed yet due to the Cuirass developments going > > on. > > > > The second machine needs a replacement of its power supply unit. > > Unfortunately, SoftIron stopped answering my messages after > > initially > > offering to provide a replacement. I started looking for a > > compatible > > PSU in on-line shops but the form factor is quite unusual > > (160x65x65mm). > > If you know where to get that, I’m all ears! > > > > Anyway, we’ll now be able to continuously provide binaries for > > aarch64, > > and that’s really great news. I suppose we’ll need to increase the > > build capacity for aarch64 eventually so we can keep up with the > > change > > rate, but that’s a great start. > > > > Thanks a lot to ARM and to Richard for this donation! > > > > Ludo’. -- richard.henw...@arm.com Server Software Eco-System Tel: +1 512 410 9612 IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
Re: aarch64 machines donated by ARM!
Hi Ludo', Thanks for this update and your efforts to get Guix building on AArch64! Do you perform any automated testing continuously or on releases? I am interested to see if anything is failing on different architectures. I haven't forgotten that you are down one machine. It sounds like you'll be perfectly happy just swapping out the power supply, so I'll ask colleagues what they suggest, or figure out an alternative. best regards, Richard On Thu, 2018-01-25 at 14:23 +0100, Ludovic Courtès wrote: > Hello Guix! > > In December, Richard Henwood of ARM Holdings kindly donated two > SoftIron > OverDrive 1000: > > https://softiron.com/development-tools/overdrive-1000/ > > These are 4-core, pretty fast machines. Both are currently at my > place > and I recently added one to the berlin.guixsd.org build farm. It > started building packages from the ‘core-updates’ branch, though it’s > not working at full speed yet due to the Cuirass developments going > on. > > The second machine needs a replacement of its power supply unit. > Unfortunately, SoftIron stopped answering my messages after initially > offering to provide a replacement. I started looking for a > compatible > PSU in on-line shops but the form factor is quite unusual > (160x65x65mm). > If you know where to get that, I’m all ears! > > Anyway, we’ll now be able to continuously provide binaries for > aarch64, > and that’s really great news. I suppose we’ll need to increase the > build capacity for aarch64 eventually so we can keep up with the > change > rate, but that’s a great start. > > Thanks a lot to ARM and to Richard for this donation! > > Ludo’. -- richard.henw...@arm.com Server Software Eco-System Tel: +1 512 410 9612 IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
Re: aarch64 machines donated by ARM!
Hi Richard, Richard Henwood skribis: > Replying to self: > > If I had waited another minute, I would have found: > https://hydra.gnu.org/jobset/gnu/master > > This looks like it is testing x86_64 only? > > Are other architectures available somewhere else? hydra.gnu.org is actually building for x86_64, i686, and armv7 (hard float). It continuously builds the 6K+ packages of the distribution. The situation is complicated by the fact that we are migrating to a new build farm, berlin.guixsd.org, which now does aarch64 in addition to Intel. That build farm runs a different CI tool, which provides an HTTP API but does not yet have a web UI like the one you saw at hydra.gnu.org. Thanks again for your help! Ludo’.
Re: [PATCH] Add SELinux policy for guix-daemon.
Hello! Ricardo Wurmus skribis: > attached is a patch that adds an SELinux policy for the guix-daemon. > The policy defines the guix_daemon_t domain and specifies what labels > may be accessed and how by processes running in that domain. Impressive! I know nothing about SELinux so I can’t comment on the specifics. > These file labels are defined: [...] > The audit log shouldn’t show you any complaints. At this point you > could probably switch to enforcing mode, but I haven’t tested this > myself for no particular reason. What about putting this text in a new “SELinux Support” section or similar, along with the current limitations? > Open issues: [...] > * A possible problem is that I assign all files with a name matching > “/gnu/store/.+-(guix-.+|profile)/bin/guix-daemon” the label > “guix_daemon_exec_t”; this means that *any* file with that name in any > profile would be permitted to run in the guix_daemon_t domain. This > is not ideal. An attacker could build a package that provides this > executable and convince a user to install and run it, which lifts it > into the guix_daemon_t domain. At that point SELinux could not > prevent it from accessing files that are allowed for processes in that > domain (such as the actual daemon). > > This makes me wonder if we could do better by generating a much more > restrictive policy at installation time, so that only the *exact* file > name of the currently installed guix-daemon executable would be > labelled with guix_daemon_exec_t, instead of using a regular > expression like that. This means that root would have to > install/upgrade the policy at installation time whenever the Guix > package that provides the effectively running guix-daemon executable > is upgraded. Food for thought. Yeah, guix-daemon.service currently refers to /var/guix/profiles/…/guix-daemon for similar reasons. > From d20bae0953d5d0a6bf1c06ab44505af6dea4df4d Mon Sep 17 00:00:00 2001 > From: Ricardo Wurmus > Date: Thu, 25 Jan 2018 15:21:07 +0100 > Subject: [PATCH] etc: Add SELinux policy for the daemon. > > * etc/guix-daemon.cil.in: New file. > * Makefile.am: Add dist_selinux_policy_DATA. > * configure.ac: Handle --with-selinux-policy-dir. [...] > --- /dev/null > +++ b/etc/guix-daemon.cil.in > @@ -0,0 +1,281 @@ > +; -*- lisp -*- Perhaps add a comment like: ;; This is a specification for SELinux X.Y written in the SELinux ;; Common Intermediate Language (CIL). Fun that it uses sexps. :-) Thanks! Ludo’.
Re: [PATCH] website: donate: Add overdrive1.guixsd.org.
Jonathan Brielmaier skribis: > * website/apps/base/templates/donate.scm (donate-t): Add table row for > overdrive1.guixsd.org. Applied (without the “hosting” part, since it’s not hosted at the MDC). Thanks! :-) Ludo’.
Re: 01/01: gnu: gource: Fix the hashes of mutated GitHub archives.
On Thu, Jan 25, 2018 at 09:17:38AM -0500, Oleg Pykhalov wrote: > wigust pushed a commit to branch master > in repository guix. > > commit 45b486984d8ab092cf002cd0b500df4dc62e186b > Author: Oleg Pykhalov > Date: Thu Jan 25 16:58:35 2018 +0300 > > gnu: gource: Fix the hashes of mutated GitHub archives. > > * gnu/packages/version-control.scm (gource): Fix hash. > -"https://github.com/acaudwell/Gource/archive/" > -"gource-" version ".tar.gz")) > +"https://github.com/acaudwell/Gource/releases/download" > +"/gource-" version "/gource-" version ".tar.gz")) Hey, thanks for fixing this up. The commit message made me think that the hash had changed, but based on this commit it seems that the URL changed somehow, or was originally incorrect. In cases where the hash actually changed, please send a message to bug-guix so we can investigate publicly. The automatically created per-tag GitHub snapshots are not guaranteed to be cached forever by GitHub or recreated deterministically, so their hashes are subject to change. [0] Additionally, if a packager uses `guix download` to check the hash of some file, but uses an incorrect URL in the package definition, Guix will use the file in /gnu/store and never try the URL. So it's easy to commit the wrong URL if you use `guix download`. Instead I recommend downloading the file outside of Guix and using `guix hash`. [0] https://github.com/libgit2/libgit2/issues/4343 https://bugs.gnu.org/28659
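For context, here is roughly what the corrected source field looks like once the hunk above is applied; this is a sketch reconstructed from the quoted diff, it assumes the surrounding gource package definition where `version' is bound and (guix download) provides `url-fetch', and the sha256 is a placeholder to be replaced by the value reported by `guix hash' on a tarball downloaded outside of Guix.

--8<---cut here---start->8---
(origin
  (method url-fetch)
  (uri (string-append "https://github.com/acaudwell/Gource/releases/download"
                      "/gource-" version "/gource-" version ".tar.gz"))
  (sha256
   (base32
    ;; Placeholder only; substitute the hash printed by `guix hash'.
    "0000000000000000000000000000000000000000000000000000")))
--8<---cut here---end--->8---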
Re: question regarding substitute* and #t
Arun Isaac writes: > Mark H Weaver writes: > >> After we switch to using 'invoke' everywhere, or more precisely, after >> we arrange to never return #false from any phase or snippet, then >> there should be one more step before removing the vestigial #true >> returns: we should change the code that calls phases or snippets to >> ignore the value(s) returned by those procedures. When that is done, >> then the #t's will truly be vestigial. Does that make sense? > > I think we should start removing the vestigial #true right away. Why > wait until we can make the code that calls phases ignore the values > returned by those phases? As it stands, that code errors out only when a > phase returns #false, not when it returns any other value (even > unspecified). WDYT? > > The #true is already vestigial. They are not vestigial if we care about code correctness. Phases and snippets are currently specified to return a boolean, and furthermore we must return the _appropriate_ boolean to indicate success or failure. I consider it unacceptable to not bother returning anything, allowing a completely unspecified value to be returned, and to think that this is okay because it happens to work, for now, because of an internal implementation detail of Guile. This (unfortunately widespread) practice of sloppiness in software engineering is how we ended up in the mess we are in today, where our software is drowning in bugs and our systems are hopelessly insecure. Let the annoyance that you and others feel about these unsightly #t's supply the motivation to fix this issue properly. Mark
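As a concrete illustration of the unspecified-value hazard described above (a sketch, not code from the thread): `substitute*' from (guix build utils) returns an unspecified value, so a phase that ends with it currently needs an explicit #t to reliably signal success.

--8<---cut here---start->8---
(use-modules (guix build utils))   ;for `substitute*' and `which'

(define (patch-shell-path . args)
  ;; `substitute*' returns an unspecified value; without the trailing
  ;; #t the phase would return neither #t nor #f, and only #f is
  ;; currently treated as failure by the code that runs phases.
  (substitute* "Makefile"
    (("/bin/sh") (which "sh")))
  #t)
--8<---cut here---end--->8---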
Re: Guix Workflow Language ?
Hi, zimoun writes: > In this context, since 'lispy' syntax is not mainstream (and will > never be), it appears to me as a hard position. We’ve got you covered here: the GWL has built-in support for Wisp, a pretty language extension for Guile. It also comes with a bunch of extra syntax support to make the definition of workflows easier. Here’s a convoluted artificial example:

--8<---cut here---start->8---
define-module test

use-modules
  guix workflows
  guix processes
  gnu packages bioinformatics
  gnu packages python

process: simple-test
  package-inputs
    list python samtools
  data-inputs
    list "sample.bam" "hg38.fa" "abc"
  procedure #---{python}
import os

def hello():
    print "hello from python 3"
    print GWL['data-inputs']
    print GWL['name']

hello()
---

workflow: example-workflow
  processes
    list simple-test
--8<---cut here---end--->8---

Put this in a file called “test.wisp” and add the directory to the GUIX_WORKFLOW_PATH and you’re good to go. Note that the “simple-test” process definition embeds Python code. A number of other languages can be supported easily. I don’t think syntax should hold you back. -- Ricardo GPG: BCA6 89B6 3655 3801 C3C6 2150 197A 5888 235F ACAC https://elephly.net
Re: Errors encountered in building guix from source.
Sorry for the really late reply. > > Installing a missing package by guessing from non-existing command is a > Fedora's “feauture” of Bash. I believe this is a reason of following > failures. You probably could avoid this by starting a Bash process with > > bash --noprofile > > [...] > It would be nice for this to be included in the documentation. > > Please, run 'sudo' not from 'guix environment'. (Press Ctrl+D to exit > from an 'guix environment'). > Can we put this in the documentation too? :) > > What are the exact commands you are giving? > > Pj. > I succeeded on the second try following your guide; I don't know why it failed before, maybe I was missing something. All in all, I sent the remaining patches for some packages to Guix and they were accepted. :) I think I will have to disable Guix for a while since it's really messing up my environment and I currently don't have the time to deal with it. :( I will come back later and try to join the development of Guix itself, really sorry. Fis.
Re: Cuirass news
Hi Ludo, > Over the last few days, out of frustration ;-), I hacked Cuirass to > improve several things: Oh yeah! That’s great. > • Logging is improved: useful events are logged, including build > started/succeeded/failed (using a variant of what I proposed in the > Guix ‘wip-ui’ branch). This makes it much easier to understand > what’s going on! Finally! Better logging alone would be a reason to celebrate :) IIRC the wip-ui branch parsed the “@”-prefixed messages of the daemon. I didn’t find this in your commits to Cuirass, though. > • Restarting unfinished builds: it’s common, especially when testing, > to interrupt Cuirass, leaving a number of builds unfinished or not > even started. Now Cuirass restarts those upon startup. Also very useful. Does this mean Cuirass resumes work more quickly now whereas previously it would have to compute the full evaluation after a restart? I wonder about commit 49a341866afabe64c8ac3b8d93c64d2b6b20895d: you’re chunking the number of derivations because guix-daemon doesn’t perform well when it is asked to build lots of derivations at once. Is it possible to parse, lock, and run individual derivations in the daemon when presented with lots of them, or is there a good reason why each of these phases is executed for all derivations? > And! This brings a whole set of new bugs that I’m hunting notably on > berlin (which may thus lag behind…). I see that there are a bunch of spawn-fiber invocations with “with-database” bodies. Maybe I remember this wrong, but I thought sqlite doesn’t support concurrent database access. > Overall I think it’ll make Cuirass > easier to work with and more “introspectable”. I think so too. Thanks again. -- Ricardo GPG: BCA6 89B6 3655 3801 C3C6 2150 197A 5888 235F ACAC https://elephly.net
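To make the chunking question concrete, here is a minimal sketch of submitting derivations to the daemon in batches; it is not Cuirass's actual code, `build-in-chunks' is a hypothetical helper, and the chunk size of 200 is arbitrary.

--8<---cut here---start->8---
(use-modules (guix store)
             (guix derivations)
             (srfi srfi-1)          ;split-at
             (ice-9 receive))

(define (build-in-chunks store drvs chunk-size)
  ;; Build the derivations DRVS at most CHUNK-SIZE at a time, so the
  ;; daemon never has to schedule the whole list in a single call.
  (let loop ((drvs drvs))
    (unless (null? drvs)
      (receive (chunk rest)
          (split-at drvs (min chunk-size (length drvs)))
        (build-derivations store chunk)
        (loop rest)))))

;; Hypothetical use, with DRVS a list of <derivation> objects:
;; (with-store store
;;   (build-in-chunks store drvs 200))
--8<---cut here---end--->8---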
Re: aarch64 machines donated by ARM!
I’ve posted a news item here: https://www.gnu.org/software/guix/blog/2018/aarch64-build-machines-donated/ Notice the neat stickers courtesy of Chris Baines. :-) Ludo’.
Re: Guix Workflow Language ?
zimoun writes: > Dear Roel, > > Thank you for your comments. > > I was imaging your point 2. And the softwares come from Guix. > The added benefit was: a controlled and reproducible environment. > In other words, the added benefit came from the GuixWorkflow (the > engine of workflow), and not from the Language (lisp EDSL). > But maybe it is a wrong way. I get that point. Maybe it's then a better idea to write the workflow in CWL (like you would do), and use Guix to generate Docker containers. Then you do get the benefit of Guix's strong reproducibility and composability forscientific software, plus you get to keep writing the workflow in CWL. :-) > >>From my experience, the classical strategy of writing pipelines is to > adapt an already existing workflow for one another particular > question. We fetch bits here and there, do some ugly and dirty hacks > to have some results; then depending on them, a cleaner pipeline is > written (or not! :-) or other pieces are tested. > Again from my experience, there is (at least) 3 issues: the number of > tools to learn and know enough to be able to adapt; the bits/pieces > already available; the environment/dependencies and how they are > managed. > > In this context, since 'lispy' syntax is not mainstream (and will > never be), it appears to me as a hard position. That's why I asked if > a Guix-backend workflow engine for CWL specs is doable. Run CWL specs > workflow on the top of the GWL engine. This is a good question, but how can you describe the origin of a software package in CWL? In the GWL, we use the Scheme symbols, and the Guix programming interface directly, but that is unavailable in CWL. This is a real problem that I don't see we can easily solve. > > However, I got your point, I guess. > You mean: it is a lot of work with unclear benefits over existing engines. So, I think it's impossible to express the deployment of a software program in CWL. It is not as expressive as GWL in this regard. Translating to a precise Guix package recipe and its dependencies is very hard from what we can write in CWL. If I am mistaken here, please let me know. Maybe we can figure something out. > > > Therefore, your point 1. reverses "my issue". > Once the pipeline is well-established, write it with GWL! :-) > Next, if it is possible to convert this GWL specs pipeline to CWL one > [+ Docker] (with softwares coming from Guix), then we can enjoy the > CWL-world engine capabilities. > The benefit of that is from two sides: run the pipeline with different > engines; and produce a clean docker image. > > So , instead of working on improving the GWL engine (adding features > about efficiency, Grid, Amazon, etc.) which is a very tough task, the > doable plan would be to add an "exporter". > Right ? The plan is to implement back-ends, or 'process-engines' for GWL to work with AWS, Kubernetes, Grid (this one is already supported). These back-ends are surprisingly easy to write, because the Guix programming interface allows us to generate virtual machines, containers, or simply store items if Guix is available locally. We also implemented a Bash-engine that can generate Bash scripts for every step of the workflow. That in combination with the variety of deployment options solves most of the challenges. > > > Another question, do you think it is doable to write "importers" ? > > I am not sure that the metaphor is good enough, but do you think it is > a feasible goal from the existing GWL to go towards a kind of `Pandoc > of workflows` ? also packing the softwares. 
> > And a start should be: > - write a parser for (subset of) CWL yaml file and obtain the GWL > representation of the workflow > - write a exporter to CWL + Docker image > > What do you think ? Maybe. But in CWL we cannot describe precise software packages. So translating these things to Guix is hard. > > > About the parser, I haven't found yet an easy-to-use Guile lib for > parsing YAML-like files. Any pointer ? Adapt some Racket ones ? I don't know of one, sorry. > Thank you for your insights. > > All the best, > simon Thanks! Kind regards, Roel Janssen
RE: Guix Workflow Language ?
Hi, Watching this thread and trying to take the pulse of GWL. Where should I look? https://git.roelj.com/guix/gwl has little documentation - it does say " GWL has a built-in getting-started guide. To use it, run: guix workflow --web-interface" - but supposing we just want to read some documentation https://www.guixwl.org/ is 503 Workflow management with GNU Guix https://archive.fosdem.org/2017/schedule/event/guixworkflowmanagement/ is interesting but not documentation Can someone please catch me up? Thx, ~malcolm_c...@stowers.org > -Original Message- > From: Guix-devel [mailto:guix-devel-bounces+mec=stowers@gnu.org] > On Behalf Of Roel Janssen > Sent: Thursday, January 25, 2018 4:05 PM > To: zimoun > Cc: guix-devel@gnu.org > Subject: Re: Guix Workflow Language ? > > > zimoun writes: > > > Dear Roel, > > > > Thank you for your comments. > > > > I was imaging your point 2. And the softwares come from Guix. > > The added benefit was: a controlled and reproducible environment. > > In other words, the added benefit came from the GuixWorkflow (the > > engine of workflow), and not from the Language (lisp EDSL). > > But maybe it is a wrong way. > > I get that point. Maybe it's then a better idea to write the workflow > in CWL (like you would do), and use Guix to generate Docker containers. > > Then you do get the benefit of Guix's strong reproducibility and > composability forscientific software, plus you get to keep writing the > workflow in CWL. :-) > > > > >>From my experience, the classical strategy of writing pipelines is to > > adapt an already existing workflow for one another particular > > question. We fetch bits here and there, do some ugly and dirty hacks > > to have some results; then depending on them, a cleaner pipeline is > > written (or not! :-) or other pieces are tested. > > Again from my experience, there is (at least) 3 issues: the number of > > tools to learn and know enough to be able to adapt; the bits/pieces > > already available; the environment/dependencies and how they are > > managed. > > > > In this context, since 'lispy' syntax is not mainstream (and will > > never be), it appears to me as a hard position. That's why I asked if > > a Guix-backend workflow engine for CWL specs is doable. Run CWL specs > > workflow on the top of the GWL engine. > > This is a good question, but how can you describe the origin of a > software package in CWL? In the GWL, we use the Scheme symbols, and > the > Guix programming interface directly, but that is unavailable in CWL. > > This is a real problem that I don't see we can easily solve. > > > > > > However, I got your point, I guess. > > You mean: it is a lot of work with unclear benefits over existing engines. > > So, I think it's impossible to express the deployment of a software > program in CWL. It is not as expressive as GWL in this regard. > Translating to a precise Guix package recipe and its dependencies is > very hard from what we can write in CWL. > > If I am mistaken here, please let me know. Maybe we can figure > something out. > > > > > > > Therefore, your point 1. reverses "my issue". > > Once the pipeline is well-established, write it with GWL! :-) > > Next, if it is possible to convert this GWL specs pipeline to CWL one > > [+ Docker] (with softwares coming from Guix), then we can enjoy the > > CWL-world engine capabilities. > > The benefit of that is from two sides: run the pipeline with different > > engines; and produce a clean docker image. 
> > > > So , instead of working on improving the GWL engine (adding features > > about efficiency, Grid, Amazon, etc.) which is a very tough task, the > > doable plan would be to add an "exporter". > > Right ? > > The plan is to implement back-ends, or 'process-engines' for GWL to work > with AWS, Kubernetes, Grid (this one is already supported). > > These back-ends are surprisingly easy to write, because the Guix > programming interface allows us to generate virtual machines, > containers, or simply store items if Guix is available locally. > > We also implemented a Bash-engine that can generate Bash scripts for > every step of the workflow. That in combination with the variety of > deployment options solves most of the challenges. > > > > > > > Another question, do you think it is doable to write "importers" ? > > > > I am not sure that the metaphor is good enough, but do you think it is > > a feasible goal from the existing GWL to go towards a kind of `Pandoc > > of workflows` ? also packing the softwares. > > > > And a start should be: > > - write a parser for (subset of) CWL yaml file and obtain the GWL > > representation of the workflow > > - write a exporter to CWL + Docker image > > > > What do you think ? > > Maybe. But in CWL we cannot describe precise software packages. So > translating these things to Guix is hard. > > > >
Re: Cuirass news
Hmmm... is it down right now? I've written an HTML frontend and I always get a 504 or a timeout from https://berlin.guixsd.org/api/latestbuilds?nr=20 or similar. Sorry if I'm too impatient :-)
Re: Cuirass news
Might want this:

$ git diff
diff --git a/build-aux/guix.scm b/build-aux/guix.scm
index c2f6cdb..4075806 100644
--- a/build-aux/guix.scm
+++ b/build-aux/guix.scm
@@ -81,6 +81,7 @@
          "guile-json"
          "guile-sqlite3"
          "guile-git"
+         "guile-fibers"
          "guix")))
   (native-inputs (map spec+package-list

Also, the tests of `guix package -f build-aux/guix.scm` hang...
Re: Prevent native-inputs references ending up in the final binary
On 01/20/2018 06:40 PM, Danny Milosavljevic wrote: > Hi Leo, > >> Although native-inputs are typically things that are only required while >> building [0], there's nothing that prevents a built package from keeping >> references to native-inputs. > > We should change that in core-updates-next, if possible. > > I think that native-inputs shouldn't end up in the final binary as a > reference, especially when cross-compiling > (but we don't do cross-compilation much in Guix - usually, we let > qemu-arm emulate the ARM CPU on x86_64 and just call the target tool :) ). > > If there are indeed parts of the same package, one a native part and one a > runtime dependency part, I actually write the same package reference twice, > once in the inputs, once in the native-inputs, in my custom package > definitions. > > In a "previous life" I did a lot of Linux cellphone development and, > there, it was kinda important that a x86_64 toolchain doesn't end > up being referenced in an ARM binary, so the habit stuck - and I > think it's important to distinguish the mold used to form a product > from an integral part of that product. > I'm no expert, but can this little utility from nix help? https://nixos.org/patchelf.html
Re: Prevent native-inputs references ending up in the final binary
Hullo, On Fri, 2018-01-26 at 00:56 +, Fis Trivial wrote: > On 01/20/2018 06:40 PM, Danny Milosavljevic wrote: > > I think that native-inputs shouldn't end up in the final binary as > > a > > reference, especially when cross-compiling > > (but we don't do cross-compilation much in Guix - usually, we let > > qemu-arm emulate the ARM CPU on x86_64 and just call the target > > tool :) ). > > > > If there are indeed parts of the same package, one a native part > > and one a > > runtime dependency part, I actually write the same package > > reference twice, > > once in the inputs, once in the native-inputs, in my custom package > > definitions. > > > > In a "previous life" I did a lot of Linux cellphone development > > and, > > there, it was kinda important that a x86_64 toolchain doesn't end > > up being referenced in an ARM binary, so the habit stuck - and I > > think it's important to distinguish the mold used to form a product > > from an integral part of that product. > > > > I'm no expert, but can this little utility from nix help? > https://nixos.org/patchelf.html In what way? Patchelf re-writes library and/or loader paths in compiled binaries. It's cool, but I don't immediately see the connexion... Kind regards, T G-R
ANNOUNCE: guile kernel for Jupyter
I've just been pointed at this: https://github.com/jerry40/guile-kernel Recall that Jupyter is a web-based interactive framework meant for collaborative creation of research diaries, journals, presentations & etc. where the researcher/author can embed snippets of code that graph stuff, do stuff, etc. (e.g. there is a "graphs like XKCD" plugin) It's mostly a python thing, and I bitched about it, and now there's a guile thing! Disclaimer: I have not tried this, and have played with jupyter only shallowly. But I think this is a good thing, a very good thing, as it opens the door for using guile in big-data settings or general science or scientific-data analysis. Linas. -- cassette tapes - analog TV - film cameras - you
Re: question regarding substitute* and #t
Andy Wingo writes: > On Thu 25 Jan 2018 06:31, Maxim Cournoyer writes: > >> Where does this `invoke' comes from? Geiser is unhelpful at finding it, >> and it doesn't seem to be documented in the Guile Reference? > > https://lists.gnu.org/archive/html/guix-devel/2018-01/msg00163.html OK, so `invoke' is defined in (guix build utils), and its docstring is: "Invoke PROGRAM with the given ARGS. Raise an error if the exit code is non-zero; otherwise return #t." Thanks, Maxim
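For reference, a minimal definition consistent with that docstring could look like the sketch below; it is not necessarily the exact code in (guix build utils).

--8<---cut here---start->8---
(define (invoke program . args)
  "Invoke PROGRAM with the given ARGS.  Raise an error if the exit code
is non-zero; otherwise return #t."
  (let ((status (apply system* program args)))
    (unless (zero? status)
      (error "program exited with non-zero status:" program status))
    #t))
--8<---cut here---end--->8---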