Re: 02/05: gnu: luajit: Update to 2.1.0-beta3.

2018-02-09 Thread Mark H Weaver
Hi Tobias,

m...@tobias.gr (Tobias Geerinckx-Rice) writes:

> nckx pushed a commit to branch master
> in repository guix.
>
> commit 906f1b48e20a032c22a164c89f9e8862ab2bec7a
> Author: Tobias Geerinckx-Rice 
> Date:   Wed Jan 3 09:01:53 2018 +0100
>
> gnu: luajit: Update to 2.1.0-beta3.
> 
> * gnu/packages/lua.scm (luajit): Update to 2.1.0-beta3.
> [source]: Remove symlinks patch.
> * gnu/packages/patches/luajit-symlinks.patch: Delete file.
> * gnu/local.mk (dist_patch_DATA): Remove it.

This seems to have broken the 'love' package, which uses luajit.

  https://hydra.gnu.org/build/2488685

Would you be willing to investigate?

   Mark



Re: Cuirass news

2018-02-09 Thread Ludovic Courtès
Hello,

Danny Milosavljevic  skribis:

> On Thu, 08 Feb 2018 23:21:58 +0100
> l...@gnu.org (Ludovic Courtès) wrote:
>
>> > from a security standpoint - except for db-get-builds, which I'm amending
>> > right now.  
>> 
>> Oh sorry, I think I did the same thing as you were sending this message:
>> 
>>   
>> https://git.savannah.gnu.org/cgit/guix/guix-cuirass.git/commit/?id=8c7c93922bbe0513ff4c4ff3a6e554e3a72635b6
>
>> WDYT?
>
> I'd prefer not to have so many different SQL statements; we get a
> combinatorial explosion if we aren't careful (whether we cache or not,
> the relational database management system is going to hate us anyway
> when we do that).
>
> But I guess there are not that many yet.
>
> If we are fine with not being able to search for a literal NULL, we can use
> NULL as an "anything" marker and have a static WHERE clause (this is customary).
>
> Also, I've asked on the sqlite mailing list - ORDER BY cannot support "?", so
> those are unavoidable (also, we can't usefully do the ORDER BY ourselves
> by sorting the result - because of the LIMIT clause)
>
> Anyway as long as we are under 1 statements it should be fine :P

Yes.  Also, in practice, everyone’s going to make the same /api/*
requests (because there are only two clients, the Emacs and the Web UI,
and they typically always do the same requests), which in turn means
we’ll always get the same ‘db-get-builds’ call, possibly with just a
different limit, but it’s still the same statement.

So I think we should be fine.
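Danny's NULL-as-an-"anything"-marker idea keeps the WHERE clause static, so a single prepared statement covers every combination of criteria. A rough model, sketched in Python's stdlib sqlite3 for brevity (the actual Cuirass code is Guile, and the schema here is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Builds (id INTEGER, status INTEGER)")
conn.executemany("INSERT INTO Builds VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, 0)])

# One static statement: each criterion is either a concrete value or
# NULL, and "? IS NULL" turns a NULL criterion into a no-op.  The SQL
# text never changes, so a prepared-statement cache holds one entry.
# ORDER BY stays literal, since it cannot take a "?" parameter.
QUERY = """SELECT id FROM Builds
           WHERE (? IS NULL OR status = ?)
           ORDER BY id DESC
           LIMIT ?"""

def get_builds(status=None, limit=10):
    return [row[0] for row in conn.execute(QUERY, (status, status, limit))]

print(get_builds())            # all builds
print(get_builds(status=0))    # only builds with status 0
print(get_builds(limit=1))     # newest build only
```

The trade-off Danny notes applies: with this scheme you can no longer search for a literal NULL in the column.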

>> Indeed!  Should we change ‘sqlite-finalize’ to a noop when called on a
>> cached statement?  (Otherwise users would have to keep track of whether
>> or not a statement is cached.)
>
> Hmm, maybe that's a good way.  But it's a little magic.

Yes, but I think we have no other option: now that caching is built into
sqlite3.scm, it has to be properly handled by all of that module.  For
the user, it should be a simple matter of choosing #:cache? #t
or #:cache? #f, and then (sqlite3) should just DTRT.

Otherwise we’d have to remove caching from (sqlite3) altogether, IMO.

WDYT?
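A minimal model of the proposed behaviour (hypothetical Python, not the actual Guile-SQLite3 API): preparing with caching on returns a per-connection cached statement, finalize is a no-op for it, and only closing the connection really finalizes everything.

```python
class Stmt:
    def __init__(self, sql, cached):
        self.sql = sql
        self.cached = cached
        self.live = True

class Connection:
    def __init__(self):
        self._cache = {}                 # SQL text -> cached Stmt

    def prepare(self, sql, cache=False):
        if cache:
            # Reuse the statement prepared earlier for this SQL text.
            stmt = self._cache.get(sql)
            if stmt is None:
                stmt = self._cache[sql] = Stmt(sql, cached=True)
            return stmt
        return Stmt(sql, cached=False)

    def finalize(self, stmt):
        # No-op for cached statements: they stay alive for reuse, so
        # callers need not know whether caching is on or off.
        if stmt.live and not stmt.cached:
            stmt.live = False

    def close(self):
        # Closing the connection really finalizes cached statements.
        for stmt in self._cache.values():
            stmt.cached = False
            self.finalize(stmt)
        self._cache.clear()

conn = Connection()
s1 = conn.prepare("SELECT 1", cache=True)
conn.finalize(s1)                        # no-op: still cached and live
s2 = conn.prepare("SELECT 1", cache=True)
assert s1 is s2 and s1.live
conn.close()
assert not s1.live
```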

>> Besides, on the big database on berlin, the initial:
>> 
>>   (db-get-builds db '((status pending)))
>> 
>> call takes a lot of time and memory.  I guess we’re doing something
>> wrong, but I’m not sure what.  The same query in the ‘sqlite3’ CLI is
>> snappy and does not consume much memory.
>
> WTF.  I'll have a look.

Great.  :-)

For the record, this is the GC profile I get (take it with a grain of
salt: it tells us what part of the code allocates, not what the live
objects are):

--8<---cut here---start->8---
scheme@(guile-user)> (define lst (gcprof (lambda () (db-get-builds db '((status pending))))))
%     cumulative   self
time    seconds    seconds  procedure
 33.82    468.32    468.32  bytevector->pointer
 22.38   1361.15    309.97  cuirass/database.scm:347:0:db-format-build
 11.19    154.98    154.98  hashq-set!
  6.33     87.60     87.60  make-bytevector
  4.62     64.01     64.01  utf8->string
  3.89   1051.19     53.91  cuirass/database.scm:324:0:db-get-outputs
  2.92     40.43     40.43  apply-smob/1
  2.68     37.06     37.06  dereference-pointer
  2.43     33.69     33.69  anon #x28e3088
  1.46   1010.76     20.22  cuirass/database.scm:55:0:%sqlite-exec
  1.46     20.22     20.22  srfi/srfi-1.scm:269:0:iota
  1.22     47.17     16.85  ice-9/boot-9.scm:789:2:catch
  1.22     16.85     16.85  ice-9/boot-9.scm:777:2:throw
  1.22     16.85     16.85  string->utf8
  0.97     13.48     13.48  ice-9/boot-9.scm:701:2:make-prompt-tag
  0.73   1384.74     10.11  cuirass/database.scm:375:0:db-get-builds
  0.73     10.11     10.11  hash-set!
  0.49      6.74      6.74  cons
  0.24      3.37      3.37  pointer->bytevector
  0.00 4462655.56      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:510:2:lp
  0.00    815.34      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:311:0:sqlite-prepare
  0.00    805.24      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:286:4
  0.00    101.08      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:474:0:sqlite-row
  0.00     47.17      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:223:2:sqlite-remove-statement!
  0.00     47.17      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:241:4
  0.00     37.06      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:444:4
  0.00     16.85      0.00  /gnu/store/cxxyk9bdas4n7m6zlhdhnm7ixxkw3b0b-profile/share/guile/site/2.2/sqlite3.scm:227:22
  0.00     16.85      0.00  hash-for-each
---
Sample count: 411
Total time: 1384.735520913 seconds (1331.66193132 seconds in GC)
--8<---cut here---end--->8---


Re: Cuirass news

2018-02-09 Thread Danny Milosavljevic
Hi Ludo,

On Fri, 09 Feb 2018 10:41:13 +0100
l...@gnu.org (Ludovic Courtès) wrote:

> Yes.  Also, in practice, everyone’s going to make the same /api/*
> requests (because there are only two clients, the Emacs and the Web UI,
> and they typically always do the same requests), which in turn means
> we’ll always get the same ‘db-get-builds’ call, possibly with just a
> different limit, but it’s still the same statement.
> So I think we should be fine.

Right.

> >> Indeed!  Should we change ‘sqlite-finalize’ to a noop when called on a
> >> cached statement?  (Otherwise users would have to keep track of whether
> >> or not a statement is cached.)  
> >
> > Hmm, maybe that's a good way.  But it's a little magic.
> 
> Yes, but I think we have no other option: now that caching is built into
> sqlite3.scm, it has to be properly handled by all of that module.  For
> the user, it should be a simple matter of choosing #:cache? #t
> or #:cache? #f, and then (sqlite3) should just DTRT.

Yeah, but then let's add sqlite-uncache or something that can be used to
remove it from the cache after all.  And make sqlite-finalize a noop if
it's cached.  Sounds good.

So a savvy user could do sqlite-uncache and then sqlite-finalize and it would
be gone.



/gnu/store/.links/

2018-02-09 Thread Pjotr Prins
What is

  ls -1 /gnu/store/.links/|wc -l
  495938

Never saw it before. Does this scale?

Pj.




Re: /gnu/store/.links/

2018-02-09 Thread Ricardo Wurmus

Hi Pjotr,

> What is
>
>   ls -1 /gnu/store/.links/|wc -l
>   495938
>
> Never saw it before. Does this scale?

It’s used for optional file deduplication.  It is enabled by default,
but you can disable it with a daemon option on file systems that
deduplicate data at the block level.

I don’t know about scalability.  This number is still well below the
limits of ext4 file systems, but accessing a big directory listing like
that can be slow.  I would feel a little better about this if we split
it up into different prefix directories (like it’s done for browser
caches).  I don’t think it’s necessary, though.
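For reference, the prefix-directory layout (as used by browser caches, and by Git’s object store) would spread entries over 256 subdirectories keyed on the first two hex characters of the hash; a hypothetical sketch:

```python
import hashlib
import os

def sharded_link_path(root, content):
    """Return a sharded path for CONTENT under ROOT, i.e.
    <root>/<first 2 hex chars>/<full hash> instead of one entry in a
    single flat directory, spreading entries over 256 subdirectories."""
    digest = hashlib.sha256(content).hexdigest()
    return os.path.join(root, digest[:2], digest)

path = sharded_link_path("/gnu/store/.links", b"example contents")
shard = os.path.basename(os.path.dirname(path))
assert len(shard) == 2 and os.path.basename(path).startswith(shard)
```

(The hash function and path layout are illustrative; the daemon’s actual link names are not necessarily SHA-256 hex digests.)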

--
Ricardo

GPG: BCA6 89B6 3655 3801 C3C6  2150 197A 5888 235F ACAC
https://elephly.net





Re: Haunt patches

2018-02-09 Thread Ludovic Courtès
Heya,

"pelzflorian (Florian Pelz)"  skribis:

> I wrote patches that introduce page variants for translations.  I will
> send them right after this mail.  If there are interested guix-devel
> readers, they can find them at
>
> https://pelzflorian.de/git/haunt/commit/?id=ac3e93fe35363fd0066cf93b969c95c0fde7a25a
> https://pelzflorian.de/git/haunt/commit/?id=ca32925a58c8ec526653aff9825be2359bf358c3
> https://pelzflorian.de/git/haunt/commit/?id=34f6b56bfe3a3059ced4be5e9b768a6a3a93c671
>
> A site can have a list of variants such as various languages,
> e.g. '("de" "en").

Nice, looks like a great starting point!

This does not address i18n in itself, right?  Do you have examples of an
i18n’d web site using this Haunt branch + gettext?

Thank you!

Ludo’.



Re: Improving Shepherd

2018-02-09 Thread Ludovic Courtès
Hey!

Danny Milosavljevic  skribis:

> On Mon, 05 Feb 2018 21:49:08 +1100
> Carlo Zancanaro  wrote:
>
>> User services - Alex has already sent a patch to the list to allow 
>> generating user services from the Guix side. The idea is to 
>> generate a Shepherd config file, allowing a user to invoke 
>> shepherd manually to start their services.
>
>>A further extension to 
>> this would be to have something like systemd's "user sessions", 
>> where the pid 1 Shepherd automatically starts a user's services 
>> when they log in.
>
> I assume that means "starts a user's shepherd when they log in".
>
> elogind already emits a signal on dbus which tells you when a user logged in
>
> return sd_bus_emit_signal(
> u->manager->bus,
> "/org/freedesktop/login1",
> "org.freedesktop.login1.Manager",
> new_user ? "UserNew" : "UserRemoved",
> "uo", (uint32_t) u->uid, p);

I think there’s no Guile D-Bus client though.  Another yak to shave…

> Also, a directory /run/user/ appears - which alternatively can be
> monitored by inotify or something.
>
> So the system shepherd could have a shepherd service which does
>
>   while (1) {
>  wait until /run/user/ appears
>  vfork
>if child: setuid, exec user shepherd, _exit
>if parent: wait until child dies
>   }
>
> We better be sure that no one else can create directories in /run/user .
>
> In non-pseudocode, both "wait until /run/user/ appears" and
> "wait until child dies" would have to be in the same call,
> maybe epoll or something.

Yes, inotify (ISTR there *are* inotify bindings for Guile somewhere.)
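Danny’s loop can be modelled roughly like this; a hedged Python stand-in that polls instead of using inotify/epoll, runs a placeholder command instead of setuid-then-exec of the user’s shepherd, and takes a `watch_dir` argument standing in for /run/user (all names here are hypothetical):

```python
import os
import subprocess
import sys
import tempfile
import threading
import time

def spawn_on_login(watch_dir, make_command, poll=0.1, timeout=5.0):
    """Wait until a new per-user directory appears in WATCH_DIR, then
    spawn that user's service process and wait until it dies.  A real
    implementation would use inotify, combine "directory appeared" and
    "child died" in one epoll/waitpid loop, and setuid in the child
    before exec'ing the user's shepherd."""
    known = set(os.listdir(watch_dir))
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        new = set(os.listdir(watch_dir)) - known
        if new:
            uid = sorted(new)[0]
            proc = subprocess.Popen(make_command(uid))
            proc.wait()                  # "wait until child dies"
            return uid
        time.sleep(poll)
    return None

# Simulate elogind creating /run/user/1000 shortly after we start watching.
run_user = tempfile.mkdtemp()
threading.Timer(0.3, os.mkdir, [os.path.join(run_user, "1000")]).start()
uid = spawn_on_login(run_user, lambda uid: [sys.executable, "-c", "pass"])
assert uid == "1000"
```

As Danny notes, the real version must also ensure nobody else can create directories in /run/user.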

> Maybe call the service shepherd-nursery-service or something, like a star
> nursery :)

:-)

Ludo’.



Re: Improving Shepherd

2018-02-09 Thread Ludovic Courtès
Carlo Zancanaro  skribis:

> Hey Ludo,
>
> On Mon, Feb 05 2018, Ludovic Courtès wrote:
>>> User services - Alex has already sent a patch to the list to allow
>>> generating user services from the Guix side. The idea is to
>>> generate a
>>> Shepherd config file, allowing a user to invoke shepherd manually
>>> to
>>> start their services. A further extension to this would be to have
>>> something like systemd's "user sessions", where the pid 1 Shepherd
>>> automatically starts a user's services when they log in.
>>
>> After replying to Alex’ message, I realized that we could just as
>> well
>> have a separate “guix service” or similar tool to take care of this?
>>
>> This needs more thought (and perhaps taking a look at systemd user
>> sessions, which I’m not familiar with), but I think Alex’ approach
>> is a
>> good starting point.
>
> We were thinking it might work like this:
> - services->package constructs a package which places a file in the
> profile containing the necessary references
> - pid 1 shepherd listens to elogind login/logout events, and starts
> the services when necessary
>
> Admittedly this isn't the nicest way for it to work, but it might be a
> good starting point.

Yes, sounds reasonable.

> There were some discussions on the list a while ago about how to have
> `guix environment` automatically start services, too, so I wonder what
> overlap there could be there. Although maybe environment services (in
> containers) have more in common with system services than user
> services.

That’s a separate topic I think, but I agree it’d be useful.

>> Currently shepherd monitors SIGCHLD, and it’s not supposed to miss
>> those; in some cases it might handle them later than you’d expect,
>> which
>> means that in the meantime you see a zombie process, but otherwise
>> it
>> seems to work.
>>
>> ISTR you reported an issue when using ‘shepherd --daemonize’, right?
>> Perhaps the issue is limited to that mode?
>
> I no longer use the daemonize function. My user shepherd runs "in the
> foreground" (it's started when my X session starts), so it's not
> that. Jelle fixed the problem I was having by delaying the SIGCHLD
> handler registration until it's needed. It is still buggy if a process
> is started before the daemonize command is given to root service,
> though.
>
> If you try running "emacs --daemon" with "make-forkexec-constructor"
> (and #:pid-file, and put something in your emacs config to make it
> write out the pid) you should be able to reproduce what I am
> seeing. If you kill emacs (or if it crashes) then shepherd continues
> to report that it is started and running. When I look at htop's output
> I can also see that my emacs process is not a child of my shepherd
> process.
>
> I would like to add a --daemon/--daemonize command line argument to
> shepherd instead of the current "send the root service a daemonize
> message". I think the use cases of turning it into a daemon later are
> limited, and it just gives you an additional way of shooting yourself
> in the foot.

Also a separate topic ;-), but if you still experience a bug, please
report it and see whether you can provide a reduced test case to
reproduce it.

>> I’d really like to see that happen.  I’ve become more familiar with
>> Fibers, and I think it’ll be perfect for the Shepherd (and we’ll fix
>> the
>> ARM build issue, no doubt.)
>
> I'm not going to put much time/effort into this until we have fibers
> building on ARM.

Hopefully it’s nothing serious: Fibers doesn’t rely on anything
architecture-specific.

> I think these changes are likely to break shepherd's config API,
> too.

I’m not sure.  We may be able to keep the exact same API.  At least
that’s what I had in mind for the first Fibers-enabled Shepherd.

> In particular, with higher levels of concurrency I want to move the
> mutable state out of <service> objects.

The only piece of mutable state is the ‘running’ value.  We can make
that an “atomic box”, and users won’t even notice.
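An “atomic box” is just a single mutable cell with atomic operations (Guile 2.2 provides them in (ice-9 atomic)); a rough Python model, with a lock standing in for the real lock-free primitives:

```python
import threading

class AtomicBox:
    """Minimal stand-in for Guile's atomic boxes: one mutable cell
    with atomic ref, swap, and compare-and-swap."""
    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value

    def ref(self):
        with self._lock:
            return self._value

    def swap(self, value):
        with self._lock:
            old, self._value = self._value, value
            return old

    def compare_and_swap(self, expected, value):
        with self._lock:
            if self._value == expected:
                self._value = value
                return True
            return False

# The service's 'running' value as an atomic box:
running = AtomicBox(None)
assert running.compare_and_swap(None, "pid 1234")   # first start wins
assert not running.compare_and_swap(None, "pid 9")  # second start refused
assert running.swap(None) == "pid 1234"             # stop takes the value out
```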

>> It seems that signalfd(2) is Linux-only though, which is a bummer.
>> The
>> solution might be to get over it and have it implemented on
>> GNU/Hurd…
>> (I saw this discussion:
>> ;
>> I
>> suspect it’s within reach.)
>
> Failing that, could we have our signal handlers just convert the
> signal to a message in our event loop?

Yes, they could send a message on a Fibers channel.
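The “handler just posts a message” pattern looks like this in a Python sketch, with a queue standing in for a Fibers channel (the Shepherd itself is Guile, and SIGUSR1 here merely stands in for SIGCHLD):

```python
import os
import queue
import signal

events = queue.Queue()          # stand-in for a Fibers channel

def on_signal(signum, frame):
    # Do no real work in the handler; just post a message that the
    # event loop will pick up at a safe point.
    events.put(("signal", signum))

signal.signal(signal.SIGUSR1, on_signal)
os.kill(os.getpid(), signal.SIGUSR1)    # deliver a signal to ourselves

kind, signum = events.get(timeout=1)
assert kind == "signal" and signum == signal.SIGUSR1
```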

Thanks,
Ludo’.



guix pack -f docker ... works!

2018-02-09 Thread dvn
I'm experimenting with `guix pack`, particularly with the docker format.

I want to generate a tarball like so `guix pack -f docker bash
figlet` -- and then after loading with docker, be able to run the packed
commands. No $PATH is set in the container, so after digging around in
the tarball, I decided to try something like this: `docker run -ti 
profile:lhavpi5ngs7infrh9b4nppriy4azgbwv
/gnu/store/ars9lm9jk9hgdifg0gqvf1jrvz5mdg1j-bash-4.4.12/bin/bash` -- and
that works! woo!

This is much nicer than using an unverified Debian or Ubuntu layer in a docker
container just to have a way of installing a few packages -- which is really
all we need in containers 99% of the time.

What have other people done in this direction? What are some ideas for
setting up the $PATH in a nice way?

cheers,
Devan


signature.asc
Description: PGP signature


Re: Questions regarding offloading ( unprivileged setup , parallel builds )

2018-02-09 Thread Ludovic Courtès
Hi,

YOANN P  skribis:

> Anothers questions regarding the way the offload work:
>
> - Is machines.scm read at the start of the daemon, or each time the
> hook is called?  (Just to be sure, because the sources make me think
> it is read each time, which is what I want.)

It’s read each time the hook is called.  You can add a ‘display’ call
in there if you want to check.

> - If a machine disappears during a build, is that build retried on
> another machine?  Is there a retry parameter?

No.  In that case, I think the daemon returns a transient error, and
the build can eventually be restarted, but we currently don’t do that
automatically.

> - Is there any plan to add a configurable pre/post offload hook?
> (It could be used to start preemptible cloud instances before a build,
> fill in "machines.scm", and shut the instances down at the end.)

Currently no, though I guess you could do some of that in machines.scm.
More specifically you could have a “machine server” that does all the
heavy lifting, and have machines.scm simply make an RPC to that server
along the lines of “gimme a bunch of machines plz.”

> - Is it a problem to use multiple daemons with the same store?  (I'm
> not sure, because I have never seen this kind of setup described on
> the web; it could be very useful if it is possible.)

It kinda works but it’s not recommended.

HTH,
Ludo’.



Re: Haunt patches

2018-02-09 Thread pelzflorian (Florian Pelz)
On Fri, Feb 09, 2018 at 02:18:13PM +0100, Ludovic Courtès wrote:
> Heya,
> 
> "pelzflorian (Florian Pelz)"  skribis:
> 
> > I wrote patches that introduce page variants for translations.  I will
> > send them right after this mail.  If there are interested guix-devel
> > readers, they can find them at
> >
> > https://pelzflorian.de/git/haunt/commit/?id=ac3e93fe35363fd0066cf93b969c95c0fde7a25a
> > https://pelzflorian.de/git/haunt/commit/?id=ca32925a58c8ec526653aff9825be2359bf358c3
> > https://pelzflorian.de/git/haunt/commit/?id=34f6b56bfe3a3059ced4be5e9b768a6a3a93c671
> >
> > A site can have a list of variants such as various languages,
> > e.g. '("de" "en").
> 
> Nice, looks like a great starting point!
> 

:)

> This does not address i18n in itself, right?  Do you have examples of an
> i18n’d web site using this Haunt branch + gettext?
> 
> Thank you!
> 
> Ludo’.

My Website uses it:

https://pelzflorian.de/git/pelzfloriande-website/

I use xgettext, msginit, msgmerge and msgfmt tools from Gettext to
create the PO files.  I now added a NOTES file on how to do so.

I use the po files with my haunt.scm’s translate-msg function and _
and __ macros.

Regards,
Florian


signature.asc
Description: PGP signature


Re: /gnu/store/.links/

2018-02-09 Thread Pjotr Prins
On Fri, Feb 09, 2018 at 01:11:23PM +0100, Ricardo Wurmus wrote:
> 
> Hi Pjotr,
> 
> > What is
> >
> >   ls -1 /gnu/store/.links/|wc -l
> >   495938
> >
> > Never saw it before. Does this scale?
> 
> It’s used for optional file deduplication.  It is enabled by default,
> but you can disable it with a daemon option on file systems that
> deduplicate data at the block level.

Hmmm. I think this is better handled at the file system level if
people want deduplication. These systems will be more common.

> I don’t know about scalability.  This number is still well below the
> limits of ext4 file systems, but accessing a big directory listing like
> that can be slow.  I would feel a little better about this if we split
> it up into different prefix directories (like it’s done for browser
> caches).  I don’t think it’s necessary, though.

For ext4 it is going to be an issue. Anyway, we'll see what happens.
Thanks for explaining.

Pj.



Re: guix pack -f docker ... works!

2018-02-09 Thread Pjotr Prins
On Fri, Feb 09, 2018 at 02:17:58PM +0100, dvn wrote:
> I'm experimenting with `guix pack`, particularly with the docker format.
> 
> I want to generate a tarball like so `guix pack -f docker bash
> figlet` -- and then after loading with docker, be able to run the packed
> commands. No $PATH is set in the container, so after digging around in
> the tarball, I decided to try something like this: `docker run -ti 
> profile:lhavpi5ngs7infrh9b4nppriy4azgbwv
> /gnu/store/ars9lm9jk9hgdifg0gqvf1jrvz5mdg1j-bash-4.4.12/bin/bash` -- and
> that works! woo!
> 
> This is much nicer than using an unverified Debian or Ubuntu layer in a
> docker container just to have a way of installing a few packages -- which
> is really all we need in containers 99% of the time.
> 
> What have other people done in this direction? What are some ideas for
> setting up the $PATH in a nice way?

Use -S, some examples

  https://github.com/pjotrp/guix-notes/blob/master/CONTAINERS.org


-- 



Re: [GNUnet-developers] gnunet-guile reboot & guix (take two)

2018-02-09 Thread amirouche



On Sat, Feb 3, 2018 at 14:10, amirouche wrote:

Hello all,

Possible solutions:

a) Add the gnunet-uri of the substitute to the package
  definition. This can only work if the package is
  reproducible, i.e. the build is always the same given
  the same package definition. For reproducible builds,
  it will be possible to offload the build and
  the download over GNUnet.



I am not sure I will have time to invest in this project right now.

So I created an entry on the Guix 2018 GSoC page at
https://libreplanet.org/wiki/Group:Guix/GSoC-2018#GNUnet_integration


Feel free to edit / pick the task.





Use guix to distribute data & reproducible (data) science

2018-02-09 Thread Amirouche Boubekki

Héllo all,

tl;dr: Distribution of data and software seems similar.
   Data is more and more important in software and reproducible
   science.  The data science ecosystem lacks resource sharing.
   I think guix can help.

Recently I stumbled upon open data movement and its links with
data science.

To give a high-level overview, there are several (web) platforms
that allow administrations and companies to publish data and
_distribute_ it.  Examples of such platforms are data.gouv.fr [1] and
various other platforms based on CKAN [2].

[1] https://www.data.gouv.fr/
[2] https://okfn.org/projects/

I have worked with data.gouv.fr in particular, and the repository
is rather poor in terms of quality, making it very difficult to use.

The other side of this open data and data-based software is the
fact that some software provides its own mechanism to _distribute_
data or binary blobs called 'models', which are sometimes based on
libre data.  Examples of such software are spaCy [2], gensim [3],
NLTK [4], and word2vec.

[2] https://spacy.io/
[3] https://en.wikipedia.org/wiki/Gensim
[4] http://www.nltk.org/

My last point is that it's common knowledge that data wrangling,
i.e. cleaning and preparing data, is 80% of a data scientist's job.
It's required because data distributors don't do it right: they
don't have the manpower or the knowledge to do it right.

To summarize:

1) Some software and platforms distribute _data_ themselves in some
   "closed garden" way. It's not the role of software to distribute
   data especially when that data can be reused in other contexts.

2) models are binary blobs that you use in the hope they do what they
   are supposed to do. How do you build the model? Is the model
   reproducible?

3) Preparing data must be redone all the time; let's share resources
   and do it once.

It seems to me that guix has all the required features to handle data
and model distribution.

What do people think?  Do we already use guix to distribute data and
models?


Also, it seems good to surf on the AI frenzy ;)



Re: Cuirass news

2018-02-09 Thread Ludovic Courtès
Heya,

Danny Milosavljevic  skribis:

> On Fri, 09 Feb 2018 10:41:13 +0100
> l...@gnu.org (Ludovic Courtès) wrote:

[...]

>> >> Indeed!  Should we change ‘sqlite-finalize’ to a noop when called on a
>> >> cached statement?  (Otherwise users would have to keep track of whether
>> >> or not a statement is cached.)  
>> >
>> > Hmm, maybe that's a good way.  But it's a little magic.
>> 
>> Yes, but I think we have no other option: now that caching is built into
>> sqlite3.scm, it has to be properly handled by all of that module.  For
>> the user, it should be a simple matter of choosing #:cache? #t
>> or #:cache? #f, and then (sqlite3) should just DTRT.
>
> Yeah, but then let's add sqlite-uncache or something that can be used to
> remove it from the cache after all.  And make sqlite-finalize a noop if
> it's cached.  Sounds good.

What about this patch:

diff --git a/sqlite3.scm b/sqlite3.scm
index fa96bdb..e8d2bf8 100644
--- a/sqlite3.scm
+++ b/sqlite3.scm
@@ -1,5 +1,6 @@
 ;; Guile-SQLite3
 ;; Copyright (C) 2010, 2014 Andy Wingo 
+;; Copyright (C) 2018 Ludovic Courtès 
 
 ;; This library is free software; you can redistribute it and/or modify
 ;; it under the terms of the GNU Lesser General Public License as
@@ -114,6 +115,14 @@
   (open? db-open? set-db-open?!)
   (stmts db-stmts))
 
+(define-record-type <stmt>
+  (make-stmt pointer live? reset? cached?)
+  stmt?
+  (pointer stmt-pointer)
+  (live? stmt-live? set-stmt-live?!)
+  (reset? stmt-reset? set-stmt-reset?!)
+  (cached? stmt-cached? set-stmt-cached?!))
+
 (define sqlite-errmsg
   (let ((f (pointer->procedure
 '*
@@ -145,11 +154,17 @@
 (dynamic-func "sqlite3_close" libsqlite3)
 (list '*
 (lambda (db)
-  (if (db-open? db)
-  (begin
-(let ((p (db-pointer db)))
-  (set-db-open?! db #f)
-  (f p)))
+  (when (db-open? db)
+;; Finalize cached statements.
+(hash-for-each (lambda (sql stmt)
+ (set-stmt-cached?! stmt #f)
+ (sqlite-finalize stmt))
+   (db-stmts db))
+(hash-clear! (db-stmts db))
+
+(let ((p (db-pointer db)))
+  (set-db-open?! db #f)
+  (f p))
 
 (define db-guardian (make-guardian))
 (define (pump-db-guardian)
@@ -208,18 +223,11 @@
   (ele (db-pointer db) onoff
 
 
+
 ;;;
 ;;; SQL statements
 ;;;
 
-(define-record-type <stmt>
-  (make-stmt pointer live? reset? cached?)
-  stmt?
-  (pointer stmt-pointer)
-  (live? stmt-live? set-stmt-live?!)
-  (reset? stmt-reset? set-stmt-reset?!)
-  (cached? stmt-cached?))
-
 (define sqlite-remove-statement!
   (lambda (db stmt)
 (when (stmt-cached? stmt)
@@ -240,11 +248,15 @@
 (dynamic-func "sqlite3_finalize" libsqlite3)
 (list '*
 (lambda (stmt)
-  (if (stmt-live? stmt)
-  (let ((p (stmt-pointer stmt)))
-(sqlite-remove-statement! (stmt->db stmt) stmt)
-(set-stmt-live?! stmt #f)
-(f p))
+  ;; Note: When STMT is cached, this is a no-op.  This ensures caching
+  ;; actually works while still separating concerns: users can turn
+  ;; caching on and off without having to change the rest of their code.
+  (when (and (stmt-live? stmt)
+ (not (stmt-cached? stmt)))
+(let ((p (stmt-pointer stmt)))
+  (sqlite-remove-statement! (stmt->db stmt) stmt)
+  (set-stmt-live?! stmt #f)
+  (f p))
 
 (define *stmt-map* (make-weak-key-hash-table))
 (define (stmt->db stmt)

?

I’m reluctant to add ‘sqlite-uncache!’ though.  I would think that if users
need more sophisticated caching, they can always implement it in their
application.  I wouldn’t want us to try to be too smart here.

WDYT?

Thanks,
Ludo’.


Web site i18n with Haunt

2018-02-09 Thread Ludovic Courtès
"pelzflorian (Florian Pelz)"  skribis:

> On Fri, Feb 09, 2018 at 02:18:13PM +0100, Ludovic Courtès wrote:

[...]

>> This does not address i18n in itself, right?  Do you have examples of an
>> i18n’d web site using this Haunt branch + gettext?
>> 
>> Thank you!
>> 
>> Ludo’.
>
> My Website uses it:
>
> https://pelzflorian.de/git/pelzfloriande-website/
>
> I use xgettext, msginit, msgmerge and msgfmt tools from Gettext to
> create the PO files.  I now added a NOTES file on how to do so.
>
> I use the po files with my haunt.scm’s translate-msg function and _
> and __ macros.

Awesome.  There were a couple of people interested in internationalizing
our web site during the Guix workshop, so hopefully they can follow your
lead and ping you if they need help!

Ludo’.



Re: Cuirass news

2018-02-09 Thread Danny Milosavljevic
Hi Ludo,

the patch LGTM!

>I’m reluctant to add ‘sqlite-uncache!’ though.

I was thinking if you are in the REPL and actually want it to forget the
statement which you cached before, it would be good to be able to do that.
I wonder what I was thinking that was good for, though.

User can always just close the database connection.

Nevermind! :)



Re: /gnu/store/.links/

2018-02-09 Thread Ludovic Courtès
Pjotr Prins  skribis:

> On Fri, Feb 09, 2018 at 01:11:23PM +0100, Ricardo Wurmus wrote:

[...]

>> I don’t know about scalability.  This number is still well below the
>> limits of ext4 file systems, but accessing a big directory listing like
>> that can be slow.  I would feel a little better about this if we split
>> it up into different prefix directories (like it’s done for browser
>> caches).  I don’t think it’s necessary, though.
>
> For ext4 it is going to be an issue. Anyway, we'll see what happens.

In practice, when the maximum number of links is reached, we simply
transparently skip deduplication.  See this commit:

  commit 12b6c951cf5ca6055a22a2eec85665353f5510e5
  Author: Ludovic Courtès 
  Date:   Fri Oct 28 20:34:15 2016 +0200

  daemon: Do not error out when deduplication fails due to ENOSPC.

  This solves a problem whereby if /gnu/store/.links had enough entries,
  ext4's directory index would be full, leading to link(2) returning
  ENOSPC.

  * nix/libstore/optimise-store.cc (LocalStore::optimisePath_): Upon
  ENOSPC from link(2), print a message and return instead of throwing a
  'SysError'.

It does scale well, and it’s been here “forever”.
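A rough model of that fallback (a Python stand-in for the daemon’s C++ in nix/libstore/optimise-store.cc, with error handling heavily simplified and the hash scheme illustrative):

```python
import errno
import hashlib
import os
import tempfile

def deduplicate(path, links_dir):
    """Hard-link PATH to its content-addressed entry in LINKS_DIR so
    that identical files share one inode.  On ENOSPC/EMLINK (e.g. a
    full ext4 directory index), transparently skip deduplication, as
    the commit above does."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    link = os.path.join(links_dir, digest)
    try:
        if not os.path.exists(link):
            os.link(path, link)          # first copy becomes the anchor
        elif not os.path.samefile(path, link):
            tmp = path + ".tmp"
            os.link(link, tmp)           # replace PATH with a hard link
            os.replace(tmp, path)
    except OSError as e:
        if e.errno not in (errno.ENOSPC, errno.EMLINK):
            raise                        # anything else is a real error

# Two identical files end up sharing an inode:
d = tempfile.mkdtemp()
links = os.path.join(d, ".links")
os.mkdir(links)
for name in ("a", "b"):
    with open(os.path.join(d, name), "w") as f:
        f.write("same contents")
deduplicate(os.path.join(d, "a"), links)
deduplicate(os.path.join(d, "b"), links)
assert os.path.samefile(os.path.join(d, "a"), os.path.join(d, "b"))
```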

If you’re wondering how much gets deduplicated, see
.
:-)

Ludo’.



Re: guix pack -f docker ... works!

2018-02-09 Thread Ludovic Courtès
Hello!

dvn  skribis:

> I'm experimenting with `guix pack`, particularly with the docker format.
>
> I want to generate a tarball like so `guix pack -f docker bash
> figlet` -- and then after loading with docker, be able to run the packed
> commands. No $PATH is set in the container, so after digging around in
> the tarball, I decided to try something like this: `docker run -ti 
> profile:lhavpi5ngs7infrh9b4nppriy4azgbwv
> /gnu/store/ars9lm9jk9hgdifg0gqvf1jrvz5mdg1j-bash-4.4.12/bin/bash` -- and
> that works! woo!
>
> This is much nicer than using an unverified Debian or Ubuntu layer in a
> docker container just to have a way of installing a few packages -- which
> is really all we need in containers 99% of the time.

Glad you like it.  :-)

> What have other people done in this direction?

There’s Pjotr’s notes and a couple of blog posts about this:

  https://www.gnu.org/software/guix/blog/2017/creating-bundles-with-guix-pack/

  https://guix-hpc.bordeaux.inria.fr/blog/2017/10/using-guix-without-being-root/

> What are some ideas for setting up the $PATH in a nice way?

The pack already contains /gnu/store/…-profile/etc/profile.  You could
use ‘-S /etc=etc’ but you still have to source /etc/profile
manually—well, except if you run Bash in the container since Bash is
going to source /etc/profile automatically!

We’ve been discussing ways to automate this (when one doesn’t use Bash
in the container), possibly by providing an entry point in the Docker
metadata and things like that.

Thanks for your feedback!

Ludo’.



Re: Use guix to distribute data & reproducible (data) science

2018-02-09 Thread Ludovic Courtès
Hi!

Amirouche Boubekki  skribis:

> tl;dr: Distribution of data and software seems similar.
>Data is more and more important in software and reproducible
>science.  The data science ecosystem lacks resource sharing.
>I think guix can help.

I think some of us especially Guix-HPC folks are convinced about the
usefulness of Guix as one of the tools in the reproducible science
toolchain (that was one of the themes of my FOSDEM talk).  :-)

Now, whether Guix is the right tool to distribute data, I don’t know.
Distributing large amounts of data is a job in itself, and the store
isn’t designed for that.  It could quickly become a bottleneck.  That’s
one of the reasons why the Guix Workflow Language (GWL) does not store
scientific data in the store itself.

I think data should probably be stored and distributed out-of-band using
appropriate storage mechanisms.

Ludo’.



Re: Web site i18n with Haunt

2018-02-09 Thread pelzflorian (Florian Pelz)
On Fri, Feb 09, 2018 at 06:02:22PM +0100, Ludovic Courtès wrote:
> […]
> Awesome.  There were a couple of people interested in internationalizing
> our web site during the Guix workshop, so hopefully they can follow your
> lead and ping you if they need help!
> 
> Ludo’.

Gladly.  But perhaps others also have better ideas for how to do
it. ;) In particular, independent from Haunt, using XML for
translation strings as Ricardo proposed would look more familiar to
translators (and my own __ macro’s implementation is buggy for corner
cases and could certainly be simplified and cleaned up).  This must be
decided and written before the Guix website can be properly
internationalized.

How for example would this excerpt from my website better be rendered
as XML or is this or another non-XML approach better?


(p ,@(__ "Thank you for your interest in my workshop \
“GUI Programming with GTK+”. ||register_|To register please go |here|. ||For \
more information see ||link_|here||."
         `(("register_" .
            ,(lambda (before-link link-text after-link)
               (if enable-registration
                   `(span
                     ,before-link
                     ,(a-href
                       "/gui-prog-anmelden/"
                       link-text)
                     ,after-link)
                   "")))
           ("link_" .
            ,(lambda (text)
               (a-href
                (poster-url-for-lingua current-lingua)
                text))))))

Some people would also have preferred automatic extraction of strings instead
of marking each one.  If this were to be done, it would need support
in Haunt itself.  However, false positives could not be avoided when
automatically looking for what strings to extract.

Also Haunt’s design is not complete.  I’m not entirely sure about my
patches’ approach regarding Atom feeds, and blog post layouts are
definitely a problem – layouts should take the file name as an
argument somehow.

Other tooling like what sirgazil proposed here
https://lists.gnu.org/archive/html/guile-devel/2017-12/msg00051.html
might also be desirable but is not urgent.  It probably should not be
part of Haunt but an external library just like __.

Then this all needs documentation.

Regards,
Florian


signature.asc
Description: PGP signature


Re: Use guix to distribute data & reproducible (data) science

2018-02-09 Thread zimoun
Dear,

From my understanding, what you are describing is what bioinfo guys
call a workflow:

 1- fetch data here and there
 2- clean and prepare data
 3- compute stuff with these data
 4- obtain an answer
and loop several times on several data sets.
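
A toy sketch of those four steps with standard shell tools (file names and the “computation” are invented for illustration):

```shell
set -e
echo "sample data" > input.csv        # 1. fetch data (stubbed here)
tr a-z A-Z < input.csv > clean.csv    # 2. clean and prepare
wc -l < clean.csv > answer.txt        # 3. compute stuff on the data
cat answer.txt                        # 4. obtain an answer
```

A workflow language wraps each such step in a process and tracks the links between them.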

The Guix Workflow Language allows one to implement the workflow, i.e., all
the steps and the links between them that deal with the data.
And thanks to Guix, reproducibility in terms of software comes almost for free.
Moreover, if there is some channel mechanism, then there is a way to
share these workflows.

I think the tools are there, modulo UI and corner cases. :-)

From my point of view, workflows are missing because of manpower
(Lispy folks, etc.).


Last, a workflow is not necessarily reproducible bit-for-bit, since some
algorithms use randomness.


Hope that helps.

All the best,
simon





On 9 February 2018 at 18:13, Ludovic Courtès  wrote:
> Hi!
>
> Amirouche Boubekki  skribis:
>
>> tl;dr: Distribution of data and software seems similar.
>>Data is more and more important in software and reproducible
>>science. Data science ecosystem lacks resource sharing.
>>I think guix can help.
>
> I think some of us especially Guix-HPC folks are convinced about the
> usefulness of Guix as one of the tools in the reproducible science
> toolchain (that was one of the themes of my FOSDEM talk).  :-)
>
> Now, whether Guix is the right tool to distribute data, I don’t know.
> Distributing large amounts of data is a job in itself, and the store
> isn’t designed for that.  It could quickly become a bottleneck.  That’s
> one of the reasons why the Guix Workflow Language (GWL) does not store
> scientific data in the store itself.
>
> I think data should probably be stored and distributed out-of-band using
> appropriate storage mechanisms.
>
> Ludo’.
>



Re: /gnu/store/.links/

2018-02-09 Thread Pjotr Prins
On Fri, Feb 09, 2018 at 06:00:02PM +0100, Ludovic Courtès wrote:
> In practice, when the maximum number of links is reached, we simply
> transparently skip deduplication.  See this commit:
> 
>   commit 12b6c951cf5ca6055a22a2eec85665353f5510e5
>   Author: Ludovic Courtès 
>   Date:   Fri Oct 28 20:34:15 2016 +0200
> 
>   daemon: Do not error out when deduplication fails due to ENOSPC.
> 
>   This solves a problem whereby if /gnu/store/.links had enough entries,
>   ext4's directory index would be full, leading to link(2) returning
>   ENOSPC.
> 
>   * nix/libstore/optimise-store.cc (LocalStore::optimisePath_): Upon
>   ENOSPC from link(2), print a message and return instead of throwing a
>   'SysError'.
> 
> It does scale well, and it’s been here “forever”.

OK. My mindset is probably ext2...

> If you’re wondering how much gets deduplicated, see
> .
> :-)

Fancy that :)

Pj.



Re: /gnu/store/.links/

2018-02-09 Thread Mark H Weaver
l...@gnu.org (Ludovic Courtès) writes:

> Pjotr Prins  skribis:
>
>> On Fri, Feb 09, 2018 at 01:11:23PM +0100, Ricardo Wurmus wrote:
>
> [...]
>
>>> I don’t know about scalability.  This number is still well below the
>>> limits of ext4 file systems, but accessing a big directory listing like
>>> that can be slow.  I would feel a little better about this if we split
>>> it up into different prefix directories (like it’s done for browser
>>> caches).  I don’t think it’s necessary, though.
>>
>> For ext4 it is going to be an issue. Anyway, we'll see what happens.
>
> In practice, when the maximum number of links is reached, we simply
> transparently skip deduplication.

Ideally, we should at some point change the daemon to break
/gnu/store/.links up into several subdirectories, as is done for log
files in /var/log/guix/drvs.  The main complication is dealing with the
transition between the old layout and the new.
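
A sketch of what such bucketing could look like, using a two-character prefix in the spirit of /var/log/guix/drvs (the link name below is invented; the daemon itself is C++, shell is used here only for illustration):

```shell
link=0c1dvhf4yexamplehash                 # invented .links entry name
prefix=$(printf %s "$link" | cut -c1-2)   # bucket by first two characters
rest=$(printf %s "$link" | cut -c3-)
mkdir -p ".links/$prefix"                 # e.g. .links/0c/
touch ".links/$prefix/$rest"
find .links -type f
```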

   Mark



Re: /gnu/store/.links/

2018-02-09 Thread Leo Famulari
On Fri, Feb 09, 2018 at 03:24:00PM +0100, Pjotr Prins wrote:
> Hmmm. I think this is better handled at the file system level if
> people want deduplication. These systems will be more common.

In general, yes! But filesystems with this feature are still not widely
deployed...


signature.asc
Description: PGP signature


Re: Use guix to distribute data & reproducible (data) science

2018-02-09 Thread Konrad Hinsen

Hi,

On 09/02/2018 18:13, Ludovic Courtès wrote:


Amirouche Boubekki  skribis:


tl;dr: Distribution of data and software seems similar.
Data is more and more important in software and reproducible
science. Data science ecosystem lacks resource sharing.
I think guix can help.


Now, whether Guix is the right tool to distribute data, I don’t know.
Distributing large amounts of data is a job in itself, and the store
isn’t designed for that.  It could quickly become a bottleneck.  That’s
one of the reasons why the Guix Workflow Language (GWL) does not store
scientific data in the store itself.


I'd say it depends on the data and how it is used inside and outside of 
a workflow. Some data could very well be stored in the store, and then 
distributed via standard channels (Zenodo, ...) after export by "guix 
pack". For big datasets, some other mechanism is required.


I think it's worth thinking carefully about how to exploit guix for 
reproducible computations. As Lispers know very well, code is data and 
data is code. Building a package is a computation like any other. 
Scientific workflows could be handled by a specific build system. In 
fact, as long as no big datasets or multiple processors are involved, we 
can do this right now, using standard package declarations.


It would be nice if big datasets could conceptually be handled in the 
same way while being stored elsewhere - a bit like git-annex does for 
git. And for parallel computing, we could have special build daemons.


Konrad.



Re: Improving Shepherd

2018-02-09 Thread Carlo Zancanaro

Hey Ludo,

On Fri, Feb 09 2018, Ludovic Courtès wrote:
>> In particular, with higher levels of concurrency I want to move the
>> mutable state out of <service> objects.
>
> The only piece of mutable state is the ‘running’ value.  We can make
> that an “atomic box”, and users won’t even notice.


That's not quite true, unfortunately. I count four pieces of 
mutable state in the <service> object: `running`, `enabled?`, 
`waiting-for-termination?` and `last-respawns`. They should be 
stored elsewhere so that Shepherd can manage that state however it 
wants. We don't want to expose that to a user, where they could 
break Shepherd's assumptions about when/how it's modified (because 
user configuration can do anything it wants - including starting a 
long-running thread to mutate it later).


We shouldn't have to break much. My thought is just to remove 
those mutable fields from the <service> object (maybe leaving 
`enabled?`, but changing its meaning slightly to just be whether 
the service is enabled at the start). In practice it shouldn't 
break any real-world configuration, I hope.


Carlo


signature.asc
Description: PGP signature


Re: Improving Shepherd

2018-02-09 Thread David Pirotte
Hello,

> Yes, inotify (ISTR there *are* inotify bindings for Guile somewhere.)

 https://github.com/ChaosEternal/guile-inotify2.git

David



pgpnCYHkf8eet.pgp
Description: OpenPGP digital signature


Re: Improving Shepherd

2018-02-09 Thread Christopher Lemmer Webber
Ludovic Courtès writes:

> Hopefully it’s nothing serious: Fibers doesn’t rely on anything
> architecture-specific.

I think it relies on epoll currently?  But I think there should be no
reason other architectures couldn't also be supported.



Re: Use guix to distribute data & reproducible (data) science

2018-02-09 Thread zimoun
Hi,

> I'd say it depends on the data and how it is used inside and outside of a
> workflow. Some data could very well be stored in the store, and then
> distributed via standard channels (Zenodo, ...) after export by "guix pack".
> For big datasets, some other mechanism is required.

I am not sure I understand the point.
From my point of view, there are two kinds of datasets:
 a- the ones which are part of the software, e.g., used to pass the
tests. They are usually, though not always, small;
 b- the ones the software is applied to, which are somehow
not in the source repository. They may be big or not.

I do not know if any policy is established in Guix about case a-, and
I am not sure it is even possible (e.g., should whole-genome FASTA
files be included to test alignment tools?).

It does not appear to me a good idea to try to include datasets of
case b- in the store.
Is that not the job of data management tools, e.g., databases?


I do not know much, but one idea would be to write a workflow: you
fetch the data, you clean them, and you check by hashing that the
result is the expected one. Only the software used to do that is in
the store. The input and output data are not, but your workflow checks
that they are the expected ones.
However, it depends on what we call 'cleaning', because some
algorithms are not deterministic.
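
That check can be sketched with standard tools (file names invented): record a checksum from a trusted run, then verify the data against it later:

```shell
echo "cleaned data" > data.txt            # output of the cleaning step
sha256sum data.txt > data.txt.sha256      # record hash from a trusted run
# ...later, after re-running the workflow:
sha256sum -c data.txt.sha256              # reports OK iff the data match
```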

Hmm, I do not know whether there is some mechanism in GWL to check the hash
of the `data-inputs' field.


> I think it's worth thinking carefully about how to exploit guix for
> reproducible computations. As Lispers know very well, code is data and data
> is code. Building a package is a computation like any other. Scientific
> workflows could be handled by a specific build system. In fact, as long as
> no big datasets or multiple processors are involved, we can do this right
> now, using standard package declarations.

This thread appears to me to complement these points ---personally, I
learned a few things about the design of GWL from it:
https://lists.gnu.org/archive/html/guix-devel/2016-05/msg00380.html



> It would be nice if big datasets could conceptually be handled in the same
> way while being stored elsewhere - a bit like git-annex does for git. And
> for parallel computing, we could have special build daemons.

Hmm, so the point is to add data management à la git-annex to GWL, is it?



Have a nice weekend!
simon



Re: Web site i18n with Haunt

2018-02-09 Thread Ricardo Wurmus

pelzflorian (Florian Pelz)  writes:

> How for example would this excerpt from my website better be rendered
> as XML or is this or another non-XML approach better?

I’d much prefer XML here.  One reason is that I find it very hard to
understand the syntax and how it is processed in your example.  Even
after looking at the example for some time, it’s still difficult to
understand.  You have non-translatable tags that end on an underscore,
but the number of pipe characters is not clear to me.  How is the
sentence with “register_” parsed?  Which parts are bound to
“before-link”, “link-text”, and “after-link”?

> (p ,@(__ "Thank you for your interest in my workshop \
> “GUI Programming with GTK+”. ||register_|To register please go |here|. ||For \
> more information see ||link_|here||."
>          `(("register_" .
>             ,(lambda (before-link link-text after-link)
>                (if enable-registration
>                    `(span
>                      ,before-link
>                      ,(a-href
>                        "/gui-prog-anmelden/"
>                        link-text)
>                      ,after-link)
>                    "")))
>            ("link_" .
>             ,(lambda (text)
>                (a-href
>                 (poster-url-for-lingua current-lingua)
>                 text))))))

The translatable text could be written as two or more strings (it
doesn’t matter that they end up in the same paragraph).  The first one
is easy:

  "Thank you for your interest in my workshop “GUI Programming with GTK+”."

The second one might be:

  "To register please go <link>here</link>."

The third:

  "For more information see <link>here</link>."

(Whatever tag name is used is arbitrary and only has to be unique within
the context of a single translatable string.)  Or they could all be one
big XML snippet.

With sxpath we can easily pick tagged substrings by specifying their
path.  Or we could fold over the parse tree and transform it by applying
an arbitrary transformation.

Here’s a trivial example of such a transform:

--8<---cut here---start->8---
(use-modules (sxml simple) (sxml transform))
;; This is the translated string, wrapped in some tag to make it a valid
;; XML fragment.
(define tr "<translation>Click <link>here</link></translation>")

;; Convert to SXML, then transform it.
(pre-post-order (xml->sxml tr)
 ;; When we find the tag “link” wrap the contents in a URL anchor.
 `((link . ,(lambda (tag . kids)
              `(a (@ (href "http://gnu.org")) ,@kids)))
   ;; Just wrap the contents in a tag for everything else
   (*default*  . ,(lambda (tag . kids) `(,tag ,@kids)))
   ;; Unwrap all text
   (*text* . ,(lambda (_ txt) txt))))

=> (*TOP* (translation "Click " (a (@ (href "http://gnu.org")) "here")))
--8<---cut here---end--->8---

Using “pre-post-order” directly is verbose as you need to specify the
*default* and *text* handlers, but this can easily be hidden by a
friendlier procedure that would present a user interface that wouldn’t
look too different from what your macro presents to users.

--8<---cut here---start->8---
(define (foo str . handlers)
  (pre-post-order (xml->sxml (string-append "<r>" str "</r>"))
   `(,@handlers
     (*default*  . ,(lambda (tag . kids) `(,tag ,@kids)))
     (*text* . ,(lambda (_ txt) txt)))))

(foo "Click <link>here</link>"
     `(link . ,(lambda (tag . contents)
                 `(a (@ (href "/help")) ,@contents))))
--8<---cut here---end--->8---

--
Ricardo

GPG: BCA6 89B6 3655 3801 C3C6  2150 197A 5888 235F ACAC
https://elephly.net





Re: Use guix to distribute data & reproducible (data) science

2018-02-09 Thread Ricardo Wurmus

zimoun  writes:

> I do not know so much, but a idea should to write a workflow: you
> fetch the data, you clean them and you check by hashing that the
> result is the expected one. Only the softwares used to do that are in
> the store. The input and output data are not, but your workflow check
> that they are the expected ones.
> However, it depends on what we are calling 'cleaning' because some
> algorithms are not deterministic.
>
> Hum? I do not know if there is some mechanism in GWL to check the hash
> of the `data-inputs' field.

In the GWL the data-inputs field is not special as far as any of the
current execution engines are concerned.  It’s up to the execution
engine to implement recency checks or identity checks, as there is no
one size that fits all inputs.

-- 
Ricardo

GPG: BCA6 89B6 3655 3801 C3C6  2150 197A 5888 235F ACAC
https://elephly.net





Let's fix core-updates!

2018-02-09 Thread Chris Marusich
Hi everyone!

Currently, 13% of builds on core-updates fail:

  https://hydra.gnu.org/jobset/gnu/core-updates

We need to fix this to help Ricardo prepare for the next release.
Questions:

  1) When is core-updates "done"?  Do we merge once we're below a
 specific failure rate, once specific bugs have been fixed, or a
 combination of the two?

  2) How shall we prioritize and divvy up work for fixing the failures?
 I'm guessing people just need to volunteer and start debugging!

  3) Are there any tools to help us understand what the failures might
 have in common?  E.g., if half the failures occur because a package
 deep in the dependency graph fails to build, clearly that package
 should be prioritized for fixing.  I suppose we'll learn about
 commonalities as we go, but it'd be nice if there were a tool that
 might help us understand what to focus on first.

  4) What other bugs/features need to be addressed to un-block release?
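
On question 3, even before a dedicated tool exists, a rough triage could tally which dependency each failed build stopped on, scraped from build logs (the input data and its format are invented):

```shell
# Invented "package failing-dependency" pairs scraped from logs.
cat > failures.txt <<'EOF'
love luajit
foo luajit
bar openssl
EOF
# Most common culprit first: fixing it unblocks the most builds.
awk '{print $2}' failures.txt | sort | uniq -c | sort -rn | head -n1
```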

I know that we want to update the default JDK used by Java packages from
7 to 8, but there are probably more important tasks to finish up, also.

Let's get started!

-- 
Chris


signature.asc
Description: PGP signature


Re: Call for project proposals: Guix and Outreachy

2018-02-09 Thread Chris Marusich
Gábor Boskovits  writes:

> 5. explore orchestration in guix - I think Chris could be interested in
> this, and I am also willing to help.
> 6. explore provisioning in guix - provisioning from cloud provides
> essentially boils down to talking to apis, but I would be really interested
> in provisioning bare metal. This is thightly related to orchestration.

I am interested in these things, but I am not sure it would be a good
intern project.  Maybe I'm just being sheepish, but since we don't even
have consensus yet on a design for this, I worry that it might not be
fair to ask an intern to take this on at this time.

> 9. get guix publish to work on some solution, that allows us to share
> pre-built packages outside our local network - I have a feeling that this
> could speed up development on core-updates and feature branches.

I know you meant GNUnet, but what about publication over mDNS?  That
would be super nice, but I don't know how complicated it would be.

-- 
Chris


signature.asc
Description: PGP signature


Re: emacs-browse-at-remote-gnu supports git.savannah.gnu.org

2018-02-09 Thread Chris Marusich
Oleg Pykhalov  writes:

> Hello Guix,
>
> I send a patched emacs-browse-at-remote, called
> emacs-browse-at-remote-gnu [1], which adds support for the
> git.savannah.gnu.org Cgit repository.
>
> [1]  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=30328
>
> See a demo .
>
> Oleg.

Very cool!  By the way, how did you make the video?  It might be useful
for creating quick demos of using Guix, either from the terminal or from
within Emacs.

-- 
Chris


signature.asc
Description: PGP signature


Re: Call for project proposals: Guix and Outreachy

2018-02-09 Thread Gábor Boskovits
2018-02-10 4:06 GMT+01:00 Chris Marusich :

> Gábor Boskovits  writes:
>
> > 5. explore orchestration in guix - I think Chris could be interested in
> > this, and I am also willing to help.
> > 6. explore provisioning in guix - provisioning from cloud provides
> > essentially boils down to talking to apis, but I would be really
> interested
> > in provisioning bare metal. This is thightly related to orchestration.
>
> I am interested in these things, but I am not sure it would be a good
> intern project.  Maybe I'm just being sheepish, but since we don't even
> have consensus yet on a design for this, I worry that it might not be
> fair to ask an intern to take this on at this time.
>
> > 9. get guix publish to work on some solution, that allows us to share
> > pre-built packages outside our local network - I have a feeling that this
> > could speed up development on core-updates and feature branches.
>
> I know you meant GNUnet, but what about publication over mDNS?  That
> would be super nice, but I don't know how complicated it would be.


There was some idea that this could also be done using IPFS. Actually this
was just an idea that would be nice to have, and I am open to
suggestions on how this should work. It turned out at the Guix event that
we already have a working solution; we might have more know-how lingering...
I do not feel that I could mentor this, though.


> --
> Chris
>


Re: Let's fix core-updates!

2018-02-09 Thread Pjotr Prins
On Sat, Feb 10, 2018 at 03:51:59AM +0100, Chris Marusich wrote:
> Hi everyone!
> 
> Currently, 13% of builds on core-updates fail:
> 
>   https://hydra.gnu.org/jobset/gnu/core-updates

It is quite a list. I think we should purge packages that have been
failing for a longish time - apparently no one cares about those.

Maybe list the more important ones in debbugs as a first step and tag
them as core-updates-fail or something? Or does that overflow debbugs?
The main reason is that we now track things in two places. On debbugs you
can assign them.

Pj.



Re: emacs-browse-at-remote-gnu supports git.savannah.gnu.org

2018-02-09 Thread Oleg Pykhalov
Chris Marusich  writes:

  > Oleg Pykhalov  writes:

  >> See a demo .

  > By the way, how did you make the video?

I have a shell script which does the following:

- select a window by click
- record a video
- convert a video to GIF



record-window-gif.sh
Description: Shell script to record a video and convert it to GIF


signature.asc
Description: PGP signature


Re: Call for project proposals: Guix and Outreachy

2018-02-09 Thread Gábor Boskovits
2018-02-08 20:58 GMT+01:00 Ricardo Wurmus :

>
> Gábor Boskovits  writes:
>
> >> The project idea sounds good, but we really need a mentor who feels
> >> responsible for this project and who will be able to guide an intern
> >> throughout the application phase and the three-month internship between
> >> from May to August.
> […]
>
> > If we could draft the specifics of this, what we expect as a outcome
> > at least, then I am willing to help in mentoring, if that suits you. I
> > might drop some parts of the draft if I don't feel comfortable with
> > some parts of the draft proposed.
>
> Thank you.
>
> > However I feel, that the expectations about this should be discussed
> > publicly, to have a tool that is really useful for the community,
> > WDYT?
>
> Yes, we should discuss this here, but we don’t have all that much time
> left before the intern application process begins.  During that period
> interns get to pick a project they are interested in and start getting
> to know the community.  At the end of the application period we need to
> pick one of the applicants.
>
> So even though the internships won’t start before May, we really need to
> submit our proposals this week or early next week.
>
>
Ok, I've thought about what my personal needs would be at first.
I currently see three main aspects of this user interface:

1. statistics overall
a. percentage of packages failing on a branch (with links)
b. percentage of packages not reproducible (also with links) (can be
interesting in the face of the content addressable store idea, but would be
useful in general)

2. packages, that we tried to build:
a. build status on achitectures
b. build logs
c. related bugs - can this be done easily?
d. related build jobs

3. packages, that we would like to build
a. estimated effort to do so
b. privilege system to do it
c. notification when a build completes
d. some way to make this useful for collaboration (for example, if a
bunch of developers is working on a feature branch and they feel that it
is OK to build it, then somehow pool the available resources?)

I am not very familiar with our current frontend yet, so if I have
listed something that is already done, please let me know.

If you have any other suggestions, that would be fine.



> --
> Ricardo
>
> GPG: BCA6 89B6 3655 3801 C3C6  2150 197A 5888 235F ACAC
> https://elephly.net
>
>
>