Re: Merging staging?

2018-12-21 Thread Gábor Boskovits
Hello,

On Fri, Dec 21, 2018 at 2:18, Mark H Weaver  wrote:

> Hi Julien,
>
> I've rearranged your reply from "top-posting" style to "bottom-posting"
> style.  Please consider using bottom-posting in the future.
>
> I wrote:
>
> > Julien Lepiller  writes:
> >
> >> I'd like to get staging merged soon, as it wasn't for quite some
> >> time. Here are some stats about the current state of substitutes for
> >> staging:
> >>
> >> According to guix weather, we have:
> >>
> >> | architecture | berlin | hydra |
> >> +--------------+--------+-------+
> >> | x86_64       | 36.5%  | 81.7% |
> >> | i686         | 23.8%  | 71.0% |
> >> | aarch64      | 22.2%  | 00.0% |
> >> | armhf        | 17.0%  | 45.6% |
> >>
> >> What should the next step be?
> >
> > I think we should wait until the coverage on armhf and aarch64 has
> > grown, for the sake of users on those systems.
> >
> > Also, I've seen some commits that make me wonder if hydra is still
> > being configured as an authorized substitute server on new Guix
> > installations.
> > Do you know?
> >
> > If 'berlin' is the only substitute server by default, then we certainly
> > need to wait for those numbers to get higher, no?
> >
> > What do you think?
>
> Julien Lepiller  responded:
>
> > I agree, but I wonder if there is a reason for these to be so low?
>
> It's a good question.  I have several hypotheses:
>
> * Unfortunately, it is fairly common for builds for important core
>   packages to spuriously fail, often due to unreliable test suites, and
>   to cause thousands of other important dependent packages to fail.
>   When this happens on Hydra, I can see what's going on, and restart the
>   build and all of its dependents.
>

This is currently a problem: we cannot see which failing dependency
caused a dependent package's build to fail.


>   I wouldn't be surprised if some important core packages spuriously
>   failed to build on Berlin, but we have no effective way to see what
>   happened there.  If that's the case, the 'guix weather' numbers above
>   might never get much higher no matter how long we wait.
>
> * Berlin's build slots may have been occupied for long periods of time
>   by 'test.*' jobs stuck in an endless "waiting for udevd..." loop, as
>   described in .
>
>   Hydra's web interface allows me to monitor active jobs and manually
>   kill those stuck jobs when I find them.  I don't know how to do that
>   on Berlin.
>
> * Especially on armhf and aarch64, where Berlin has very little build
>   capacity, and new builds are being added to Berlin's build queue much
>   faster than they can be built, it is quite possible that Berlin is
>   spending most of its effort on long-outdated builds.
>
>   On Hydra, I can see when this is happening, and often intervene by
>   cancelling large numbers of outdated builds on armhf, so that it
>   remains focused on the most popular and up-to-date packages.
>
We are currently missing an admin interface on berlin; we would need
one, since cancelling a job should be a privileged operation.


> * On WIP branches like 'core-updates' and 'staging', when a new
>   evaluation is done, I cancel all outdated Hydra jobs on those
>   branches.  I don't know if anything similar is done on Berlin.
>
> In summary, there are several things that I regularly do to make
> efficient use of Hydra's limited build capacity.  I periodically look at
> Berlin's web interface to see how it has progressed, but it is currently
> mostly a black box to me.  I see no effective way to focus its limited
> resources on the most important builds, or to see when build slots are
> stuck.
>
>  Regards,
>    Mark
>
I am currently looking into how to improve the situation.  Suggestions are
welcome.

Gábor



Re: my CDN measure_get

2018-12-21 Thread Chris Marusich
Hi Simon,

Thank you for sharing the data with us!  It seems that for you, in
Paris, CloudFront represents a very nice performance improvement.

zimoun  writes:

> laptop$ measure_get
> https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> 2>/dev/null
> url_effective: 
> https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> http_code: 200
> num_connects: 1
> num_redirects: 0
> remote_ip: 141.80.181.40
> remote_port: 443
> size_download: 69899433 B
> speed_download: 2088170.000 B/s
> time_appconnect: 0.123089 s
> time_connect: 0.042923 s
> time_namelookup: 0.004140 s
> time_pretransfer: 0.123141 s
> time_redirect: 0.00 s
> time_starttransfer: 0.199722 s
> time_total: 33.474630 s

For this request, berlin.guixsd.org's throughput is 17 Mbps, and the
latency is 39 ms after DNS name resolution.
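
(For reference, here is the arithmetic behind these figures, as a small
Guile sketch; I am taking throughput to be speed_download times 8 bits,
and "latency after DNS name resolution" to be time_connect minus
time_namelookup.)

(define (throughput-in-mbps speed-download)   ; speed-download in bytes/s
  (/ (* speed-download 8) 1000000.0))

(define (latency-in-ms time-connect time-namelookup)
  (* (- time-connect time-namelookup) 1000.0))

;; (throughput-in-mbps 2088170.0)     => 16.7  (about 17 Mbps)
;; (latency-in-ms 0.042923 0.004140)  => 38.8  (about 39 ms)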

> laptop$ measure_get
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> 2>/dev/null
> url_effective: 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> http_code: 200
> num_connects: 1
> num_redirects: 0
> remote_ip: 143.204.229.90
> remote_port: 443
> size_download: 69899433 B
> speed_download: 10793612.000 B/s
> time_appconnect: 0.024161 s
> time_connect: 0.010927 s
> time_namelookup: 0.007414 s
> time_pretransfer: 0.024253 s
> time_redirect: 0.00 s
> time_starttransfer: 0.029704 s
> time_total: 6.476473 s

For this request, berlin-mirror.marusich.info's throughput is 86 Mbps,
and the latency is 3.5 ms after DNS name resolution.  The throughput is about
400% greater than going directly to berlin.guixsd.org, and the latency
is about 91% less than going directly to berlin.guixsd.org.

> cluster$ measure_get
> https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> 2>/dev/null
> url_effective: 
> https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> http_code: 200
> num_connects: 1
> num_redirects: 0
> remote_ip:
> remote_port:
> size_download: 69899433 B
> speed_download: 2863050,000 B/s
> time_appconnect: 0,272 s
> time_connect: 0,041 s
> time_namelookup: 0,004 s
> time_pretransfer: 0,272 s
> time_redirect: 0,000 s
> time_starttransfer: 0,346 s
> time_total: 24,414 s

For this request, berlin.guixsd.org's throughput is 23 Mbps, and the latency
is 37 ms after DNS name resolution.  Compared to the previous direct
request to berlin.guixsd.org, the throughput is 35% greater, and the
latency is 5.1% less.

> cluster$ measure_get
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> 2>/dev/null
> url_effective: 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
> http_code: 200
> num_connects: 1
> num_redirects: 0
> remote_ip:
> remote_port:
> size_download: 69899433 B
> speed_download: 43271174,000 B/s
> time_appconnect: 0,179 s
> time_connect: 0,006 s
> time_namelookup: 0,004 s
> time_pretransfer: 0,179 s
> time_redirect: 0,000 s
> time_starttransfer: 0,182 s
> time_total: 1,615 s

For this request, berlin-mirror.marusich.info's throughput is 346 Mbps,
and the latency is 2 ms after DNS name resolution.  Compared to the direct
request to berlin.guixsd.org, this request's throughput is 1400%
greater, and its latency is 95% lower.  I suppose your cluster must be
close to a CloudFront edge location.

So yes, I think this makes it quite clear that for you, CloudFront
represents a very nice performance improvement, both on your laptop and
especially in your cluster.

-- 
Chris




Re: CDN performance

2018-12-21 Thread Chris Marusich
Hi Meiyo,

Thank you for sharing this information with us!

Can you also share what numbers you get when you run measure_get against
berlin.guixsd.org directly?  Clearly, the connection from you to
CloudFront is not as performant as it is for others in other parts of
the world, but I wonder if it's still better than accessing berlin
directly.  If you could run measure_get against berlin directly and
share the numbers, we can see if it represents any significant
improvement for you.

Meiyo Peng  writes:

> I tested your script several times.
>
> 1. Tested today at home. China Unicom home broadband. 50Mb/s.
>
> The result is slow as usual. curl failed once.
> berlin-mirror.marusich.info is resolved to Seattle, WA, US.

Well, that's not great.  Perhaps it's still better than it would be if
the DNS name resolved to a location in Europe, though.

> #+BEGIN_EXAMPLE
>   ➜  ~ measure_get 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>  55 66.6M   55 36.9M    0     0  17926      0  1:04:59  0:36:02  0:28:57 17733
>   url_effective: 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   http_code: 200
>   num_connects: 1
>   num_redirects: 0
>   remote_ip: 52.85.158.151
>   remote_port: 443
>   size_download: 38764357 B
>   speed_download: 17926.000 B/s
>   time_appconnect: 6.078850 s
>   time_connect: 3.006821 s
>   time_namelookup: 2.659785 s
>   time_pretransfer: 6.079097 s
>   time_redirect: 0.00 s
>   time_starttransfer: 9.626001 s
>   time_total: 2162.379211 s
>   curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)

I see: this is about 143 Kbps (not Mbps) of throughput, with a latency of
347 ms after DNS name resolution.

>   ➜  ~ measure_get 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 66.6M  100 66.6M    0     0   109k      0  0:10:25  0:10:25 --:--:--  241k
>   url_effective: 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   http_code: 200
>   num_connects: 1
>   num_redirects: 0
>   remote_ip: 52.85.158.22
>   remote_port: 443
>   size_download: 69899433 B
>   speed_download: 111816.000 B/s
>   time_appconnect: 3.507528 s
>   time_connect: 2.650373 s
>   time_namelookup: 2.261801 s
>   time_pretransfer: 3.507637 s
>   time_redirect: 0.00 s
>   time_starttransfer: 5.995298 s
>   time_total: 625.129571 s
>
>   ➜  ~ measure_get 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 66.6M  100 66.6M    0     0   109k      0  0:10:23  0:10:23 --:--:--  141k
>   url_effective: 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   http_code: 200
>   num_connects: 1
>   num_redirects: 0
>   remote_ip: 52.85.158.22
>   remote_port: 443
>   size_download: 69899433 B
>   speed_download: 112187.000 B/s
>   time_appconnect: 2.280972 s
>   time_connect: 1.407197 s
>   time_namelookup: 1.056180 s
>   time_pretransfer: 2.281234 s
>   time_redirect: 0.00 s
>   time_starttransfer: 3.167703 s
>   time_total: 623.061584 s
> #+END_EXAMPLE

897 Kbps, 351 ms after the name lookup.

> 2. Tested 3 days ago at my office. China Telecom enterprise broadband. 50Mb/s.
>
> Unusually fast! berlin-mirror.marusich.info is resolved to Seattle, WA,
> US.  I have no idea why it was so fast that day.
>
> #+BEGIN_EXAMPLE
>   ➜  ~ measure_get 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 66.6M  100 66.6M    0     0  1364k      0  0:00:50  0:00:50 --:--:-- 1352k
>   url_effective: 
> https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
>   http_code: 200
>   num_connects: 1
>   num_redirects: 0
>   remote_ip: 13.35.20.109
>   remote_port: 443
>   size_download: 69899433 B
>   speed_download: 1397429.000 B/s
>   time_appconnect: 2.432387 s
>   time_connect: 0.200842 s
>   time_namelookup: 0.000446 s
>   time_pretransfer: 2.432659 s
>   time_redirect: 0.00 s
>   time_starttransfer: 2.673045 s
>   time_total: 50.020945 s

11 Mbps, 200 ms after the name lookup.

>   ➜  ~ measure_get 
> https://berlin-mirror.marusich.info/nar/gzip/1bq78

Re: Anyone working on packaging Firejail?

2018-12-21 Thread Eric Bavier
On Thu, 20 Dec 2018 11:19:07 -0500
Joshua Branson  wrote:

> swedebugia  writes:
> 
> > On 2018-12-20 13:17, swedebugia wrote:  
> >> On 2018-12-20 08:53, Pierre Neidhardt wrote:  
> >>> Can anyone weigh the pros and cons between Firejail and Guix containers?
> >>>  
> >>
> >> Yeah, good idea.
> >>
> >> Is guix container using kernel namespaces?
> >>
> >> Our manual[1] did not say. If yes then I think we should advertise
> >> this on the front page!
> >>
> >> A run your browser in a container example script would also be nice.
> >>
> >> I think we already have all the features beside the gui of firetools. :D
> >>  
> >
> > Found this!
> >
> > Run icecat, a browser, in a container with
> >
> > guix environment --container --network --share=/tmp/.X11-unix
> > --ad-hoc icecat
> > export DISPLAY=":0.0"
> > icecat  
> 
> Is there a way to do this automatically?  I.e., you don't have to type
> "guix environment --container ... icecat"; you just type "icecat"?

That is the major advantage Firejail has over 'guix environment
--container' currently.  It contains a large collection of "profiles"
for different applications, specifying how exactly to jail them so that
they can still function.

I believe we'd be able to achieve something similar with some sort of
"environment configuration" manifest-type thing.

`~Eric




Re: CDN performance

2018-12-21 Thread Meiyo Peng
Hi Chris,

Thank you for your patience!

Chris Marusich  writes:

> Can you also share what numbers you get when you run measure_get against
> berlin.guixsd.org directly?  Clearly, the connection from you to
> CloudFront is not as performant as it is for others in other parts of
> the world, but I wonder if it's still better than accessing berlin
> directly.  If you could run measure_get against berlin directly and
> share the numbers, we can see if it represents any significant
> improvement for you.

1. Tested today at home. China Unicom home broadband. 50Mb/s.

berlin.guixsd.org:

#+BEGIN_EXAMPLE
  ➜  ~ measure_get 
https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 54 66.6M   54 36.3M    0     0  14981      0  1:17:45  0:42:25  0:35:20     0
  url_effective: 
https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  http_code: 200
  num_connects: 1
  num_redirects: 0
  remote_ip: 141.80.181.40
  remote_port: 443
  size_download: 38141765 B
  speed_download: 14981.000 B/s
  time_appconnect: 3.228601 s
  time_connect: 2.213136 s
  time_namelookup: 0.856194 s
  time_pretransfer: 3.228820 s
  time_redirect: 0.00 s
  time_starttransfer: 3.851583 s
  time_total: 2545.889968 s
  curl: (56) GnuTLS recv error (-54): Error in the pull function.

  ➜  ~ measure_get 
https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 66.6M  100 66.6M    0     0  20415      0  0:57:03  0:57:03 --:--:-- 25983
  url_effective: 
https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  http_code: 200
  num_connects: 1
  num_redirects: 0
  remote_ip: 141.80.181.40
  remote_port: 443
  size_download: 69899433 B
  speed_download: 20415.000 B/s
  time_appconnect: 2.005881 s
  time_connect: 0.785257 s
  time_namelookup: 0.000520 s
  time_pretransfer: 2.006124 s
  time_redirect: 0.00 s
  time_starttransfer: 3.031582 s
  time_total: 3423.813489 s
#+END_EXAMPLE

berlin-mirror.marusich.info:

#+BEGIN_EXAMPLE
  ➜  ~ measure_get 
https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 66.6M  100 66.6M    0     0  1470k      0  0:00:46  0:00:46 --:--:-- 2368k
  url_effective: 
https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  http_code: 200
  num_connects: 1
  num_redirects: 0
  remote_ip: 13.35.20.87
  remote_port: 443
  size_download: 69899433 B
  speed_download: 1505934.000 B/s
  time_appconnect: 3.343496 s
  time_connect: 3.164926 s
  time_namelookup: 3.060655 s
  time_pretransfer: 3.343581 s
  time_redirect: 0.00 s
  time_starttransfer: 5.766543 s
  time_total: 46.416495 s

  ➜  ~ measure_get 
https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 66.6M  100 66.6M    0     0  3182k      0  0:00:21  0:00:21 --:--:-- 4612k
  url_effective: 
https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  http_code: 200
  num_connects: 1
  num_redirects: 0
  remote_ip: 13.35.20.87
  remote_port: 443
  size_download: 69899433 B
  speed_download: 3259170.000 B/s
  time_appconnect: 0.225982 s
  time_connect: 0.070428 s
  time_namelookup: 0.000483 s
  time_pretransfer: 0.226055 s
  time_redirect: 0.00 s
  time_starttransfer: 0.306621 s
  time_total: 21.447966 s
#+END_EXAMPLE


2. Tested today at my office. China Telecom enterprise broadband. 50Mb/s.

berlin.guixsd.org:

#+BEGIN_EXAMPLE
  ➜  ~ measure_get 
https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 66.6M  100 66.6M    0     0  3091k      0  0:00:22  0:00:22 --:--:-- 3649k
  url_effective: 
https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  http_code: 200
  num_connects: 1
  num_redirects: 0
  remote_ip: 141.80.181.40
  remote_port: 443
  size_download: 69899433 B
  speed_download: 3166021.000 B/s
  time_appconnect: 3.288213 s
  time_connect: 2.733554 s
  time_namelookup: 2.486754 s
  time_pretransfer: 3.288320 s
  time_redirect: 0.00 s
  time_starttransfer: 3.780341 s
  time_total: 22.078489 s

  ➜ 

Re: Video subtitles

2018-12-21 Thread Ricardo Wurmus


Hi Laura,

>> mpv --sub-file my-subtitle.ass my-video.ogv
> Is there a reason for the .ogv extension?

No particular reason; ogv is the extension for a patent-unencumbered
video encoding.  I don’t know enough about video encodings to give a
recommendation on what to use for the project.  Luckily, ffmpeg supports a
very wide range of formats, so you needn’t worry about this now.

--
Ricardo




Re: [GWL] (random) next steps?

2018-12-21 Thread Ricardo Wurmus


Hi simon,

>> > 6.
>> > The graph of dependencies between the processes/units/rules is written
>> > by hand.  What would be the best strategy to capture it?  By files "à
>> > la" Snakemake?  Something else?
>>
>> The GWL currently does not use the input information provided by the
>> user in the data-inputs field.  For the content addressible store we
>> will need to change this.  The GWL will then be able of determining that
>> data-inputs are in fact the outputs of other processes.
>
> Hum, nice, but how?
> I mean, the graph cannot be deduced; it needs to be written by
> hand somehow, doesn't it?

We can connect a graph by joining the inputs of one process with the
outputs of another.

With a content addressed store we would run processes in isolation and
map the declared data inputs into the environment.  Instead of working
on the global namespace of the shared file system we can learn from Guix
and strictly control the execution environment.  After a process has run
to completion, only files that were declared as outputs end up in the
content addressed store.

A process could declare outputs like this:

(define the-process
  (process
    (name 'foo)
    (outputs
     '((result "path/to/result.bam")
       (meta   "path/to/meta.xml")))))

Other processes can then access these files with:

(output the-process 'result)

i.e. the file corresponding to the declared output “result” of the
process named by the variable “the-process”.
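
To make the input/output joining concrete, here is a minimal sketch;
the list-based process representation and its accessors are stand-ins,
not the actual GWL API:

(use-modules (srfi srfi-1))

;; Stand-in accessors over association lists.
(define (process-outputs p) (or (assq-ref p 'outputs) '()))
(define (process-data-inputs p) (or (assq-ref p 'data-inputs) '()))

;; Return (producer . consumer) pairs: one edge whenever a data input
;; of one process matches a declared output of another.
(define (dependency-edges processes)
  (append-map
   (lambda (consumer)
     (filter-map
      (lambda (input)
        (let ((producer (find (lambda (p)
                                (member input (process-outputs p)))
                              processes)))
          (and producer (cons producer consumer))))
      (process-data-inputs consumer)))
   processes))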

The question here is just how far we want to take the idea of “content
addressed” – is it enough to take the hash of all inputs or do we need
to compute the output hash, which could be much more expensive?

--
Ricardo




Re: CDN performance

2018-12-21 Thread Marius Bakke
Giovanni Biscuolo  writes:

> Mark H Weaver  writes:
>
>> Giovanni Biscuolo  writes:
>>> with a solid infrastructure of "scientifically" trustable build farms,
>>> there are no reasons not to trust substitutes servers (this implies
>>> working towards 100% reproducibility of GuixSD)
>>
>> What does "scientifically trustable" mean?
>
> I'm still not able to elaborate on that (working on it, a sort of
> self-research-hack project), but I'm referring to this message related
> to reduced bootstrap tarballs:
>
>   https://lists.gnu.org/archive/html/guix-devel/2018-11/msg00347.html
>
> and the related reply by Jeremiah (unfortunately cannot find it in
> archives, Message-ID: <877eh81tm4.fsf@ITSx01.pdp10.guru>)

FWIW the GNU list search can take message IDs:

https://lists.gnu.org/archive/cgi-bin/namazu.cgi?query=877eh81tm4.fsf%40ITSx01.pdp10.guru&submit=Search&idxname=guix-devel




Re: `guix lint' warn of GitHub autogenerated source tarballs

2018-12-21 Thread Ludovic Courtès
Hi!

Efraim Flashner  skribis:

> Here's what I currently have. I don't think I've tried running the tests
> I've written yet, and Ludo said there was a better way to check if the
> download was a git-fetch or a url-fetch. As the logic is currently
> written it'll flag any package hosted on github owned by 'archive' or
> any package named 'archive' in addition to the ones we want.

OK.  I think you’re pretty much there anyway, so please don’t drop the
ball.  ;-)

Some comments follow:

> From 8a07c8aea1f23db48a9e69956ad15f79f0f70e35 Mon Sep 17 00:00:00 2001
> From: Efraim Flashner 
> Date: Tue, 23 Oct 2018 12:01:53 +0300
> Subject: [PATCH] lint: Add checker for unstable tarballs.
>
> * guix/scripts/lint.scm (check-source-unstable-tarball): New procedure.
> (%checkers): Add it.
> * tests/lint.scm ("source-unstable-tarball", "source-unstable-tarball:
> source #f", "source-unstable-tarball: valid", "source-unstable-tarball:
> not-github", "source-unstable-tarball: git-fetch"): New tests.

[...]

> +(define (check-source-unstable-tarball package)
> +  "Emit a warning if PACKAGE's source is an autogenerated tarball."
> +  (define (github-tarball? origin)
> +    (string-contains origin "github.com"))
> +  (define (autogenerated-tarball? origin)
> +    (string-contains origin "/archive/"))
> +  (let ((origin (package-source package)))
> +    (unless (not origin) ; check for '(source #f)'
> +      (let ((uri       (origin-uri origin))
> +            (dl-method (origin-method origin)))
> +        (unless (not (pk dl-method "url-fetch"))
> +          (when (and (github-tarball? uri)
> +                     (autogenerated-tarball? uri))
> +            (emit-warning package
> +                          (G_ "the source URI should not be an autogenerated tarball")
> +                          'source)))))))

You should use ‘origin-uris’ (plural), which always returns a list of
URIs, and iterate on them (see ‘check-mirror-url’ as an example.)

Also, when you have a URI, you can obtain just the host part and decode
the path part like this:

--8<---------------cut here---------------start------------->8---
scheme@(guile-user)> (string->uri "https://github.com/foo/bar/archive/whatnot")
$2 = #<<uri> scheme: https userinfo: #f host: "github.com" port: #f path:
"/foo/bar/archive/whatnot" query: #f fragment: #f>
scheme@(guile-user)> (uri-host $2)
$3 = "github.com"
scheme@(guile-user)> (split-and-decode-uri-path (uri-path $2))
$4 = ("foo" "bar" "archive" "whatnot")
--8<---------------cut here---------------end--------------->8---

That way you should be able to get more accurate matching than with
‘string-contains’.  Does that make sense?
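
For instance, here is a sketch of what that more precise check could
look like; the procedure name is mine, not part of the patch:

(use-modules (web uri) (ice-9 match))

;; Parse the URI and match on the host and the decoded path components
;; instead of calling 'string-contains' on the whole string.
(define (github-autogenerated-tarball? url)
  (let ((uri (string->uri url)))
    (and uri
         (equal? (uri-host uri) "github.com")
         (match (split-and-decode-uri-path (uri-path uri))
           ((_ _ "archive" . _) #t)    ; /owner/project/archive/...
           (_ #f)))))

With this, "https://github.com/foo/bar/archive/whatnot" matches, but a
package merely owned by or named "archive" no longer does.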

The tests look good… but could you make sure they pass?  :-)

Thank you!

Ludo’.



Re: Re-approaching package tagging

2018-12-21 Thread Ludovic Courtès
Hi,

Christopher Lemmer Webber  skribis:

> I wonder, for keywords that would be awkward to "force" into the
> description, whether we could have an "extra keywords" section?  Then we
> can skip tagging, but when a package's description doesn't comfortably
> fit a word, you can still find the package by it.  It's not separate
> tagging, just extra words to find a package by that the description
> doesn't contain.

Good question.  I don’t think we’ve really had concerns about “extra
words” in the past (and the regexps are also matched against the file
name for instance, which can also help), but let’s keep that suggestion
in mind if that happens!

Ludo’.



Re: What's up with the .texi files always "changing"?

2018-12-21 Thread Ludovic Courtès
Hi Julien,

Julien Lepiller  skribis:

> Another solution would be to find a way to be able to generate them 
> completely, so we don't need to add them to git. The issue is that the 
> autotools don't like that: they refuse to work if the texi files are not 
> present, which would be the case for fresh checkouts. I'd prefer this kind of 
> solution though.

What do you mean by “the Autotools refuse”?  :-)  Is it that our rules
in Makefile.am assume that those .texi file are present, or is there
something else?

Having them in Git, rather than having to wget from
translationproject.org, probably simplifies CI and ‘guix pull’.

Thanks,
Ludo’.



Re: Re-approaching package tagging

2018-12-21 Thread Ludovic Courtès
Hi Chris,

Chris Marusich  skribis:

> Is "guix package --search" case-insensitive?  The manual ((guix)
> Invoking guix package) does not seem to mention it.

Per guix/scripts/package.scm, it is case-insensitive:

--8<---------------cut here---------------start------------->8---
  (('search _)
   (let* ((patterns (filter-map (match-lambda
                                  (('query 'search rx) rx)
                                  (_                   #f))
                                opts))
          (regexps  (map (cut make-regexp* <> regexp/icase) patterns)))
--8<---------------cut here---------------end--------------->8---
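
For instance, at the REPL (a quick illustration using (ice-9 regex)
directly; make-regexp* is, as far as I can tell, a thin wrapper around
make-regexp):

(use-modules (ice-9 regex))

;; regexp/icase compiles a case-insensitive pattern, so a query such as
;; "GCC" still matches lower-case package names:
(regexp-exec (make-regexp "GCC" regexp/icase) "gcc-toolchain")
;; => a match structure covering characters 0 through 3 of "gcc-toolchain"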

I’ll add a note in the manual.

Ludo’.



Re: `guix lint' warn of GitHub autogenerated source tarballs

2018-12-21 Thread swedebugia
On 2018-12-21 21:50, Ludovic Courtès wrote:
> Hi!
> 
> Efraim Flashner  skribis:
> 
>> Here's what I currently have. I don't think I've tried running the tests
>> I've written yet, and Ludo said there was a better way to check if the
>> download was a git-fetch or a url-fetch. As the logic is currently
>> written it'll flag any package hosted on github owned by 'archive' or
>> any package named 'archive' in addition to the ones we want.
> 
> OK.  I think you’re pretty much there anyway, so please don’t drop the
> ball.  ;-)
> 
> Some comments follow:
> 
>> From 8a07c8aea1f23db48a9e69956ad15f79f0f70e35 Mon Sep 17 00:00:00 2001
>> From: Efraim Flashner 
>> Date: Tue, 23 Oct 2018 12:01:53 +0300
>> Subject: [PATCH] lint: Add checker for unstable tarballs.
>>
>> * guix/scripts/lint.scm (check-source-unstable-tarball): New procedure.
>> (%checkers): Add it.
>> * tests/lint.scm ("source-unstable-tarball", "source-unstable-tarball:
>> source #f", "source-unstable-tarball: valid", "source-unstable-tarball:
>> not-github", "source-unstable-tarball: git-fetch"): New tests.
> 
> [...]
> 
>> +(define (check-source-unstable-tarball package)
>> +  "Emit a warning if PACKAGE's source is an autogenerated tarball."
>> +  (define (github-tarball? origin)
>> +    (string-contains origin "github.com"))
>> +  (define (autogenerated-tarball? origin)
>> +    (string-contains origin "/archive/"))
>> +  (let ((origin (package-source package)))
>> +    (unless (not origin) ; check for '(source #f)'
>> +      (let ((uri       (origin-uri origin))
>> +            (dl-method (origin-method origin)))
>> +        (unless (not (pk dl-method "url-fetch"))
>> +          (when (and (github-tarball? uri)
>> +                     (autogenerated-tarball? uri))
>> +            (emit-warning package
>> +                          (G_ "the source URI should not be an autogenerated tarball")
>> +                          'source)))))))
> 
> You should use ‘origin-uris’ (plural), which always returns a list of
> URIs, and iterate on them (see ‘check-mirror-url’ as an example.)
> 
> Also, when you have a URI, you can obtain just the host part and decode
> the path part like this:
> 
> --8<---------------cut here---------------start------------->8---
> scheme@(guile-user)> (string->uri "https://github.com/foo/bar/archive/whatnot")
> $2 = #<<uri> scheme: https userinfo: #f host: "github.com" port: #f
> path: "/foo/bar/archive/whatnot" query: #f fragment: #f>
> scheme@(guile-user)> (uri-host $2)
> $3 = "github.com"
> scheme@(guile-user)> (split-and-decode-uri-path (uri-path $2))
> $4 = ("foo" "bar" "archive" "whatnot")
> --8<---------------cut here---------------end--------------->8---
> 
> That way you should be able to get more accurate matching than with
> ‘string-contains’.  Does that make sense?

This is super nice! I did not know this. It makes URL parsing much
easier :D

-- 
Cheers 
Swedebugia



Re: Anyone working on packaging Firejail?

2018-12-21 Thread Ludovic Courtès
Hi Eric,

Eric Bavier  skribis:

> On Thu, 20 Dec 2018 11:19:07 -0500

[...]

>> > Run icecat, a browser, in a container with
>> >
>> > guix environment --container --network --share=/tmp/.X11-unix
>> > --ad-hoc icecat
>> > export DISPLAY=":0.0"
>> > icecat  
>> 
>> Is there a way to do this automatically?  I.e., you don't have to type
>> "guix environment --container ... icecat"; you just type "icecat"?
>
> That is the major advantage Firejail has over 'guix environment
> --container' currently.  It contains a large collection of "profiles"
> for different applications, specifying how exactly to jail them so that
> they can still function.

We also discussed “guix run icecat” as a simpler option:

  https://lists.gnu.org/archive/html/help-guix/2018-01/msg00108.html

‘guix run’ can guess parts of the profile, like whether the application
needs X11 or Fontconfig stuff, just by looking at the references of the
application.  That said, I’m curious to see what the Firejail profiles
look like and to what extent we’d need to manually annotate packages if
we were to provide similar functionality.

Firejail looks nice!

Ludo’.



Re: Merging staging?

2018-12-21 Thread Ludovic Courtès
Hello Mark,

Mark H Weaver  skribis:

> It's a good question.  I have several hypotheses:

These are all valid, but there are a couple more to consider.  ;-)

Specifically we’ve had ENOSPC issues on some build nodes lately, and as
I wrote elsewhere, ‘guix offload’ would report them as “permanent
failures”.  Thus guix-daemon on berlin would cache those failures and
never retry afterwards.  This is fixed by commit
b96e05aefd7a4f734cfec3b27c2d38320d43b687.

Commit 63b0c3eaccdf1816b419632cd7fe721934d2eb27 also arranges so we
don’t choose machines low on disk space.

Another issue I’ve noticed is “database is locked” offloading crashes,
fixed by bdf860c2e99077d431da0cc1db4fc14db2a35d31.  We probably don’t
get these on hydra.gnu.org because we’re running a version that predates
the replacement of the ‘guix-register’ program by (guix store database).

There’s a few more issues about offloading in the bug tracker.  I
suspect these explain the low availability of substitutes to a large
extent.

Ludo’.



Re: bug#33676: GuixSD on eoma68-a20?

2018-12-21 Thread Danny Milosavljevic
Now I get:

$ # commit 39c676c4a3507863f4edf20b225ace4cbf646ed6
$ ./pre-inst-env guix system disk-image --system=armhf-linux -e \
  '(begin
     (use-modules (gnu system) (gnu bootloader) (gnu bootloader u-boot)
                  (gnu system install))
     (operating-system (inherit installation-os)
       (bootloader (bootloader-configuration
                     (bootloader u-boot-bootloader)
                     (target #f)))))' >QQQ3 2>&1
[...]
The following derivation will be built:
   /gnu/store/shgfclh6yy1a03hl0c293s89y3qm9033-disk-image.drv
building /gnu/store/shgfclh6yy1a03hl0c293s89y3qm9033-disk-image.drv...
environment variable `PATH' set to 
`/gnu/store/cm3j1pzdqhw4s9bg1drwlm3lw3qxzddj-qemu-minimal-3.1.0/bin:/gnu/store/hb2qj35yxmvxzcq99lbfcpija032wdzh-coreutils-8.30/bin'
creating raw image of 1265.54 MiB...
Formatting '/gnu/store/9wbw2vbpgg3pwlg9xr7jbniyca0nra2q-disk-image', fmt=raw 
size=1327019190
[0.00] Booting Linux on physical CPU 0x0
[0.00] Linux version 4.19.11-gnu (nixbld@) (gcc version 5.5.0 (GCC)) #1 SMP 1
[0.00] CPU: ARMv7 Processor [412fc0f1] revision 1 (ARMv7), cr=10c5387d
[0.00] CPU: div instructions available: patching division code
[0.00] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
[0.00] OF: fdt: Machine model: linux,dummy-virt
[0.00] Memory policy: Data cache writealloc
[0.00] efi: Getting EFI parameters from FDT:
[0.00] efi: UEFI not found.
[0.00] cma: Reserved 16 MiB at 0x4f00
[0.00] psci: probing for conduit method from DT.
[0.00] psci: PSCIv0.2 detected in firmware.
[0.00] psci: Using standard PSCI v0.2 function IDs
[0.00] psci: Trusted OS migration not required
[0.00] random: get_random_bytes called from start_kernel+0xa0/0x50c with crng_init=0
[0.00] percpu: Embedded 17 pages/cpu @(ptrval) s38732 r8192 d22708 u69632
[0.00] Built 1 zonelists, mobility grouping on.  Total pages: 64960
[0.00] Kernel command line: panic=1 --load=/gnu/store/ip9p4q62fbgm2r2xnnh8qi5p77k2igas-linux-vm-loader console=ttyAMA0
[0.00] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
[0.00] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
[0.00] Memory: 214952K/262144K available (9216K kernel code, 1133K rwdata, 2612K rodata, 2048K init, 310K bss, 30808K reserved, 16384K cma-reserved, 0K highmem)
[0.00] Virtual kernel memory layout:
[0.00] vector  : 0x - 0x1000   (   4 kB)
[0.00] fixmap  : 0xffc0 - 0xfff0   (3072 kB)
[0.00] vmalloc : 0xd080 - 0xff80   ( 752 MB)
[0.00] lowmem  : 0xc000 - 0xd000   ( 256 MB)
[0.00] pkmap   : 0xbfe0 - 0xc000   (   2 MB)
[0.00] modules : 0xbf00 - 0xbfe0   (  14 MB)
[0.00]   .text : 0x(ptrval) - 0x(ptrval)   (10208 kB)
[0.00]   .init : 0x(ptrval) - 0x(ptrval)   (2048 kB)
[0.00]   .data : 0x(ptrval) - 0x(ptrval)   (1134 kB)
[0.00].bss : 0x(ptrval) - 0x(ptrval)   ( 311 kB)
[0.00] ftrace: allocating 33359 entries in 98 pages
[0.00] rcu: Hierarchical RCU implementation.
[0.00] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=1.
[0.00] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[0.00] NR_IRQS: 16, nr_irqs: 16, preallocated irqs: 16
[0.00] GICv2m: range[mem 0x0802-0x08020fff], SPI[80:143]
[0.00] arch_timer: cp15 timer(s) running at 62.50MHz (virt).
[0.00] clocksource: arch_sys_counter: mask: 0xff max_cycles: 0x1cd42e208c, max_idle_ns: 881590405314 ns
[0.003589] sched_clock: 56 bits at 62MHz, resolution 16ns, wraps every 4398046511096ns
[0.006045] Switching to timer-based delay loop, resolution 16ns
[0.308537] Console: colour dummy device 80x30
[0.36] Calibrating delay loop (skipped), value calculated using timer frequency.. 125.00 BogoMIPS (lpj=25)
[0.373670] pid_max: default: 32768 minimum: 301
[0.426475] Security Framework initialized
[0.431096] Yama: becoming mindful.
[0.506448] AppArmor: AppArmor initialized
[0.535772] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes)
[0.543395] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes)
[0.965208] CPU: Testing write buffer coherency: ok
[0.998104] CPU0: Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable
[1.411470] /cpus/cpu@0 missing clock-frequency property
[1.428729] CPU0: thread -1, cpu 0, socket 0, mpidr 8000
[1.698318] Setting up static identity map for 0x4010 - 0x401000a0
[1.748454] rcu: Hierarchical SRCU implementation.
[1.926086] EFI services will not be available.
[1.993192] smp: Bringing up secondary CPUs ...
[1.994874] smp: Brought up 1 node, 1 CPU
[1.997851] SMP: Total of 1 processors activated (125.00 BogoMIPS).
[1.998883] CP

New: Gash builds Bash without Bash, Coreutils, and a few others

2018-12-21 Thread Timothy Sample
Hi Guix,

Here’s an update about bootstrapping for you.

I am very pleased to announce that Gash (having absorbed Geesh) is now
capable of building Bash without Bash, Coreutils, Grep, Sed, or Tar.
That is, Gash provides alternatives, written in Scheme, to all the
utilities needed by the “gnu-build-system” that are normally provided by
those packages.  Note, however, that this work is still very much at the
“proof of concept” level.

This is exciting because it means that we are within sight of removing
each of those packages from the set of bootstrap binaries (in the
context Jan’s work on MES and the Guix “reduced binary seed bootstrap”).
AIUI, that means that, besides Guile and MES, the set of bootstrap
binaries need only contain AWK, Patch, Bzip2, Gzip, and XZ.

There is still a lot to do.  Concretely, Gash itself has to be
bootstrapped.  There is some mildly bit-rotten code for this that will
have to be revived.  Gash should also be ported to Guile 2.0, since that
is the current bootstrap Guile, and it would be nice not to change it.
(This shouldn’t be too hard.)

As for the other utilities, I don’t really have a strategy for them.  I
would imagine that writing a good-enough “patch.scm” would not be too
hard.  AWK is difficult, but after spending (way too much) time reading
“configure” scripts, I think it could be avoided.

You can see the latest code at  (yes,
the URL needs an update).  The work described in this message is on the
“wip-bootstrap” branch.


-- Tim