Han-Wen Nienhuys <hanw...@gmail.com> writes:

> On Fri, Mar 12, 2021 at 11:28 PM Dr. Arne Babenhauserheide
> <arne_...@web.de> wrote:
>> >> there’s a Guile 3.0.6 release planned that includes a rewrite of the
>> >> reader in Scheme. It has speed in the same order of magnitude as the
>> >> previous reader but might have different performance characteristics.
>> >>
>> >> If I remember correctly, lilypond uses the reader a lot, so if you have
>> >> a test-system with lilypond on Guile 3, could you test how running
>> >> lilypond with the current Guile master from git affects lilypond?
>> >
>> > last time I looked, building GUILE 3 from source was truly glacial,
>> > making this kind of thing annoying to check.
>>
>> If you build from tarball it is much faster, because it then provides
>> pre-created bootstrapping files. What’s so slow is creating the initial
>> optimized reader.
>
> That wouldn't work for testing a prerelease, though.
That’s true, yes. To ease this testing, creating a pre-release tarball
could help.

>> > You say "same order of magnitude". Do you have benchmarks so we know
>> > what to expect?
>>
>> The current *average* speed of the reader is roughly 80% of the reader
>> implemented in C, but with different performance characteristics. I’m
>
> $ cat ../bench.ly
> #(define (microseconds)
>    (let* ((t (gettimeofday))
>           (us (/ (cdr t) 1000000.0)))
>      (+ (car t) us)))
>
> #(define start (microseconds))
>
> % \include "bench-largeexp.ly"
> \include "bench-manysmall.ly"
>
> #(newline)
> #(display (- (microseconds) start))
>
> Parsing & evaluating '(1 2 3) 200 times.
>
> - guile 1.8:   1.25ms
> - guile 2.2:   3.2ms
> - guile 3.0.6: 2.08ms

That actually looks pretty good: slowly fighting its way back to the
reader speed of 1.8.

> Parsing & evaluating the giant expression in define-grobs.scm once:
>
> - guile 1.8:   10.6ms
> - guile 2.2:   166ms
> - guile 3.0.6: 71ms

Yikes! That’s still a factor of 7 slower.

> Parsing & evaluating the giant expression in define-grobs.scm once
> (but quoted, ie. not real evaluation):
>
> - guile 1.8:   10.0ms
> - guile 2.2:   13ms
> - guile 3.0.6: 12.8ms
>
> In summary, the read speed itself for large expressions is on the same
> order as 1.8, but for many small expressions (which is a much more
> common use-case) there is still a 60% slowdown.

That’s much nicer now, but there is still room for improvement. And going
by discussions on #guile, there are more speedups to be had in the new
parser.

>> asking here because I want to avoid surprising and avoidable changes
>> that block Lilypond. I consider Lilypond to be the most important
>> flagship project of Guile, and I want to do what I can to prevent
>> unnecessary friction.
>
> I appreciate the heads up you gave here today, but from our side, it
> doesn't seem like the Guile project is much concerned with our needs.

I’m trying to keep the focus on the needs of Lilypond. That’s also why I
asked here. Guix is also helping to push Guile towards the requirements
of Lilypond again; see
https://wingolog.org/archives/2020/06/03/a-baseline-compiler-for-guile

Can I forward your results to the Guile mailing list?

> The evaluation speed of GUILE 3.x is still pretty poor. Having fast,
> JIT'ed code seems interesting in theory, but the way it's implemented
> in Guile 3.x is a giant headache: the separate byte compilation is
> extremely slow, and it is hard to manage (where should the .go files
> be stored/installed, how/when are they generated etc.). It also
> doesn't match our use case, because a lot of the code that we have
> comes from .ly files, so it cannot be precompiled.

The article linked above suggests that compiling the code with -O1 could
help (if you’re not already doing that); a rough sketch of what I mean is
in the PS below.

Best wishes,
Arne

--
Unpolitisch sein heißt politisch sein
ohne es zu merken
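PS: A minimal sketch of the -O1 idea, assuming Guile 3.0, where
(system base compile) exports the default-optimization-level parameter
and compile-file accepts #:optimization-level; the file names here are
just placeholders, not anything LilyPond actually ships:

  (use-modules (system base compile))

  ;; Lower the default optimization level used whenever Guile compiles
  ;; code (auto-compilation or explicit calls to compile / compile-file);
  ;; -O1 avoids the expensive higher-level optimization passes.
  (default-optimization-level 1)

  ;; Ahead-of-time compilation: turn a Scheme file into a .go object
  ;; once and load the compiled object directly on later runs.
  (compile-file "lily-startup.scm"              ; hypothetical file name
                #:output-file "lily-startup.go"
                #:optimization-level 1)
  (load-compiled "lily-startup.go")

I think the command-line equivalent is something like

  guild compile -O1 -o lily-startup.go lily-startup.scm

but for Scheme read from .ly files at run time, only the lowered
default-optimization-level (or skipping compilation entirely and using
the evaluator) would apply.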