On 01/07/2017 03:42 PM, David Kastrup wrote:
> They are also in complete denial about the importance of interpreter
> speed for an _extension_ language: for them, compiler performance is
> everything.
Unfortunately for LilyPond... Yet it's worth noting that the plan seems
to be to get back to Guile 1.8 interpreter speeds by compiling the
interpreter to native code... in some future release.[1]
It's also worth noting that the Guile 2.1.x interpreter appears to be
significantly faster than the Guile 2.0 interpreter. A simple test[1]
ran in:
1.1 seconds in Guile 1.8
16.4 seconds in Guile 2.0
2.4 seconds in Guile 2.1.1
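For anyone who wants to reproduce that comparison: the following is my
own rough harness, not the code from the post in [1]. It wraps the test
loop in primitive-eval so that it goes through the interpreter rather
than whatever the REPL or compiler would otherwise do with a literal
form, and it should run unchanged on Guile 1.8, 2.0 and 2.1.x:

  ;; Time a thunk using Guile's internal real-time clock.
  (define (time-thunk thunk)
    (let ((start (get-internal-real-time)))
      (thunk)
      (exact->inexact
       (/ (- (get-internal-real-time) start)
          internal-time-units-per-second))))

  ;; Evaluate the test loop through the interpreter and print the
  ;; elapsed time in seconds.
  (display
   (time-thunk
    (lambda ()
      (primitive-eval
       '(let lp ((n 0)) (if (< n #e1e7) (lp (1+ n))))))))
  (newline)

Running that under each Guile binary should give numbers in the same
ballpark as the ones above.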
I'd think the slower interpreter would be a prime suspect for causing
the LilyPond slowdown with Guile 2.0. Maybe skipping Guile 2.0 for
Guile 2.2 would make sense -- at least it's worth keeping this on the
radar for those doing benchmarking and looking into performance
implications.
(I assume significant performance wins would come from making use of
Guile's compiler. David, I guess this is what your comment about
organizing "the compilation and storage of byte code for an
application" was getting at.)
-Paul
--------------------
[1] from
http://wingolog.org/archives/2016/01/11/the-half-strap-self-hosting-and-guile
Back in 2009 when we switched to the eval-in-Scheme, we knew that it
would result in a slower interpreter. This is because instead of the
interpreter being compiled to native code, it was compiled to bytecode.
Also, Guile's Scheme compiler wasn't as good then, so we knew that we
were leaving optimizations on the floor. Still, the switch to an
evaluator in Scheme enabled integration of the compiler, and we thought
that the interpreter speed would improve with time. I just took a look
and with this silly loop:
(let lp ((n 0)) (if (< n #e1e7) (lp (1+ n))))
Guile 1.8's interpreter written in C manages to run this in 1.1 seconds.
Guile 2.0's interpreter written in Scheme and compiled to the old
virtual machine does it in 16.4 seconds. Guile 2.1.1's interpreter, with
the closure-chaining optimization, a couple of peephole optimizations in
the interpreter, and compiled using the better compiler and VM from
Guile 2.2, manages to finish in 2.4 seconds. So we are definitely
getting better, and by the time we compile eval.scm to native code I
have no doubt that we will be as good as the old C implementation. (Of
course, when compiled to Guile 2.2's VM, the loop finishes in 55
/milli/seconds, but comparing a compiler and an interpreter is no fair.)
---------------------