Hi Neil,

On Mon 25 May 2009 23:57, Neil Jerram <n...@ossau.uklinux.net> writes:
> l...@gnu.org (Ludovic Courtès) writes:
>
>> Andy Wingo <wi...@pobox.com> writes:
>>
>>> For loading uncompiled scripts, things will be slower, unless your
>>> modules #:use-syntax some other transformer. I don't know where the
>>> tradeoff is between the increased expansion speed due to compilation
>>> and slowdown due to a complete codewalk, but it's certainly there.
>>
>> Yes. Likewise, it may be reasonable to assume from now on that most of
>> the code will be compiled. For instance, an uncompiled script may just
>> be a small code snippet that uses mostly compiled code.
>
> It seems to me that once we have a completely working compiler, we
> need to ask if there are any circumstances left where it is better to
> use the current interpreter instead.

In the short term (within the next year or so), I would imagine that
ceval/deval would be faster than an eval written in Scheme -- though I
do not know. On the other hand, an eval written in Scheme would allow
for tail-recursive calls between the evaluator and the VM.

Another option, besides an eval in Scheme, is replacing the evaluator
with the compiler. One could compile on the fly and run the compiled
code from memory, or cache to the filesystem, either alongside the .scm
files or in a ~/.guile-comp-cache/ or something. But compilation does
take some time.

It seems clear we still need an eval in C, at least to bootstrap Guile.

WRT replacing ceval, I guess my conclusion is that I don't know yet.

> If the answer to that is yes, the details of those remaining
> circumstances will (probably) make it obvious whether the
> pre-memoization idea is worthwhile.

Yes, perhaps. We probably need some timings.

> Do we already have performance measurements, and are those recorded /
> summarized somewhere?

We don't have very systematic ones. I was just running some of the
Feeley benchmarks again, and it looked to me that the VM's speed is
about 3 or 4 times the speed of ceval in 1.9, but I should test with
benchmarks that run for only 1s or so, and measure compilation time,
and test against 1.8 too.

>>> OTOH I would suspect that we can implement some kind of just-in-time
>>> compilation -- essentially for each use-modules we can check to see
>>> if the module is compiled, and if not just compile it then and there.
>>> It would be a little slow the first time, but after that it would
>>> load much faster, even faster than before. Python does this. We could
>>> add a guile --no-comp option to disable it.
>>
>> I don't like this idea because it implies implicitly letting Guile
>> fiddle with the user's file system.
>
> I don't see why that should be. Isn't it possible to read a .scm
> file, compile its contents, and hold the compiled programs in memory?

Yes, compile-and-load, from (system base compile). But you have to redo
the compilation the next time the file is loaded, of course.

Incidentally, Ikarus had a similar discussion recently:

  http://thread.gmane.org/gmane.lisp.scheme.ikarus.user/723
  http://thread.gmane.org/gmane.lisp.scheme.ikarus.user/745

Cheers,

Andy
--
http://wingolog.org/
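
P.S. For concreteness, here is a minimal sketch of what the "compile on
the fly and cache to the filesystem" idea could look like on top of
(system base compile). The cache layout, the maybe-compile-and-load
name, and the exact compile-file options are illustrative guesses, not
a proposal for the real interface:

  ;; Rough sketch of a cached auto-compile step.  Names and options are
  ;; illustrative only; error handling and load-path lookup are omitted.
  (use-modules (system base compile))

  (define cache-dir
    (string-append (getenv "HOME") "/.guile-comp-cache"))

  (define (cached-name scm-file)
    ;; Keyed on the basename only; a real cache would use the full path
    ;; (or a hash of it) to avoid collisions between modules.
    (string-append cache-dir "/" (basename scm-file ".scm") ".go"))

  (define (maybe-compile-and-load scm-file)
    ;; Recompile only when the cached .go file is missing or older than
    ;; the source, then load the compiled code.
    (let ((go-file (cached-name scm-file)))
      (if (not (file-exists? cache-dir))
          (mkdir cache-dir))
      (if (or (not (file-exists? go-file))
              (< (stat:mtime (stat go-file))
                 (stat:mtime (stat scm-file))))
          (compile-file scm-file #:output-file go-file))
      (load-compiled go-file)))

A --no-comp style option would then just skip maybe-compile-and-load
and fall back to plain load (or compile-and-load, without touching the
file system).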