Re: Guile Lua

2013-01-13 Thread Ian Price
Nala Ginrut  writes:

>> What about Common Lisp?  Is Scheme a Lisp, or is CL a Scheme? :-)
>> 
>
> IIRC, someone raised the topic of bringing Common Lisp into Guile back in
> 2011, but what's the status of that now?
>
>> Anyway, to support CL I would think that we need to support placing
>> properties on symbols, e.g. currently a symbol slot is a variable, but to
>> effectively support CL I would go for 
>> /Stefan
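
For context, Guile already has a symbol-property facility along roughly
these lines; a minimal sketch using those existing bindings (not code from
the thread):

  ;; Attach, read, and remove a property on the symbol 'foo.
  (set-symbol-property! 'foo 'documentation "A CL-style plist entry.")
  (symbol-property 'foo 'documentation)        ; => "A CL-style plist entry."
  (symbol-property-remove! 'foo 'documentation)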

I don't think we should get ahead of ourselves, but Emacs has had some
minor CL emulation in things like cl.el and cl-lib. I think these could
be good test cases for the elisp support.

-- 
Ian Price -- shift-reset.com

"Programming is like pinball. The reward for doing it well is
the opportunity to do it again" - from "The Wizardry Compiled"



Scanning for coding declarations in all files (not just source)

2013-01-13 Thread Mark H Weaver
I just discovered that Guile is scanning for coding declarations in
*all* files opened with 'open-file', not just source files.

For source files, we are scanning for coding declarations twice: once
when the file is opened, and a second time when 'compile-file' or
'primitive-load' explicitly scans for it.

The relevant commit is 211683cc5c99542dfb6e2a33f7cb8c1f9abbc702.
I was unable to find any discussion of this on guile-devel.

I don't like this.  I don't want 'open-file' to second-guess the
encoding I have asked for in my program, based on data in the file.
Also, the manual is misleading.  Section 6.17.8 gives the impression
that the scanning is only done for source files.
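
To make the concern concrete, a minimal sketch (the file name and its
contents are hypothetical): a data file whose first line merely happens to
look like a coding declaration, and the program forcing the encoding back
to what it actually wants.

  ;; Suppose data.txt is UTF-8 text whose first line contains something
  ;; like "coding: latin-1".  With the behaviour described above,
  ;; open-file may silently switch the port to Latin-1.
  (define port (open-file "data.txt" "r"))
  ;; Setting the encoding afterwards restores what the program asked for.
  (set-port-encoding! port "UTF-8")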

What do other people think?

  Mark



Re: Scanning for coding declarations in all files (not just source)

2013-01-13 Thread Mike Gran
> From: Mark H Weaver 
> To: guile-devel@gnu.org
> Cc: Michael Gran 
> Sent: Sunday, January 13, 2013 10:25 AM
> Subject: Scanning for coding declarations in all files (not just source)
> 
Hi Mark,

> I just discovered that Guile is scanning for coding declarations in
> *all* files opened with 'open-file', not just source files.
True

> For source files, we are scanning for coding declarations twice: once
> when the file is opened, and a second time when 'compile-file' or
> 'primitive-load' explicitly scans for it.

If there was a reason for scanning the coding twice, I don't recall it.

> 
> The relevant commit is 211683cc5c99542dfb6e2a33f7cb8c1f9abbc702.
> I was unable to find any discussion of this on guile-devel.
> 
> I don't like this.  I don't want 'open-file' to second-guess the
> encoding I have asked for in my program, based on data in the file.
> Also, the manual is misleading.  Section 6.17.8 gives the impression
> that the scanning is only done for source files.
> 
> What do other people think?

Opening a file that contains a coding declaration using an encoding other
than binary or the one declared in the file seems like something of a
corner case.  So, IMHO it makes sense for opening a file using its
self-declared encoding to be the simple case, and for opening a text file
in a different (non-binary) text encoding to be the more complicated case,
in an API sense.

There are also obscure possibilities to consider, like reading code from
a file or pipe into a string, and then eval-ing the string.
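
A rough sketch of that case (the file name is hypothetical): by the time
eval-string sees the text, the bytes have already been decoded, so a coding
declaration inside the file is just a comment.

  (use-modules (rnrs io ports))                ; for get-string-all
  (define code
    (call-with-input-file "snippet.scm" get-string-all))
  ;; Any "coding:" comment in snippet.scm has no effect at this point.
  (eval-string code)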

I can see your point, though.  Guile does seem to be having a tough
time deciding how automatic or manual string encoding should be.

So, whatever makes people happy.

-Mike 




Re: [PATCH] Colorized REPL

2013-01-13 Thread Ludovic Courtès
Hi!

Noah Lavine  skribis:

> Yes, I agree with everything you said here. I'm torn, because I think that
> in general having more portable Scheme code is good for everyone, and the
> RnRS standards are the best way to do that, so maybe we should just accept
> that the most recent 1 or 2 standards will always be loaded. But on the
> other hand, that doesn't mean that this particular module needs to use them.

I would like to be as optimistic as you are, but hey, R7RS purposefully
ignores or re-implements part of R6RS, which purposefully ignored
several SRFIs and implementations.

Conversely, Guile is a stable standard.  :-)

Ludo’.



Re: Delimited continuations to the rescue of futures

2013-01-13 Thread Nala Ginrut
On Sat, 2012-11-17 at 00:36 +0100, Ludovic Courtès wrote:
> Hello!
> 
> As was reported recently by Mark and others, ‘par-map’ would only use
> ncores - 1, because the main thread was stuck in a
> ‘wait-condition-variable’ while touching one of the futures.
> 
> The obvious fix is to write ‘par-map’ like this (as can be seen from
> Chapter 2 of Marc Feeley’s PhD thesis):
> 
>   (define (par-mapper mapper cons)
>     (lambda (proc . lists)
>       (let loop ((lists lists))
>         (match lists
>           (((heads tails ...) ...)
>            (let ((tail (future (loop tails)))
>                  (head (apply proc heads)))
>              (cons head (touch tail))))
>           (_
>            '())))))
> 
> However, our futures did not support “nested futures”.  That is, if a
> future touched another future, it would also wait on a condition
> variable until the latter completes.  Thus, the above code would only
> use one core.
> 
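
A tiny illustration of a "nested future" (a sketch, not code from the
patch): the outer future touches the inner one, which is exactly the point
where the old implementation would block a worker on a condition variable.

  (use-modules (ice-9 futures))
  (define inner (future (+ 1 2)))
  (define outer (future (* 10 (touch inner))))   ; a future touching a future
  (touch outer)                                  ; => 30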

Sorry, but this implementation doesn't seem to be stack-safe: the recursion
isn't in tail position, so a long enough list overflows the VM stack.

--cut---
scheme@(guile-user)> (par-map 1+ (iota 1))
ice-9/threads.scm:99:22: In procedure loop:
ice-9/threads.scm:99:22: Throw to key `vm-error' with args `(vm-run "VM: Stack overflow" ())'.

Entering a new prompt.  Type `,bt' for a backtrace or `,q' to continue.
--end---

PS: I do know '1+' here may not be a proper test case, but that isn't
really the point here anyway.
 


> So the fix is to support nested futures, by properly scheduling futures
> that are active, and adding those that are waiting to a wait queue.
> Those added to the wait queue have their continuation captured (yeah!),
> so that they can be later rescheduled when their “waitee” has completed.
> 
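
A toy sketch of the continuation-capturing idea using Guile's delimited
continuation primitives (this is not the wip-nested-futures code; the wait
queue is reduced to a single variable):

  (define tag (make-prompt-tag "future-scheduler"))
  (define waiting '())

  (define (run-future thunk enqueue!)
    ;; Run THUNK inside a prompt; if it aborts (i.e. it touched an
    ;; unfinished future), hand the captured continuation to ENQUEUE!.
    (call-with-prompt tag
      thunk
      (lambda (k) (enqueue! k))))

  ;; The thunk "touches an unfinished future" by aborting to the prompt.
  (run-future (lambda () (+ 1 (abort-to-prompt tag)))
              (lambda (k) (set! waiting (cons k waiting))))

  ;; Later, when the waitee completes with value 41, reschedule:
  ((car waiting) 41)   ; => 42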
> But then there’s still the problem of the main thread: it’s silly to let
> it just wait on a condition variable when it touches a future that has
> not completed yet.  So now it behaves as a worker, processing futures
> until the one it’s waiting for has completed.
> 
> Figures from my 2-core/4-thread laptop:
> 
> --8<---cut here---start->8---
> $ time ./meta/guile -c '(begin (use-modules (ice-9 threads)) (define (fibo n)
>     (if (<= n 1) n (+ (fibo (- n 1)) (fibo (- n 2))))) (pk "r" (map fibo
>     (make-list 4 30))))'
> 
> ;;; ("r" (832040 832040 832040 832040))
> 
> real    0m27.864s
> user    0m27.773s
> sys     0m0.031s
> 
> $ time ./meta/guile -c '(begin (use-modules (ice-9 threads)) (define (fibo n)
>     (if (<= n 1) n (+ (fibo (- n 1)) (fibo (- n 2))))) (pk "r" (par-map fibo
>     (make-list 4 30))))'
> 
> ;;; ("r" (832040 832040 832040 832040))
> 
> real    0m10.899s
> user    0m42.487s
> sys     0m0.051s
> --8<---cut here---end--->8---
> 
> The speedup is not optimal, but there’s room for optimization.
> 
> I’ve pushed the result in ‘wip-nested-futures’.  Please review and test!
> 
> Thanks,
> Ludo’.
> 
> 





Re: A vm for native code in guile

2013-01-13 Thread Nala Ginrut
On Wed, 2012-08-01 at 22:59 +0200, Stefan Israelsson Tampe wrote:
> Hi,
> 
> The byte-code -> native-code compiler does serve my needs pretty well
> now. It should really soon be possible to add code that will auto-compile
> bytecode versions to native versions. The compiler is not perfect and some
> instructions are missing, but it can go from VM->NATIVE->VM and so on, so
> whenever there is a missing instruction the compiler can bail out to VM
> code. What's left is to be able to go from VM to native returning multiple
> values, and in all call positions.
> 
> To note
> 
> * the code is for x86-64, linux.
> 
> * Windows has another calling convention => the assembler has to be recoded
>   => we need compilers for all interesting combinations of operating
>   systems and native targets
> 
> * Using the C stack is nice because the native push and pop instructions
>   can be used, brk-ing makes for automatic stack growth(?), and calling
>   out to C functions can be fast.  On the other hand, stack traces are
>   defunct with this code and I'm uncertain how prompting will cope with
>   this feature.  It's probably better to use a separate stack for the
>   native code and model it like the wip-rtl stack.  On the other hand, it
>   has been convenient to use the C stack to save variables before calling
>   out to helper C functions; but these helpers are usually on the slow
>   path, and the saving can be done using helper registers local to the VM,
>   a little bit slower but doable.  Not sure which path to take here.
> 
> * Writing assembler is really tough, because debugging is really difficult.
> 

IMO, we don't have to write an assembler again, since GNU Binutils already
does that.  The only necessary work is to map bytecode->asm, and to add an
AOT option to 'guild' with a script for calling Binutils.
We may also borrow some work from GCC.  I don't know how easy it is, but
GCC uses a Lisp-like language for its machine descriptions.  Though it
could be interesting, it's a lot of work to do.  It could then support
many platforms rather than just x86.
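
A very rough sketch of the kind of glue being suggested (not code from the
thread; the bytecode->assembly translation itself is elided, and the
generated text is simply passed in):

  (define (assemble-to-object basename asm-text)
    ;; Write the textual assembly, then hand it to the Binutils assembler.
    (let ((asm-file (string-append basename ".s"))
          (obj-file (string-append basename ".o")))
      (call-with-output-file asm-file
        (lambda (port) (display asm-text port)))
      ;; 'as' is the GNU Binutils assembler; system* returns its exit status.
      (system* "as" "-o" obj-file asm-file)
      obj-file))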

> * To ease things I compiled C code and examined the assembler => fragile
> and difficult to port the
>   code. The final version needs to put more effort into probing for
> constants used in the generated
>   assembler.
> 
> * x86 code is pretty different because of the low number of registers and
>   the registers that are invariant across C calls
> 
> * prompts and aborts are tricky instructions!
> 
> Example:
> As an example, reducing a list of 1000 elements with a function that is
> basically '+' showed a 4x increase in performance when compiling to
> native code.  These are typical figures for the speed improvement one can
> expect.  Smarter usage of registers and less popping and pushing (RTL)
> could mean that we can increase the speedup over stable-2.0 even further.
> 
> 
> I will next week start working on the RTL branch, porting the current
> setup but using the RTL stack instead of the native C stack.
> 
> Regards
> /Stefan