Re: Thread+GC issues on ARM

2012-01-08 Thread Neil Jerram
Rob Browning  writes:

> Neil Jerram  writes:
>
>> So, just to be clear, the sequence of events for libgc is
>>
>> - start from 9448012a
>> - apply 0001-Debian-7.1-8.patch
>> - apply 0001-Tweaks-for-successful-dpkg-buildpackage-using-libgc-.patch
>
> So do I understand correctly that in order for this to work, we'll first
> need an updated libgc in Debian unstable?

Yes.

> If so, I'll probably hold off on the guile patch for now.

Sure.  I'm happy that it's with me, for the moment, to propose something
coherent to the libgc maintainer.

  Neil



Re: SCM_ASSERT_TYPE

2012-01-08 Thread Andy Wingo
On Sun 08 Jan 2012 01:52, l...@gnu.org (Ludovic Courtès) writes:

>> On Sat 08 Oct 2011 18:04, l...@gnu.org (Ludovic Courtès) writes:
>>
>>> And what about the seldom-used SCM_ASRTGO?
>
> I think ideally everything should be either documented or deprecated
> (with a documented replacement).

I have now properly deprecated it in stable-2.0.  Is that OK? :-)

Cheers,

Andy
-- 
http://wingolog.org/



Re: Syntax Parameters documentation for guile

2012-01-08 Thread Andy Wingo
On Sun 08 Jan 2012 03:39, Ian Price  writes:

> From b7d764179d5546698617993e5a648d6c1393b5c0 Mon Sep 17 00:00:00 2001
> From: Ian Price 
> Date: Sat, 7 Jan 2012 01:59:33 +
> Subject: [PATCH] document syntax parameters
>
> * doc/ref/api-macros.texi (Macros): Add subsection for "Syntax Parameters"

Applied, thanks!

I committed some additional edits.  I wanted to point them out to you,
as you might be interested in the tao of texinfo:

> +definition within the dynamic extent of a macro expansion. It provides

We use two spaces after periods in the Guile manual.

> +a convenient solution to one of the most common types of unhygienic
> +macro: those that introduce a unhygienic binding each time the macro
> +is used. Examples include a @code{lambda} form with a @code{return} keyword, or
> +class macros that introduce a special @code{self} binding.

And, we try to keep things wrapped to 72 columns.  M-q in emacs will
refill a paragraph for you.

> +With syntax parameters, instead of introducing the binding
> +unhygienically each time, we instead create one binding for the
> +keyword, which we can then adjust later when we want the keyword to
> +have a different meaning. As no new bindings are introduced, hygiene
> +is preserved. This is similar to the dynamic binding mechanisms we
> +have at run-time like @ref{SRFI-39, parameters} or
> +@ref{Fluids and Dynamic States, fluids}, except that the dynamic

@ref in mid-sentence doesn't look very nice in print output:

  http://www.gnu.org/savannah-checkouts/gnu/texinfo/manual/texinfo/html_node/ref.html#ref

I changed this to be (@pxref{...}).

> +  ;; in the body we adjust the 'return' keyword so that calls
> +  ;; to 'return' are replaced with calls to the escape continuation

Unless it's an inline comment, it's best to make whole sentences.

Lovely example, by the way.  Thanks for the docs!
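
The flavour of that example, roughly (a sketch, not the committed text;
helper names are illustrative):

  ;; `return' is a syntax parameter whose default meaning is an error;
  ;; `lambda^' adjusts it, hygienically, to call the escape continuation
  ;; captured on entry to the procedure.
  (define-syntax-parameter return
    (lambda (stx)
      (syntax-violation 'return "return used outside of a lambda^" stx)))

  (define-syntax-rule (lambda^ formals body ...)
    (lambda formals
      (call-with-current-continuation
       (lambda (escape)
         (syntax-parameterize ((return (syntax-rules ()
                                         ((_ val (... ...))
                                          (escape val (... ...))))))
           body ...)))))

  ;; Early exit, with no unhygienic binding introduced:
  (define find-first-odd
    (lambda^ (lst)
      (for-each (lambda (x) (if (odd? x) (return x))) lst)
      #f))

  (find-first-odd '(2 4 5 6))   ; => 5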

Andy
-- 
http://wingolog.org/



Re: Compiler Branch

2012-01-08 Thread Andy Wingo
Hi Noah :)

On Sun 08 Jan 2012 01:42, Noah Lavine  writes:

> The function called 'go' runs everything. You give it a Scheme
> expression. It compiles that expression to tree-il, then converts it
> to annotated-tree-il. Then it scans the annotated-tree-il looking for
> instances of the special function 'verify'.
>
> The idea of 'verify' is that if the analyzer can't prove that things
> in a verify always evaluate to true, it will throw a warning. It's a
> simple way to mark up your code with your expectations. For instance,

Interesting.  `verify' seems to be a form of contracts:

  http://ftp.ccs.northeastern.edu/scheme/pubs/icfp2002-ff.pdf

Does `verify' have runtime semantics?  Under what situations, if any,
would the compiler insert runtime checks?
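
For concreteness, a hypothetical sketch of the kind of annotation you
describe (names and behaviour are illustrative, not your actual code;
`verify' is stubbed as a no-op here, since it has no runtime semantics):

  (define-syntax-rule (verify expr ...)
    (if #f #f))                      ; no runtime effect; analyzer-only

  (define (safe-div x y)
    (verify (not (zero? y)))         ; warn unless this is provably true
    (/ x y))

  (define (f n)
    (if (> n 0)
        (safe-div 100 n)             ; provable here: n > 0
        0))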

As that paper indicates, two issues you will have to deal with are
higher-order functions and blame.

Your interest in static analysis naturally raises the question of types.
You might like this paper:

  http://www.ccs.neu.edu/racket/pubs/dls06-tf.pdf

I'm glad to hear of your interest in the problem, it's a good one.

>> What do you think about the tree-il differences in master relative to
>> stable-2.0?
>
> I don't know what the differences are, since I just base all of the
> work on master.

Ah, I was just curious.  I made some small changes relative to
stable-2.0 (primcall and seq), and wondered if they were a good idea or
not.

I was also considering a move to a CPS-based intermediate language.
Some links are here:

  http://wingolog.org/archives/2011/07/12/static-single-assignment-for-functional-programmers

>> Do you see this work as an optional pass, or a core part of the
>> compiler? If the latter, what sort of algorithmic complexity are you
>> envisioning for this work?  (O(n) in size of program is ideal of
>> course.)
>
> My first idea was to implement something equivalent to 0-CFA, which
> unfortunately has complexity O(n^3). If there's something that's
> faster and still produces useful results, that could be a good first
> step. However, I also think we could get the average-case time far
> below n^3 by doing inference on demand instead of calculating the type
> of every binding, similar to the change that peval went through a
> couple months ago.

Yes, this is my thought as well.  Note also that peval is described by
Waddell and Dybvig as being a kind of special-purpose sub-0CFA.

> I think the complexity really determines how it could be used in the
> compiler. Ideally it would be very fast, and it could work as an
> extension to peval. If it's slower, it could only be used if the user
> requested that the compiler do more work. Either way, I'd like to see
> it help generate native code, and ideally native binaries.

Yes, that would be great.

> Message 2:
>
>> This sounds cool.  I assume you're familiar with kCFA?  See
>> http://matt.might.net/articles/implementation-of-kcfa-and-0cfa/, for
>> example.
>
> No, I hadn't read about it before. Thank you very much for the
> pointer! I admit that I am new to writing real compilers, so pointers
> to papers are great.

I'm still new to them too, so consider it a joint learning process :-)
Note that the kCFA algorithms, though proven, are not the last word; see
for example CFA2, http://arxiv.org/pdf/1102.3676.  Dimitris Vardoulakis
applied CFA2 to JavaScript last summer, in work at Mozilla.

>> It doesn't seem to me that static analysis is a prerequisite for AOT
>> compilation -- and indeed, the current .go compilation is an example of
>> naive AOT compilation.
>
> Yes, excellent point. I was blurring two ideas together. I would
> eventually like this work to lead to an optimizing native-code
> compiler, so I am planning ahead for that.

Great.

Happy hacking,

Andy
-- 
http://wingolog.org/



add-relative-load-path ?

2012-01-08 Thread Andy Wingo
Hi all,

In the following thread:

  http://thread.gmane.org/gmane.lisp.guile.user/8298/focus=8403

there was a concern that it's difficult to set up the load path for
simple one-off scripts.

I had a proposal that we add something like this:

   (define-syntax add-relative-load-path
     (lambda (x)
       (syntax-case x ()
         ((_ path) (string? (syntax->datum #'path))
          (let* ((src (syntax-source x))
                 (current-file (or (and src (assq-ref src 'filename))
                                   (error "Could not determine current file name")))
                 (vicinity (dirname (canonicalize-path current-file)))
                 (path-elt (in-vicinity vicinity (syntax->datum #'path))))
            #`(eval-when (compile load eval)
                (set! %load-path (cons #,path-elt %load-path))))))))

Then in your script you would (add-relative-load-path ".").

Maybe we need an `add-to-load-path' form that handles the eval-when,
actually, so it would be

  (add-to-load-path (dirname (current-source-filename)))

or something like that.  (We'd have to define current-source-filename as
well, in terms of current-source-location.)
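
A rough sketch of what those two forms might look like (the names follow
the proposal above; none of this is committed code):

  (define-syntax-rule (add-to-load-path exp)
    (eval-when (compile load eval)
      (set! %load-path (cons exp %load-path))))

  (define-syntax current-source-filename
    (lambda (x)
      (syntax-case x ()
        ((_)
         (let ((src (syntax-source x)))
           (and src (assq-ref src 'filename)))))))

  ;; so that a one-off script can simply say:
  ;;   (add-to-load-path (dirname (current-source-filename)))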

What do folks think?  Is it worth it?

Andy
-- 
http://wingolog.org/



Re: [PATCH] Add "scandir" procedure

2012-01-08 Thread Ludovic Courtès
Hello, friends of scandir!  :-)

Commit be96155b508d220efe6f419d7743cf39744ee47c adds an ‘error’
parameter to ‘file-system-fold’ so that ‘opendir’ and ‘stat’ errors can
be handled gracefully by the caller (instead of having to guess in the
‘skip’ procedure whether an error occurred.)
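
For illustration, a caller could now do something like this (a sketch;
the argument conventions are my reading of the new code, so treat the
details as approximate):

  (use-modules (ice-9 ftw))

  ;; Collect regular files under ROOT, recording unreadable directories
  ;; instead of aborting the traversal.
  (define (files-and-errors root)
    (file-system-fold
     (lambda (name stat result) #t)                   ; enter?: descend everywhere
     (lambda (name stat result) (cons name result))   ; leaf: accumulate file names
     (lambda (name stat result) result)               ; down
     (lambda (name stat result) result)               ; up
     (lambda (name stat result) result)               ; skip
     (lambda (name stat errno result)                 ; error: note it and keep going
       (cons (list 'error name (strerror errno)) result))
     '()
     root))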

Comments welcome!

Ludo’.




Re: SCM_ASSERT_TYPE

2012-01-08 Thread Ludovic Courtès
Andy Wingo  skribis:

> On Sun 08 Jan 2012 01:52, l...@gnu.org (Ludovic Courtès) writes:
>
>>> On Sat 08 Oct 2011 18:04, l...@gnu.org (Ludovic Courtès) writes:
>>>
>>>> And what about the seldom-used SCM_ASRTGO?
>
>> I think ideally everything should be either documented or deprecated
>> (with a documented replacement).
>
> I have now properly deprecated it in stable-2.0.  Is that OK? :-)

Yes, thanks!

Ludo’.



Re: (define-module (foo) #:import (...)), a la r6rs

2012-01-08 Thread Ludovic Courtès
Hello!

Andy Wingo  skribis:

> No argument there!  But I rarely use it.  Even #:select is a bit of a
> PITA to use:
>
>   #:use-module ((a) #:select (b c d))
>   #:use-module ((e) #:renamer (symbol-prefix-proc 'p:))
>
> vs
>
>   (import (only (a) b c d)
>   (prefix (e) p:))

Sounds a bit like kill/yank vs. copy/paste.  ;-)

I’m happy with the current form and have a harder time parsing ‘import’,
but I can understand some may prefer ‘import’, in particular anyone who
comes to Guile with an R6RS background.

So if you think it's this worthwhile, go for it!

Thanks,
Ludo’.



Re: (define-module (foo) #:import (...)), a la r6rs

2012-01-08 Thread Andy Wingo
On Sun 08 Jan 2012 17:28, l...@gnu.org (Ludovic Courtès) writes:

> Andy Wingo  skribis:
>
>>   #:use-module ((a) #:select (b c d))
>>   #:use-module ((e) #:renamer (symbol-prefix-proc 'p:))
>>
>> vs
>>
>>   (import (only (a) b c d)
>>   (prefix (e) p:))
>
> Sounds a bit like kill/yank vs. copy/paste.  ;-)

Heh, indeed ;-)

> I’m happy with the current form and have a harder time parsing ‘import’,
> but I can understand some may prefer ‘import’, in particular anyone who
> comes to Guile with an R6RS background.
>
> So if you think it's this worthwhile, go for it!

OK, thanks for the feedback.  Next time I get bothered by this I'll
propose a concrete patch.

Cheers,

Andy
-- 
http://wingolog.org/



Re: syntax-local-value patch for discussion

2012-01-08 Thread Stefan Israelsson Tampe
On Sat, Jan 7, 2012 at 1:05 AM, Andy Wingo  wrote:

> On Mon 05 Dec 2011 19:12, Stefan Israelsson Tampe  writes:
>
> > (define-syntax info
> >   (lambda (x)
> >     (syntax-case x ()
> >       ((_ x)
> >        (pk (syntax-binding-info (syntax->datum #'x)))
> >        #'#f))))
>
> I agree with Ian that we should be operating on syntax objects here, not
> on datums.  Also, what should the type of the return value be?  Racket
> implies that it should be a procedure, no?
>

syntax-binding-info just returns the binding information that psyntax
uses when it looks up an identifier.  It would be better if it returned
a lookup procedure that, applied to a syntax object, gives the looked-up
information (assuming psyntax is not mutating).  Using that primitive we
could design a syntax-local-value along the lines of Racket's:

e.g.
(syntax-local-value id-stx [failure-thunk intdef-ctx]) -> any
  id-stx: syntax?
  failure-thunk : (or/c (-> any) #f)  = #f
  intdef-ctx: (or/c internal-definition-context? #f) = #f

This should return the value of the macro binding, which can be
anything; if the identifier is not bound to a macro, it should call
failure-thunk, or return #f if none was supplied.  In our case,
intdef-ctx could be the lookup procedure returned by a call to
syntax-binding-info, in order to mimic the Racket logic.  Note that in
syntax-parse, syntax classes are defined as macros whose value is a
struct rather than a transformer.  Note also that Racket has special
renamer macros that interact with this function: the syntax-local-value
of a renamer macro automatically follows the renamer and calls this
function again on its target.  Renamer macros are needed to keep
syntax-parse clean: syntax classes get long, descriptive names, and you
are expected to rename them when actually using syntax-parse.
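
For illustration, the usage pattern being mimicked would be roughly the
following (hypothetical: `foo-info' and `describe' are made-up names, and
syntax-local-value does not exist in Guile yet):

  (define-syntax foo-info
    (list 'record-type 3))            ; compile-time value, not a transformer

  (define-syntax describe
    (lambda (stx)
      (syntax-case stx ()
        ((_ id)
         (with-syntax ((v (datum->syntax
                           #'id (syntax-local-value #'id (lambda () #f)))))
           #''v)))))

  (describe foo-info)                 ; => (record-type 3)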



> > and calling this in an example lead to
> > (let-syntax ((a (lambda (x) #'#f)))
> >   (info a))
>
> Does it work on toplevel macros as well?
>
>
Yes!

/Stefan


Re: syntax-local-value patch for discussion

2012-01-08 Thread Mark H Weaver
Hi Stefan,

Stefan Israelsson Tampe  writes:
> diff --git a/module/ice-9/psyntax.scm b/module/ice-9/psyntax.scm
> index e522f54..70463a5 100644
> --- a/module/ice-9/psyntax.scm
> +++ b/module/ice-9/psyntax.scm
> @@ -155,6 +155,10 @@
>  (eval-when (compile)
>    (set-current-module (resolve-module '(guile))))
>  
> +(define *macro-lookup* (make-fluid))
> +(fluid-set! *macro-lookup*
> +  (lambda x (error "not in a macro evaluation context")))
> +
>  (let ()
>    (define-syntax define-expansion-constructors
>      (lambda (x)
> @@ -1304,8 +1308,12 @@
>              (syntax-violation #f "encountered raw symbol in macro output"
>                                (source-wrap e w (wrap-subst w) mod) x))
>             (else (decorate-source x s)))))
> -    (rebuild-macro-output (p (source-wrap e (anti-mark w) s mod))
> -                          (new-mark))))
> +    (with-fluids ((*macro-lookup*
> +                   (lambda (e) (lookup (id-var-name e w)
> +                                       r mod))))
> +
> +      (rebuild-macro-output (p (source-wrap e (anti-mark w) s mod))
> +                            (new-mark)))))
>  
>  (define expand-body
>    ;; In processing the forms of the body, we create a new, empty wrap.

This doesn't look quite right to me.

At this one point only, where a macro is expanded, you capture the
lexical environment (r w mod) in your fluid.  This is the lexical
environment that you use to lookup plain symbols later passed to
`syntax-binding-info'.

Will this approach will be robust in the general case?  For example,
what happens if you use a macro in one module to generate a macro in
another module that uses syntax-binding-info on a syntax object that
came from yet another module?

A few suggestions:

First, as others have pointed out, you should be passing syntax-objects
to `syntax-binding-info' instead of plain symbols.  This one change
alone will make this code far robust, because syntax-objects include
their own wrap and module.

Second, in your call to `lookup', you should pass the module that came
from the syntax-object, instead of the module captured from the most
recent macro expansion.  Please take a look at how psyntax's internal
procedure `syntax-type' looks up syntax-objects (compared with how it
looks up plain symbols).  I think you should emulate that logic.

Third, are you sure that the `r' captured from the most recent macro
expansion will be recent enough in all cases to include the binding
that's being queried?  `r' is extended in quite a few places in psyntax,
for various different binding constructs.

  Best,
   Mark



Re: Continuation sets and order-independency

2012-01-08 Thread David Kastrup
Noah Lavine  writes:

> Okay, let me see if this is right now.
>
> In the expression
>
>   (list (call-with-current-continuation func) (+ 4 14)),
>
> you want the addition to be done before the
> call-with-current-continuation, as opposed to being part of the
> continuation.
>
> Right?

This is part of it, yes.  If you think of "(list expr1 expr2 ...)" as

  (call-with-values (lambda () (parallel expr1 expr2 ...)) list)

and each of the expr might suspend its thread (and record the thread
somewhere for waking up), then you have parallel execution that
continues until all paths have reached completion or suspension.

Now instead of parallel execution, it would be ok to have the
serialization of call/cc but still continue in all paths to completion
or suspension.

Of course, a single call/cc just stores away a single continuation, and
whatever continuation while evaluating "list" was reached, one can't
prod Scheme into taking up another branch without letting that
continuation continue, so obviously something is missing in this
picture.

-- 
David Kastrup



Re: [PATCH] Implement local-eval, local-compile, and the-environment

2012-01-08 Thread Mark H Weaver
Hi Andy,

Andy Wingo  writes:
>> We could change that, but I'm reluctant to make the evaluator any
>> slower than it already is.
>
> Using variable objects has the possibility to make the evaluator faster,
> actually, if at the same time we make closures capture only the set of
> free variables that they need, instead of the whole environment.  That
> way free variable lookup would be something like (vector-ref
> free-variables k) instead of cdring down the whole environment chain.

True, but wouldn't this require an analysis pass similar to
`analyze-lexicals'?  Do we want to make our evaluator that complex?
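
Roughly, the contrast in question, as schematic code (not the
evaluator's actual data structures):

  ;; Environment-chain style: the closure captures the whole chain, and
  ;; free-variable lookup walks it, frame by frame.
  (define (lookup-chain sym env)
    (cond ((null? env) (error "unbound" sym))
          ((assq sym (car env)) => cdr)
          (else (lookup-chain sym (cdr env)))))

  ;; Flat-closure style: the closure captures only the variables it
  ;; actually uses, and the compiler knows each one's slot, so lookup is
  ;; a single indexed reference.
  (define (lookup-flat free-variables k)
    (vector-ref free-variables k))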

>> More importantly, is there any guarantee that mutable lexicals will
>> continue to be represented as variable objects in future native code
>> compilers?  Do we want to commit to supporting this uniform
>> representation in all future compilers?
>
> I don't know that we should commit to it externally, but internally it's
> OK.  If we did have to commit to it externally even that would be OK, as
> I don't think it will change.

You may be right, but committing to a uniform representation makes me
very uncomfortable.  I can imagine several clever ways to represent
mutable free variables in a native compiler that don't involve separate
variable objects for each variable.

The desire to support a uniform representation has already led to a
proposal to make the evaluator far more complex, in order to work more
like our current compiler.  I take that as a warning that this strategy
is too tightly coupled to a particular implementation.

>>> What's the purpose of the (if #t e) ?
>>
>> That's to force expression context.  There's no proper way to add new
>> definitions to an existing local environment anyway.  (the-environment)
>> is treated like an expression, thus terminating definition context.
>> Therefore, the form passed to `local-eval' should be constrained to be
>> an expression.
>>
>> BTW, I think I want to change (if #t e) to: #f e.  That should require a
>> less complicated analyzer to optimize away.
>>
>> Is there a better way to force expression context?
>
> I guess it's not clear to me why you would want to force expression
> context.

If we allow definitions, then your nice equivalence

   <expr> == (local-eval '<expr> (the-environment))

no longer holds.  Also, the user cannot use the simple mental model of
imagining that <expr> had been put in place of (the-environment).

For example:

  (let ((x 1))
    (define (get-x) x)
    (begin
      (define x 2)
      (get-x)))
  => 2

is _not_ equivalent to:

  (let ((x 1))
    (define (get-x) x)
    (local-eval '(begin
                   (define x 2)
                   (get-x))
                (the-environment)))
  => 1

The only way I see to achieve your equivalence is to constrain <expr> to
be an expression.

>>>> +(global-extend 'core 'the-environment
>>>
>>> This one is really nasty, and I'd like to avoid it if possible.  Are
>>> there some short primitives that psyntax could export that would make it
>>> possible to implement `the-environment' in a module?

I dunno.  I still don't think it's possible to make this code much
simpler, although I _did_ try to make the code easier to read (though
less efficient) in the revised patch below.

I suspect the best that can be hoped for is to move some more of this
code from psyntax to an external module.  I'm not sure why that's
inherently desirable, but more importantly, that strategy carries with
it a significant price: it means exposing other far less elegant
primitives that are specific to our current implementation strategy.

I would proceed very cautiously here.  Even if we don't advertise a
primitive as stable, users are bound to make use of it, and then they'll
put pressure on us to keep supporting it.

`the-environment' and `local-eval' have simple and clean semantics, and
present an abstract interface that could be reimplemented later in many
different ways.  I'm comfortable exposing them.  I cannot say the same
about the other lower-level primitives under discussion.
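
For reference, the interface in question is just this (per the patch;
the example itself is mine):

  (define env
    (let ((x 10) (y 20))
      (the-environment)))         ; capture the local lexical environment

  (local-eval '(+ x y) env)       ; => 30
  (local-eval '(* x y) env)       ; => 200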

>> * The list of ordinary variables (these need to be boxed)
>> * The list of simulated variables (we need to reuse the original box)
>
> A special form to get all visible variables, and syntax-local-value plus
> a weak hash to do the optimization?

We could do it that way, but that strategy would not extend nicely to a
more complete implementation, where local syntactic keywords are
captured.

>> * The list of others, i.e. unsupported lexical bindings
>
> In what case do you get unsupported lexical bindings?

Currently, this category includes pattern variables bound by
syntax-case, and locally-bound syntactic keywords, other than the
specially-marked ones bound by restore-environment (formerly called
box-lambda*).

I have attached a revised patch with the following changes:

* tabs => spaces

* Completely reworked the implementation of `the-environment' in
  psyntax, to hopefully be easier to read and understand, at the cost of
  some efficiency.

Re: Compiler Branch

2012-01-08 Thread Noah Lavine
Hello,

> Interesting.  `verify' seems to be a form of contracts:
>
>  http://ftp.ccs.northeastern.edu/scheme/pubs/icfp2002-ff.pdf
>
> Does `verify' have runtime semantics?  Under what situations, if any,
> would the compiler insert runtime checks?

It has no runtime semantics right now. I considered making it like
'assert', but I'm not sure that's right. I will look at that paper.

> As that paper indicates, two issues you will have to deal with are
> higher-order functions and blame.
>
> Your interest in static analysis naturally raises the question of types.
> You might like this paper:
>
>  http://www.ccs.neu.edu/racket/pubs/dls06-tf.pdf

I will look at that too; thank you.

> Ah, I was just curious.  I made some small changes relative to
> stable-2.0 (primcall and seq), and wondered if they were a good idea or
> not.
>
> I was also considering a move to a CPS-based intermediate language.
> Some links are here:
>
>  http://wingolog.org/archives/2011/07/12/static-single-assignment-for-functional-programmers

Oh, this is interesting. I was just wondering if I needed a CPS-type
representation to write the analyzer reasonably elegantly. If you
think the main compiler also needs it, then perhaps I should work on
that first, and then come back to the analyzer question.

I do think there's a problem with plain CPS, though - it forces you to
pick an order for the evaluation of function arguments. I would like
to use CPS with some sort of parallel-call operator, so we can leave
the order undefined (maybe at some point an optimizer will want to
adjust the order). What do you think?
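
To illustrate in plain Scheme (not Guile's IL) how plain CPS fixes the
order: converting (f (g) (h)) nests one call inside the other's
continuation, so (g) necessarily runs before (h).

  (define (g-cps k) (k 1))
  (define (h-cps k) (k 2))
  (define (f-cps x y k) (k (+ x y)))

  ;; (f (g) (h)) in CPS: the nesting itself runs g before h.
  (define (call-f k)
    (g-cps (lambda (x)
             (h-cps (lambda (y)
                      (f-cps x y k))))))

  (call-f (lambda (v) v))   ; => 3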

I also noticed that at the end of that blog post you said you were
considering ANF versus CPS for Guile (I assume you'd already decided
that you didn't like Tree-IL). Does this mean you decided on CPS?

>> My first idea was to implement something equivalent to 0-CFA, which
>> unfortunately has complexity O(n^3). If there's something that's
>> faster and still produces useful results, that could be a good first
>> step. However, I also think we could get the average-case time far
>> below n^3 by doing inference on demand instead of calculating the type
>> of every binding, similar to the change that peval went through a
>> couple months ago.
>
> Yes, this is my thought as well.  Note also that peval is described by
> waddell and dybvig as being a kind of special-purpose sub-0CFA.

That makes sense. What I'd *really* like to do is make the analyzer
use the same on-demand-calculation infrastructure as peval, but it
might be really tricky to make them fit together. I am planning to
leave that project for much later.

Noah



Re: syntax-local-value patch for discussion

2012-01-08 Thread Stefan Israelsson Tampe
Thanks for taking time to help with this!

On Sun, Jan 8, 2012 at 8:15 PM, Mark H Weaver  wrote:

> Hi Stefan,
>
> Stefan Israelsson Tampe  writes:
> > diff --git a/module/ice-9/psyntax.scm b/module/ice-9/psyntax.scm
> > index e522f54..70463a5 100644
> > --- a/module/ice-9/psyntax.scm
> > +++ b/module/ice-9/psyntax.scm
> > @@ -155,6 +155,10 @@
> >  (eval-when (compile)
> >(set-current-module (resolve-module '(guile
> >
> > +(define *macro-lookup* (make-fluid))
> > +(fluid-set! *macro-lookup*
> > +(lambda x (error "not in a macro evaluation context")))
> > +
> >  (let ()
> >(define-syntax define-expansion-constructors
> >  (lambda (x)
> > @@ -1304,8 +1308,12 @@
> > (syntax-violation #f "encountered raw symbol in
> macro output"
> >   (source-wrap e w (wrap-subst w)
> mod) x))
> >(else (decorate-source x s)
> > -(rebuild-macro-output (p (source-wrap e (anti-mark w) s mod))
> > -  (new-mark
> > +(with-fluids ((*macro-lookup*
> > +   (lambda (e) (lookup (id-var-name e w)
> > +   r mod
> > +
> > +  (rebuild-macro-output (p (source-wrap e (anti-mark w) s mod))
> > +(new-mark)
> >
> >  (define expand-body
> >;; In processing the forms of the body, we create a new, empty
> wrap.
>
> This doesn't look quite right to me.
>
> At this one point only, where a macro is expanded, you capture the
> lexical environment (r w mod) in your fluid.  This is the lexical
> environment that you use to lookup plain symbols later passed to
> `syntax-binding-info'.
>

I first thought that the semantics meant getting the macro binding for a
symbol in the stored lexical environment (according to the Racket doc,
in a supplied context or else in the context of the expansion, hence the
use of the fluid).  But reading your mail, the context mentioned in the
doc is probably related to syntax parameters.  So I will try to follow
your suggestions to refine the approach.

/Stefan