On 03/04/2018 09:40 PM, Alexis King wrote:
> [... context ...]
>
> Still, with all this context out of the way, my questions are
> comparatively short:
>
> 1. Is this lack of hygiene well-known? I did not find anything in
>    Ryan’s dissertation that explicitly dealt with the question, but I
>    did not look very hard, and even if it isn’t explicitly mentioned
>    there, I imagine people have thought about it before.
>
> 2. Are there some fundamental, theoretical obstacles to making a
>    syntax class-like thing hygienic that I have not foreseen? Or would
>    it really be as simple as performing the usual scope-flipping that
>    macroexpansion already performs?
>
> 3. If it is possible, is the unhygienic nature of syntax classes
>    desirable frequently enough that it outweighs the benefits of
>    respecting hygiene? That seems unlikely to me, but maybe I have not
>    fully considered the problem. The semantics of syntax classes
>    cannot be changed now, of course, for backwards compatibility
>    reasons, but were that not a problem, would it make sense to make
>    them hygienic? If not, why not?

1. Yes. To me, at least :) That aspect of hygiene is triggered by a
macro expansion step, and the macro expansion step also defines the
boundary of its effect. In other words, the expansion of a macro
introduces a scope, but syntax classes do not. Compare with the
following examples:
(define x 'old)

(begin-for-syntax
  (define (get-def)
    #'(define x 'new))
  (define (get-use)
    #'x))

(define-syntax (m1 stx)
  #`(begin #,(get-def) #,(get-use)))

(m1) ;; => 'new

(define-syntax (m2 stx)
  (let ([expr #'x])
    #`(let ([x 'new]) #,expr)))

(m2) ;; => 'new
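
Here is roughly the same situation phrased with a syntax class instead
of helper functions (an untested sketch; def-and-use and m3 are made-up
names, and it assumes (require (for-syntax racket/base syntax/parse))).
The syntax class's templates only ever get the scope added by m3's
expansion, never one of their own:

(begin-for-syntax
  (define-syntax-class def-and-use
    (pattern _
             #:with def #'(define x 'new)
             #:with use #'x)))

(define-syntax (m3 stx)
  (syntax-parse stx
    [(_ d:def-and-use)
     ;; both templates came from the syntax class, so they share the
     ;; same scopes and the definition captures the use
     #'(begin d.def d.use)]))

(m3 0) ;; => 'new, just like m1, if I have the details right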
Contrast with systems like MetaML, which strictly enforce lexical
scoping but don't actually give you macros.
2. I think the main technical challenge is finding all of the syntax
objects to flip the scope on, given that syntax classes allow attributes
to have arbitrary values (like opaque structures and closures), not just
syntax objects. We have a similar problem with syntax properties, which
are invisible to the hygiene algorithm.
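For example (a contrived sketch with made-up names, same assumed
requires as the earlier sketch), nothing stops an attribute from being
a closure that merely captures a syntax object, so a scope-flipping
pass would have no generic way to reach the #'hidden-id inside it:

(begin-for-syntax
  (define-syntax-class opaque-attr
    (pattern _
             ;; the attribute's value is an ordinary procedure, not a
             ;; syntax object; the syntax it closes over is invisible
             ;; to any traversal over attribute values
             #:attr payload (lambda () #'hidden-id))))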
It might be easier in a macro system like Andre van Tonder's, as
Matthew and Sam mentioned.
3. Maybe. Half-baked musings follow:
There are two good ideas in opposition here. One is hygiene. The other
is the availability of pure abstractions.
As a new grad student, I spent some time playing with plain R5RS-style
syntax-rules macros. My first presentation in grad school was on an
EOPL-style interpreter I wrote using nothing but syntax-rules macros. It
did integer arithmetic (using a unary encoding of integers as
parentheses), it did closures, it did continuations... all at compile
time. But I discovered that there were things I couldn't express the way
I wanted because the only available abstraction mechanism (the macro
definition) was tangled up with the hygiene effect. I don't remember the
exact situations, but they had the following form: I needed
macro-generated-macro X to produce exactly identifier Y, but it could
only produce Y with a mark. Of course, most of these "macros" were
really implementation fragments of something else; I was forced to break
them out into separate pieces because of the limitations of the language
I chose to work in.
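A reconstruction of that shape from memory (not the actual code, so
take the details with a grain of salt):

(define-syntax make-definer
  (syntax-rules ()
    [(_ definer)
     (define-syntax definer
       (syntax-rules ()
         ;; I want this to define exactly the identifier `counter`...
         [(_) (define counter 0)]))]))

(make-definer def-counter)
(def-counter)

;; ...but the `counter` introduced above carries marks from those
;; expansions, so a plain reference here does not see it and would be
;; an unbound-identifier error:
;; counter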
Hygienic macros are impure abstractions. (Other examples of impure
abstractions: JavaScript functions, if you do reflection on the current
activation record, and Prolog predicates, which delimit the effect of a
cut.) They still win, because despite being impure at the syntax object
level, they approximate this nice lexical scoping property one level up,
the level of interpreting the syntax objects as expressions,
definitions, etc. (I say "approximate" because of the examples I gave in
part 1.) But I think the win depends on the placement of the hygiene
boundaries. My interpreter experience makes me think that too many
boundaries within the implementation of a macro can be bad.
So another framing of the question is: where should the boundaries go? [*]
The one that corresponds to a macro expansion step is nice because macro
expansion occurs at expression (or more precisely, "form") positions,
and that connects the hygiene boundary with the interpretation level
where lexical scoping is defined. Are there other "meaningful" places to
put hygiene boundaries? Do syntax classes necessarily correspond with
meaningful boundaries? Or are meaningful boundaries not actually that
important?
([*] Matthew and I had a similar problem regarding syntax certificates,
which evolved into the current dye pack and tainting system. We couldn't
find a good way to identify anchors in the source code that represented
discrete macro implementations that could be automatically protected, so
we left it to the programmer to call syntax-protect explicitly. (Or use
syntax-rules.) The problem is that "macro" is just an interpretation
imposed by the macro expander on certain identifiers bound to
compile-time procedures, just like "expression" is an interpretation
imposed on some syntax objects---but other contexts might impose
different interpretations on the same things. OTOH, maybe one could
create a macro system where meaning is more intrinsic and less contextual.)
So, those are my reservations. But the only way to really find out is to
make a prototype implementation and try it out.
Ryan