On 2/9/2019 1:33 PM, 'John Clements' via Racket Users wrote:
> On Feb 8, 2019, at 15:01, George Neuner <gneun...@comcast.net> wrote:
> >
> > The distinguishing characteristics of "nanopass" are said to be:
> >
> > (1) the intermediate-language grammars are formally specified and
> >     enforced;
> > (2) each pass needs to contain traversal code only for forms that
> >     undergo meaningful transformation; and
> > (3) the intermediate code is represented more efficiently as records
> >
> > IRs implemented using records/structs go back to the 1960s (if not
> > earlier).
> >
> > Formally specified IR grammars go back at least to Algol (1958).  I
> > concede that I am not aware of any (non-academic) compiler that
> > actually has used this approach: AFAIAA, even the Algol compilers
> > internally were ad hoc.  But the *idea* is not new.
> >
> > I can recall as a student in the late 80's reading papers about
> > language translation and compiler implementation using Prolog
> > [relevant to this in the sense of being declarative programming].  I
> > don't have cites available, but I was spending a lot of my library
> > time reading CACM and IEEE ToPL so it probably was in one of those.
> >
> > I'm not sure what #2 actually refers to.  I may be (probably am)
> > missing something, but it would seem obvious to me that one does not
> > write a whole lot of unnecessary code.


> Hmm… I think I disagree.  In particular, I think you’re missing the
> notion of a DSL that allows these intermediate languages to be
> specified much more concisely by allowing users to write, in essence,
> “this language is just like that one, except that this node is added
> and this other one is removed.”  I think it’s this feature, and its
> associated automatic-translation-of-untouched-nodes code, that makes
> it possible to consider writing a 50-pass compiler that would
> otherwise have about 50 x 10 = 500 “create a node by applying the
> transformation to the sub-elements” visitor clauses.  Right?
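
For anyone who hasn't seen the framework, here is a minimal sketch of the mechanism John describes, using the Racket "nanopass" package.  The languages and pass below (L0, L1, if->select) are invented for illustration, not taken from any real compiler:

  #lang racket
  (require nanopass/base)

  ;; terminal predicates
  (define (variable? x) (symbol? x))
  (define (constant? x) (or (number? x) (boolean? x)))

  ;; a tiny base language with a formally specified, enforced grammar
  (define-language L0
    (terminals (variable (x))
               (constant (c)))
    (Expr (e)
      x
      c
      (lambda (x) e)
      (e0 e1)
      (if e0 e1 e2)))

  ;; "just like L0, except": `if` is removed, `select` is added
  (define-language L1
    (extends L0)
    (Expr (e)
      (- (if e0 e1 e2))
      (+ (select e0 e1 e2))))

  ;; the pass contains a clause only for the form that changes;
  ;; traversal code for x, c, lambda, and application is generated
  (define-pass if->select : L0 (e) -> L1 ()
    (Expr : Expr (e) -> Expr ()
      [(if ,[e0] ,[e1] ,[e2]) `(select ,e0 ,e1 ,e2)]))

Multiply the clauses that did *not* have to be written here across dozens of passes and you get John's 50 x 10 = 500 figure; the generated traversals are also what characteristic (2) above is claiming.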

I was referring to the development of the compiler itself.  My comments were directed at the perceived problem of bloat in the "micropass" compiler infrastructure that "nanopass" is supposed to fix.  ISTM the providers of the tools must be held accountable for any bloat they cause, not the ones who use the tools.

In the context of real-world [rather than academic] development, my experience is that most DSLs are not of the form "<X>, plus this bit, minus that bit", but rather are unique languages with unique semantics that are only *perceived* to be similar to <X> because they borrow its syntax.  Most users of <X> don't really understand its semantics, and whatever <X>-like DSLs they create will share their flawed understanding.

Real-world DSLs - whether compiled or interpreted - often are grown incrementally in a "micropass"-like way.  But most developers today do not have a CS education and will not be using any kind of compiler development "framework".  Many even eschew tools like parser generators because their formal approaches are perceived to be "too complicated".  Under these circumstances, there will not be much superfluous code written [or generated] that is not either a false start to be thrown away or some kind of unit test.


To be clear, I have no problem with "nanopass" - I think it's a fine idea.  But I wonder how well it will transition into the real world once students exposed to the methodology leave school and go to work.  Will the tools be available for business use, and will they be up to the demands of real-world development?  Will they reduce development effort enough to be adopted outside academia?

YMMV (and it will),
George
