Hi Noah :)

On Sun 08 Jan 2012 01:42, Noah Lavine <noah.b.lav...@gmail.com> writes:
> The function called 'go' runs everything. You give it a Scheme
> expression. It compiles that expression to tree-il, then converts it
> to annotated-tree-il. Then it scans the annotated-tree-il looking for
> instances of the special function 'verify'.
>
> The idea of 'verify' is that if the analyzer can't prove that things
> in a verify always evaluate to true, it will throw a warning. It's a
> simple way to mark up your code with your expectations. For instance,

Interesting. `verify' seems to be a form of contracts:

  http://ftp.ccs.northeastern.edu/scheme/pubs/icfp2002-ff.pdf

Does `verify' have runtime semantics? Under what situations, if any,
would the compiler insert runtime checks? As that paper indicates, two
issues you will have to deal with are higher-order functions and blame.

Your interest in static analysis naturally raises the question of
types. You might like this paper:

  http://www.ccs.neu.edu/racket/pubs/dls06-tf.pdf

I'm glad to hear of your interest in the problem; it's a good one.

>> What do you think about the tree-il differences in master relative to
>> stable-2.0?
>
> I don't know what the differences are, since I just base all of the
> work on master.

Ah, I was just curious. I made some small changes relative to
stable-2.0 (primcall and seq), and wondered if they were a good idea or
not. I was also considering a move to a CPS-based intermediate
language. Some links are here:

  http://wingolog.org/archives/2011/07/12/static-single-assignment-for-functional-programmers

>> Do you see this work as an optional pass, or a core part of the
>> compiler? If the latter, what sort of algorithmic complexity are you
>> envisioning for this work? (O(n) in size of program is ideal of
>> course.)
>
> My first idea was to implement something equivalent to 0-CFA, which
> unfortunately has complexity O(n^3). If there's something that's
> faster and still produces useful results, that could be a good first
> step.
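(An aside on `verify': to make the contract reading concrete, here is a
minimal sketch of what a runtime fallback could look like, assuming the
simplest possible semantics -- the compiler leaves behind a check for
any expectation the analyzer cannot discharge statically. The names and
the macro itself are illustrative guesses on my part, not the actual
API from the patch.)

```scheme
;; Hypothetical sketch: `verify' as a residual runtime check.  A static
;; analyzer would try to prove EXPR true and delete the check; this is
;; only what might remain when it cannot.
(define-syntax verify
  (syntax-rules ()
    ((_ expr)
     (unless expr
       (error "verify failed:" 'expr)))))

(define (safe-div a b)
  ;; The expectation the analyzer would try to discharge statically.
  (verify (not (zero? b)))
  (/ a b))

(display (safe-div 10 2)) ;; prints 5
(newline)
```

Under that reading, the interesting design question is exactly the one
above: whether such residual checks exist at all, and who gets blamed
when one fails.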
> However, I also think we could get the average-case time far below
> n^3 by doing inference on demand instead of calculating the type of
> every binding, similar to the change that peval went through a couple
> months ago.

Yes, this is my thought as well. Note also that peval is described by
Waddell and Dybvig as being a kind of special-purpose sub-0CFA.

> I think the complexity really determines how it could be used in the
> compiler. Ideally it would be very fast, and it could work as an
> extension to peval. If it's slower, it could only be used if the user
> requested that the compiler do more work. Either way, I'd like to see
> it help generate native code, and ideally native binaries.

Yes, that would be great.

> Message 2:
>
>> This sounds cool. I assume you're familiar with kCFA? See
>> http://matt.might.net/articles/implementation-of-kcfa-and-0cfa/, for
>> example.
>
> No, I hadn't read about it before. Thank you very much for the
> pointer! I admit that I am new to writing real compilers, so pointers
> to papers are great.

I'm still new to them too, so consider it a joint learning process :-)
Note that the kCFA algorithms, though proven, are not the last word;
see for example CFA2:

  http://arxiv.org/pdf/1102.3676

Dimitris Vardoulakis applied CFA2 to JavaScript last summer, in work at
Mozilla.

>> It doesn't seem to me that static analysis is a prerequisite for AOT
>> compilation -- and indeed, the current .go compilation is an example
>> of naive AOT compilation.
>
> Yes, excellent point. I was blurring two ideas together. I would
> eventually like this work to lead to an optimizing native-code
> compiler, so I am planning ahead for that.

Great. Happy hacking,

Andy
-- 
http://wingolog.org/