"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:

> I've been reading the beloved Paul Graham's "Hackers and Painters".
> He claims he developed a web app at light speed using Lisp and lots
> of macros.
That was the original "yahoo store".

> It got me curious if Lisp is inherently faster to develop complex
> apps in.

For complex applications IMO/E there is no question that it is. But that is due to a lot of reasons, not only macros.

> It would seem if you could create your own language in Lisp using
> macros that that would be quite an advantage....

Here's the story. It has long been known that "domain specific languages" (DSLs) are a significantly more potent means of producing applications (or chunks of them) in their domains than any so-called "general purpose" language. This is true even if the DSL is not a particularly great piece of work. There are several examples of this that most everyone knows about (though they may not realize it), with regular expressions and SQL being among the most obvious. You can take this further to "application specific languages" (ASLs), which provide even more leverage (though an even narrower focus). Obvious examples of this sort of thing include AutoCAD and the grammar languages of LALR parser generators.

The reason these things are so much better (for what they do) than hacking away in your general purpose language is that the abstractions they provide are much closer to what you want to say. They are much more _declarative_ than procedural. You say _what_ you want done, instead of specifying a lot of the how that will produce the what [1].

Lisp (by which I mean here Common Lisp) macros enable you to build DSLs and ASLs directly into the base language. This has several advantages beyond what the DSLs/ASLs themselves bring; some obvious ones being:

* You don't have to write lexers and parsers; that part is provided to you for "free" by the Lisp reader [2]. In particularly sophisticated cases you may need to alter the reader's behavior (via readtables) and/or build a code walker to analyze things, but that would be several sigmas out of the norm.

* Better yet, you don't have to write backend code generators.
The expansion code of a macro just gets compiled by the Lisp compiler (typically an optimizing native code compiler).

* The DSL/ASL code can be seamlessly used with the base language (and libraries), just as when you use proprietary class libraries you've built within the base language. Only here you get a much higher level of abstraction and expressive power. So you can freely use a DSL/ASL where it applies (and get all its advantages), but can effortlessly move to the base language when stepping outside its focus.

* Better yet, in highly complex application scenarios you can have a number of DSLs/ASLs, each bringing its own high expressive power, and move freely among them and the base language. This all works naturally together in concert.

* Most idioms can be abstracted away, and the bugs associated with them disappear as well. You don't have to remember sequences of "right, in these cases I first have to do this, then I have to call this, that, and the next thing in this order, and finally make sure I end it all with this bit over here."

There are a few standard red herrings trotted out against all this. For example, "you mean you have a different programming language for each application! That's awful," or some variant. If you think about it for a moment, this is complete nonsense. Why? Because you need to do this anyway - building and learning a set of class libraries for an application or domain (along with all its idioms) is actually much more work, more painful, and (typically) buys you less. The point is, you need to learn the "vocabulary" of application or domain code whether or not you use DSLs/ASLs. Worse, without the DSL/ASL approach you also need to learn a number of idioms for how to _procedurally_ make use of the resources to achieve _what_ you want.

Another red herring goes something like: "it is much harder to learn a DSL/ASL than to simply use your base general purpose language".
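As an aside, the declarative-vs-procedural point is easy to see concretely even in Python terms, using the regex DSL everyone on this list already accepts. The function names below are just illustrative; the point is how much of the _how_ disappears when you can just state the _what_:

```python
import re

# Procedural: spell out *how* to find runs of letters followed by digits.
def find_word_digits(text):
    results = []
    i = 0
    while i < len(text):
        if text[i].isalpha():
            j = i
            while j < len(text) and text[j].isalpha():
                j += 1          # scan past the letters
            k = j
            while k < len(text) and text[k].isdigit():
                k += 1          # scan past the digits
            if k > j:           # only keep it if digits actually followed
                results.append(text[i:k])
            i = k
        else:
            i += 1
    return results

# Declarative: say *what* you want in the regex DSL.
def find_word_digits_re(text):
    return re.findall(r"[A-Za-z]+[0-9]+", text)

print(find_word_digits("abc123 x def456"))     # ['abc123', 'def456']
print(find_word_digits_re("abc123 x def456"))  # ['abc123', 'def456']
```

Both produce the same answer; one of them is ten lines of bookkeeping you get to debug, the other is one line of intent.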
Most programmer types espousing this are really weird, because in the next breath they will be extolling the virtues of regular expressions, or list comprehensions, or some other DSL that just happens to already be embedded in their favorite language. Also, it makes no sense anyway - DSLs and ASLs are exactly the kind of thing provided to lay people at the user end of many application scenarios. Are lay people really that much better at learning these things than programmers are?

Another one is "fragmentation". This is really a variant of the first, but it is used so often that it deserves its own mention. Basically the argument is that if you build DSLs and ASLs, eventually your base language "fragments" and you have a few dozen languages instead. A few seconds' thought shows that this is no different from having a number of class libraries, and in both cases the idea of "fragmenting" the base language is nonsense.

OTOH, there _are_ issues associated with the DSL/ASL approach and Lisp macro programming generally. IMO, the _key_ thing is to realize that you don't want joe/jane-avg-hack building these things, because they will likely *$*%! it up. You don't have to be a genius or something, but you have to be a _good_ designer and someone who knows how to keep ergonomics and elegance front and center in the process. The other thing is (again IMO, others will definitely disagree with this) that you should focus on getting people who know something about language design and compilation/translation to do this work. You certainly need a clear understanding of the differences between compile time, load time, and runtime, and what is going on at each of these stages, to do the best work here.

The best way to structure this is to have a group solely devoted to building the DSL/ASL, with input from the domain experts, other high level developers, and even some from joe/jane-avg-hack.
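The compile-time/runtime distinction can be loosely mimicked in Python, which may make it more concrete for readers here: you can generate source text from a declarative spec and compile it once, at definition time, rather than interpreting the spec on every call (this is roughly what the stdlib's `namedtuple` has done). This is only a crude analogue of macro expansion, and the `defrecord` name and field-spec format below are made up for illustration:

```python
# A macro-like sketch: the "expansion" happens once, at definition
# time, by generating source text and compiling it -- a rough analogue
# of a Lisp macro expanding before runtime.
def defrecord(name, fields):
    params = ", ".join(fields)
    body = "\n".join(f"        self.{f} = {f}" for f in fields)
    src = f"class {name}:\n    def __init__(self, {params}):\n{body}\n"
    namespace = {}
    # Compile the generated class definition once; no per-call
    # interpretation of the field spec remains at runtime.
    exec(compile(src, f"<defrecord {name}>", "exec"), namespace)
    return namespace[name]

Point = defrecord("Point", ["x", "y"])
p = Point(3, 4)
print(p.x, p.y)  # 3 4
```

The difference from a real macro is that Lisp does this with structured code (lists), not strings, and the compiler sees the expansion as ordinary code to optimize - but the staging idea is the same.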
The result is then used by the application developers proper, including joe/jane-avg-hack. This can and probably should be a process of continual refinement. The entire thing can even be collapsed into one individual (for smaller projects), but then you need to be keenly aware of which hat you have on at any given point. Even so, it should be fairly clear that this isn't much different from what you would want to do in building your application's core class system. It just has different areas of emphasis and expertise.

> I realize that Python has operator overloading and OOP so I'm not
> sure.

This is simply orthogonal to the whole issue. Lisp has the most potent object system available in any extant programming language. While "operator overloading" (aka "static dispatch" or "compile time dispatch") is controversial in some circles, it is quite easy to provide with some macrology as well.

> Any *evidence* one way or another?

There's plenty of evidence, but as with any software productivity issue, it's mostly anecdotal. Hard scientific evidence for such things is basically nonexistent, and the economics of such things pretty much guarantee it is going to stay that way.

-----------------------------------
1. Someone recently remarked that good Lisp macros are basically executable pseudo code. I think that is pretty much exactly right, and is a pretty good "sound bite" distillation of what it is all about.

2. Or, coming at it from another angle, you don't have to figure out how to cobble together a bunch of extraneous applications (lexers, parser generators, code generators, etc.) into the one you are building.

/Jon

-- 
'j' - a n t h o n y at romeo/charley/november com
-- 
http://mail.python.org/mailman/listinfo/python-list