Re: merits of Lisp vs Python
[EMAIL PROTECTED] wrote:
> - Lisp is hard to learn (because of all those parentheses)

I cannot understand why. It is as if you claimed that packaging things
in boxes is difficult to learn.

HTML and XML have more brackets than LISP (usually double) for
structuring data, and everyone has learned HTML.

-- 
http://mail.python.org/mailman/listinfo/python-list
Re: merits of Lisp vs Python
[EMAIL PROTECTED] wrote:
> Juan R. wrote:
> > [EMAIL PROTECTED] wrote:
> > > - Lisp is hard to learn (because of all those parentheses)
> >
> > I cannot understand why. It is as if you claimed that packaging
> > things in boxes is difficult to learn.
> >
> > HTML and XML have more brackets than LISP (usually double) for
> > structuring data and everyone has learned HTML.
>
> I think maybe you missed the point I was making.

Yes I did, sorry.

> To make it clearer I'm saying that the arguments that are being made
> over and over again against Lisp in this thread have been the
> antithesis of my experience since moving from Python to Lisp.
>
> I just prefer personal experience to popular misconceptions :-)

I often count 'parentheses' used in other approaches. E.g. the
LISP-based

    [HTML [@:XMLNS http://www.w3.org/1999/xhtml]
      [HEAD [TITLE Test page]]
      [BODY]]

is, in SLiP (Python),

    html(xmlns="http://www.w3.org/1999/xhtml"):
        head():
            title(): "Test page"
        body():

LISP-based: 5 [  5 ]  1 @  1 :
Python:     4 (  4 )  1 =  4 "  4 :
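The delimiter tallies above can be checked mechanically. A minimal
sketch (the two fragments from the post are hard-coded here as plain
strings; colons are left out of the comparison because the URL itself
contains one):

```python
from collections import Counter

# The two equivalent markup fragments from the post, as raw strings.
lisp_style = '''[HTML [@:XMLNS http://www.w3.org/1999/xhtml]
  [HEAD [TITLE Test page]]
  [BODY]]'''

slip_style = '''html(xmlns="http://www.w3.org/1999/xhtml"):
    head():
        title(): "Test page"
    body():'''

def tally(text, chars):
    """Count how often each delimiter character occurs in text."""
    counts = Counter(text)
    return {c: counts[c] for c in chars}

print(tally(lisp_style, '[]@'))   # {'[': 5, ']': 5, '@': 1}
print(tally(slip_style, '()"='))  # {'(': 4, ')': 4, '"': 4, '=': 1}
```

The counts match the post's tally: five brackets each way for the
LISP-style markup against four parens, four quotes and one equals sign
for the SLiP version.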
Re: merits of Lisp vs Python
Kay Schluehr wrote:
> Note also that a homogenous syntax is not that important when
> analyzing parse trees ( on the contrary, the more different structures
> the better ) but when synthesizing new ones by fitting different
> fragments of them together.

Interesting, could you provide some illustration of this?

> The next question concerns compositionality of language
> enhancements or composition of even completely independent language
> definitions and transformers both on source and on binary level. While
> this is not feasible in general without creating ambiguities, I believe
> this problem can be reduced to ambiguity detection in the underlying
> grammars.

That reads a bit ambiguously to me. What is not feasible in general?
Achieving compositionality?
Re: merits of Lisp vs Python
Harry George wrote:
> Really? Given its small base, the percentage increases in Ruby use
> (for any reason) can look quite impressive. I've seen data suggesting
> Ruby is replacing Perl and maybe Java. But I've yet to see data which
> shows people dropping Python and moving to Ruby. Where do I find that
> data?

No _scientific_ data, but the TIOBE December index shows a change of
+9 for Ruby, +1 for Python and -4 for LISP [1]. More:

"There is only 1 month to go before TIOBE announces its 'programming
language of the year 2006'... Ruby remains to be top favorite for the
title."

Look also at Google Trends [2, 3]. One can notice a further increase
for Ruby and a slight decrease for Python.

[1] http://www.tiobe.com/tpci.htm
[2] http://www.google.com/trends?q=ruby+programming%2C+python+programming&ctab=0&geo=all&date=all
[3] http://www.google.com/trends?q=ruby+language%2C+python+language&ctab=0&geo=all&date=all
Re: merits of Lisp vs Python
Ken Tilton wrote:
> You missed it? Google fight:
>
> http://www.googlefight.com/index.php?lang=en_GB&word1=Python&word2=Ruby
>
> Python wins, 74 to 69.3. And there is no Monty Ruby to help.
>
> ken

Nice animation!

http://www.googlefight.com/index.php?lang=en_GB&word1=Ken+Tilton&word2=Monty+Ruby
Re: merits of Lisp vs Python
Rob Thorpe wrote:
> Juan R. wrote:
> > Ken Tilton wrote:
> > > You missed it? Google fight:
> > >
> > > http://www.googlefight.com/index.php?lang=en_GB&word1=Python&word2=Ruby
> > >
> > > Python wins, 74 to 69.3. And there is no Monty Ruby to help.
> > >
> > > ken
> >
> > Nice animation!
> >
> > http://www.googlefight.com/index.php?lang=en_GB&word1=Ken+Tilton&word2=Monty+Ruby
>
> It's not fair to pick on him just because you're better at
> Googlefighting...

I simply noticed

(eval "there is no Monty Ruby to help") --> NIL

> http://www.googlefight.com/index.php?lang=en_GB&word1=Ken+Tilton&word2=Juan+Gonz%E1lez
>
> This thing is strange, I don't understand it at all...

There are many "Juan González" but I am not one of them. Try this one:

http://www.googlefight.com/index.php?lang=en_GB&word1=Ken+Tilton&word2=Juan+R.+Gonz%E1lez+%C1lvarez

> http://www.googlefight.com/index.php?lang=en_GB&word1=Ken+Tilton&word2=Robert+Thorpe
>
> I find the results dubious. A conspiracy masterminded by Monty Ruby no
> doubt.

With some help from mediocre programmers :)
Re: merits of Lisp vs Python
Kay Schluehr wrote:
> Juan R. wrote:
> > Kay Schluehr wrote:
> > > Note also that a homogenous syntax is not that important when
> > > analyzing parse trees ( on the contrary, the more different
> > > structures the better ) but when synthesizing new ones by fitting
> > > different fragments of them together.
> >
> > Interesting, could you provide some illustration of this?
>
> My approach is strongly grammar based. You start with a grammar
> description of your language. This is really not much different from
> using Lex/Yacc except that it is situated and adapted to a pre-existing
> language ecosystem. I do not intend to start from scratch.
>
> Besides the rules that constitute your host language you might add:
>
>     repeat_stmt ::= 'repeat' ':' suite 'until' ':' test
>
> The transformation target ( the "template" ) is
>
>     while True:
>         <suite>
>         if <test>:
>             break
>
> The structure of the rule is also the structure of its constituents in
> the parse tree. Since you match the repeat_stmt rule and its
> corresponding node in the parse tree you immediately get the <suite>
> node and the <test> node:
>
>     class FiberTransformer(Transformer):
>         @transform
>         def repeat_stmt(self, node):
>             _suite = find_node(node, symbol.suite)
>             _test  = find_node(node, symbol.test, depth = 1)
>             #
>             # create the while_stmt here
>             #
>             return _while_stmt_node
>
> So analysis works just fine. But what about creating the transformation
> target? The problem with the template above is that it can't work
> precisely this way as a Python statement, because the rule for a while
> statement looks like this:
>
>     while_stmt: 'while' test ':' suite
>
> That's why the macro expander has to merge the <suite> node, passed
> into the template, with the if_stmt of the template, into a new suite
> node.
>
> Now think about having created a while_stmt from your original
> repeat_stmt. You return the while_stmt and it has to be fitted into the
> original syntax tree in place of the repeat_stmt. This must be done
> carefully. Otherwise structure in the tree is destroyed, or the node is
> inserted in a place where the compiler does not expect it.
>
> The framework has to do lots of work to ease the pain for the meta
> programmer:
>
> a) create the correct transformation target
> b) fit the target into the syntax tree
>
> Nothing depends here particularly on Python but is true for any
> language with a fixed grammar description. I've worked exclusively with
> LL(1) grammars but I see no reason why this general scheme should not
> work with more powerful grammars and more complicated languages,
> Ruby's for example.

Thanks.

> > > The next question concerns compositionality of language
> > > enhancements or composition of even completely independent language
> > > definitions and transformers both on source and on binary level.
> > > While this is not feasible in general without creating ambiguities,
> > > I believe this problem can be reduced to ambiguity detection in the
> > > underlying grammars.
> >
> > That reads a bit ambiguously to me. What is not feasible in general?
> > Achieving compositionality?
>
> Given two languages L1 = (G1, T1), L2 = (G2, T2) where G1, G2 are
> grammars and T1, T2 transformers that transform source written in L1 or
> L2 into some base language L0 = (G0, Id). Can G1 and G2 be combined to
> create a new grammar G3 s.t. the transformers T1 and T2 can be used
> also to transform L3 = (G3 = G1(x)G2, T3 = T1(+)T2)? In the general
> case G3 will be ambiguous and the answer is NO.

You mean direct compositionality. Is there any formal proof that you
cannot find a (G2', T2') unambiguously generating (G2, T2) and
combining with L1, or is this always possible? This would not work for
language enhancements, but it would for composition of completely
independent languages.

> But it could also be YES in many relevant cases. So
> the question is whether it is necessary and sufficient to check whether
> the "crossing" between G1 and G2 is feasible i.e. doesn't produce
> ambiguities.
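A toy sketch of the analysis/synthesis step Kay describes above. This
is not Kay's actual framework: the parse tree is modeled as plain
nested lists, and all names are invented for illustration. It rewrites
a repeat_stmt node into the while-True template, merging the `<test>`
into a new suite node as the post explains:

```python
# Toy illustration (not Kay's actual framework): a parse tree as nested
# lists, where each node is [rule_name, *children].  A repeat/until
# statement is rewritten into the while-True template described above.

def expand_repeat(node):
    """Recursively rewrite repeat_stmt nodes into while_stmt nodes."""
    if not isinstance(node, list):
        return node
    node = [expand_repeat(child) for child in node]
    if node[0] == 'repeat_stmt':
        _, suite, test = node
        # The <test> has to be merged into the template's suite, because
        # the while_stmt rule expects exactly one suite child.
        merged_suite = ['suite', suite,
                        ['if_stmt', test, ['break_stmt']]]
        return ['while_stmt', ['True'], merged_suite]
    return node

tree = ['file_input',
        ['repeat_stmt', ['suite', ['expr_stmt', 'x -= 1']],
                        ['test', 'x == 0']]]
print(expand_repeat(tree))
```

The point of the exercise is visible in the output: the while_stmt node
slots into the tree exactly where the repeat_stmt was, with the
structure the (hypothetical) while_stmt rule expects.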
Re: merits of Lisp vs Python
Kaz Kylheku wrote:
> Kay Schluehr wrote:
> > Juan R. wrote:
> > > That reads a bit ambiguously to me. What is not feasible in
> > > general? Achieving compositionality?
> >
> > Given two languages L1 = (G1, T1), L2 = (G2, T2) where G1, G2 are
> > grammars and T1, T2 transformers that transform source written in L1
> > or L2 into some base language L0 = (G0, Id). Can G1 and G2 be
> > combined to create a new grammar G3 s.t. the transformers T1 and T2
> > can be used also to transform L3 = (G3 = G1(x)G2, T3 = T1(+)T2)? In
> > the general case G3 will be ambiguous and the answer is NO. But it
> > could also be YES in many relevant cases. So the question is whether
> > it is necessary and sufficient to check whether the "crossing"
> > between G1 and G2 is feasible i.e. doesn't produce ambiguities.
>
> See, we don't have this problem in Lisp, unless some of the
> transformers in T1 have names that clash with those in T2. That
> problem can be avoided by placing the macros in separate packages, or
> by renaming.

Or simply namespacing!

foo from package alpha --> alpha:foo
foo from package beta  --> beta:foo

But what about composition of different languages? E.g. LISP and
Fortran in the same source.

> In the absence of naming conflicts, the two macro languages L1 and L2
> combine seamlessly into L3, because the transformers T are defined on
> structure, not on lexical grammar. The read grammar doesn't change (and
> is in fact irrelevant, since the whole drama is played out with
> objects, not text). In L1, the grammar is nested lists. In L2, the
> grammar is, again, nested lists. And in L3: nested lists. So that in
> fact, at one level, you don't even recognize them as being different
> languages, but on a different level you can.
>
> The problems you are grappling with are in fact created by the
> invention of an unsuitable encoding. You are in effect solving a puzzle
> that you or others created for you.
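Kaz's point can be sketched in miniature (the macro names and the
registry mechanism here are invented for illustration): when macros are
plain functions from nested lists to nested lists, two independently
written expanders compose with no grammar questions at all; only name
clashes can interfere, and a package prefix settles those.

```python
# Toy model of Kaz's point: macros as functions on nested lists.
# 'unless' and 'swap' come from two independent "macro packages";
# since both operate on the same uniform structure, composing them
# raises no grammar ambiguity -- only name clashes could interfere,
# and the package prefixes rule those out.

MACROS = {}

def defmacro(name):
    """Register a transformer under a package-qualified name."""
    def register(fn):
        MACROS[name] = fn
        return fn
    return register

@defmacro('alpha:unless')          # from package "alpha"
def expand_unless(form):
    _, test, body = form
    return ['if', ['not', test], body]

@defmacro('beta:swap')             # from package "beta"
def expand_swap(form):
    _, a, b = form
    return ['psetq', a, b, b, a]   # parallel-assignment pseudo-op

def macroexpand(form):
    """Expand macro heads, then recurse into subforms."""
    if isinstance(form, list) and form and form[0] in MACROS:
        form = MACROS[form[0]](form)
    if isinstance(form, list):
        return [macroexpand(sub) for sub in form]
    return form

print(macroexpand(['alpha:unless', 'done', ['beta:swap', 'x', 'y']]))
# ['if', ['not', 'done'], ['psetq', 'x', 'y', 'y', 'x']]
```

Nothing in either expander knows or cares about the other's "grammar":
both read and write the same nested-list structure, which is exactly
the seamless L3 Kaz describes.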
Re: merits of Lisp vs Python
greg wrote:
> From another angle, think about what a hypothetical
> Python-to-Lisp translator would have to do. It couldn't
> just translate "a + b" into "(+ a b)". It would have
> to be something like "(*python-add* a b)" where
> *python-add* is some support function doing all the
> dynamic dispatching that the Python interpreter would
> have done.
>
> That's what I mean by Python being more dynamic than
> Lisp.

I see no dynamism in your example, just static overloading.
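What such a hypothetical *python-add* support function would have to
reproduce can be sketched in Python itself. This is a simplified model
of the protocol (it skips the rule that gives a subclass's __radd__
priority over the left operand's __add__):

```python
# Illustrative sketch of the dispatching a hypothetical *python-add*
# would have to reproduce: Python's + first tries the left operand's
# __add__, then falls back to the right operand's __radd__.

def python_add(a, b):
    result = NotImplemented
    add = getattr(type(a), '__add__', None)
    if add is not None:
        result = add(a, b)
    if result is NotImplemented:
        radd = getattr(type(b), '__radd__', None)
        if radd is not None:
            result = radd(b, a)
    if result is NotImplemented:
        raise TypeError("unsupported operand types for +")
    return result

print(python_add(1, 2))        # 3
print(python_add("a", "b"))    # ab
print(python_add([1], [2]))    # [1, 2]
```

All of this happens per call, driven by the runtime types of the
operands, which is greg's point about what the translator would need
to emit in place of a bare (+ a b).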
Re: merits of Lisp vs Python
Kay Schluehr wrote:
> You mean a universal language adapter? I guess this is always possible
> using alpha conversion but I don't believe this leads to theoretically
> or practically interesting solutions but is just a limit concept.

I'm not familiar with your terminology. I think I would call that a
universal language composer. I mean whether there exists some
theoretical limitation, unknown to me, on the compositionality of two
directly clashing languages (G1, T1) and (G2, T2) via an unambiguous
modification (e.g. 'renaming') to a third one (G2', T2'). I mean some
theoretical limitation in the sense of the known theoretical
limitations on proving theorems in closed formal systems. After all,
proving a formal theorem is not very different from enhancing a
language.

> The practical problem with composing enhancements is that any two
> extensions L1, L2 share a lot of rules and rely on their structure.
> What if they don't just extend their host language L0 conservatively
> but also redefine rules of L0? This is not just a theoretical problem
> but it happens quite naturally if you want to adapt an extension
> developed for Python 2.4 for working with Python 2.5. Here Python 2.5
> is considered as just another particular extension. Obviously Py3K will
> become an interesting testcase for all kinds of syntactical and
> semantical transformations.

I would consider redefined-L0 to be L0'. I think a concept of
namespaces could also be used for versioning-like conflicts:
L0v24:foo(), L0v25:foo(). The problem is that both versions would have
to be stored and managed during an initial period of time, but in the
long run old libraries, extensions and so on would be updated to the
new version.
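The versioned-namespace idea can be sketched minimally (all names here
are hypothetical, invented for illustration): both versions of foo
coexist under qualified names, and unqualified lookups resolve through
the currently active version.

```python
# Toy sketch of version-namespaced lookup: L0v24:foo and L0v25:foo
# coexist; an unqualified name resolves through the active version.

DEFS = {
    'L0v24:foo': lambda: 'old behaviour',
    'L0v25:foo': lambda: 'new behaviour',
}

def resolve(name, active='L0v25'):
    if ':' in name:                       # fully qualified: exact lookup
        return DEFS[name]
    return DEFS[active + ':' + name]      # unqualified: active version

print(resolve('foo')())          # new behaviour
print(resolve('L0v24:foo')())    # old behaviour
```

During the transition period both entries live in the table; once old
libraries are updated, the L0v24 entries can simply be dropped.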
Re: merits of Lisp vs Python
greg wrote:
> Juan R. wrote:
> > I see no dynamism in your example, just static overloading.
>
> There's nothing static about it:
>
>     q = raw_input()
>     if q == "A":
>         a = 1
>         b = 2
>     else:
>         a = "x"
>         b = "y"
>     c = a + b
>
> There is no way that the compiler can statically
> determine what the + operator needs to do here.

Before or after the input? :]

No, that is not what I meant. Of course the operation for c is dynamic,
but that is just statically overloading the +. The definition for c
could be adapted to the cases and introduced in the if. I would call
code dynamic if, for instance, the if, the different cases and the
definition for c could be modified on the fly, à la LISP macro style.
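greg's side of the argument, condensed into a runnable sketch: one and
the same + site serves both branches, so the choice of behaviour
necessarily happens at run time, not at compile time.

```python
# One and the same + site serves every call; which behaviour it takes
# depends on the runtime types that reach it, so nothing can be
# resolved before the "input" arrives.

def combine(a, b):
    return a + b          # dispatched at run time, not compile time

for q in ("A", "other"):  # stands in for the raw_input() above
    if q == "A":
        a, b = 1, 2
    else:
        a, b = "x", "y"
    print(combine(a, b))  # 3, then xy
```

Juan's counter is that the dispatch, while dynamic, is fixed in
advance: the set of behaviours of + cannot itself be rewritten on the
fly the way a Lisp macro could rewrite the whole if.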
Re: merits of Lisp vs Python
[EMAIL PROTECTED] wrote:
> FWIW, Python documentation consistently uses the jargon:
>
> () parentheses
> {} braces
> [] brackets
>
> That matches North American conventions, but occasionally confuses an
> international audience (for example, the English call parentheses
> "brackets" or "round brackets").
>
> There's also a long tradition in both mathematics and computer science
> of using "bracket" as a generic term for any syntactic device used in
> pairs.

Brackets are paired syntactic delimiters, with distinct opening and
closing forms, used for packaging. Quotation marks ("string") or
exclamation marks (¡Esto es español!) are used in pairs but are not
brackets. A natural convention is

brackets = ( ), [ ], { }, < >

( ) parentheses or round brackets
{ } braces or curly brackets
[ ] box or square brackets
< > chevrons or angle brackets

An English speaker calling the parentheses "brackets" or "round
brackets" is doing nothing wrong, but not all brackets are parentheses.
In mathematics the braces are also often called brackets, and again
that is not wrong.
Re: merits of Lisp vs Python
greg wrote:
> I don't know about the other Pythonistas in this
> discussion, but personally I do have experience with
> Lisp, and I understand what you're saying. I have
> nothing against Lisp parentheses, I just don't agree
> that the Lisp way is superior to the Python way in
> all respects, based on my experience with both.

Parentheses have two advantages over indentation.

First, they can be used both inline and in blocks. The inline
(a b (c d (e f g))) needs to be formatted over several Python lines.
That is why SLiP (Python) has problems with inline mixed markup but
LML (LISP) or SXML (Scheme) do not.

Second, I do not need special editors for writing/editing. It is more
difficult to do indentation and editing by hand in 'Notepad' than to
add parens from the keyboard [*].

[*] Tip: always type () and then move your cursor inside ( _ ). You
never forget a closing ) this way!
Re: merits of Lisp vs Python
Raffael Cavallaro wrote:
> This lock-in to
> a particular paradigm, however powerful, is what makes any such
> language strictly less expressive than one with syntactic abstraction
> over a uniform syntax.

Right, but it should also be remarked that there is no reason to ignore
the development and implementation of specific syntaxes, paradigms,
etc. optimized for specialized (sub)fields of discourse.

If all you need is basic arithmetic and 'static' data structures, an
optimization à la Python

Y = a + b * c

makes sense in most cases. If you are developing symbolic software
where data structures are highly 'dynamic' and where you expect
something more than sums, divisions, sines and cosines, and some
numeric differentials, then the parens automatically make sense.

Using LISP-like syntax for everything would be as stupid as using
quantum mechanics for billiards.
Re: merits of Lisp vs Python
> Using LISP-like syntax for everything would be as stupid as using
> quantum mechanics for billiards.

Claiming that LISP parens are Stupid, Superfluous, or Silly just
because you do not need them in your limited field of discourse would
be as stupid as those people thinking that, just because they use
classical mechanics at the macro scale and it works for them, classical
mechanics would also work at the atomic scale [*].

[*] Even today, after 100 years, some people think that quantum
mechanics is Stupid, Superfluous, or Silly and that some classical
formulation will replace it.
Re: merits of Lisp vs Python
Fuzzyman wrote:
> Perhaps only with the addendum that although 'Lisp roolz', no-one uses
> it for anything of relevance anymore and it is continuing its geriatric
> decline into obscurity. ;-)

I do not think I can agree with this, but I do not think the contrary
either. I am told that LISP is being actively pursued by a number of
young hackers such as Graham and Tilton. Don't believe it? Ken Tilton
has noticed that the Dalai Lama has become interested in LISP too.