Canonical way of dealing with null-separated lines?

2005-02-23 Thread Douglas Alan
Is there a canonical way of iterating over the lines of a file that
are null-separated rather than newline-separated?  Sure, I can
implement my own iterator using read() and split(), etc., but
considering that using "find -print0" is so common, it seems like
there should be a more canonical way.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Canonical way of dealing with null-separated lines?

2005-02-24 Thread Douglas Alan
Christopher De Vries <[EMAIL PROTECTED]> writes:

> I'm not sure if there is a canonical method, but I would
> recommending using a generator to get something like this, where 'f'
> is a file object:

Thanks for the generator.  It returns an extra blank line at the end
when used with "find -print0", which is probably not ideal, and is
also not how the normal file line iterator behaves.  But don't worry
-- I can fix it.
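The extra blank line comes from splitting input that ends with the separator; a sketch of the fix (the function name here is mine, and `io.StringIO` stands in for the `find -print0` pipe):

```python
import io

def null_separated_lines(f, bufsize=8192):
    # Accumulate reads, split on NUL, and drop the empty trailing
    # field produced when the input ends with a separator.
    buf = ""
    while True:
        chunk = f.read(bufsize)
        if not chunk:
            break
        buf += chunk
        fields = buf.split("\0")
        buf = fields.pop()          # possibly-incomplete last field
        for field in fields:
            yield field
    if buf:                         # input didn't end with a NUL
        yield buf

# "find -print0"-style input ends with a trailing NUL:
f = io.StringIO("./a\0./b\0")
print(list(null_separated_lines(f)))  # → ['./a', './b']
```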

In any case, as a suggestion to whomever it is that arranges for
stuff to be put into the standard library, there should be something
like this there, so everyone doesn't have to reinvent the wheel (even
if it's an easy wheel to reinvent) for something that any sysadmin
(and many other users) would want to do on practically a daily basis.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Canonical way of dealing with null-separated lines?

2005-02-25 Thread Douglas Alan
Okay, here's the definitive version (or so say I).  Some good doobie
please make sure it makes its way into the standard library:

def fileLineIter(inputFile, newline='\n', leaveNewline=False, readSize=8192):
    """Like the normal file iter but you can set what string indicates newline.

    You can also set the read size and control whether or not the newline string
    is left on the end of the iterated lines.  Setting newline to '\0' is
    particularly good for use with an input file created with something like
    "os.popen('find -print0')".
    """
    partialLine = []
    while True:
        charsJustRead = inputFile.read(readSize)
        if not charsJustRead: break
        lines = charsJustRead.split(newline)
        if len(lines) > 1:
            partialLine.append(lines[0])
            lines[0] = "".join(partialLine)
            partialLine = [lines.pop()]
        else:
            partialLine.append(lines.pop())
        for line in lines: yield line + ("", newline)[leaveNewline]
    if partialLine and partialLine[-1] != '': yield "".join(partialLine)
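A quick sanity check of the iterator above against simulated `find -print0` output (the function is restated so the snippet runs standalone, with `io.StringIO` standing in for the piped file):

```python
import io

def fileLineIter(inputFile, newline='\n', leaveNewline=False, readSize=8192):
    # Restatement of the iterator above so this snippet is self-contained.
    partialLine = []
    while True:
        charsJustRead = inputFile.read(readSize)
        if not charsJustRead: break
        lines = charsJustRead.split(newline)
        if len(lines) > 1:
            partialLine.append(lines[0])
            lines[0] = "".join(partialLine)
            partialLine = [lines.pop()]
        else:
            partialLine.append(lines.pop())
        for line in lines: yield line + ("", newline)[leaveNewline]
    if partialLine and partialLine[-1] != '': yield "".join(partialLine)

# Simulated "find -print0" output (note the trailing NUL):
fakeFind = io.StringIO("./a.txt\0./b c.txt\0./sub/d.txt\0")
print(list(fileLineIter(fakeFind, newline='\0')))
# → ['./a.txt', './b c.txt', './sub/d.txt']
```

Unlike the earlier generator, no trailing blank line is produced for input that ends with the separator.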

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Canonical way of dealing with null-separated lines?

2005-02-26 Thread Douglas Alan
I wrote:

> Okay, here's the definitive version (or so say I).  Some good doobie
> please make sure it makes its way into the standard library:

Oops, I just realized that my previously definitive version did not
handle multi-character newlines.  So here is a new definitive
version.  Oog, now my brain hurts:

def fileLineIter(inputFile, newline='\n', leaveNewline=False, readSize=8192):
    """Like the normal file iter but you can set what string indicates newline.

    The newline string can be arbitrarily long; it need not be restricted to a
    single character. You can also set the read size and control whether or not
    the newline string is left on the end of the iterated lines.  Setting
    newline to '\0' is particularly good for use with an input file created with
    something like "os.popen('find -print0')".
    """
    isNewlineMultiChar = len(newline) > 1
    outputLineEnd = ("", newline)[leaveNewline]

    # 'partialLine' is a list of strings to be concatenated later:
    partialLine = []

    # Because read() might unfortunately split across our newline string, we
    # have to regularly check to see if the newline string appears in what we
    # previously thought was only a partial line.  We do so with this generator:
    def linesInPartialLine():
        if isNewlineMultiChar:
            linesInPartialLine = "".join(partialLine).split(newline)
            if len(linesInPartialLine) > 1:
                partialLine[:] = [linesInPartialLine.pop()]
                for line in linesInPartialLine:
                    yield line + outputLineEnd

    while True:
        charsJustRead = inputFile.read(readSize)
        if not charsJustRead: break
        lines = charsJustRead.split(newline)
        if len(lines) > 1:
            for line in linesInPartialLine(): yield line
            partialLine.append(lines[0])
            lines[0] = "".join(partialLine)
            partialLine[:] = [lines.pop()]
        else:
            partialLine.append(lines.pop())
            for line in linesInPartialLine(): yield line
        for line in lines: yield line + outputLineEnd
    for line in linesInPartialLine(): yield line
    if partialLine and partialLine[-1] != '':
        yield "".join(partialLine)


|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


yield_all needed in Python

2005-02-28 Thread Douglas Alan
While writing a generator, I was just thinking how Python needs a
"yield_all" statement.  With the help of Google, I found a
pre-existing discussion on this from a while back in the Lightweight
Languages mailing list.  I'll repost it here in order to improve the
chances of this enhancement actually happening someday.  The original
poster from the LL mailing list seems mostly concerned with
algorithmic efficiency, while I'm concerned more about making my
programs shorter and easier to read.  The ensuing discussion on the LL
list talks about how yield_all would be somewhat difficult to
implement if you want to get the efficiency gain desired, but I don't
think it would be very difficult to implement if that goal weren't
required, and the goal were limited to just the expressive elegance:

 A Problem with Python's 'yield'

 * To: LL1 Mailing List <[EMAIL PROTECTED]>
 * Subject: A Problem with Python's 'yield'
 * From: Eric Kidd <[EMAIL PROTECTED]>
 * Date: 27 May 2003 11:15:20 -0400
 * Organization:
 * Sender: [EMAIL PROTECTED]

 I'm going to pick on Python here, but only because the example code will
 be short and sweet. :-) I believe several other implementations of
 generators have the same problem.

 Python's generator system, used naively, turns an O(N) tree traversal
 into an O(N log N) tree traversal:

   class Tree:
       def __init__(self, value, left=None, right=None):
           self.value = value
           self.left = left
           self.right = right

       def in_order(self):
           if self.left is not None:
               for v in self.left.in_order():
                   yield v
           yield self.value
           if self.right is not None:
               for v in self.right.in_order():
                   yield v

   t = Tree(2, Tree(1), Tree(3))
   for v in t.in_order():
       print v

 This prints:
   1
   2
   3

 Unfortunately, this snippet calls 'yield' 5 times, because the leaf
 values must be yielded twice on their way back up the tree.

 We can shorten the code--and make it run in O(N) time--by adding a new
 keyword to replace the "for v in ...: yield v" pattern:

   def in_order(self):
       if self.left is not None:
           yield_all self.left.in_order()
       yield self.value
       if self.right is not None:
           yield_all self.right.in_order()

 Interestingly enough, this allows you define notions such as
 "tail-recursive generation", and apply the usual bag of
 recursion-optimization techniques.

 Cheers,
 Eric
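Python did eventually grow this feature: PEP 380 added `yield from` in Python 3.3, which expresses exactly the delegation Eric wants. His tree, in that later syntax:

```python
class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def in_order(self):
        # "yield from" delegates to the sub-generator, replacing the
        # "for v in ...: yield v" pattern discussed above.
        if self.left is not None:
            yield from self.left.in_order()
        yield self.value
        if self.right is not None:
            yield from self.right.in_order()

t = Tree(2, Tree(1), Tree(3))
print(list(t.in_order()))  # → [1, 2, 3]
```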

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Canonical way of dealing with null-separated lines?

2005-02-28 Thread Douglas Alan
I wrote:

> Oops, I just realized that my previously definitive version did not
> handle multi-character newlines.  So here is a new definitive
> version.  Oog, now my brain hurts:

I dunno what I was thinking.  That version sucked!  Here's a version
that's actually comprehensible, a fraction of the size, and works in
all cases.  (I think.)

def fileLineIter(inputFile, newline='\n', leaveNewline=False, readSize=8192):
    """Like the normal file iter but you can set what string indicates newline.

    The newline string can be arbitrarily long; it need not be restricted to a
    single character. You can also set the read size and control whether or not
    the newline string is left on the end of the iterated lines.  Setting
    newline to '\0' is particularly good for use with an input file created with
    something like "os.popen('find -print0')".
    """
    outputLineEnd = ("", newline)[leaveNewline]
    partialLine = ''
    while True:
        charsJustRead = inputFile.read(readSize)
        if not charsJustRead: break
        lines = (partialLine + charsJustRead).split(newline)
        partialLine = lines.pop()
        for line in lines: yield line + outputLineEnd
    if partialLine: yield partialLine
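This version really does handle a multi-character separator, even when a read() boundary lands in the middle of one. A check with a deliberately tiny readSize (the function is restated so the snippet runs standalone; `io.StringIO` stands in for a real file):

```python
import io

def fileLineIter(inputFile, newline='\n', leaveNewline=False, readSize=8192):
    # Restatement of the version above so this snippet is self-contained.
    outputLineEnd = ("", newline)[leaveNewline]
    partialLine = ''
    while True:
        charsJustRead = inputFile.read(readSize)
        if not charsJustRead: break
        lines = (partialLine + charsJustRead).split(newline)
        partialLine = lines.pop()
        for line in lines: yield line + outputLineEnd
    if partialLine: yield partialLine

# readSize=4 forces the two-character separator to straddle reads:
f = io.StringIO("one\r\ntwo\r\nthree")
print(list(fileLineIter(f, newline='\r\n', readSize=4)))
# → ['one', 'two', 'three']
```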

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
Andrew Dalke <[EMAIL PROTECTED]> writes:

> On Mon, 28 Feb 2005 18:25:51 -0500, Douglas Alan wrote:

>> While writing a generator, I was just thinking how Python needs a
>> "yield_all" statement.  With the help of Google, I found a
>> pre-existing discussion on this from a while back in the
>> Lightweight Languages mailing list.  I'll repost it here in order
>> to improve the chances of this enhancement actually happening
>> someday.

> You should also have looked for the responses to that. Tim Peter's
> response is available from

>   http://aspn.activestate.com/ASPN/Mail/Message/624273

[...]

> Here is the most relevant parts.

[...]

>BTW, Python almost never worries about worst-case behavior, and people
>using Python dicts instead of, e.g., balanced trees, get to carry their
>shame home with them hours earlier each day  .

If you'll reread what I wrote, you'll see that I'm not concerned with
performance, but rather my concern is that I want the syntactic sugar.
I'm tired of writing code that looks like

   def foogen(arg1):

       def foogen1(arg2):
           # Some code here

       # Some code here
       for e in foogen1(arg3): yield e
       # Some code here
       for e in foogen1(arg4): yield e
       # Some code here
       for e in foogen1(arg5): yield e
       # Some code here
       for e in foogen1(arg6): yield e

when it would be much prettier and easier to read if it looked like:

   def foogen(arg1):

       def foogen1(arg2):
           # Some code here

       # Some code here
       yield_all foogen1(arg3)
       # Some code here
       yield_all foogen1(arg4)
       # Some code here
       yield_all foogen1(arg5)
       # Some code here
       yield_all foogen1(arg6)

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> Certainly, if <for loop yielding each item of iterator> == <yield_all
> iterator>, I don't see how anything is gained except for a few keystrokes.

What's gained is making one's code more readable and maintainable,
which is one of the primary reasons that I use Python.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Canonical way of dealing with null-separated lines?

2005-03-01 Thread Douglas Alan
"John Machin" <[EMAIL PROTECTED]> writes:

>>lines = (partialLine + charsJustRead).split(newline)

> The above line is prepending a short string to what will typically be a
> whole buffer full. There's gotta be a better way to do it.

If there is, I'm all ears.  In a previous post I provided code that
doesn't concatenate any strings together until the last possible
moment (i.e. when yielding a value).  The problem with that code
was that it was complicated and didn't work right in all cases.

One way of solving the string concatenation issue would be to write a
string find routine that will work on lists of strings while ignoring
the boundaries between list elements.  (I.e., it will consider the
list of strings to be one long string for its purposes.)  Unless it is
written in C, however, I bet it will typically be much slower than the
code I just provided.

> Perhaps you might like to refer back to CdV's solution which was
> prepending the residue to the first element of the split() result.

The problem with that solution is that it doesn't work in all cases
when the line-separation string is more than one character.

>>for line in lines: yield line + outputLineEnd

> In the case of leaveNewline being false, you are concatenating an empty
> string. IMHO, to quote Jon Bentley, one should "do nothing gracefully".

In Python,

   longString + "" is longString

evaluates to True.  I don't know how you can do nothing more
gracefully than that.
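A quick check in CPython bears this out (the identity, as opposed to mere equality, is an implementation detail rather than a language guarantee):

```python
s = "a moderately long string " * 40

# CPython special-cases concatenation with an empty string and hands
# back the original object (an implementation detail, not a guarantee):
print((s + "") is s)
print(("" + s) is s)
```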

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
Duncan Booth <[EMAIL PROTECTED]> writes:

> Douglas Alan wrote:

>> "Terry Reedy" <[EMAIL PROTECTED]> writes:

>>> Certainly, if <for loop yielding each item of iterator> == <yield_all
>>> iterator>, I don't see how anything is gained except for a few keystrokes.

>> What's gained is making one's code more readable and maintainable,
>> which is one of the primary reasons that I use Python.

> On of the reasons why Python is readable is that the core language is 
> comparatively small.

It's not that small anymore.  What it *is* is relatively conceptually
simple and readily comprehensible (i.e. "lightweight"), unlike
languages like C++ and Perl.

> Adding a new reserved word simply to save a few 
> characters

It's not to "save a few characters".  It's to make it immediately
clear what is happening.

> is a difficult choice, and each case has to be judged on its merits,
> but it seems to me that in this case the extra syntax is a burden
> that would have to be learned by all Python programmers with very
> little benefit.

The amount of effort to learn what "yield_all" does compared to the
amount of effort to understand generators in general is so minuscule
as to be negligible.  Besides, by this argument, the standard library
should be kept as small as possible too, since people have to learn
all that stuff in order to understand someone else's code.

> Remember that many generators will want to do slightly more than just yield 
> from another iterator, and the for loop allows you to put in additional 
> processing easily whereas 'yield_all' has very limited application e.g.

>    for tok in tokenstream():
>        if tok.type != COMMENT:
>            yield tok

> I just scanned a random collection of my Python files: out of 50 yield 
> statements I found only 3 which could be rewritten using yield_all.

For me, it's a matter of providing the ability to implement
subroutines elegantly within generators.  Without yield_all, it is not
elegant at all to use subroutines to do some of the yielding, since
the calls to the subroutines are complex, verbose statements, rather
than simple ones.

I vote for the ability to have elegant, readable subroutining,
regardless of how much you in particular would use it.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
Francis Girard <[EMAIL PROTECTED]> writes:

> Therefore, the suggestion you make, or something similar, would
> actually have eased my learning, at least for me.

Yes, I agree 100%.  Not having something like "yield_all" hurt my
ability to learn to use Python's generators quickly because I figured
that Python had to have something like yield_all.  But no matter how
hard I looked, I couldn't find anything about it in the manual.

So the argument that adding a feature makes the language harder to
learn is a specious one.  Sometimes an extra feature makes the
language easier to learn.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
David Eppstein <[EMAIL PROTECTED]> writes:

> In article <[EMAIL PROTECTED]>,

>  Douglas Alan <[EMAIL PROTECTED]> wrote:

>> > Certainly, if <for loop yielding each item of iterator> == <yield_all
>> > iterator>, I don't see how anything is gained except for a few keystrokes.

>> What's gained is making one's code more readable and maintainable,
>> which is one of the primary reasons that I use Python.

> I don't see a lot of difference in readability and maintainability 
> between the two versions.

In that case, your brain works nothing like mine.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
Steve Holden <[EMAIL PROTECTED]> writes:

> Guido has generally observed a parsimony about the introduction of
> features such as the one you suggest into Python, and in particular
> he is reluctant to add new keywords - even in cases like decorators
> that cried out for a keyword rather than the ugly "@" syntax.

In this case, that is great, since I'd much prefer

   yield *gen1(arg)

than

   yield_all gen1(arg)

anyway, as someone else suggested in this thread (followed by a
demonic laugh).  The only reason I mentioned "yield_all" is because
there was a preexisting discussion that used "yield_all".

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Canonical way of dealing with null-separated lines?

2005-03-01 Thread Douglas Alan
"John Machin" <[EMAIL PROTECTED]> writes:

>> In Python,

>>longString + "" is longString

>> evaluates to True.  I don't know how you can do nothing more
>> gracefully than that.

> And also "" + longString is longString

> The string + operator provides those graceful *external* results by
> ugly special-case testing internally.

I guess I don't know what you are getting at.  If Python performs ugly
special-case testing internally so that I can write more simple,
elegant code, then more power to it!  Concentrating ugliness in one
small, reusable place is a good thing.


> It is not graceful IMHO to concatenate a variable which you already
> know refers to a null string.

It's better than making my code bigger, uglier, and putting in extra
tests for no particularly good reason.


> Let's go back to the first point, and indeed further back to the use
> cases:

> (1) multi-byte separator for lines in text files: never heard of one
> apart from '\r\n'; presume this is rare, so test for length of 1 and
> use Chris's simplification of my effort in this case.

I want the ability to handle multibyte separators, and so I coded for
it.  There are plenty of other uses for an iterator that handles
multi-byte separators.  Not all of them would typically be considered
"newline-delimited lines" as opposed to "records delimited by a
separation string", but a rose by any other name...

If one wants to special-case single-byte separators in the name of
efficiency, I provided one back there in the thread that never
degrades to N^2, as the ones you and Chris provided do.


> (2) keep newline: with the standard file reading routines, if one is
> going to do anything much with the line other than write it out again,
> one does buffer = buffer.rstrip('\n') anyway. In the case of a
> non-standard separator, one is likely to want to write the line out
> with the standard '\n'. So, specialisation for this is indicated:

> ! if keepNewline:
> ! for line in lines: yield line + newline
> ! else:
> ! for line in lines: yield line

I would certainly never want the iterator to tack on a standard "\n"
as a replacement for whatever newline string the input used.  That
seems like completely gratuitous functionality to me.  The standard
(but not the only) reason that I want the line terminator left on the
yielded strings is so that you can tell whether or not there is a
line-separator terminating the very last line of the input.  Usually I
want the line-terminator discarded, and it kind of annoys me that the
standard line iterator leaves it on.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
Steven Bethard <[EMAIL PROTECTED]> writes:

> I'm guessing the * syntax is pretty unlikely to win Guido's
> approval. There have been a number of requests[1][2][3] for syntax
> like:

>  x, y, *rest = iterable

Oh, it is so wrong that Guido objects to the above.  Python needs
fully destructuring assignment!
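For the record, Guido did later relent: PEP 3132 added extended iterable unpacking in Python 3.0:

```python
# The starred name soaks up the remainder as a list:
x, y, *rest = range(5)
print(x, y, rest)            # → 0 1 [2, 3, 4]

# The star can also appear in the middle:
first, *middle, last = "abcde"
print(first, middle, last)   # → a ['b', 'c', 'd'] e
```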

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Douglas Alan
Isaac To <[EMAIL PROTECTED]> writes:

>> "Isaac" == Isaac To <[EMAIL PROTECTED]> writes:
>
> def gen_all(gen):
>     for e in gen:
>         yield e
>
> def foogen(arg1):
>     def foogen1(arg2):
>         # Some code here
>     # Some code here
>     gen_all(arg3)
>     ^ I mean foogen1(arg3), obviously, and similar for below
>     # Some code here
>     gen_all(arg4)
>     # Some code here
>     gen_all(arg5)
>     # Some code here
>     gen_all(arg6)
>
> Regards,
> Isaac.

If you actually try doing this, you will see why I want "yield_all".

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-02 Thread Douglas Alan
Nick Coghlan <[EMAIL PROTECTED]> writes:

> If you do write a PEP, try to get genexp syntax supported by the
> yield keyword.

> That is, the following currently triggers a syntax error:
>def f():
>  yield x for x in gen1(arg)

Wouldn't

   yield *(x for x in gen1(arg))

be sufficient, and would already be supported by the proposal at
hand?

Also, with the syntax you suggest, it's not automatically clear
whether you want to yield the generator created by the generator
expression or the values yielded by the expression.  The "*" makes
this much more explicit, if you ask me, without hindering readability.
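In the syntax Python 3.3 ultimately adopted (`yield from`, PEP 380), the starred form above reads like this (`gen1` here is a stand-in for the hypothetical generator in the thread):

```python
def gen1(n):
    # Stand-in for the generator being delegated to in the discussion.
    for i in range(n):
        yield i * i

def f(n):
    # Delegate to a generator expression, yielding its values one by one.
    yield from (x for x in gen1(n) if x % 2 == 0)

print(list(f(5)))  # → [0, 4, 16]
```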

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use empty string for self

2006-03-01 Thread Douglas Alan
Roy Smith <[EMAIL PROTECTED]> writes:

> Terry Hancock <[EMAIL PROTECTED]> wrote:

>> However, there is a slightly less onerous method which
>> is perfectly legit in present Python -- just use "s"
>> for "self":

> This is being different for the sake of being different.  Everybody *knows* 
> what self means.  If you write your code with s instead of self, it just 
> makes it that much harder for other people to understand it.

I always use "s" rather than "self".  Are the Python police going to
come and arrest me?  Have I committed the terrible crime of being
unPythonic?  (Or should that be un_pythonic?)

I rarely find code that follows clear coding conventions to be hard to
understand, as long as the coding convention is reasonable and
consistent.
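Python itself attaches no magic to the name: the first parameter of a method is an ordinary parameter, so `s` works exactly like `self` (whatever one thinks of the style):

```python
class Vector:
    def __init__(s, x, y):
        s.x = x
        s.y = y

    def dot(s, other):
        # 's' plays precisely the role 'self' would.
        return s.x * other.x + s.y * other.y

print(Vector(3, 4).dot(Vector(1, 2)))  # → 11
```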

Something that I do find difficult to understand, as a contrasting
example, is C++ code that doesn't prefix instance variables with "_"
or "m_" (or what have you), or access them via "this".  Without such a
cue, I have a hard time figuring out where such variables are coming
from.

Regarding why I use "s" rather than "self", I don't do this to be
different; I do it because I find "self" to be large enough that it is
distracting.  It's also a word, which demands to be read.  (Cognitive
psychologists have shown that when words are displayed to you, your
brain is compelled to read them, even if you don't want to.  I
experience this personally when I watch TV with my girlfriend, who is
hearing impaired.  The captions are very annoying to me, because
it's hard not to read them, even though I don't want to.  The same
thing is true of "self".)

With too many "self"s everywhere, my brain finds it harder to locate
the stuff I'm really interested in.  "s." is small enough that I can
ignore it, yet big enough to see when I need to know that information.
It's not a word, so my brain doesn't feel compelled to read it when I
don't want to, and it's shorter, so I can fit more useful code on a
line.  Breaking up some code onto multiple lines often makes it
significantly less readable.  (Just ask a typical mathematician, who
when shown notations that Computer Science people often use, laugh in
puzzlement at their verbosity.  Mathematicians probably could not do
what they do without having the more succinct notations that they
use.)

Don't take any of this to mean that succinctness is always better than
verbosity.  It quite often is not.  Brevity is good for things that you
do over and over and over again.  Just ask Python -- it often knows
this.  It's why there are no "begin" and "end" statements in Python.
It's why semicolons aren't required to separate statements that are on
different lines.  That stuff is extra text that serves little purpose
other than to clutter up the typical case.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Proper licensing and copyright attribution for extracted Python code

2007-06-14 Thread Douglas Alan
Hi.  I extracted getpath.c out of Python and modified it to make a
generally useful facility for C and C++ programming.  These comments
are at the top of my .c file, and I would like to know if they pass
muster for meeting licensing, copyright, and aesthetics requirements:

// -*- Mode: C; fill-column: 79 -*-

//===========================================================================
// Description:
//
//  pathToExecutable.c is a module that allows a Unix program to find the
//  location of its executable.  This capability is extremely useful for
//  writing programs that don't have to be recompiled in order to be relocated
//  within the filesystem.  Any auxiliary files (dynamically loaded
//  libraries, help files, configuration files, etc.) can just be placed in
//  the same directory as the executable, and the function
//  pathToExecutable() can be used by the program at runtime to locate its
//  executable file and from there the program can locate any auxiliary
//  files it needs in order to operate.
//
//  pathToExecutable() is smart enough to follow a symlink (or even a chain
//  of symlinks) in order to find the true location of the executable.  In
//  this manner, for instance, you might install all of the files used by a
//  program (let's say it's called "my-program"), including the executable,
//  into the directory /usr/local/lib/my-program, and then put a symlink
//  into /usr/local/bin that points to the executable
//  /usr/local/lib/my-program/my-program.  Initially pathToExecutable()
//  will identify /usr/local/bin/my-program as the executable, but it will
//  then notice that this "file" is really a symbolic link.
//  pathToExecutable() will then follow the symbolic link and return
//  "/usr/local/lib/my-program/my-program" instead.
//
//  Before a program can call pathToExecutable(), setArgv() must be called
//  (canonically in main()) so that pathToExecutable() can fetch the value
//  of argv[0] and use it to help figure out where the executable is
//  located.
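As a rough illustration of the behavior described above, here is a hypothetical Python re-sketch (the real module is C; the function name, PATH-searching details, and symlink handling here are my own approximation):

```python
import os
import sys

def path_to_executable(argv0):
    """Hypothetical Python approximation of the pathToExecutable()
    behavior described above: locate the program named by argv[0]
    (searching PATH when it contains no slash), then resolve any
    chain of symlinks to the executable's true location."""
    if os.sep in argv0:
        candidate = argv0
    else:
        # No slash: search PATH the way the shell would have.
        for d in os.environ.get("PATH", "").split(os.pathsep):
            p = os.path.join(d, argv0)
            if os.path.isfile(p) and os.access(p, os.X_OK):
                candidate = p
                break
        else:
            return None
    # realpath() follows an arbitrarily long chain of symlinks.
    return os.path.realpath(candidate)

print(path_to_executable(sys.executable))
```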
//
// Copyright and licensing information:
//
//  This software is a heavily modified version of getpath.c from the
//  Python 2.5.1 release.  Both this software and the original software
//  from which it is derived are freely distributable under the terms of
//  the permissive freeware license, Python Software Foundation License
//  Version 2.  You can read more about this license here:
//
//   http://www.python.org/psf/license
//
//  The original software from which this software is derived carries the
//  following copyright:
//
// Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007 Python
// Software Foundation.
//
//  The modifications to the original software, which are contained herein,
//  are
//
// Copyright (c) 2007 Douglas Alan 
//
//===========================================================================

Thanks,
|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-15 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> Try suggesting on a Lisp or Scheme group that having only one type
> of syntax (prefix expressions) lacks something and that they should
> add variety in the form of statement syntax ;-) Hint: some Lispers
> have bragged here about the simplicity of 'one way to do it' and put
> Python down for its mixed syntax.  (Of course, this does not mean
> that some dialects have not sneaked in lists of statements thru a
> back door ;-).

Almost all Lisp dialects have an extremely powerful macro mechanism
that lets users and communities extend the syntax of the language in
very general ways.  Consequently, dialects such as Scheme try to keep
the core language as simple as possible.  Additional ways of doing
things can be loaded in as a library module.

So, a language such as Scheme may have no *obvious* way of something,
and yet may provide excellent means to extend the language so that
many obvious ways might be provided.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-15 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> My only point was that Sussman is an odd person to be criticizing
> (somewhat mistakingly) Python for being minimalist.

I think that being a language minimalist is very different from
believing that there should be exactly one obvious way to do
everything.

For instance, I believe that Python is now too big, and that much of
what is in the language itself should be replaced with more general
Scheme-like features.  Then a good macro mechanism should be
implemented so that all the conveniences features of the language can
be implemented via macro definitions in the standard library.

Macros, however, are typically claimed in these parts to violate the
"only one way" manifesto.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-15 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> Here's the situation.  Python is making inroads at MIT, Scheme home turf. 
> The co-developer of Scheme, while writing about some other subject, tosses 
> in an off-the-wall slam against Python.  Someone asks what we here think. 
> I think that the comment is a crock and the slam better directed, for 
> instance, at Scheme itself.  Hence 'he should look in a mirror'.

You are ignoring the fact that Scheme has a powerful syntax extension
mechanism (i.e., hygienic macros), which means that anyone in the world
can basically extend Scheme to include practically any language
feature they might like it to have.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-15 Thread Douglas Alan
Kay Schluehr <[EMAIL PROTECTED]> writes:

> On 15 Jun., 22:58, Douglas Alan <[EMAIL PROTECTED]> wrote:

>> For instance, I believe that Python is now too big, and that much
>> of what is in the language itself should be replaced with more
>> general Scheme-like features.  Then a good macro mechanism should
>> be implemented so that all the conveniences features of the
>> language can be implemented via macro definitions in the standard
>> library.

> And why sould anyone reimplement the whole standard library using
> macro reductions? Because this is the "one obvious way to do it" for
> people who are addicted to Scheme?

(1) By, "should be replaced", I meant in an ideal world.  I'm not
proposing that this be done in the real world anytime soon.

(2) I didn't suggest that a single line of the standard library be
changed.  What would need to be changed is the core Python language,
not the standard library.  If this idea were implemented, the core
language could be made smaller, and the features that were thereby
removed from the language core could be moved into the standard
library instead.

(3) My reasons for wanting this have nothing to do with being
"addicted to Scheme", which I almost never use.  It has to do more
with my language design and implementation aesthetics, and my desire
for a syntax extension mechanism so that I can add my own language
features to Python without having to hack on the CPython source code.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-15 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> On Fri, 15 Jun 2007 17:05:27 -0400, Douglas Alan wrote:

>> You are ignoring the fact that Scheme has a powerful syntax extension
>> mechanism (i.e., hygienic macros), which means that anyone in the world
>> can basically extend Scheme to include practically any language
>> feature they might like it to have.

> You say that like it is a good thing.

À chacun son goût ("to each his own").

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-15 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> > You are ignoring the fact that

> This prefatory clause is false and as such it turns what was a true 
> statement into one that is not.  Better to leave off such ad hominisms and 
> stick with the bare true statement.

You went on about how Gerry Sussman's opinion is a crock and how he
should look in the mirror, and then you get bent out of shape over the
phrase, "you are ignoring"???  For the record, "you are ignoring" is
not an ad hominem; "anyone who doesn't know how to spell 'ad hominem'
has the intelligence of a mealworm" is an ad hominem.

> > Scheme has a powerful syntax extension mechanism

> I did not and do not see this as relevant to the main points of my
> summary above.  Python has powerful extension mechanisms too, but
> comparing the two languages on this basis is a whole other topic.

How do you know that Prof. Sussman doesn't consider the macro issue to
be essential?  Certainly other Lisp aficionados do, as does, I believe
Guy Steele, the other inventor of Scheme.

It appears to me that you are missing the point that having a
minimalist disposition towards programming language design does not
preclude believing that such languages should have features that are
missing from Python.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-16 Thread Douglas Alan
Dennis Lee Bieber <[EMAIL PROTECTED]> writes:

>   Macros? Unfortunately to my world, macros are those things
> found in C, high-powered assemblers, and pre-VBA Office. As such,
> they do anything but keep a language small, and one encounters
> multiple implementations of similar functionality -- each
> implementation the pride of one person, and abhorred by the person
> who now must edit the code.

Comparing C macros to Lisp macros is like comparing a Sawzall to a
scalpel.

Regarding having enough rope to hang yourself, the same claim can be
made about any language abstraction mechanism.  E.g., classes are to
data types as macros are to syntax.  You can abuse classes and you can
abuse macros.  Both abuses will lead to abhorring by your
collaborators. And either abstraction mechanism, when used properly,
will result in more elegant, easier to read and maintain code.

Both abstraction mechanisms also allow language feature exploration to
occur outside of the privileged few who are allowed to commit changes
into the code-base for the interpreter or compiler.  I think that this
is one of the things that Gerry Sussman might be getting at when he
talks about how science works.  (Or at least that's what I would be
getting at if I were in his shoes.)

In the Lisp community, for instance, there have been lots of language
features that were implemented by people who were not part of the core
language development clique, but that were later widely adopted.
E.g., the Common Lisp OO system with multimethods (CLOS), and the
"loop" macro.  These features didn't require any changes to the core
language implementation, and so you can see that Lisp-style macros are
also a huge modularity boon for language implementation.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-18 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> |>oug writes:

>> Scheme has a powerful syntax extension mechanism

> I did not and do not see this as relevant to the main points of my
> summary above.  Python has powerful extension mechanisms too, but
> comparing the two languages on this basis is a whole other topic.

Please note that Guy Steele in his abstract for "Rabbit: A Compiler
for SCHEME", specifically mentions that Scheme is designed to be a
minimal language in which, "All of the traditional imperative
constructs [...] as well as many standard LISP constructs [...] are
expressed in macros in terms of the applicative basis set. [...] The
macro approach enables speedy implementation of new constructs as
desired without sacrificing efficiency in the generated code."

   http://library.readscheme.org/servlets/cite.ss?pattern=Ste-78b

Do you now see how Scheme's syntax extension mechanism is relevant?

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-19 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> The main point of my original post was that the quoted slam at Python was 
> based on a misquote of Tim Peters

But it wasn't based on a "misquote of Tim Peters"; it was based on an
*exact* quotation of Tim Peters.

> and a mischaracterization of Python

I find Sussman's criticism not to be a mischaracterization at all: I
and others have previous mentioned in this very forum our desire to
have (in an ideal world) a good syntax extension facility for Python,
and when we've done so, we've been thoroughly pounced upon by
prominent members of the Python community as just not understanding
the true "Weltanschauung" of Python.  This despite the fact that I
have been happily and productively programming in Python for more than
a decade, and the fact that Guido himself has at times mentioned that
he's been idly considering the idea of a syntax extension facility.

The reason given for why macros wouldn't gel with Python's
Weltanschauung has typically been the "only one obvious way" koan, or
some variant of it.

> and that it was out-of-place in the quoted discussion of physics
> methods and that it added nothing to that discussion and should
> better have been omitted.  *All of this has nothing to do with
> Scheme.*

I'm not sure what you're getting at.  Gerry Sussman has a philosophy
of language design that is different from Python's (at least as it is
commonly expressed around here), and he was using an analogy to help
illuminate what his differences are.  His analogy is completely clear
to me, and, I in fact agree with it.  I love Python, but I think the
"only one obvious way" philosophy may do more harm than good.  It is
certainly used, in my experience, at times, to attempt to squelch
intelligent debate.

> At the end, I added as a *side note* the irony that the purported author 
> was the co-developer of Scheme, another 'minimalist algorithm
> language' 

Sussman's statements are not ironic because Scheme is a language that
is designed to be extended by the end-user (even syntactically), while
keeping the core language minimal.  This is a rather different design
philosophy from that of Python.

> (Wikipedia's characterization) with more uniform syntax than Python and 
> like Python, also with one preferred way to scan sequences (based on my 
> memory of Scheme use in the original SICP, co-authored by the same 
> purported quote author, and also confirmed by Wikipedia).

There is no one preferred way to scan sequences in Scheme.  In fact,
if you were to take SICP at MIT, as I did when I was a freshman, you
would find that many of the problem sets would require you to solve a
problem in several different ways, so you would learn that there are
typically a number of different reasonable ways to approach a problem.
E.g., one of the first problem sets would have you implement something
both iteratively and recursively.  I recall another problem set where
we had to find the way out of a maze first using a depth-first search
and then using a breadth-first search.
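
That flavor translates straight into Python, for what it's worth.  As
a purely illustrative sketch (not an actual SICP exercise), here is
factorial written as both a recursive and an iterative process:

```python
def factorial_recursive(n):
    # Recursive process: the multiplication is deferred until the
    # recursion bottoms out at the base case.
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Iterative process: the running product is carried along in an
    # accumulator, so no deferred work piles up.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Both compute the same function; the point of the exercise is that they
describe different *processes*.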

> | [Steele quote deleted]
> | Do you now see how Scheme's syntax extension mechanism is relevant?

> No.  This just partly explains why Scheme gets away with being
> minimalist.  I explicitly referred to the core language as delivered
> and as used in SICP.

I suggest that you haven't yet grokked the Weltanschauung of Scheme.
Scheme aficionados would not typically insist that a proposed language
feature is not good because it violates anything like an "only one
obvious way" rule.  Rather they would argue that if it can be
implemented as functions and/or macros, then it *should* be implemented
that way, rather than polluting the core language.  The new facility
should then be included in a library.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-19 Thread Douglas Alan
Neil Cerutti <[EMAIL PROTECTED]> writes:

> |>oug writes:

>> Sussman's statements are not ironic because Scheme is a
>> language that is designed to be extended by the end-user (even
>> syntactically), while keeping the core language minimal.  This
>> is a rather different design philosophy from that of Python.

> Which version Scheme, though? Scheme has only formally had macros
> since R4RS, and then only as an extension. Macros are an extension
> to Scheme, rather than a founder.

Macros were only *standardized* in Scheme with R4RS.  This is because
they wanted to figure out the "right" way to do macros before putting
it in stone.  (Common Lisp-like non-hygienic macros were considered
inelegant.)  All the major implementations of Scheme that I know of
implemented some form of powerful macro mechanism.  N.b., Rabbit,
which was Guy Steele's implementation of Scheme, was completed long,
long before the R4RS standard.  (Guy Steele was one of the two
inventors of Scheme.)  And, as far as I am aware, the plan was always
to eventually come up with a macro mechanism that was as elegant as
the rest of Scheme.  The problem with this approach was that achieving
this daunting goal turned out to take quite a while.

> Python could conceivably end up in the same position 15 years
> from now, with macros a well-established late-comer, as
> generators have become.

That would be very cool.  The feeling I get, however, is that there
would be too much complaining from the Python community about how such
a thing would be "un-Pythonic".

> The SRFIs are cool.

> The last time I dipped my toe into the Scheme newsgroup, I was
> overwhelmed by the many impractical discussions of Scheme's dark
> corners. Python is either much more free of dark corners, or else
> simply doesn't attract that kind of aficionado.

I don't really think that Scheme itself has many dark corners -- it's
just that being basically a pristine implementation of lambda
calculus, Scheme lets you directly explore some pretty mind-bending
stuff.  I would agree that most of that kind of stuff is not
particularly practical, but it can be fun in a hackerly,
brain-expanding/brain-teaser kind of way.

I think that most people who program in Scheme these days don't do it
to write practical software.  They either do it to have fun, or for
academic purposes.  On the other hand, most people who program in
Python are trying to get real work done.  Which is precisely why I
program a lot in Python and very little in Scheme these days.  It's
nice to have the batteries included.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-19 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> Nonetheless, picking on and characterizing Tim's statement as
> anti-flexibility and un-scientific is to me writing of a sort that I
> would not tolerate from my middle-school child.

Now it is you who are taking Sussman's comments out of context.
Sussman does not claim that Python is "un-scientific" -- he merely
holds it up as a example of canonical engineering that eschews the
kinds of flexibility that is to be found in biological systems and in
the practice of science.  In this regard, Sussman is merely
criticizing engineering "best practices" in general, not Python in
specific, and is arguing that if we want to engineer systems that are
as robust as biological systems then we need to start exploring
radically different approaches to software design.

> Python is an algorithm language and a tool used to engineering
> information systems, which is something different.  The next
> sections are about exploratory behavior.  Languages do not 'behave',
> let alone 'explore'.

I think you are missing the point.  Sussman is making a broad
criticism of software engineering in general, as it is understood
today.  The essay in questions was written for a graduate-level MIT
computer science class that aims to explore potential avenues of
research into new languages and approaches that encourage and
facilitate more robust software systems.  As good as Python is, it is
still largely an encapsulation of the best ideas about software
engineering as it was understood in the early 80's.  We're now 20+
years on, and it behooves our future Computer Science researchers to
consider if we might not be able to do better than we could in 1984.

> So Python seems to have the sort of flexibility that he implicitly
> claims it does not.

Python most certainly does *not* have the type of flexibility that he
is talking about.  For instance, one of the things that he talks about
exploring for more robust software systems is predicate dispatching,
which is an extension of multiple dispatch.  Although you might be
able to cobble something like this together in Python, it would end up
being very cumbersome to use.  (E.g., Guido wrote an essay on doing
multiple dispatch in Python, but you wouldn't actually want to write
Python code that way, because it would be too syntactically
cumbersome.)  In dialects of Lisp (such as Scheme), however,
subsystems to explore such alternative programming models can be
written completely within the language (due, in part to their syntax
extension facilities).  This is how the Common Lisp Object System came
to be, for instance.  CLOS supports all sorts of OO stuff that even
Python doesn't, and yet Lisp without CLOS isn't even an OO language.
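
For concreteness, here is a rough reconstruction (my own names, not
Guido's actual code) of the kind of type-based multiple dispatch his
essay sketches -- workable, but you can see why it gets syntactically
cumbersome once predicates enter the picture:

```python
registry = {}

class MultiMethod:
    """Dispatch a call on the types of *all* of its arguments."""
    def __init__(self, name):
        self.name = name
        self.typemap = {}

    def register(self, types, func):
        self.typemap[types] = func

    def __call__(self, *args):
        impl = self.typemap.get(tuple(type(arg) for arg in args))
        if impl is None:
            raise TypeError("no match for %s" % self.name)
        return impl(*args)

def multimethod(*types):
    """Decorator: register one implementation of a multimethod."""
    def decorator(func):
        mm = registry.setdefault(func.__name__, MultiMethod(func.__name__))
        mm.register(types, func)
        return mm  # all same-named definitions share one dispatcher
    return decorator

@multimethod(int, int)
def combine(a, b):
    return a + b

@multimethod(str, str)
def combine(a, b):
    return a + " & " + b
```

Here combine(1, 2) runs the int/int branch and combine("spam", "eggs")
the str/str one; inheritance, ambiguity resolution, and predicates
would all take considerably more machinery.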

> The general problems of software inflexibility that he mentioned in
> a previous section have nothing specific to do with Python.

Right.  And he never said they did.

> When he gets to solutions, one long section (page 13) somewhat
> specific to languages, versus applications thereof, is about
> extensible generic operations "where it is possible to define what
> is meant by addition, multiplication, etc., for new datatypes
> unimagined by the language designer."  Well, golly gee.  Guess what?
> Not only is Python code generic unless specialized (with isinstance
> tests, for instance), but it is highly extensible for new datatypes,
> just as Sussman advocates.  There is a special method for just about
> every syntactic construct and builtin function.  And 3.0 may add a
> new generic function module to dispatch on multiple arguments and
> possibly predicates.

You didn't read the paper very carefully.  Sussman points out that
traditional OO languages can do this sort of thing to some extent,
but not to the extent that he thinks is required to solve future
challenges.  He thinks, for instance, that predicate dispatching,
backtracking, and first-class continuations will be required.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-19 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> On Tue, 19 Jun 2007 17:46:35 -0400, Douglas Alan wrote:

>> I think that most people who program in Scheme these days don't do it
>> to write practical software.  They either do it to have fun, or for
>> academic purposes.  On the other hand, most people who program in
>> Python are trying to get real work done.  Which is precisely why I
>> program a lot in Python and very little in Scheme these days.  It's
>> nice to have the batteries included.

> So, once you've succeeded in your campaign to make Python more like
> Scheme, what language will you use for getting real work done?

The problem with using Scheme for real work is that it doesn't come
with enough batteries included and there isn't a big enough
community behind it that uses it for real work.

Also, the Scheme standard has progressed at a terribly slow pace.  I
have heard that the reason for this is due to the way that its
standardizing committees were set up.

One of the whole reasons to use Lisp is for its extensible syntax, but
it took more than a decade for macros to make it into the Scheme
standard.  And without a standard macro system, there was no standard
library -- not even for doing OO programming.

> And how long will it take before Schemers start agitating for it to
> become more like Scheme?

> There is a huge gulf between the claim that Python needs to be more
> Scheme-like, and the fact that by your own admission you use Python,
> not Scheme, for real work. What benefit will be gained? The ability
> to "directly explore some pretty mind-bending stuff ... in a
> hackerly, brain-expanding/brain-teaser kind of way"?

Well, go to MIT and take SICP and then the graduate-level sequel to
the class, Adventures in Advanced Symbolic Programming, and then
you'll see what some of the advantages would be.

A good multimethod system, e.g., would make Python a significantly
nicer language for my purposes, for instance.

For the record, I have a huge problem with NIH-syndrome, and think
that every programming language in the world could learn a thing or
two from what other languages have gotten right.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-20 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> | I think you are missing the point. Sussman is making a broad
> | criticism of software engineering in general, as it is understood
> | today.

> On the contrary, I understood exactly that and said so.  *My* point
> is that in doing so, he made one jab at one specific language in a
> herky-jerky footnote (a sentence on systems design philosophy, one on
> Python, one on physics) that I consider to be at least misleading.
> And so I feel the essay would be better without that wart.

The footnote is only misleading if you read it out of the context of
the essay.  And Sussman clearly only picked on Python, in specific,
because Python is the only language for which this facet of current
engineering "best practices" has actually been eloquently
committed to paper.  There is nothing in the essay to suggest that
Python is any worse than any other popular production programming
language with respect to the issues that Sussman is addressing.

> | > So Python seems to have the sort of flexibility that he implicitly
> | > claims it does not.

> | For instance, one of the things that he talks about exploring for
> | more robust software systems is predicate dispatching, which is an
> | extension of multiple dispatch.

> Which I obviously read and responded to by noting "And 3.0 may add a new 
> generic function module to dispatch on multiple arguments and possibly 
> predicates."

So, that's great.  Python will once again adopt a wonderful feature
that has been usable in Lisp implementations for 20 years now.  (This
is a good thing, not a bad thing.  I just don't like so much the
having to wait 20 years.)  The problem with Python's model is that you
have to wait for a rather centralized process to agree on and
implement such a feature.  In the Lisp community, *anyone* can easily
implement such features in order to experiment with them.  This allows
much more experimentation to take place, which is one of the reasons
why Gerry Sussman brought the scientific community into the
discussion.  The ability for people to readily conduct such
experiments benefits me, even if I personally never want to write a
macro or implement a language feature myself.

Related to this is the fact that it's much harder to be able to get
new language features right, unless you can get them wrong a few times
first.  It's no accident that when Python adds features from Lisp and
Haskell, etc., that it does a pretty decent job with them, and that's
because it was able to learn from the mistakes and triumphs of others
who tried out various good and not-so-good language features in other
languages first.

The downside of Python's approach is that it makes Python not such a
very good language for exploring these potentially useful features
within Python in the first place.  The Python community has to watch
what is going on in other programming language communities and then
raid these communities for their good ideas.

Perhaps this is not a bad thing if people just want Python to be a
programming language for getting real work done.  But, personally, I
would prefer to have a language that works well for *both* production
work *and* for exploration.  At the moment, neither Python nor Scheme
fits my bill, which is why I would like to see a programming language
that combines the best of both worlds.

> | Although you might be able to cobble something like this together
> | in Python, it would end up being very cumbersome to use.

> Well, talk about shooting down innovations before they happen.
> Perhaps you could wait and see what Eby and others come up with in
> the next year.

I was referring to implementing these sorts of features within Python
(i.e., by writing Python modules in Python to support them) -- not
waiting for Python 3000, which may or may not have the features that I
want.

> [from your other post]
> | A good multimethod system, e.g., would make Python a significantly
> | nicer language for my purposes, for instance.

> Or better yet, read http://www.python.org/dev/peps/pep-3124/
> (tentatively approved by Guido, pending actual code) and perhaps
> some of the discussion thereof on the Python-3000 dev list and help
> design and code something that would be at least usable if not
> 'good'.

Thanks for the pointer -- I will eagerly check it out.

> | You didn't read the paper very carefully.

> Because I don't agree with you?

No, because he clearly is not criticizing Python, in specific, and the
mention of Python is due *solely* to the fact that Python just happens
to mention, in one of its manifestos, a principle that would be
generally held to be true as a current best engineering practice.  I
feel that a careful reading of the paper would have made this
apparent.

Also, I think it a bit counterproductive to get all up in arms about
such minor jabs.  Python is strong enough to withstand all sorts of
valid criticism, as is Scheme.  Even things that are excellent can be
made better, and also t

Re: Python's "only one way to do it" philosophy isn't good?

2007-06-20 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> On Tue, 19 Jun 2007 20:16:28 -0400, Douglas Alan wrote:

>> Steven D'Aprano <[EMAIL PROTECTED]> writes:

>>> On Tue, 19 Jun 2007 17:46:35 -0400, Douglas Alan wrote:

>> The problem with using Scheme for real work is that it doesn't come
>> with enough batteries included and there isn't a big enough of a
>> community behind it that uses it for real work.

> And yet there have been so many millions of dollars put into
> developing Lisp...

> I guess this is another example of perfection being the enemy of the
> good.

That's probably a valid criticism of Scheme.  Not so much of Lisp in
general.  The reason that Common Lisp hasn't been widely adopted
outside of the AI community, for instance, has more to do with most
programmers apparently not understanding the joys of Cambridge Polish
Notation.  Also, Lisp was garbage-collected back when
garbage-collection had a bad name, due to it being considered a
resource hog.  And then a large part of the Lisp community decided to
concentrate on building special hardware (i.e. Lisp machines) for
developing Lisp applications, rather than making good Lisp development
environments for Fortran/C machines (i.e., all normal computers).

It was clear to me at the time that it's a very hard sell to convince
people to buy expensive hardware to develop in a programming language
that hasn't yet been popularized.  But apparently what was obvious to
me was not so apparent to those who dreamed of IPO riches.

> All that development into Lisp/Scheme to make it the best, purest,
> most ideal programming language, with such flexibility and
> extensibility... that nobody wants to use it. You can write any
> library and macro system you need, but nobody has.

Lisp in general has had all sorts of fancy macro packages for it since
the dawn of time.  But Lisp wasn't really standardized until Common
Lisp in the early '80s.  CLOS (the Common Lisp Object System), which
is implemented via macros entirely within Common Lisp itself, was
completed in the latter half of the '80s.

> I don't mean literally nobody, of course. Its a figure of
> speech. But it seems that people tend to program in Scheme for fun,
> or to stretch the boundaries of what's possible, and not to Get The
> Job Done.

Well, most implementations don't have batteries included, the way that
Python does.  Guile attempts to, as does the Scheme Shell, but for
various reasons they didn't catch on the way that Python and Perl and
Ruby have.

>> Well, go to MIT and take SICP and then the graduate-level sequel to
>> the class, Adventures in Advanced Symbolic Programming, and then
>> you'll see what some of the advantages would be.

> Are you suggesting that the only way to see the advantages of Scheme
> is to do a university course?

No, I was just suggesting *one* way of seeing the light.  I think,
though, that it's probably difficult to grasp why Scheme is so
interesting without taking a good class or two on the topic.  Not that
acquiring insight on one's own is impossible, but most people would
also have a hard time seeing why group theory or linear algebra are
really interesting without taking good classes on the subjects.

Python doesn't really require taking a class in it because it's really
very similar to many other programming languages.  So if you already
know any of the others, Python is very easy to pick up.  Scheme, on
the other hand, is more of a departure from what most people would
already be comfortable with.

>> A good multimethod system, e.g., would make Python a significantly nicer
>> language for my purposes, for instance.

> http://en.wikipedia.org/wiki/Multimethod#Python

Sure, you can do it in Python, but I bet that it's neither very fun
nor efficient.

>> For the record, I have a huge problem with NIH-syndrome, and think that
>> every programming language in the world could learn a thing or two from
>> what other languages have gotten right.

> Of course. And Python, more than most, has shamelessly copied
> features from other languages.

To make myself a bit more clear, I don't think that Python suffers all
that much from NIH.  On the other hand, I think that many people in
discussion forums in general often do.  Here being no exception.

> So the question is, are Scheme macros one of those things that
> "other languages have gotten right"? Could they be a case of
> over-generalization? Or somewhere in between?

I don't know of any language that is not a dialect of Lisp that has a
good syntax extension mechanism.  Lisp dialects tend to get it right,
more or less, but solving the issue is much more difficult for any
language that doesn't have a Lisp-like syntax.  Lisp's syntax is
particularly amenable to syntax extension.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-20 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> All of which makes Douglas Alan's accusations of Not Invented Here 
> syndrome about Python seem rather silly.

I've never made such an accusation about Python itself -- just about
the apparent attitude of some pontiffs.

> The point I was making isn't that Scheme/Lisp features are "bad",
> but that there is no reason to slavishly follow Scheme just because
> it is(?)  technically the most "pure" programming language.

I don't argue in favor of purity.  I argue in favor of functionality
that would help me write better programs more easily.

Having a somewhat pure and minimalistic core language, however,
probably does help to make a language easier to implement, maintain,
and understand.

> I'm glad somebody understands lambda calculus and closures and meta-
> classes, and that those people have created Python so I don't have
> to.  And I suspect that for every Douglas Alan enamored with Scheme,
> there are ten thousand programmers who just want to use a handful of
> pre-built tools to get the work done, never mind using macros to
> create the tools they need before they can even start.

I don't typically want to write that many macros myself.  I want to be
part of a community where cool macro packages are actively developed
that I can then use.  For instance, in Common Lisp, the entire OO
system is just a big macro package that was implemented entirely
within the language.

With macros and first class continuations, as Sussman points out in
his essay, people can then implement very interesting features like
backtracking.  Again, I don't want to implement a backtracking macro
package myself; I want to be able to use what the community might come
up with.
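
To give a feel for what such a package buys you: even without macros
or continuations, the *flavor* of nondeterministic backtracking search
can be faked in Python with generators.  The toy sketch below (all
names and the puzzle itself are just illustrative) enumerates choices
lazily and "backtracks" simply by advancing the generators:

```python
from itertools import product

def amb(*choice_sets):
    # A crude stand-in for Scheme's nondeterministic `amb` operator:
    # lazily yield every combination of choices.  The consumer rejects
    # combinations that violate its constraints; pulling the next
    # value is the backtracking step.
    return product(*choice_sets)

def puzzle():
    # Baker, Cooper, and Fletcher live on floors 1-3, on different
    # floors, and Baker does not live on the top floor.
    floors = range(1, 4)
    for baker, cooper, fletcher in amb(floors, floors, floors):
        if len({baker, cooper, fletcher}) == 3 and baker != 3:
            yield baker, cooper, fletcher
```

A real continuation-based amb interleaves the constraints with the
choices so that failures prune the search early; the generator version
only mimics the interface.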

> But "Scheme has macros" isn't a justification for why Python should
> have them.

No one ever gave that justification.  The justification is that they
are *good*.

Macros are a way to abstract syntax, just as objects are used to
abstract data types and iterators and generators are used to abstract
control, etc.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-20 Thread Douglas Alan
Michele Simionato <[EMAIL PROTECTED]> writes:

> In practice Scheme follows exactly the opposite route: there are
> dozens of different and redundant object systems, module systems,
> even record systems, built just by piling up feature over feature.

The solution to this is to have a standard library which picks the
best of each and standardizes on them.  (E.g., for Common Lisp, CLOS
became the standard object system, but there was certainly competition
for a while.  E.g., Flavors, CommonLoops, etc.) The problem with this
for Scheme is that the Scheme standardizing committees operate at a
glacial pace.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-20 Thread Douglas Alan
Robert Kern <[EMAIL PROTECTED]> writes:

>> The problem with Python's model is that you
>> have to wait for a rather centralized process to agree on and
>> implement such a feature.

> No, you don't. Philip Eby has been working on various incarnations
> of generic functions for some time now. The only thing new with 3.0
> is that they may be included in the standard library and parts of
> the rest of the standard library may use them to implement their
> features. Implementing generic functions themselves don't require
> anyone to convince python-dev of anything.

>   http://python.org/pypi/simplegeneric/0.6
>   http://peak.telecommunity.com/DevCenter/RulesReadme

The first one doesn't do multiple dispatch.  I'll have to have a look
at the second one.  It looks interesting.  This link

   http://www.ibm.com/developerworks/library/l-cppeak2/

shows PEAK being used to do multiple dispatch based on predicates, but
the code to implement the predicates is in strings!  Sure, if you
don't have macros, you can always use eval instead if you have it, but
(1) doing so is ugly and dangerous, and (2) it's inefficient.  Lisp
implementations these days do fancy stuff (multiple dispatch, etc.)
implemented as macros and yet typically end up generating code that
runs within a factor of 2 of the speed of C code.

In addition to having to code predicates in strings, using PEAK seems
syntactically rather cumbersome.  Macros would help solve this
problem.  Decorators seem to help to a large extent, but they don't
help as much as macros would.
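
To make the complaint concrete: staying within pure Python, one would
at least want the predicates to be ordinary callables rather than
strings.  A hypothetical predicate-dispatching generic function (this
is *not* PEAK's actual API) might look like:

```python
class PredicateGeneric:
    # Generic function that dispatches on arbitrary predicates, tried
    # in registration order.  Predicates are plain callables, so no
    # eval() of strings is needed.
    def __init__(self):
        self.cases = []

    def when(self, predicate):
        def decorator(func):
            self.cases.append((predicate, func))
            return func
        return decorator

    def __call__(self, *args):
        for predicate, func in self.cases:
            if predicate(*args):
                return func(*args)
        raise TypeError("no applicable method")

describe = PredicateGeneric()

@describe.when(lambda n: n % 2 == 0)
def _(n):
    return "even"

@describe.when(lambda n: True)  # catch-all case
def _(n):
    return "odd"
```

Even this cleaner version illustrates the syntactic overhead: with
macros, the decorator-and-lambda scaffolding could be folded into the
surface syntax itself.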

Thanks for the pointers.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-21 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> On Wed, 20 Jun 2007 17:23:42 -0400, Douglas Alan wrote:

>> Macros are a way to abstract syntax the way that objects are used to
>> abstract data types and that iterators and generators abstract control,
>> etc.

> But why is the ability to abstract syntax good?

It allows the community to develop language features in a modular way
without having to sully the code base for the language itself.  A
prime example of this is how CLOS, the Common Lisp Object System was
implemented completely as a loadable library (with the help of many
macros) into Common Lisp, which was not an OO language prior to the
adoption of CLOS.

The fact that CLOS could be developed in a modular way allowed for a
number of different groups to work on competing object systems.  After
some experience with the pros and cons of the various object systems,
the developers of CLOS were able to incorporate most of the best ideas
from the entire field and then get it adopted as a defacto standard.

This allowed, for instance, the inclusion of multimethods, which are
an extremely nice feature for modular code development.  In prior Lisp
dialects I had used, the object systems were more like the
single-object dispatching OO system in Python, which is substantially
inferior.  The fact that the entire OO system in Common Lisp could be
loaded as a module that is coded entirely within Common Lisp allowed
for a large jump in the quality of its OO subsystem.

> One criticism of operator overloading is that when you see X + Y you
> have no real idea of whether it is adding X and Y together, or doing
> something bizarre.

Yes, and despite this, operator overloading is an essential feature
for a modern language.  (Java's lack of it notwithstanding.)

> Now allow syntax to be over-ridden as well, and not only can't you tell 
> what X + Y does, but you can't even tell what it *means*. Maybe its a for-
> loop, calling the function Y X times.

(1) With operator overloading you have no idea what X + Y *means*.  It
could be feeding the cat, for all you know.  If that turns out to
be the case, you fire the programmer in question.  Just because a
language feature *can* be abused is no reason to leave it out of a
language.  Power always comes with responsibility, but we still
need powerful programming languages.

(2) In Lisp, you cannot redefine existing syntax (without modifying
the standard library, which would be considered very rude), so the
problem that you are talking about is moot.  You can only add
*new* syntactic constructs.  I would suggest that any proposed
syntax extension mechanisms for other languages behave like Lisp
in this regard.

> Sometimes, more freedom is not better. If you had a language that
> let you redefine the literal 1 to mean the integer zero, wouldn't
> that make it much harder to understand what even basic arithmetic
> meant?

But we're not talking about anything like this.  E.g., in some
dialects of Lisp it used to be possible to set the variable that
contained the value for *true* to the value for *false*.  If you
actually did this, however, just imagine the havoc that it would
wreak.  So ultimately, this capability was removed.

Alas, in Python, you can still do such a crazy thing!

> But that doesn't mean I want a language where anything goes

You are imagining something very different from what is proposed.
Lisp-like macros don't allow "anything goes".

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-21 Thread Douglas Alan
Neil Cerutti <[EMAIL PROTECTED]> writes:

>>> But why is the ability to abstract syntax good?

>> It allows the community to develop language features in a
>> modular way without having to sully the code base for the
>> language itself.  

> That's not an advantage exclusive to macros, though.

No, but macros are often are necessary to be able to implement such
features in (1) an efficient-enough manner, and (2) in a manner that
is syntactically palatable.  E.g., PEAK for Python implements multiple
predicate-based dispatch, but you have to define the predicates as
Python code within strings.  That's not very pretty.  And probably not
very fast either.  Though Python, in general, is not very fast, so
perhaps that doesn't matter too much for Python.

> Some time last week I found myself writing the following thing in
> Python:

> [...]

> I deleted it right after I tried to use it the first time. Using it
> is more cumbersome than simply repeating myself, due to syntax
> limitations of Python.

See what I mean!

> And other, more bizarre syntax extensions have been perpetrated.
> mx.TextTools uses Python tuples to write a completely different
> programming language.

Sounds like "the Loop macro" for Lisp, which implements a mini sort of
Cobol-like language just for coding gnarly loops within Lisp.  It
turns out that when restricted to just coding gnarly loops, this is a
much better idea than it sounds.

Yes, you can do this sort of thing, sort of, without macros, but, as
we discussed above, the result is often ugly and slow.

>> A prime example of this is how CLOS, the Common Lisp Object
>> System was implemented completely as a loadable library (with
>> the help of many macros) into Common Lisp, which was not an OO
>> language prior to the adoption of CLOS.

> Is there a second example? ;)

Why yes, now that you mention it: the Loop macro.  Also, in many
implementations of Lisp, much of the core language is actually
implemented using macros against an even smaller core.  Keeping this
inside core as small as possible helps make the implementation easier
to construct, maintain, and optimize.

Also, way back when, when I used to code in Maclisp, I implemented my
own object system and exception handling system in macros, as Maclisp
had neither of these off the shelf.  The object system took me a
couple of weeks to do, and the exception handing system a couple of
days.  They worked well, looked good, and ran fast.

> Seriously, maybe Python looks like 'blub' (thanks, Paul Graham), to
> the skilled Lisp user, but it makes a lot of other languages look
> like 'blub', too, including, sometimes, Lisp: Lisp has to 'blub'
> generators.

Actually, Scheme has first class continuations, and with continuations
and macros you could easily implement generators, and I'm sure someone
has.  Whether such a library has been widely adopted for Scheme,
though, I have no idea.

You're probably right about Common Lisp, which is probably missing
generators due to efficiency concerns.  Lisp Machines had "stack
groups", which were basically the same thing as generators, but making
a call to a stack group was 100 times slower than a normal function
call.  This meant that people generally didn't use them even when it
would make their code more elegant, due to the huge performance cost.

Now, since Python is like 100 times slower than Common Lisp anyway,
you don't notice this performance issue with Python's generators.
They just happen to be only as slow as the rest of Python.

|>oug

"Lisp is worth learning for the profound enlightenment experience you
will have when you finally get it; that experience will make you a
better programmer for the rest of your days, even if you never
actually use Lisp itself a lot." -- Eric Raymond


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-21 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> | It allows the community to develop language features in a modular way
> | without having to sully the code base for the language itself.
> [etc]

> Some of the strongest opposition to adding macros to Python comes
> from people like Alex Martelli who have had experience with them in
> *multi-person production* projects.  He claimed in various posts
> that the net effect was to reduce productivity.  So convince us (and
> Guido!) that he is wrong ;-)

I'm not convinced that Guido is wrong because I know that he has at
least occasionally mused that he might someday consider a macro
facility for Python.

Alex Martelli, on the other hand, although an extremely smart guy,
seems to me to often be over-opinionated and dismissive.

Regarding being on a project where people used macros poorly, I've
also been on projects where people did a poor job of OO design, and a
non-OO design would have been better than the crappy OO design that
was ultimately used.  Does that mean that we should remove the OO
features from Python?

Paul Graham made it rich implementing Yahoo Stores in Lisp, and claims
that heavy use of macros is one of the reasons that he was able to
stay well-ahead of all the competition.  So, maybe Paul and Alex can
duke it out.  Personally, I like Paul's style better.  And he's made a
lot more money using his theory of software design.

> But I would prefer you somehow try to help make usable multi-arg and 
> predicate dispatch a reality.

Alas, I can't stand programming in C, so there's no way I'm going to
dive that deeply into the CPython code base.  If I could help
implement it in Python itself, using a good macro facility, sign me up!

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-22 Thread Douglas Alan
Neil Cerutti <[EMAIL PROTECTED]> writes:

> That said, I wouldn't give up the summer I spent studying _Simply
> Scheme_.

Sounds like fun.  Is this like a kinder, gentler version of SICP?

I'm not sure, though, that I could have learned computer science
properly without the immortal characters of Ben Bittwiddler and Harry
Reasoner intruding into every problem set.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-22 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> On Thu, 21 Jun 2007 15:25:37 -0400, Douglas Alan wrote:

>> You are imagining something very different from what is proposed.
>> Lisp-like macros don't allow "anything goes".

> Provided people avoid doing anything "which would be considered very 
> rude" (your own words).

No, Lisp macros are entirely contained within a begin and end
delimiter, which is introduced by the name of the macro.  E.g., here
is a real example of some Lisp code that I wrote aeons ago using the
loop macro:

   (loop for index from 0 below size
         for element in contents
         do (store (arraycall t array index)
                   element)
         finally (return (make-htable BUCKETS size
                                      ARRAY array
                                      KEY-PRINTER key-printer
                                      ITEM-PRINTER item-printer)))

The syntactical extensions supported by the loop macro can only begin
immediately after "(loop " and they end with the matching
closing parenthesis.  There's no way in Lisp to write a macro whose
syntactical extensions can extend outside of this very explicitly
delimited scope.  Nor can they mess with Lisp's tokenization.

> Python already allows me to shoot myself in the foot, if I wish. I'm 
> comfortable with that level of freedom. I'm not necessarily comfortable 
> with extensions to the language that would allow me the freedom to shoot 
> myself in the head.

Lisp macros don't let you shoot yourself in the head -- only in the
foot.  Being able to do

   True = False

is being able to shoot yourself in the head.  And Python certainly
lets you do that.

> I would need to be convinced of the advantages, as would many other
> people, including the BDFL.

The proof is in the pudding for anyone who has seen the advantages it
brings to Lisp.  As Paul Graham points out, it's hard to look up and
see the advantages of what is up there in a more powerful language.
It's only easy to look down and see the disadvantages of what is
missing from a less powerful language.  To understand the advantages,
one has to be willing to climb the hill and take in the view.

> It isn't clear exactly what functionality a hypothetical Python macro 
> system would include,

It should be largely equivalent to what is provided by Lisp.
Admittedly this is a bit more difficult for Python, as Lisp's syntax
is eminently suited for macros, while Python's is not.  One would
probably want to take a look at how Dylan solved this problem, as
Dylan implements Lisp-like macros even though it has an Algol-like
syntax.  Or you could look at the paper I wrote (for a class) on the
design of Python-like language that would support macros.  My paper is
only a rough sketch, however.

> let alone whether the benefits would outweigh the costs,

They pay off in boatloads in the Lisp community.

> (It took Lisp half a century and millions of dollars of corporate
> funding to reach where it is now.

Ummm, not really.  Lisp hasn't really changed very much since the late
'70s, and prior to that, most of the work on Lisp was just done in a
few university labs (e.g., MIT) and at Xerox Parc.  Any work and money
that has been spent on Lisp since then has just been in trying to
market it, or standardize it, or design hardware suited to running it
faster, or build better development environments for it, or optimizing
compilers, etc.

Lisp, itself, is rather easily to implement.  (Getting it to run as
fast as C is more of a challenge, what with good garbage collectors
and all being non-trivial to implement, but performance doesn't seem
to be much of an issue for the Python community.)  I made my own Lisp
implementation in C++ in two weeks.  (Not really a production dialect,
but it worked.)  Kyoto Common Lisp, which was definitely a production
implementation, was implemented by two people in a couple of years.
(It compiled Common Lisp into C.)

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-22 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> "Douglas Alan" <[EMAIL PROTECTED]> wrote in message 

> | > But why is the ability to abstract syntax good?

> | It allows the community to develop language features in a modular way
> | without having to sully the code base for the language itself.

> Anyone can write modules, experimental or otherwise, without touching the 
> code base for any particular implementation.

> For those whose know one of the implementation languages, source code 
> control systems allow one to do experiments on branches without 'sullying' 
> the trunk or impeding the development thereof.

When I said "without having to sully the code base", I meant that one
can implement a language feature for the target language as a loadable
module written entirely within the language itself, and without having
to understand anything particularly deep or specific about the language
implementation details.

I.e., I could write a new object system for Lisp faster than I could
even begin to fathom the internal of CPython.  Not only that, I have
absolutely no desire to spend my valuable free time writing C code.
I'd much rather be hacking in Python, thank you very much.

> One of the goals of the PyPy project was to allow people to experiment with 
> syntax extensions in Python itself.  (But I don't know how easy that is 
> yet.)

PyPy sounds like a very interesting project indeed!

> But I think that overall the problem of designing new syntax is more
> in the design than the implementation.  Anything new has to be
> usable, readable, not clash too much with existing style, not
> introduce ambiguities, and not move the extended language outside
> the LL(1) [I believe that is right] subset of CFLs.

People (myself included) haven't had much trouble implementing nice
and useful macro packages for Lisp.  Admittedly, it's a harder problem
for a language that doesn't have a Lisp-like syntax.  I believe that
Dylan has macros without having a Lisp-like syntax, but Dylan is
really a dialect of Lisp, only with a more traditional Algol-like
syntax veneered onto it.  My guess is that a macro developer for Dylan
would have to be familiar with an underlying hidden intermediate Lisp
syntax.  (Though I'm really just spouting that guess out of my
butt.)

A few years back, I designed a somewhat Python-like language with a
macro facility for a class on dynamic languages and their
implementations.  I didn't implement it, however, and I doubt that
I'll have time to get around to it in this lifetime.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-22 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> | But why is the ability to abstract syntax good?

> I think this points to where Sussman went wrong in his footnote and
> Alan in his defense thereof.  Flexibility of function -- being able
> to do many different things -- is quite different from flexibility
> of syntax 

I think you are setting up a false dichotomy.  One that is related to
the false unification that annoying people used to always make when
they would perpetually argue that it wasn't important which
programming language you programmed in, as they are all Turing
equivalent anyway.  Well, I sure as hell don't want to write all my
programs for a Turing machine, and a Turing machine is certainly
Turing equivalent!

Functionality is no good if it's too cumbersome to use.  For instance,
Scheme gives you first class continuations, which Python doesn't.
Continuations let you do *all sorts* of interesting things that you
just cannot do in Python.  Like backtracking, for instance.  (Well
maybe you could do backtracking in Python with lots of putting code
into strings and liberal use of eval, for all I know, but the results
would almost certainly be too much of a bear to actually use.)

Now, continuations, by themselves, in Scheme actually don't buy you
very much, because although they let you do some crazy powerful
things, making use of them to do so, is too confusing and verbose.  In
order to actually use this very cool functionality, you need macros so
that you can wrap a pretty and easy-to-use face on top of all the
delicious continuation goodness.

You'll, just have to trust me on this one.  I've written code with
continuations, and I just couldn't make heads or tails out of the code
a few hours later.  But when prettied-up with a nice macro layer, they
can be a joy to behold.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-22 Thread Douglas Alan
"Terry Reedy" <[EMAIL PROTECTED]> writes:

> "Douglas Alan" <[EMAIL PROTECTED]> wrote in message 

> | "Terry Reedy" <[EMAIL PROTECTED]> writes:

> | > I think this points to where Sussman went wrong in his footnote
> | > and Alan in his defense thereof.  Flexibility of function --
> | > being able to do many different things -- is quite different
> | > from flexibility of syntax

> | I think you are setting up a false dichotomy.

> I think this denial of reality is your way of avoiding admitting, perhaps 
> to yourself, that your god Sussman made a mistake.

Sussman isn't my god -- Kate Bush is.

Just because I'm right and you're wrong, doesn't mean that I'm in
denial.  It is you who are in denial if you believe that syntax is
unimportant, as long as one is provided the required functionality.
In fact, that's stereotypical Computer Science denial.  Computer
Science academics will typically state as a truism that semantics are
what is important and syntax is just a boring trifle in comparison.
But time and time again, you'll see programming languages succeed or
fail more on their syntax than on their semantics.  And issues of
syntax is often where you see the most inflamed debates.  Just look at
all the flames one used to hear about Python using whitespace
significantly.  Or all the flames that one will still hear about Lisp
using a lot of parentheses.

You seem oblivious to the fact that one of the huge benefits of Python
is its elegant and readable syntax.  The problem with not having a
"flexible syntax", is that a programming language can't provide
off-the-shelf an elegant syntax for all functionality that will ever
be needed.  Eventually programmers find themselves in need of new
elegant functionality, but without a corresponding elegant syntax to
go along with the new functionality, the result is code that does not
look elegant and is therefore difficult to read and thus maintain.

Consequently, "flexibility of function" is often moot without
"flexibility of syntax".  I don't know how I can make it any clearer
than this.  I'm sorry if you don't understand what I am saying, but
just because you don't understand, or if you do, that you don't agree,
doesn't mean that I don't have a reasoned and reasonable point of
view.

> | One that is related to the false unification that annoying people
> | used to always make when they would perpetually argue that it
> | wasn't important which programming language you programmed in, as
> | they are all Turing equivalent anyway.  Well, I sure as hell don't
> | want to write all my programs for a Turing machine, and a Turing
> | machine is certainly Turing equivalent!

> Diversionary crap unrelated to the previous discussion.

Take the issue up with Paul Graham.  Since making a fortune developing
software in Lisp (making heavy use of macros), he now has much more
free time to write essays defending the truth than I do:

   http://www.paulgraham.com/avg.html

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-23 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> Nevertheless, in Python 1+2 always equals 3. You can't say the same thing
> about Lisp.

Well, I can't say much of *anything* about "1 + 2" in Lisp, since
that's not the syntax for adding numbers in Lisp.  In Lisp, numbers
are typically added using the "+" function, which might be invoked
like so:

   (+ 1 2 3)

This would return 6.

It's true that some dialects of Lisp will let you redefine the "+"
function, which would typically be a bad idea.  Other dialects would
give you an error or a warning if you tried to redefine "+".  I would
fall more into the latter camp.  (Though sometimes you might want a
way to escape such restrictions with some sort of "yes, I really want
to shoot myself in the head" declaration, as you may want to
experiment, not with changing the meaning of "(+ 1 2)", but rather
with adding some additional useful capability to the "+" function that
it doesn't already have.)

Back on the Python front, although "1 + 2" might always equal 3 in
Python, this is really rather cold comfort, since no useful code would
ever do that.  Useful code might include "a + 1", but since you can
overload operators in Python, you can say little about what "a + 1"
might do or mean on the basis of the syntax alone.
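A minimal illustration (with a made-up class, of course) of how little
"a + 1" tells you on its own:

```python
class Mystery(object):
    def __add__(self, other):
        # "+" here performs string repetition, not arithmetic
        return "ha" * other

a = Mystery()
print(a + 1)  # ha
```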

Furthermore, in Python you can redefine the "int" data type so that
int.__add__ does a subtraction instead.  Then you end up with such
weirdness as

   >>> int(1.0) + int(2.0)
   -1

Also, you can redefine the sum() function in Python.

So, we see that Python offers you a multitude of ways to shoot
yourself in the head.

One of the things that annoys me when coding in Python (and this is a
flaw that even lowly Perl has a good solution for), is that if you do
something like

 longVarableName = foo(longVariableName)

You end up with a bug that can be very hard to track down.  So one use
for macros would be so that I can define "let" and "set" statements so
that I might code like this:

 let longVariableName = 0
 set longVarableName = foo(longVariableName)

Then if longVarableName didn't already exist, an error would be
raised, rather than a new variable being automatically created for me.
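The failure mode being described is easy to reproduce (variable names
as in the example above):

```python
longVariableName = 10

def foo(x):
    return x + 1

# The typo below silently creates a brand-new variable instead of
# raising an error, so longVariableName keeps its old value.
longVarableName = foo(longVariableName)

print(longVariableName)  # still 10
```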

The last time I mentioned this, Alex Martelli basically accused me of
being an idiot for having such trivial concerns.  But, ya know -- it
isn't really a trivial concern, despite Martelli's obviously high
intellect.  A woman I work with who is bringing up a CMS using Drupal
was complaining to me bitterly that this very same issue in PHP was
causing her bugs that were hard to track down.  Unfortunately, I could
not gloat over her with my Python superiority, because if Drupal were
written in Python, rather than PHP, she'd have the very same problem
-- at least in this regard.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-23 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> But if you really want declarations, you can have them.
>
> >>> import variables
> >>> variables.declare(x=1, y=2.5, z=[1, 2, 4])
> >>> variables.x = None
> >>> variables.w = 0
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "variables.py", line 15, in __setattr__
>     raise self.DeclarationError("Variable '%s' not declared" % name)
> variables.DeclarationError: Variable 'w' not declared

Oh, I forgot to mention that I work a lot on preexisting code, which I
am surely not going to go to all the effort to retype and then retest.
With the "let" and "set" macros I can use "set" without a matching
"let".  "set" just checks to make sure that a variable already exists
before assigning to it, and "let" just prevents against
double-declarations.  They can be used independently or together.
With your "variables" class, they have to be used together.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-23 Thread Douglas Alan
Michele Simionato <[EMAIL PROTECTED]> writes:

> Been there, done that. So what? Your example will not convince any
> Pythonista.

I'm a Pythonista, and it convinces me.

> The Pythonista expects Guido to do the language job and the
> application developer to do the application job.

I'm happy to hear that there is a brain washing device built into
Python that provides all Python programmers with exactly the same
mindset, as that will certainly aid in having a consistent look and
feel to all Python code.

> Consider for instance generators.

Yes, consider them!  If Python had first class continuations (like
Ruby does) and macros in 1991, it could have had generators in 1992,
rather than in 2002.  (I implemented generators using macros and stack
groups for Lisp Machines in 1983, and it took me all of a few hours.)

> In Python they are already implemented in the core language and the
> application developer does not care at all about implementing them.

And if they were implemented as macros in a library, then the
application developer doesn't have to care about implementing them
either.

> In Scheme I am supposed to implement them myself with continuations,
> but why should I do that, except as a learning exercise?

Well, when I get around to giving my sage advice to the Scheme
community, I'll let them know that generators need to be in the
standard library, not a roll-your-own exercise.

> It is much better if competent people are in charge of the very low
> level stuff and give me just the high level tools.

Even many competent people don't want to hack in the implementation
language and have to understand the language implementation internals
to design and implement language features.  By your argument,
Pythonistas might as well insist that the entire standard library be
coded in C.

> BTW, there are already Python-like languages with macros
> (i.e. logix) and still nobody use them, including people with a
> Scheme/Lisp background. That /should be telling you something.

It only tells me what I've known for at least a couple decades now --
that languages live and die on issues that often have little to do
with the language's intrinsic merits.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-23 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

>> So one use for macros would be so that I can define "let" and "set"
>> statements so that I might code like this:
>> 
>>  let longVariableName = 0
>>  set longVarableName = foo(longVariableName)
>> 
>> Then if longVarableName didn't already exist, an error would be
>> raised, rather than a new variable being automatically created for me.

> So "let" is the initial declaration, and "set" modifies the existing
> variable? 

Yes.

> What happens is you declare a variable twice?

The same thing that would happen in Perl or any other language that
supports this type of variable declaration and setting: it would raise
an error.

The big debate you get next, is then whether you should be allowed to
shadow variables in nested scopes with new variables of the same
name.  Given that Python already allows this, my guess is that the
answer should be yes.

> How long did it take you to write the macros, and use them, compared
> to running Pylint or Pychecker or equivalent?

An hour?  Who cares?  You write it once and then you have it for the
rest of your life.  You put it in a widely available library, and then
*every* programmer also has it for the rest of their lives.  The
amortized cost: $0.00.  The value: priceless.

> But if you really want declarations, you can have them.

> >>> import variables
> >>> variables.declare(x=1, y=2.5, z=[1, 2, 4])
> >>> variables.x = None
> >>> variables.w = 0
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "variables.py", line 15, in __setattr__
>     raise self.DeclarationError("Variable '%s' not declared" % name)
> variables.DeclarationError: Variable 'w' not declared

Thanks, but that's just too syntactically ugly and verbose for me to
use.  Not only that, but my fellow Python programmers would be sure to
come and shoot me if I were to code that way.

One of the reasons that I want to use Python is because I like reading
and writing code that is easy to read and looks good.  I don't want to
bend it to my will at the expense of ugly looking code.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-24 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> On Sat, 23 Jun 2007 14:56:35 -0400, Douglas Alan wrote:
>
>>> How long did it take you to write the macros, and use them, compared
>>> to running Pylint or Pychecker or equivalent?

>> An hour?  Who cares?  You write it once and then you have it for the
>> rest of your life.  You put it in a widely available library, and then
>> *every* programmer also has it for the rest of their lives.  The
>> amortized cost: $0.00.  The value: priceless.

> Really? Where do I download this macro? How do I find out about it? How
> many Lisp programmers are using it now?

(1) I didn't have to write such a macro for Lisp, as Lisp works
differently.  For one thing, Lisp already has let and set special
forms.  (Lisp uses the term "special form" for what Python would call
a "statement", but Lisp doesn't call them statements since they return
values.)

(2) You act as if I have no heavy criticisms of Lisp or the Lisp
community.  I critique everything with equal vigor, and keep an eye
out for the good aspects and ideas of everything with equal vigor.

> How does your glib response jib with your earlier claims that the
> weakness of Lisp/Scheme is the lack of good libraries?

(1) See above. (2) My response wasn't glib.

> Googling for ' "Douglas Allen" download lisp OR scheme ' wasn't very
> promising.

(1) You spelled my name wrong.  (2) I haven't written any libraries
for any mainstream dialects of Lisp since there was a web.  I did
write a multiple dispatch lookup cacher for a research dialect of
Lisp, but it was just an exercise for a version of Lisp that few
people have ever used.

> In fairness, the various Python lints/checkers aren't part of the standard
> library either, but they are well-know "standards".

In general I don't like such checkers, as I tend to find them more
annoying than useful.

>> Thanks, but that's just too syntactically ugly and verbose for me to
>> use.

> "Syntactically ugly"? "Verbose"?

> Compare yours with mine:

> let x = 0
> let y = 1
> let z = 2
> set x = 99 

> (Looks like BASIC, circa 1979.)

It looks like a lot of languages.  And there's a reason for that -- it
was a good idea.

> variables.declare(x=0, y=1, z=2)
> variables.x = 99

> (Standard Python syntax.)

> I don't think having two easily confused names, let and set is an
> advantage,

Let and set are not easily confused.  Lisp programmers have had
absolutely no problem keeping the distinction separate for the last 47
years now.

> but if you don't like the word "declare" you could change it to
> "let", or change the name of the module to "set" (although that runs the
> risk of confusing it with sets).

> Because this uses perfectly stock-standard Python syntax, you could even
> do this, so you type fewer characters:

> v = variables
> v.x = 99

> and it would Just Work. 
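
For concreteness, a namespace object along those lines can be sketched in
a few lines of ordinary Python (a hypothetical illustration of the idea;
`_Variables` is not an actual module):

```python
class _Variables:
    """Namespace in which first binding (declare) and rebinding (plain
    attribute assignment) are distinct operations."""

    def declare(self, **bindings):
        for name, value in bindings.items():
            if name in self.__dict__:
                raise NameError("%r is already declared" % name)
            self.__dict__[name] = value   # write directly, bypassing __setattr__

    def __setattr__(self, name, value):
        if name not in self.__dict__:
            raise NameError("%r assigned before being declared" % name)
        self.__dict__[name] = value


variables = _Variables()
variables.declare(x=0, y=1, z=2)
variables.x = 99        # fine: x was declared above
```

Assigning to an undeclared name (say, a typo like `variables.w = 99`)
raises NameError immediately instead of silently creating a new binding.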

I wouldn't program that way, and no one that I know would either.

In this regard you sound exactly like all the C++ folks, who when you
point out that something in C++ is inadequate for one's needs, they
point you at some cumbersome and ugly solution and then tell you that
since C++ can already deal with the complaint, that there's no good
reason to consider changing C++.  Consequently, C++ still doesn't have
a "finally" statement, and it requires either making instance
variables public or forcing the programmer to write lots of
boilerplate code writing setter and getter functions.  Fortunately,
the Python developers finally saw the errors of their ways in this
regard and fixed the situation.  But, it seems to me that you would
have been one of those people saying that there's no need to have a
way of overriding attribute assignment and fetching, as you can always
just write all that extra boilerplate code, or instead add an extra
layer of indirection (proxy objects) in your instance data to have
things done the way you want, at the expense of ugly code.

>> Not only that, but my fellow Python programmers would be sure to
>> come and shoot me if I were to code that way.

> *shrug* They'd shoot you if you used "let x = 0" too.

Clearly you are not familiar with the programmers that I work with.
As I mentioned previously, at least one of them is quite upset about
the auto-declaration feature of most scripting languages, and your
suggestion would not make her any happier.

>> One of the reasons that I want to use Python is because I like reading
>> and writing code that is easy to read and looks good.  I don't want to
>> bend it to my will at the expense of ugly looking code.

> But the "ugly looking code" is stock-standard Python.

Re: Python's "only one way to do it" philosophy isn't good?

2007-06-24 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

>> You seem oblivious to the fact that one of the huge benefits of Python
>> is its elegant and readable syntax.  The problem with not having a
>> "flexible syntax", is that a programming language can't provide
>> off-the-shelf an elegant syntax for all functionality that will ever
>> be needed.

> It is hardly "off-the-shelf" if somebody has to create new syntax
> for it.

Ummm. that's my point.  No language can provide all the syntax that
will ever be needed to write elegant code.  If module authors can
provide the syntax needed to use their module elegantly, then problem
solved.

>> Eventually programmers find themselves in need of new
>> elegant functionality, but without a corresponding elegant syntax to
>> go along with the new functionality, the result is code that does not
>> look elegant and is therefore difficult to read and thus maintain.

> That's true, as far as it goes, but I think you over-state your
> case.

I do not.

It is so easy for you, without *any* experience with a language (i.e.,
Lisp) or its community to completely dismiss the knowledge and wisdom
acquired by that community.  Doesn't that disturb you a bit?

> The syntax included in Python is excellent for most things, and even
> at its weakest, is still good. I can't think of any part of Python's
> syntax that is out-and-out bad.

The proposed syntax for using the proposed predicate-based multimethod
library is ungainly.

Until decorators were added to the language, the way to do things that
decorators are good for was ugly.  Decorators patch up one ugliness,
but who wants Python to become an old boat with lots of patches?

Nearly every addition made to Python since 1.5 could have been done in
the standard library, rather than being made to the core language, if
Python had a good macro system.  The exceptions, I think, being
objects all the way down, and generators.  Though generators could
have been done in the standard library too, if Python had first class
continuations, like Scheme and Ruby do.

Over time, an infinite number of examples will turn up like this, and
I claim (1) that it is better to modify the standard library than to
modify the language implementation, and that (2) it is better to allow
people to experiment with language features without having to modify
the implementation, and (3) that it is better to allow people to
distribute new language features for experimentation or production in
a loadable modular fashion, and (4) that it is better to allow
application developers to develop new language features for their
application frameworks than to not.

> The reality is, one can go a long, long, long distance with Python's
> syntax.

And you can go a long, long way with Basic, or Fortran, or C, or C++,
or Haskell, or Lisp.  None of this implies that there aren't
deficiencies in all of these languages.  Python is no exception.
Python just happens to be better than most in a number of significant
regards.

> Most requests for "new syntax" I've seen fall into a few
> categories:

> * optimization, e.g. case, repeat, multi-line lambda

I don't give a hoot about case or repeat, though a Lisp-like "loop
macro" might be nice.  (The loop macro is a little mini language
optimized for coding complicated loops.)  A multi-line lambda would
be very nice.

> * "language Foo looks like this, it is kewl"

Sometimes language Foo has features that are actually important for
a specific application or problem domain.  It's no accident, for
instance, that Lisp is still the preferred language for doing AI
research.  It's better for Python if Python can accommodate these
applications and domains than for Python to give up these markets to
Foo.

> * the usual braces/whitespace flamewars
> * trying to get static type checking into the language
>
>
> So let's be specific -- what do you think Python's syntax is missing? If
> Python did have a macro facility, what would you change?

In addition to the examples given above, symbols would be nice.  Lisp
has 'em, Ruby has 'em, Python doesn't.  They are very useful.

An elegant multimethod based object system will be essential
for every language someday, when the time is right for people to
understand the advantages.

Manifest typing will be essential.

A backtracking system is important for some applications.  Perhaps all
applications, someday.

The ability to make mini-languages for specific domains, like fields
of math and science, is very useful, so the mathematicians and
scientists can denote things in a notation that is closer to the
notation that they actually work in.

Etc., etc., etc.  The future is long, and our ability to peer into it
is blurry, and languages that can adapt to the unforeseen needs of that
blurry future are the ones that will survive.

For instance, I can state with almost 100% certainty that one hundred
years from now, some dialect of Lisp will still be around and in
common usage.  I can't say the same thing about Python.

Re: Python's "only one way to do it" philosophy isn't good?

2007-06-24 Thread Douglas Alan
Graham Breed <[EMAIL PROTECTED]> writes:

> Another way is to decorate functions with their local variables:

> >>> from strict import my
> >>> @my("item")
> ... def f(x=1, y=2.5, z=[1,2,4]):
> ... x = float(x)
> ... w = float(y)
> ... return [item+x-y for item in z]

Well, I suppose that's a bit better than the previous suggestion, but
(1) it breaks the style rule of not declaring variables until you need
them, and (2) it doesn't catch double initialization.
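
A decorator with roughly that behavior can be reconstructed from the
function's code object (a guess at the mechanism, not the actual
`strict` module Graham refers to):

```python
def my(*declared):
    """Reject, at definition time, any local variable that is neither a
    parameter nor listed in the decorator call."""
    def check(func):
        code = func.__code__
        # co_varnames lists parameters first, then other locals.
        allowed = set(declared) | set(code.co_varnames[:code.co_argcount])
        undeclared = set(code.co_varnames) - allowed
        if undeclared:
            raise NameError("undeclared locals: %s" % sorted(undeclared))
        return func
    return check


@my("w")
def g(x):
    w = float(x)    # fine: w is declared in the decorator call
    return w
```

In the quoted example, `w = float(y)` would be rejected because only
`item` was declared, which is exactly how such a typo gets caught.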

> The best way to catch false rebindings is to stick a comment with
> the word "rebound" after every statement where you think you're
> rebinding a variable.

No, the best way to catch false rebindings is to have the computers
catch such errors for you.  That's what you pay them for.

> Then you can search your code for cases where there's a "rebound"
> comment but no rebinding.

And how do I easily do that?  And how do I know if I even need to in
the face of sometimes subtle bugs?

> Assuming you're the kind of person who knows that false rebindings
> can lead to perplexing bugs, but doesn't check apparent rebindings
> in a paranoid way every time a perplexing bug comes up, anyway.
> (They aren't that common in modern python code, after all.)

They're not that uncommon, either.

I've certainly had it happen to me on several occasions, and sometimes
they've been hard to find as I might not even see the mispeling even
if I read the code 20 times.

(Like the time I spent all day trying to figure out why my assembly
code wasn't working when I was a student and finally I decided to ask
the TA for help, and while talking him through my code so that he
could tell me what I was doing wrong, I finally noticed the "rO" where
there was supposed to be an "r0".  It's amazing how useful a TA can
be, while doing nothing at all!)

> And you're also the kind of person who's troubled by perplexing bugs
> but doesn't run a fully fledged lint.

Maybe PyLint is better than Lint for C was (hated it!), but my idea of
RAD does not include wading through piles of useless warning messages
looking for the needle warning in the warning haystack.  Or running
any other programs in the midst of my code, run, code, run, ..., loop.

> Maybe that's the kind of person who wouldn't put up with anything
> short of a macro as in the original proposal.  All I know is that
> it's the kind of person I don't want to second guess.

As it is, I code in Python the way that a normal Python programmer
would, and when I have a bug, I track it down through sometimes
painstaking debugging as a normal Python programmer would.  Just as
any other normal Python programmer, I would not use the alternatives
suggested so far, as I'd find them cumbersome and inelegant.  I'd
prefer not to have been bit by the bugs to begin with.  Consequently,
I'd use let and set statements, if they were provided (or if I could
implement them), just as I have the equivalents to let and set in
every other programming language that I commonly program in other than
Python.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-24 Thread Douglas Alan
Michele Simionato <[EMAIL PROTECTED]> writes:

> You should really be using pychecker (as well as Emacs autocompletion
> feature ...):

I *do* use Emacs's autocompletion, but sometimes these sorts of bugs
creep in anyway.  (E.g., sometimes I autocomplete in the wrong variable!)

> ~$ pychecker -v x.py
> Processing x...
>
> Warnings...
>
> x.py:4: Variable (longVarableName) not used
>
> [I know you will not be satisfied with this, but pychecker is really
> useful,

Okay, I'll check out PyChecker and PyLint, though I'm sure they will
annoy the hell out of me.  They're probably less annoying than
spending all day tracking down some stupid bug.

> since it catches many other errors that no amount of
> macroprogramming would evere remove].

And likewise, good macro programming can solve some problems that no
amount of linting could ever solve.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-25 Thread Douglas Alan
Paul Rubin <http://[EMAIL PROTECTED]> writes:

> Douglas Alan <[EMAIL PROTECTED]> writes:
>> And likewise, good macro programming can solve some problems that no
>> amount of linting could ever solve.

> I think Lisp is more needful of macros than other languages, because
> its underlying primitives are too, well, primitive.  You have to write
> all the abstractions yourself.

Well, not really beause you typically use Common Lisp with CLOS and a
class library.  If you ask me, the more things that can (elegantly) be
moved out of the core language and into a standard library, the
better.

> Python has built-in abstractions for a few container types like
> lists and dicts, and now a new and more general one (iterators), so
> it's the next level up.

Common Lisp has had all these things for ages.

> And a bunch of stuff that Python could use macros for, are easily
> done in Haskell using delayed evaluation and monads.  And Haskell is
> starting to grow its own macro system (templates) but that's
> probably a sign that an even higher level language (maybe with
> dependent types or something) would make the templates unnecessary.

Alas, I can't comment too much on Haskell, as, although I am familiar
with it to some extent, I am far from proficient in it.  Don't worry
-- it's on my to-do list.

I think that first I'd like to take Gerry Sussman's new graduate
class, first, though, and I'll find out how it can all be done in
Scheme.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-25 Thread Douglas Alan
Alexander Schmolck <[EMAIL PROTECTED]> writes:

> Douglas Alan <[EMAIL PROTECTED]> writes:

>>> Python has built-in abstractions for a few container types like
>>> lists and dicts, and now a new and more general one (iterators), so
>>> it's the next level up.

>> Common Lisp has had all these things for ages.

> Rubbish. Do you actually know any common lisp?

Yes, though it's been quite a while, and it was mostly on Lisp
Machines, which, at the time, Common Lisp was still being
standardized, and so Lisp Machine "Chine Nual" Lisp wasn't quite
Common Lisp compliant at the time.  Also, Lisp Machine Lisp had a lot
of features, such as stack groups, that weren't put into Common Lisp.
Also, my experience predates CLOS, as at the time Lisp Machines used
Flavors.

Most of my Lisp experience is actually in MacLisp (and Ulisp and
Proto, neither of which you've likely heard of).  MacLisp was an
immediate precursor of Common Lisp, and didn't have a standard object
system at all (I rolled one myself for my applications), but it had
the Loop macro, which, if I recall correctly, was nearly identical to
the Chine Nual Loop macro and was ported rather unsullied to Common
Lisp.  In any case,
IIRC, there were hooks in the Loop macro for dealing with iterators
and I actually used this for providing an iterator-like interface to
generators (for Lisp Machines) that I coded up with macros and stack
groups.

It may be that these hooks didn't make it into the Common Lisp Loop
macro, or that my memory of what was provided by the macro is a little
off.  What's not off, is that it was really easy to implement these
things, and it wasn't like I was some sort of Lisp guru -- I was just
an undergraduate student.

I will certainly admit that Lisp programmers at the time were (and
likely still are) much more enamored of mapping functions than of
iterators.  Mapping functions certainly get the job done as elegantly
as iterators most of the time, although I would agree that they are
not quite so general.  Of course, using generators, I was easily able
to make a converter that would take a mapping function and return a
corresponding iterator.
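
In Python terms, such a converter amounts to inverting control:
something has to suspend the mapping function between elements, which is
what stack groups did on the Lisp Machine.  Lacking those, a worker
thread will do (an illustrative sketch, not the original code):

```python
import queue
import threading

_DONE = object()        # sentinel marking the end of the sequence

def mapping_to_iterator(mapping_func, collection):
    """Drive an internal-iteration ("mapping") function on a worker
    thread and hand its elements back one at a time as an iterator."""
    q = queue.Queue(maxsize=1)

    def produce():
        mapping_func(q.put, collection)   # feed each element into the queue
        q.put(_DONE)

    threading.Thread(target=produce, daemon=True).start()
    while True:
        item = q.get()
        if item is _DONE:
            return
        yield item

def mapc_like(f, xs):
    """A Lisp-style mapping function: it owns the loop."""
    for x in xs:
        f(x)
```

`list(mapping_to_iterator(mapc_like, [1, 2, 3]))` yields the elements in
their original order.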

Scheme, on, the other hand, at least by idiom, has computation
"streams", and streams are equivalent to iterators.

> There is precisely no way to express
>
> for x in xs:
> blah(x)

The canonical way to do this in Lisp would be something like:

   (mapcar (lambda (x) (blah x))
   xs)

Though there would (at least in MacLisp) be a differently named
mapping function for each sequence type, which makes things a bit less
convenient, as you have to know the name of the mapping function
for each type.

> or
> x = xs[key]

I'm not sure what you are asserting?  That Common Lisp doesn't have
hash tables?  That's certainly not the case.  Or that it doesn't
provide standard generic functions for accessing them, so you can
provide your own dictionaries that are implemented differently and
then use exactly the same interface?  The latter I would believe, as
that would be one of my criticisms of Lisp -- although it's pretty
cool that you can load whatever object system you would like (CLOS
being by far the most common), it also means that the core language
itself is a bit deficient in OO terms.

This problem would be significantly mitigated by defining new
standards for such things in terms of CLOS, but unfortunately
standards change unbearably slowly.  There are certainly many
implementations of Lisp that solve these issues, but they have a hard
time achieving wide adoption.  A language like Python, which is
defined by its implementation, rather than by a standard, can move
much more quickly.  This debate, though, is really more about what the
best model for language definition is than about what the ideal
language is like.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-25 Thread Douglas Alan
Paul Rubin <http://[EMAIL PROTECTED]> writes:

> Douglas Alan <[EMAIL PROTECTED]> writes:

>> I will certainly admit that Lisp programmers at the time were (and
>> likely still are) much more enamored of mapping functions than of
>> iterators.  Mapping functions certainly get the job done as elegantly
>> as iterators most of the time, although I would agree that they are
>> not quite so general.

> In the Maclisp era functions like mapcar worked on lists, and
> generated equally long lists in memory.

I'm aware, but there were various different mapping functions.  "map",
as opposed to "mapcar" didn't return any values at all, and so you had
to rely on side effects with it.

> It was sort of before my time but I have the impression that Maclisp
> was completely dynamically scoped and as such,

Yes, that's right.

> it couldn't cleanly make anything like generators (since it had no
> way to make lexical closures).

That's right, generators would have been quite difficult to do in
MacLisp.  But a Lisp Machine (with stack groups) could have done them,
and did, with or without closures.

>> Scheme, on, the other hand, at least by idiom, has computation
>> "streams", and streams are equivalent to iterators.

> No not really, they (in SICP) are at best more like class instances
> with a method that mutates some state.  There's nothing like a yield
> statement in the idiom.

Right -- I wrote "iterators", not "generators".

> You could do it with call/cc but SICP just uses ordinary closures to
> implement streams.

Yes, that's right.

>> The canonical way to do this in Lisp would be something like:
>>(mapcar (lambda (x) (blah x)) xs)

> At least you could spare our eyesight by writing that as 
> (mapcar #'blah xs) ;-).

Good point!  But I just love lambda -- even when I'm just using it as
a NOP.  (Also I couldn't remember the syntax for accessing the
function property of a symbol in MacLisp.)

> The point is that mapcar (as the name implies) advances down a list
> using cdr, i.e. it only operates on lists, not general iterators or
> streams or whatever.

Right, but each sequence type had its own corresponding mapping
functions.

>> > x = xs[key]

>> I'm not sure what you are asserting?  That Common Lisp doesn't have
>> hash tables?  That's certainly not the case.  Or that it doesn't
>> provide standard generic functions for accessing them

> The latter.  Of course there are getf/setf, but those are necessarily
> macros.

Right.  OO on primitive data types is kind of hard in a non OO
language.  So, when writing an application in MacLisp, or Lisp Machine
lisp, I might have had to spend a bit of time writing an application
framework that provided the OO features I needed.  This was not
particularly hard to do in Lisp, but surely not nearly as nice as if
they had standardized such things.  This would not be particularly
difficult to do, other than the getting everyone to agree on just what
the interfaces should be.  But Lisp programmers, are of course, just
as recalcitrant as Python programmers.

>> A language like Python, which is defined by its implementation,
>> rather than by a standard, can move much more quickly.  This debate
>> though is really one more of what is the best model for language
>> definition, rather than one on what the ideal language is like.

> Python is not Perl and it has in principle always been defined by its
> reference manual,

And in Python's case, the reference manual is just an incomplete
description of the features offered by the implementation, and people
revel in features that are not yet in the reference manual.

> though until fairly recently it's fostered a style of relying on
> various ugly CPython artifacts like the reference counting GC.

That's not ugly.  The fact that CPython has a reference-counting GC
makes the lifetime of object predictable, which means that like in
C++, and unlike in Java, you can use destructors to good effect.  This
is one of the huge boons of C++.  The predictability of lifespan makes
the language more expressive and powerful.  The move to deprecate
relying on this feature in Python is a bad thing, if you ask me, and
removes one of the advantages that Python had over Lisp.
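
The idiom in question, for the record (the prompt cleanup is
CPython-specific behavior; a tracing GC is free to delay the `__del__`
call indefinitely):

```python
released = []

class Resource:
    """Cleanup lives in __del__, C++-destructor style."""
    def __init__(self, name):
        self.name = name
    def __del__(self):
        released.append(self.name)   # stands in for closing a file, etc.

def work():
    r = Resource("scratch")
    # ... use r; no explicit cleanup code is needed ...

work()
# Under CPython's reference counting, the last reference to r died the
# moment work() returned, so the cleanup has already run by this point.
```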

> Lisp accumulated a lot of cruft over the decades and it kept some
> baggage that it really could have done without.

Indeed -- true of most languages.  Of course, there have been quite a
few Lisp dialects that have been cleaned up in quite a few ways (e.g.,
Dylan), but they, of course, have a hard time achieving any
significant traction.

> I don't think Python's designers learned nearly as much from Lisp as
> they could have.

Re: Python's "only one way to do it" philosophy isn't good?

2007-06-26 Thread Douglas Alan
Paul Rubin  writes:

> Andy Freeman <[EMAIL PROTECTED]> writes:

>> Compare that with what a programmer using Python 2.4 has to do if
>> she'd like the functionality provided by 2.5's with statement.  Yes,
>> with is "just syntax", but it's extremely useful syntax, syntax that
>> can be easily implemented with lisp-style macros.

> Not really.  The with statement's binding targets all have to support
> the protocol, which means a lot of different libraries need redesign.
> You can't do that with macros.

But that's a library issue, not a language issue.  The technology
exists completely within Lisp to accomplish these things, and most
Lisp programmers even know how to do this, as application frameworks
in Lisp often do this kind of thing.  The problem is getting anything
put into
the standard.  Standardizing committees just suck.

I just saw a presentation today on the Boost library for C++.  This
project started because the standard library for C++ is woefully
inadequate for today's programming needs, but any chance of getting
big additions into the standard library will take 5-10 years.
Apparently this is true for all computer language standards.  And even
then, the new standard will be seriously lacking, because it is
usually based on armchair thinking rather than real-world usage.

So the Boost guys are making a defacto standard (or so they hope)
library for C++ that has more of the stuff you want, and then when the
standardizing committees get around to revising the actual standard,
the new standard will already be in wide use, meaning they just have
to sign off on it (and perhaps suggest a few tweaks).

Alas, the Lisp standards are stuck in this sort of morass, even while
many implementations do all the right things.

Python doesn't have this problem because it operates like Boost to
begin with, rather than having a zillion implementations tracking some
slow moving standard that then mandates things that might be nearly
impossible to implement, while leaving out much of what people need.

But then again, neither do many dialects of Lisp, which are developed
more or less like Python is.  But then they aren't standards
compliant, and so they don't receive wide adoption.

> Macros can handle some narrow special cases such as file-like
> objects, handled in Python with contextlib.closing.

Macros handle the language part of things in Lisp perfectly well in
this regard.  But you are right -- they certainly can't make
standardizing committees do the right thing.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-26 Thread Douglas Alan
Paul Rubin <http://[EMAIL PROTECTED]> writes:

> Douglas Alan <[EMAIL PROTECTED]> writes:

>> > In the Maclisp era functions like mapcar worked on lists, and
>> > generated equally long lists in memory.

>> I'm aware, but there were various different mapping functions.  "map",
>> as opposed to "mapcar" didn't return any values at all, and so you had
>> to rely on side effects with it.

> The thing is there was no standard way in Maclisp to write something
> like Python's "count" function and map over it.  This could be done in
> Scheme with streams, of course.

I'm not sure that you can blame MacLisp for not being object-oriented.
The idea hadn't even been invented yet when MacLisp was implemented
(unless you count Simula).  If someone went to make an OO version of
MacLisp, I'm sure they'd get all this more or less right, and people
have certainly implemented dialects of Lisp that are consistently OO.
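
For reference, the Python idiom Paul alludes to, mapping over an
unbounded iterator, looks like this (standard library only):

```python
from itertools import count, islice

# count() is an infinite iterator; map() consumes it lazily, so pairing
# the two costs nothing until elements are actually requested.
squares = map(lambda n: n * n, count())
first_five = list(islice(squares, 5))   # [0, 1, 4, 9, 16]
```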

>> Right -- I wrote "iterators", not "generators".

> Python iterators (the __iter__ methods on classes) are written with
> yield statements as often as not.

I certainly agree that iterators can be implemented with generators,
but generators are a language feature that are impossible to provide
without deep language support, while iterators are just an OO
interface that any OO language can provide.  Though without a good
macro facility the syntax to use them may not be so nice.
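
A small example of that point: in Python the iterator protocol is a
plain OO interface, and deep language support (generators) only makes
implementing it pleasant.

```python
class Countdown:
    """An iterable whose __iter__ is written as a generator, so no
    hand-rolled next()/state bookkeeping is needed."""
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        n = self.start
        while n > 0:
            yield n
            n -= 1
```

Each call to `iter()` produces an independent generator, so nested loops
over the same object work as expected.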

>> That's not ugly.  The fact that CPython has a reference-counting GC
>> makes the lifetime of object predictable, which means that like in
>> C++, and unlike in Java, you can use destructors to good effect.  This
>> is one of the huge boons of C++.  The predictability of lifespan makes
>> the language more expressive and powerful.  The move to deprecate
>> relying on this feature in Python is a bad thing, if you ask me, and
>> removes one of the advantages that Python had over Lisp.

> No that's wrong, C++ has no GC at all, reference counting or
> otherwise, so its destructors only run when the object is manually
> released or goes out of scope.

Right, but implementing generic reference-counted smart pointers is
about a page of code in C++, and nearly every large C++ application
I've seen uses such things.

> Python (as of 2.5) does that using the new "with" statement, which
> finally makes it possible to escape from that losing GC-dependent
> idiom.  The "with" statement handles most cases that C++ destructors
> normally handle.

Gee, that's back to the future with 1975 Lisp technology.  Destructors
are a much better model for dealing with such things (see not *all*
good ideas come from Lisp -- a few come from C++) and I am dismayed
that Python is deprecating their use in favor of explicit resource
management.  Explicit resource management means needlessly verbose
code and more opportunity for resource leaks.

The C++ folks feel so strongly about this, that they refuse to provide
"finally", and insist instead that you use destructors and RAII to do
resource deallocation.  Personally, I think that's taking things a bit
too far, but I'd rather it be that way than lose the usefulness of
destructors and have to use "when" or "finally" to explicitly
deallocate resources.
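
The two styles side by side, for concreteness (a toy resource that
supports both; nothing here is from an actual library):

```python
class Conn:
    """A toy resource usable either with explicit 'with'-management or
    by leaning on CPython's refcounter to call __del__."""
    def __init__(self):
        self.open = True
    def close(self):
        self.open = False
    def __enter__(self):
        return self
    def __exit__(self, *exc_info):
        self.close()          # runs even if the block raises
    def __del__(self):
        self.close()          # destructor-style fallback

with Conn() as c:             # the explicit, 'finally'-flavored idiom
    assert c.open
assert not c.open             # closed at block exit
```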

> Python object lifetimes are in fact NOT predictable because the ref
> counting doesn't (and can't) pick up cyclic structure.

Right, but that doesn't mean that 99.9% of the time, the programmer
can't immediately tell that cycles aren't going to be an issue.

I love having a *real* garbage collector, but I've also dealt with C++
programs that are 100,000+ lines long and I wrote plenty of Python
code before it had a real garbage collector, and I never had any
problem with cyclic data structures causing leaks.  Cycles are really
not all that common, and when they do occur, it's usually not very
difficult to figure out where to add a few lines to a destructor to
break the cycle.
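
The cycle case is easy to demonstrate with the standard `gc` module:

```python
import gc

class Node:
    def __init__(self):
        self.peer = None

a, b = Node(), Node()
a.peer, b.peer = b, a     # a two-object reference cycle
del a, b                  # refcounts never reach zero...
found = gc.collect()      # ...but the cycle detector reclaims the pair
```

Breaking such a cycle by hand in cleanup code (e.g. setting
`a.peer = None` before dropping the object) is the kind of fix described
above for pre-GC Python.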

> And the refcounts are a performance pig in multithreaded code,
> because of how often they have to be incremented and updated.

I'm willing to pay the performance penalty to have the advantage of
not having to use constructs like "when".

Also, I'm not convinced that it has to be a huge performance hit.
Some Lisp implementations had a 1,2,3, many (or something like that)
reference-counter for reclaiming short-lived objects.  This bypassed
the real GC and was considered a performance optimization.  (It was
probably on a Lisp Machine, though, where they had special hardware to
help.)

> That's why CPython has the notorious GIL (a giant lock around the
> whole interpreter that stops more than one interpreter thread from
> running Python bytecode at once).

Re: Python's "only one way to do it" philosophy isn't good?

2007-06-27 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

> Is this where I get to call Lispers Blub programmers, because they
> can't see the clear benefit to a generic iteration interface?

I think you overstate your case.  Lispers understand iteration
interfaces perfectly well, but tend to prefer mapping functions to
iteration because mapping functions are both easier to code (they are
basically equivalent to coding generators) and efficient (like
non-generator-implemented iterators).  The downside is that they are
not quite as flexible as iterators (which can be hard to code) and
generators, which are slow.

Lispers have long since understood how to write mapping function to
iterator converters using stack groups or continuations, but Common
Lisp never mandated stack groups or continuations for conforming
implementations.  Scheme, of course, has continuations, and there are
implementations of Common Lisp with stack groups.

>> The difference is that lisp users can easily define python-like for
>> while python folks have to wait for the implementation.

> Yes, but Python already has it (so the wait time is 0), and the Lisp
> user doesn't.

So do Lispers, provided that they use an implementation of Lisp that
has the aforementioned extensions to the standard.  If they don't,
they are the unfortunately prisoners of the standardizing committees.

And, I guarantee you, that if Python were specified by a standardizing
committee, it would suffer this very same fate.

Regarding there being way too many good but incompatible
implementations of Lisp -- I understand.  The very same thing has
caused Ruby to incredibly rapidly close the lead that Python has
traditionally had over Ruby.  The reason for this is that there are
too many good but incompatible Python web dev frameworks, and only one
good one for Ruby.  So, we see that while Lisp suffers from too much
of a good thing, so does Python, and that may be the death of it if
Ruby on Rails keeps barreling down on Python like a runaway train.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-27 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

> On 6/27/07, Douglas Alan <[EMAIL PROTECTED]> wrote:

>> The C++ folks feel so strongly about this, that they refuse to provide
>> "finally", and insist instead that you use destructors and RAII to do
>> resource deallocation.  Personally, I think that's taking things a bit
>> too far, but I'd rather it be that way than lose the usefulness of
destructors and have to use "with" or "finally" to explicitly
>> deallocate resources.

> This totally misrepresents the case. The with statement and the
> context manager is a superset of the RAII functionality.

No, it isn't.  C++ allows you to define smart pointers (one of many
RAII techniques), which can use refcounting or other tracking
techniques.  Refcounting smart pointers are part of Boost and have
made it into TR1, which means they're on track to be included in the
next standard library.  One need not have waited for Boost, as they can
be implemented in about a page of code.

The standard library also has auto_ptr, which is a different sort of
smart pointer, which allows for somewhat fancier RAII than
scope-based.

> It doesn't overload object lifetimes, rather it makes the intent
> (code execution upon entrance and exit of a block) explicit.

But I don't typically wish for this sort of intent to be made
explicit.  TMI!  I used "with" for *many* years in Lisp, since this is
how non-memory resource deallocation has been dealt with in Lisp since
the dawn of time.  I can tell you from many years of experience that
relying on Python's refcounter is superior.

Shouldn't you be happy that there's something I like more about Python
than Lisp?

> Nobody in their right mind has ever tried to get rid of explicit
> resource management - explicit resource management is exactly what you
> do every time you create an object, or you use RAII, or you open a
> file.

This just isn't true.  For many years I have not had to explicitly
close files in Python.  Nor have I had to do so in C++.  They have
been closed for me implicitly.  "With" is not implicit -- or at least
not nearly as implicit as was previous practice in Python, or as is
current practice in C++.

> *Manual* memory management, where the tracking of references and
> scopes is placed upon the programmer, is what people are trying to
> get rid of and the with statement contributes to that goal, it
> doesn't detract from it.

As far as I am concerned, memory is just one resource amongst many,
and the programmer's life should be made easier in dealing with all
such resources.

> Before the with statement, you could do the same thing but you
> needed nested try/finally blocks

No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.
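The destructor-based pattern being described is a one-class affair in CPython; a minimal sketch (ManagedFile is an illustrative name, not a standard class):

```python
import os
import tempfile
import weakref

class ManagedFile:
    """Acquire the resource in __init__, release it in __del__; CPython's
    refcounting runs __del__ as soon as the last reference disappears."""
    def __init__(self, path):
        self.f = open(path, "w")
    def write(self, data):
        self.f.write(data)
    def __del__(self):
        self.f.close()

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
mf = ManagedFile(path)
mf.write("hello")
ref = weakref.ref(mf)
del mf                        # last reference: __del__ closes the file now
assert ref() is None          # the wrapper is already gone under CPython
assert open(path).read() == "hello"
```

No try/finally nesting is needed at the call site; the cleanup travels with the object.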

> RAII is a good technique, but don't get caught up on the
> implementation details.

I'm not -- I'm caught up in the loss of power and elegance that will
be caused by deprecating the use of destructors for resource
deallocation.

> The with statement does exactly the same thing, but is actually
> superior because
>
> a) It doesn't tie the resource managment to object creation. This
> means you can use, for example, with lock: instead of the C++ style
> Locker(lock)

I know all about "with".  As I mentioned above, Lisp has had it since
the dawn of time.  And I have nothing against it, since it is at times
quite useful.  I'm just dismayed at the idea of deprecating reliance
on destructors in favor of "with" for the majority of cases when the
destructor usage works well and is more elegant.

> b) You can tell whether you exited with an exception, and what that
> exception is, so you can take different actions based on error
> conditions vs expected exit. This is a significant benefit, it
> allows the application of context managers to cases where RAII is
> weak. For example, controlling transactions.

Yes, for the case where you might want to do fancy handling of
exceptions raised during resource deallocation, then "with" is
superior, which is why it is good to have in addition to the
traditional Python mechanism, not as a replacement for it.

>> Right, but that doesn't mean that 99.9% of the time, the programmer
>> can't immediately tell that cycles aren't going to be an issue.

> They can occur in the most bizarre and unexpected places. To the point
> where I suspect that the reality is simply that you never noticed your
> cycles, not that they didn't exist.

Purify tells me that I know more about the behavior of my code than
you do: I've *never* had any memory leaks in large C++ programs that
used refcounted smart pointers that were caused by cycles in my data
structures that I didn't know about.

Re: Python's "only one way to do it" philosophy isn't good?

2007-06-27 Thread Douglas Alan
Douglas Woodrow <[EMAIL PROTECTED]> writes:

> On Wed, 27 Jun 2007 01:45:44, Douglas Alan <[EMAIL PROTECTED]> wrote

>>A chaque son gout

> I apologise for this irrelevant interruption to the conversation, but
> this isn't the first time you've written that.

> The word "chaque" is not a pronoun.

A chacun son epellation.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-27 Thread Douglas Alan
Dennis Lee Bieber <[EMAIL PROTECTED]> writes:

>   But if these "macros" are supposed to allow one to sort of extend
> Python syntax, are you really going to code things like
>
>   macrolib1.keyword
> everywhere?

No -- I would expect that macros (if done the way that I would like
them to be done) would work something like so:

   from setMacro import macro set, macro let
   let x = 1
   set x += 1

The macros "let" and "set" (like all macro invocations) would have to
be the first tokens on a line.  They would be passed either the
strings "x = 1" and "x += 1", or some tokenized version thereof.
There would be parsing libraries to help them from there.

For macros that need to continue over more than one line, e.g.,
perhaps something like

   let x = 1
       y = 2
       z = 3
   set x = y + z
       y = x + z
       z = x + y
   print x, y, z

the macro would parse up to when the indentation returns to the previous
level.

For macros that need to return values, a new bracketing syntax would
be needed.  Perhaps something like:

   while $(let x = foo()):
       print x
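
A toy preprocessor along these lines is straightforward to sketch in Python (all names here are illustrative, and real macro parsing, including the indentation-based continuation described above, is omitted):

```python
def expand_macros(source, macros):
    """Rewrite each line whose first token names a registered macro;
    the macro receives the rest of the line as a string."""
    out = []
    for line in source.splitlines():
        stripped = line.lstrip()
        indent = line[:len(line) - len(stripped)]
        first = stripped.split(None, 1)[0] if stripped else ""
        if first in macros:
            rest = stripped[len(first):].lstrip()
            out.append(indent + macros[first](rest))
        else:
            out.append(line)
    return "\n".join(out)

# A hypothetical "let" macro: here it just passes the assignment through
# with an annotation, standing in for real declaration bookkeeping.
def let_macro(arg):
    return arg + "  # declared via let"

expanded = expand_macros("let x = 1\nprint(x)", {"let": let_macro})
assert expanded == "x = 1  # declared via let\nprint(x)"
```

The point is only that line-oriented, first-token macro dispatch needs no changes to the grammar; everything after the keyword is handed to the macro as text.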

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-28 Thread Douglas Alan
Steve Holden <[EMAIL PROTECTED]> writes:

> Douglas Woodrow wrote:

>> On Wed, 27 Jun 2007 01:45:44, Douglas Alan <[EMAIL PROTECTED]> wrote

>>> A chaque son gout

>> I apologise for this irrelevant interruption to the conversation,
>> but this isn't the first time you've written that.  The word
>> "chaque" is not a pronoun.

>> http://grammaire.reverso.net/index_alpha/Fiches/Fiche220.htm

> Right, he probably means "Chaqu'un à son gout" (roughly, each to his
> own taste).

Actually, it's "chacun".  And the "à" may precede the "chacun".

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-28 Thread Douglas Alan
Steve Holden <[EMAIL PROTECTED]> writes:

>> Actually, it's "chacun".  And the "à" may precede the "chacun".

>> |>oug

> "chacun" is an elision of the two words "Chaque" (each) and "un"
> (one), and use of those two words is at least equally correct, though
> where it stands in modern usage I must confess I have no idea.

Google can answer that: 158,000 hits for "chaqu'un", 57 million for
"chacun".

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-28 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

> Obviously. But theres nothing about the with statement that's
> different than using smart pointers in this regard.

Sure there is -- smart pointers handle many sorts of situations, while
"with" only handles the case where the lifetime of the object
corresponds to the scope.

>> But I don't typically wish for this sort of intent to be made
>> explicit.  TMI!  I used "with" for *many* years in Lisp, since this
>> is how non-memory resource deallocation has been dealt with in Lisp
>> since the dawn of time.  I can tell you from many years of
>> experience that relying on Python's refcounter is superior.

> I question the relevance of your experience, then.

Gee, thanks.

> Refcounting is fine for memory, but as you mention below, memory is
> only one kind of resource and refcounting is not necessarily the
> best technique for all resources.

I never said that it is the best technique for *all* resources.  Just
the most typical ones.

>> This just isn't true.  For many years I have not had to explicitly
>> close files in Python.  Nor have I had to do so in C++.  They have
>> been closed for me implicitly.  "With" is not implicit -- or at least
>> not nearly as implicit as was previous practice in Python, or as is
>> current practice in C++.

> You still don't have to manually close files. But you cannot, and
> never could, rely on them being closed at a given time unless you
> did so.

You could for most intents and purposes.

> If you need a file to be closed in a deterministic manner, then you
> must close it explicitly.

You don't typically need them to be closed in a completely fool-proof
deterministic fashion.  If some other code catches your exceptions and
holds onto the traceback then it must know that it can be delaying a few
file-closings, or the like.

> The with statement is not implicit and never has been. Implicit
> resource management is *insufficient* for the general resource
> management case. It works fine for memory, it's okay for files
> (until it isn't), it's terrible for thread locks and network
> connections and database transactions. Those things require
> *explicit* resource management.

Yes, I agree there are certain situations in which you certainly want
"with", or something like it.  I've never disagreed with that
assertion at all.  I just don't agree that for most Python code this
is the *typical* case.

> To the extent that your code ever worked when you relied on this
> detail, it will continue to work.

I've written plenty of Python code that relied on destructors to
deallocate resources, and the code always worked.

> There are no plans to replace pythons refcounting with fancier GC
> schemes that I am aware of.

This is counter to what other people have been saying.  They have been
worrying me by saying that the refcounter may go away and so you may
not be able to rely on predictable object lifetimes in the future.

> Nothing about Pythons memory management has changed. I know I'm
> repeating myself here, but you just don't seem to grasp this
> concept.  Python has *never* had deterministic destruction of
> objects. It was never guaranteed, and code that seemed like it
> benefited from it was fragile.

It was not fragile in my experience.  If a resource *positively*,
*absolutely* needed to be deallocated at a certain point in the code
(and occasionally that was the case), then I would code that way.  But
that has been far from the typical case for me.

>> Purify tells me that I know more about the behavior of my code than
>> you do: I've *never* had any memory leaks in large C++ programs that
>> used refcounted smart pointers that were caused by cycles in my data
>> structures that I didn't know about.

> I'm talking about Python refcounts. For example, a subtle resource
> leak that has caught me before is that tracebacks hold references to
> locals in the unwound stack.

Yes, I'm aware of that.  Most programs don't hold onto tracebacks for
long.  If you are working with software that does, then, I agree, that
sometimes one will have to code things more precisely.
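The effect Chris describes is easy to demonstrate: a saved traceback keeps the frame locals of the failed call alive, and dropping it frees them (a sketch, assuming CPython refcounting semantics):

```python
import sys
import weakref

class Resource:
    pass

def fail():
    res = Resource()           # a local the traceback will keep alive
    probe = weakref.ref(res)
    raise ValueError(probe)

saved = None
try:
    fail()
except ValueError as e:
    probe = e.args[0]
    saved = sys.exc_info()     # squirrel the traceback away

assert probe() is not None     # 'res' lives on in the saved frames
saved = None                   # drop the traceback ...
assert probe() is None         # ... and the local is finally freed
```

So the leak lasts exactly as long as someone keeps the traceback, which is the precondition under discussion.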

> If you relied on refcounting to clean up a resource, and you needed
> exception handling, the resource wasn't released until *after* the
> exception unwound, which could be a problem. Also holding onto
> tracebacks for latter processing (not uncommon in event based
> programs) would artificially extend the lifetime of the resource. If
> the resource you were managing was a thread lock this could be a
> real problem.

Right -- I've always explicitly managed thread locks.

>> I really have no desire to code in C, thank you.  I'd rather be coding
>> in Python.  (Hence my [idle] desire for macros in Python, so that I
>> could do even more of my work in Python.)

> In this particular conversation, I really don't think that theres much
> to say beyond put up or shut up.

I think your attitude here is unPythonic.

> The experts in the field have said that it's not practical.

Gu

Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Michele Simionato <[EMAIL PROTECTED]> writes:

>> I've written plenty of Python code that relied on destructors to
>> deallocate resources, and the code always worked.

> You have been lucky:

No I haven't been lucky -- I just know what I'm doing.

> $ cat deallocating.py
> import logging
>
> class C(object):
> def __init__(self):
> logging.warn('Allocating resource ...')
>
> def __del__(self):
> logging.warn('De-allocating resource ...')
> print 'THIS IS NEVER REACHED!'
>
> if __name__ == '__main__':
> c = C()
>
> $ python deallocating.py
> WARNING:root:Allocating resource ...
> Exception exceptions.AttributeError: "'NoneType' object has no
> attribute 'warn'" in <bound method C.__del__ of <__main__.C object at
> 0xb7b9436c>> ignored

Right.  So?  I understand this issue completely and I code
accordingly.
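
One way to "code accordingly" around the interpreter-shutdown gotcha above is the default-argument trick: bind whatever __del__ needs at class-definition time, so it no longer depends on module globals that may already have been torn down. A sketch (the events list stands in for the logging calls):

```python
events = []

class C(object):
    def __init__(self):
        events.append('allocated')
    # Default-argument trick: the callable is bound when the class body
    # runs, so __del__ still works at interpreter shutdown, when module
    # globals (like the 'logging' module above) may already be gone.
    def __del__(self, _log=events.append):
        _log('deallocated')

c = C()
del c   # refcount hits zero; __del__ runs immediately under CPython
assert events == ['allocated', 'deallocated']
```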

> Just because your experience has been positive, you should not
> dismiss the opinion who have clearly more experience than you on
> the subtilities of Python.

I don't dismiss their opinion at all.  All I've stated is that for my
purposes I find that the refcounting semantics of Python to be useful,
expressive, and dependable, and that I wouldn't like it one bit if
they were removed from Python.

Those who claim that the refcounting semantics are not useful are the
ones who are dismissing my experience.  (And the experience of
zillions of other Python programmers who have happily been relying on
them.)

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Dennis Lee Bieber <[EMAIL PROTECTED]> writes:

>   LISP and FORTH are cousins...

Not really.  Their only real similarity (other than the similarities
shared by most programming languages) is that they both use a form of
Polish notation.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Hrvoje Niksic <[EMAIL PROTECTED]> writes:

> Douglas Alan <[EMAIL PROTECTED]> writes:

>> I think you overstate your case.  Lispers understand iteration
>> interfaces perfectly well, but tend to prefer mapping functions to
>> iteration because mapping functions are both easier to code (they
>> are basically equivalent to coding generators) and efficient (like
>> non-generator-implemented iterators).  The downside is that they are
>> not quite as flexible as iterators (which can be hard to code) and
>> generators, which are slow.

> Why do you think generators are any slower than hand-coded iterators?

Generators aren't slower than hand-coded iterators in *Python*, but
that's because Python is a slow language.  In a fast language, such as
a Lisp, generators are like 100 times slower than mapping functions.
(At least they were on Lisp Machines, where generators were
implemented using a more general coroutining mechanism [i.e., stack
groups].  *Perhaps* there would be some opportunities for more
optimization if they had used a less general mechanism.)

CLU, which I believe is the language that invented generators, limited
them to the power of mapping functions (i.e., you couldn't have
multiple generators instantiated in parallel), making them really
syntactic sugar for mapping functions.  The reason for this limitation
was performance.  CLU was a fast language.
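
The contrast between the two styles can be made concrete in Python, including the parallel instantiation that CLU disallowed:

```python
# A mapping function drives the loop itself and calls back per element,
# so no per-item suspend/resume machinery is needed ...
def map_elements(func, items):
    for item in items:
        func(item)

# ... while a generator inverts control, suspending at each yield so the
# caller pulls items one at a time, and several can run interleaved.
def gen_elements(items):
    for item in items:
        yield item

collected = []
map_elements(collected.append, [1, 2, 3])
assert collected == [1, 2, 3]

g1, g2 = gen_elements("ab"), gen_elements("xy")
assert (next(g1), next(g2), next(g1)) == ("a", "x", "b")  # interleaved
```

Restricting generators to the mapping-function pattern means forbidding the interleaved use of g1 and g2 shown on the last two lines.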

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

> You're arguing against explicit resource management with the argument
> that you don't need to manage resources. Can you not see how
> ridiculously circular this is?

No.  It is insane to leave files unclosed in Java (unless you know for
sure that your program is not going to be opening many files) because
you don't even know that the garbage collector will ever even run, and
you could easily run out of file descriptors, and hog system
resources.

On the other hand, in Python, you can be 100% sure that your files
will be closed in a timely manner without explicitly closing them, as
long as you are safe in making certain assumptions about how your code
will be used.  Such assumptions are called "preconditions", which are
an understood notion in software engineering and by me when I write
software.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

>> On the other hand, in Python, you can be 100% sure that your files
>> will be closed in a timely manner without explicitly closing them, as
>> long as you are safe in making certain assumptions about how your code
>> will be used.  Such assumptions are called "preconditions", which are
>> an understood notion in software engineering and by me when I write
>> software.

> Next time theres one of those "software development isn't really
> engineering" debates going on I'm sure that we'll be able to settle
> the argument by pointing out that relying on *explicitly* unreliable
> implementation details is defined as "engineering" by some people.

The proof of the pudding is in the eating.  I've worked on very large
programs that exhibited very few bugs, and ran flawlessly for many
years.  One managed the memory remotely of a space telescope, and the
code was pretty tricky.  I was sure when writing the code that there
would be a number of obscure bugs that I would end up having to pull
my hair out debugging, but it's been running flawlessly for more than
a decade now, with hardly any debugging required at all.

Engineering to a large degree is knowing where to dedicate your
efforts.  If you dedicate them to where they are not needed, then you
have less time to dedicate them to where they truly are.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Jean-Paul Calderone <[EMAIL PROTECTED]> writes:

>>On the other hand, in Python, you can be 100% sure that your files
>>will be closed in a timely manner without explicitly closing them, as
>>long as you are safe in making certain assumptions about how your code
>>will be used.  Such assumptions are called "preconditions", which are
>>an understood notion in software engineering and by me when I write
>>software.

> You realize that Python has exceptions, right?

Yes, of course.

> Have you ever encountered a traceback object?

Yes, of course.

> Is one of your preconditions that no one will ever handle an
> exception raised by your code or by their own code when it is
> invoked by yours?

A precondition of much of my Python code is that callers won't
squirrel away large numbers of tracebacks for long periods of time.  I
can live with that.  Another precondition of much of my code is that
the caller doesn't assume that it is thread-safe.  Another
precondition is that the caller doesn't assume that it is likely to
meet real-time constraints.  Another precondition is that the caller
doesn't need my functions to promise not to generate any garbage that
might cause the GC to be invoked.

If I had to write all my code to work well without making *any*
assumptions about what the needs of the caller might be, then my code
would have to be much more complicated, and then I'd spend more effort
making my code handle situations that it won't face for my purposes.
Consequently, I'd have less time to make my software have the
functionality that I actually require.

Regarding, specifically, tracebacks holding onto references to open
files -- have you considered that you may actually *want* to see the
file in the state that it was in when the exception was raised for the
purposes of debugging, rather than having it forcefully closed on you?

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Hrvoje Niksic <[EMAIL PROTECTED]> writes:

>> Generators aren't slower than hand-coded iterators in *Python*, but
>> that's because Python is a slow language.

> But then it should be slow for both generators and iterators.

Python *is* slow for both generators and iterators.  It's slow for
*everything*, except for cases when you can have most of the work done
within C-coded functions or operations that perform a lot of work
within a single call.  (Or, of course, cases where you are i/o
limited, or whatever.)

>> *Perhaps* there would be some opportunities for more optimization if
>> they had used a less general mechanism.)

> Or if the generators were built into the language and directly
> supported by the compiler.  In some cases implementing a feature is
> *not* a simple case of writing a macro, even in Lisp.  Generators may
> well be one such case.

You can't implement generators in Lisp (with or without macros)
without support for generators within the Lisp implementation.  This
support was provided as "stack groups" on Lisp Machines and as
continuations in Scheme.  Both stack groups and continuations are
slow.  I strongly suspect that if they had provided direct support for
generators, rather than indirectly via stack groups and continuations,
that that support would have been slow as well.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Steve Holden <[EMAIL PROTECTED]> writes:

> "Python" doesn't *have* any refcounting semantics.

I'm not convinced that Python has *any* semantics at all outside of
specific implementations.  It has never been standardized to the rigor
of your typical barely-readable language standards document.

> If you rely on the behavior of CPython's memory allocation and
> garbage collection you run the risk of producing programs that won't
> port tp Jython, or IronPython, or PyPy, or ...

> This is a trade-off that many users *are* willing to make.

Yes, I have no interest at the moment in trying to make my code
portable between every possible implementation of Python, since I have
no idea what features such implementations may or may not support.
When I code in Python, I'm coding for CPython.  In the future, I may
do some stuff in Jython, but I wouldn't call it "Python" -- I'd call
it "Jython".  When I do code for Jython, I'd be using it to get to
Java libraries that would make my code non-portable to CPython, so
portability here seems to be a red herring.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Duncan Booth <[EMAIL PROTECTED]> writes:

>> A precondition of much of my Python code is that callers won't
>> squirrel away large numbers of tracebacks for long periods of time.  I
>> can live with that.  Another precondition of much of my code is that
>> the caller doesn't assume that it is thread-safe.  Another
>> precondition is that the caller doesn't assume that it is likely to
>> meet real-time constraints.  Another precondition is that the caller
>> doesn't need my functions to promise not to generate any garbage that
>> might cause the GC to be invoked.

> None of that is relevant.

Of course it is.  I said "large number of tracebacks" up there, and
you promptly ignored that precondition in your subsequent
counterexample.

> Have you ever seen any code looking roughly like this?

> def mainloop():
>while somecondition:
>   try:
>   dosomestuff()
>   except SomeExceptions:
>   handletheexception()

Of course.

> Now, imagine somewhere deep inside dosomestuff an exception is
> raised while you have a file open and the exception is handled in
> mainloop. If the loop then continues with a fresh call to
> dosomestuff the traceback object will continue to exist until the
> next exception is thrown or until mainloop returns.

It's typically okay in my software for a single (or a few) files to
remain open for longer than I might expect.  What it couldn't handle
is running out of file descriptors, or the like.  (Just like it
couldn't handle running out of memory.)  But that's not going to
happen with your counterexample.

If I were worried about a file or two remaining open too long, I'd
clear the exception in the mainloop above, after handling it.  Python
lets you do that, doesn't it?
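
It does: in the Python 2 of this thread, sys.exc_clear() cleared the current exception explicitly, and in Python 3 the handled exception (with its traceback and captured frames) is dropped automatically when the except block is exited. A Python 3 sketch of the mainloop shape above:

```python
import sys

def dosomestuff(should_fail):
    if should_fail:
        raise RuntimeError("boom")

try:
    dosomestuff(True)
except RuntimeError:
    pass   # handled; Python 3 discards the exception and its traceback
           # as soon as this block is left

# Outside the handler, no stale exception state lingers in this frame.
assert sys.exc_info() == (None, None, None)
```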

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-29 Thread Douglas Alan
Lenard Lindstrom <[EMAIL PROTECTED]> writes:

> Douglas Alan wrote:

>> [I]n Python, you can be 100% sure that your files
>> will be closed in a timely manner without explicitly closing them, as
>> long as you are safe in making certain assumptions about how your code
>> will be used.  Such assumptions are called "preconditions", which are
>> an understood notion in software engineering and by me when I write
>> software.

> So documenting an assumption is more effective than removing the
> assumption using a with statement?

Once again I state that I have nothing against "with" statements.  I
used it all the time ages ago in Lisp.

But (1) try/finally blocks were not to my liking for this sort of
thing because they are verbose and I think error-prone for code
maintenance.  I and many others prefer relying on the refcounter for
file closing over the try/finally solution.  Consequently, using the
refcounter for such things is a well-entrenched and succinct idiom.
"with" statements are a big improvement over try/finally, but for
things like file closing, it's six of one, half dozen of the other
compared against just relying on the refcounter.

(2) "with" statements do not work in all situations because often you
need to have an open file (or what have you) survive the scope in
which it was opened.  You may need to have multiple objects be able to
read and/or write to the file.  And yet, the file may not want to be
kept open for the entire life of the program.  If you have to decide
when to explicitly close the file, then you end up with the same sort
of modularity issues as when you have to free memory explicitly.  The
refcounter handles these sorts of situations with aplomb.

(3) Any code that is saving tracebacks should assume that it is likely
to cause trouble, unless it is using code that is explicitly
documented to be robust in the face of this, just as any code that
wants to share objects between multiple threads should assume that
this is likely to cause trouble, unless it is using code that is
explicitly documented to be robust in the face of this.

(4) Any code that catches exceptions should either return soon or
clear the exception.  If it doesn't, the problem is not with the
callee, but with the caller.

(5) You don't necessarily want a function that raises an exception to
deallocate all of its resources before raising the exception, since
you may want access to these resources for debugging, or what have you.
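
Point (2) can be seen directly: a file shared by several objects stays open exactly as long as any of them survives (a sketch, assuming CPython refcounting):

```python
import os
import tempfile
import weakref

path = os.path.join(tempfile.mkdtemp(), "shared.txt")
f = open(path, "w")
probe = weakref.ref(f)

class Writer(object):
    def __init__(self, fileobj):
        self.f = fileobj            # several objects share one open file
    def write(self, text):
        self.f.write(text)

a, b = Writer(f), Writer(f)
del f                               # the file outlives its opening scope
a.write("x")
b.write("y")
del a
assert probe() is not None          # still open: b holds a reference
del b                               # last reference gone ...
assert probe() is None              # ... CPython closes it immediately
```

No single owner has to decide when to close; the last reference does, which is exactly the modularity argument being made.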

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-30 Thread Douglas Alan
Paul Rubin <http://[EMAIL PROTECTED]> writes:

> Douglas Alan <[EMAIL PROTECTED]> writes:

>> But that's a library issue, not a language issue.  The technology
>> exists completely within Lisp to accomplish these things, and most
>> Lisp programmers even know how to do this, as application frameworks
>> in Lisp often do this kind.  The problem is getting anything put into
>> the standard.  Standardizing committees just suck.

> Lisp is just moribund, is all.  Haskell has a standardizing committee
> and yet there are lots of implementations taking the language in new
> and interesting directions all the time.  The most useful extensions
> become de facto standards and then they make it into the real
> standard.

You only say this because you are not aware of all the cool dialects
of Lisp that are invented.  The problem is that they rarely leave the
tiny community that uses them, because each community comes up with
its own different cool dialect of Lisp.  So, clearly the issue is not
one of any lack of motivation or people working on Lisp innovations --
it's getting them to sit down together and agree on a standard.

This, of course is a serious problem.  One that is very similar to the
problem with Python vs. Ruby on Rails.  It's not the problem that you are
ascribing to Lisp, however.

|>oug

P.S. Besides, Haskell is basically a refinement of ML, which is a
dialect of Lisp.

P.P.S. I doubt that any day soon any purely (or even mostly)
functional language is going to gain any sort of popularity outside of
academia.  Maybe 20 years from now, they will, but I wouldn't bet on
it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-30 Thread Douglas Alan
Michele Simionato <[EMAIL PROTECTED]> writes:

>> Right.  So?  I understand this issue completely and I code
>> accordingly.

> What does it mean you 'code accordingly'? IMO the only clean way out
> of this issue is to NOT rely on the garbage collector and to manage
> resource deallocation explicitely, not implicitely.

(1) I don't rely on the refcounter for resources that ABSOLUTELY,
POSITIVELY must be freed before the scope is left.  In the code that
I've worked on, only a small fraction of resources would fall into
this category.  Open files, for instance, rarely do.  For open files,
in fact, I actually want access to them in the traceback for debugging
purposes, so closing them using "with" would be the opposite of what I
want.

(2) I don't squirrel away references to tracebacks.

(3) If a procedure catches an exception but isn't going to return
quickly, I clear the exception.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-30 Thread Douglas Alan
Paul Rubin <http://[EMAIL PROTECTED]> writes:

> Douglas Alan <[EMAIL PROTECTED]> writes:

>> P.S. Besides Haskell is basically a refinement of ML, which is a
>> dialect of Lisp.

> I'd say Haskell and ML are descended from Lisp, just like mammals are
> descended from fish.

Hardly -- they all want to share the elegance of lambda calculus,
n'est-ce pas?  Also, ML was originally implemented in Lisp and, IIRC,
at least in early versions it shared much of Lisp's syntax.

Also, Scheme has a purely functional core (few people stick to it, of
course), and there are purely functional dialects of Lisp.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-30 Thread Douglas Alan
Paul Rubin  writes:

> Haskell and ML both evaluate typed lambda calculus, unlike Lisp
> which is based on untyped lambda calculus.  Certainly the most
> familiar features of Lisp (dynamic typing, S-expression syntax,
> programs as data (Lisp's macro system results from this)) are absent
> from Haskell and ML.

And that is supposed to make them better and more flexible???  The
ideal language of the future will have *optional* manifest typing
along with type-inference, and will have some sort of pragma to turn
on warnings when variables are forced to become dynamic due to there
not being enough type information to infer the type.  But it will
still allow programming with dynamic typing when that is necessary.

The last time I looked at Haskell, it was still in the stage of being
a language that only an academic could love.  Though, it was certainly
interesting.

> Haskell's type system lets it do stuff that Lisp can't approach.

What kind of stuff?  Compile-time polymorphism is cool for efficiency
and type safety, but doesn't actually provide you with any extra
functionality that I'm aware of.

> I'm reserving judgement about whether Haskell is really practical
> for application development, but it can do stuff that no traditional
> Lisp can (e.g. its concurrency and parallelism stuff, with
> correctness enforced by the type system).  It makes it pretty clear
> that Lisp has become Blub.

Where do you get this idea that the Lisp world does not get such
things as parallelism?  StarLisp was designed for the Connection
Machine by Thinking Machines themselves.  The Connection Machine was
one of the most parallel machines ever designed.  Alas, it was ahead of
its time.

Also, I know a research scientist at CSAIL at MIT who has designed and
implemented a version of Lisp for doing audio and video art.  It was
designed from the ground-up to deal with realtime audio and video
streams as first class objects.  It's actually pretty incredible -- in
just a few lines of code, you can set up a program that displays the
same video multiplied and tiled into a large grid of little videos
tiles, but where a different filter or delay is applied to each tile.
This allows for some stunningly strange and interesting video output.
Similar things can be done in the language with music (though if you
did that particular experiment it would probably just sound
cacophonous).

Does that sound like an understanding of concurrency to you?  Yes, I
thought so.

Also, Dylan has optional manifests types and type inference, so the
Lisp community understands some of the benefits of static typing.
(Even MacLisp had optional manifest types, but they weren't  there for
safety, but rather for performance.  Using them, you could get Fortran
level of performance out of Lisp, which was quite a feat at the time.)

> ML's original implementation language is completely irrelevant;
> after all Python is still implemented in C.

Except that in the case of ML, it was mostly just a thin veneer on
Lisp that added a typing system and type inference.

>> Also, Scheme has a purely functional core (few people stick to it, of
>> course), and there are purely functional dialects of Lisp.

> Scheme has never been purely functional.  It has had mutation since
> the beginning.

I never said that it was purely functional -- I said that it has a
purely functional core.  I.e., all the functions that have side
effects have a "!" on their ends (or at least they did when I learned
the language), and there are styles of programming in Scheme that
discourage using any of those functions.

|>oug

P.S.  The last time I took a language class (about five or six years
ago), the most interesting languages I thought were descended from
Self, not any functional language.  (And Self, of course is descended
from Smalltalk, which is descended from Lisp.)


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-30 Thread Douglas Alan
Lenard Lindstrom <[EMAIL PROTECTED]> writes:

> Explicitly clear the exception? With sys.exc_clear?

Yes.  Is there a problem with that?

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-06-30 Thread Douglas Alan
I wrote:

> P.S.  The last time I took a language class (about five or six years
> ago), the most interesting languages I thought were descended from
> Self, not any functional language.  (And Self, of course is descended
> from Smalltalk, which is descended from Lisp.)

I think that Cecil is the particular language that I was most thinking
of:

   http://en.wikipedia.org/wiki/Cecil_programming_language

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom <[EMAIL PROTECTED]> writes:

>>> Explicitly clear the exception? With sys.exc_clear?

>> Yes.  Is there a problem with that?

> As long as nothing tries to re-raise the exception I doubt it breaks
> anything:
>
>  >>> import sys
>  >>> try:
>   raise StandardError("Hello")
> except StandardError:
>   sys.exc_clear()
>   raise
>
>
> Traceback (most recent call last):
>File "", line 5, in 
>  raise
> TypeError: exceptions must be classes, instances, or strings
> (deprecated), not NoneType

I guess I don't really see that as a problem.  Exceptions should
normally only be re-raised where they are caught.  If a piece of code
has decided to handle an exception, and considers it dealt with, there
is no reason for it not to clear the exception, and good reason for it
to do so.  Also, any caught exception is automatically cleared when
the catching procedure returns anyway, so it's not like Python has
ever considered a caught exception to be precious information that
ought to be preserved long past the point where it is handled.

> But it is like calling the garbage collector. You are tuning the
> program to ensure some resource isn't exhausted.

I'm not sure I see the analogy: Calling the GC can be expensive,
clearing an exception is not.  The exception is going to be cleared
anyway when the procedure returns, the GC wouldn't likely be.

It's much more like explicitly assigning None to a variable that
contains a large data structure when you no longer need the contents
of the variable.  Doing this sort of thing can be a wise thing to do
in certain situations.

> It relies on implementation specific behavior to be provably
> reliable*.

As Python is not a formally standardized language, and one typically
relies on the fact that CPython itself is ported to just about every
platform known to Man, I don't find this to be a particular worry.

> If this is indeed the most obvious way to do things in your
> particular use case then Python, and many other languages, is
> missing something. If the particular problem is isolated,
> formalized, and general solution found, then a PEP can be
> submitted. If accepted, this would ensure future and cross-platform
> compatibility.

Well, I think that the refcounting semantics of CPython are useful,
and allow one to often write simpler, easier-to-read and maintain
code.  I think that Jython and IronPython, etc., should adopt these
semantics, but I imagine they might not for performance reasons.  I
don't generally use Python for it's speediness, however, but rather
for it's pleasant syntax and semantics and large, effective library.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom <[EMAIL PROTECTED]> writes:

>> You don't necessarily want a function that raises an exception to
>> deallocate all of its resources before raising the exception, since
>> you may want access to these resources for debugging, or what have
>> you.

> No problem:
>
> [...]
>
>  >>> class MyFile(file):
>   def __exit__(self, exc_type, exc_val, exc_tb):
>   if exc_type is not None:
>   self.my_last_posn = self.tell()
>   return file.__exit__(self, exc_type, exc_val, exc_tb)

I'm not sure I understand you here.  You're saying that I should have
the foresight to wrap all my file opens in a special class to
facilitate debugging?

If so, (1) I don't have that much foresight and don't want to have
to.  (2) I debug code that other people have written, and they often
have less foresight than me.  (3) It would make my code less clear to
have every file open wrapped in some special class.

Or are you suggesting that early in __main__.main(), when I wish to
debug something, I do something like:

   __builtins__.open = __builtins__.file = MyFile

?

I suppose that would work.  I'd still prefer to clear exceptions,
though, in those few cases in which a function has caught an exception
and isn't going to be returning soon and have the resources generally
kept alive in the traceback.  To me, that's the more elegant and
general solution.
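For what it's worth, here is a rough sketch of the kind of wrapper
being discussed, written as a context manager in modern (Python 3)
style rather than as a subclass of the old 'file' type.  The class
name and the 'last_error_pos' attribute are purely illustrative, not
any real API:

```python
class DebugFile(object):
    """Hypothetical debugging wrapper: if an exception escapes the
    'with' block, remember the file position before closing."""

    def __init__(self, path, mode="r"):
        self._f = open(path, mode)
        self.last_error_pos = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            # Capture debugging state before the file goes away.
            self.last_error_pos = self._f.tell()
        self._f.close()
        return False  # never suppress the exception

    def __getattr__(self, name):
        # Delegate everything else (read, write, tell, ...) to the file.
        return getattr(self._f, name)
```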

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom <[EMAIL PROTECTED]> writes:

>> I'm not sure I understand you here.  You're saying that I should have
>> the foresight to wrap all my file opens is a special class to
>> facilitate debugging?

> Obviously you had the foresight to realize with statements could
> compromise debugging. I never considered it myself.

It's not really so much a matter of having foresight, as much as
having had experience debugging a fair amount of code.  And, at times,
having benefited from the traditional idiomatic way of coding in
Python, where files are not explicitly closed.

Since there are benefits with the typical coding style, and I find
there to be no significant downside, other than, perhaps, if some code
holds onto tracebacks, I suggest that the problem be idiomatically
addressed in the *few* code locations that hold onto tracebacks,
rather than in all the *myriad* code locations that open and close
files.

>> Or are you suggesting that early in __main__.main(), when I wish to
>> debug something, I do something like:
>>__builtins__.open = __builtins__.file = MyFile
>> ?
>> I suppose that would work.

> No, I would never suggest replacing a builtin like that. Even
> replacing a definite hook like __import__ is risky, should more than
> one package try and do it in a program.

That misinterpretation of your idea would only be reasonable while
actually debugging, not for standard execution.  Standard rules of
coding elegance don't apply while debugging, so I think the
misinterpretation might be a reasonable alternative.  Still I think
I'd just prefer to stick to the status quo in this regard.

> As long as the code isn't dependent on explicitly cleared
> exceptions. But if it is I assume it is well documented.

Typically the resource in question is an open file.  These usually
don't have to be closed in a particularly timely fashion.  If, for
some reason, a file absolutely needs to be closed rapidly, then it's
probably best to use "with" in such a case.  Otherwise, I vote for the
de facto standard idiom of relying on the refcounter along with
explicitly clearing exceptions in the situations we've previously
discussed.

If some code doesn't explicitly clear an exception, though, and holds
onto the most recent one while running in a loop (or what have
you), in the cases we are considering, it hardly seems like the end of
the world.  It will just take a little bit longer for a single file to
be closed than might ideally be desired.  But this lack of ideal
behavior is usually not going to cause much trouble.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom <[EMAIL PROTECTED]> writes:

>> Also, any caught exception is automatically cleared when
>> the catching procedure returns anyway, so it's not like Python has
>> ever considered a caught exception to be precious information that
>> ought to be preserved long past the point where it is handled.

> That's the point. Python takes care of clearing the traceback. Calls
> to exc_clear are rarely seen.

But that's probably because it's very rare to catch an exception and
then not return quickly.  Typically, the only place this would happen
is in main(), or one of its helpers.

> If they are simply a performance tweak then it's not an issue *. I
> was just concerned that the calls were necessary to keep resources
> from being exhausted.

Well, if you catch an exception and don't return quickly, you have to
consider not only the possibility that there could be some open files
left in the traceback, but also that there could be large and
now-useless data structures stored in the traceback.

Some people here have been arguing that all code should use "with" to
ensure that the files are closed.  But this still wouldn't solve the
problem of the large data structures being left around for an
arbitrary amount of time.
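To make the point concrete, here is a small sketch showing that a
live exception keeps the raising frame's locals alive.  (This is
written for Python 3, where the exception state is cleared
automatically when the except block ends; in the Python 2 of this
thread, one would call sys.exc_clear() to get the same effect.)

```python
import weakref

class Big(object):
    """Stand-in for a large data structure."""

trace = {}

def fail():
    big = Big()                      # a large local variable
    trace["ref"] = weakref.ref(big)
    raise RuntimeError("boom")       # 'big' gets trapped in the traceback

try:
    fail()
except RuntimeError:
    # While the exception is being handled, its traceback keeps the
    # frame of fail() -- and therefore 'big' -- alive.
    assert trace["ref"]() is not None

# Once the except block ends and nothing has squirreled the traceback
# away, CPython's refcounter frees the frame, and 'big' with it.
assert trace["ref"]() is None
```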

> But some things will make it into ISO Python.

Is there a movement afoot of which I'm unaware to make an ISO standard
for Python?

> Just as long as you have weighed the benefits against a future move
> to a JIT-accelerated, continuation supporting PyPy interpreter that
> might not use reference counting.

I'll worry about that day when it happens, since many of my calls to
the standard library will probably break anyway at that point.  Not to
mention that I don't stay within the confines of Python 2.2, which is
where Jython currently is.  (E.g., Jython does not have generators.)
Etc.

>> I think that Jython and IronPython, etc., should adopt these
>> semantics, but I imagine they might not for performance reasons.  I
>> don't generally use Python for its speediness, however, but rather
>> for its pleasant syntax and semantics and large, effective
>> library.

> Yet improved performance appeared to be a priority in Python 2.4
> development, and Python's speed continues to be a concern.

I don't think the refcounting semantics should slow Python down much
considering that it never has aimed for C-level performance anyway.
(Some people claim it's a drag on supporting threads.  I'm skeptical,
though.)  I can see it being a drag on something like Jython, though,
where you are going through a number of different layers to get from
Jython code to the hardware.

Also, I imagine that no one wants to put in the work in Jython to have
a refcounter when the garbage collector comes with the JVM for free.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-05 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

>> Some people here have been arguing that all code should use "with" to
>> ensure that the files are closed.  But this still wouldn't solve the
>> problem of the large data structures being left around for an
>> arbitrary amount of time.

> I don't think anyone has suggested that. Let me be clear about *my*
> position: When you need to ensure that a file has been closed by a
> certain time, you need to be explicit about it. When you don't care,
> just that it will be closed "soonish" then relying on normal object
> lifetime calls is sufficient. This is true regardless of whether
> object lifetimes are handled via refcount or via "true" garbage
> collection.

But it's *not* true at all when relying only on a "true GC"!  Your
program could easily run out of file descriptors if you only have a
real garbage collector and code this way (and are opening lots of
files).  This is why destructors are useless in Java -- you can't rely
on them *ever* being called.  In Python, however, destructors are
quite useful due to the refcounter.
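A minimal illustration of the difference (CPython-specific, and that
is exactly the point -- under a tracing collector the final assertion
could not be relied upon):

```python
class Tracked(object):
    """Sets a flag when its finalizer runs."""
    freed = False

    def __del__(self):
        Tracked.freed = True

t = Tracked()
assert not Tracked.freed

del t   # drop the last reference
# Under CPython's refcounting, __del__ runs right here,
# deterministically.  Under a tracing collector (Jython, IronPython,
# the JVM) it may run arbitrarily later, or never, which is why
# destructors can't be relied on for cleanup there.
assert Tracked.freed
```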

> Relying on the specific semantics of refcounting to give
> certain lifetimes is a logic error.
>
> For example:
>
> f = some_file() #maybe it's the file store for a database implementation
> f.write('a bunch of stuff')
> del f
> #insert code that assumes f is closed.

That's not a logic error if you are coding in CPython, though I agree
that in this particular case the explicit use of "with" would be
preferable due to its clarity.

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-06 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

> Sure, but thats part of the general refcounting vs GC argument -
> refcounting gives (a certain level of) timeliness in resource
> collection, GC often only runs under memory pressure. If you're
> saying that we should keep refcounting because it provides better
> handling of non-memory limited resources like file handles, I
> probably wouldn't argue. But saying we should keep refcounting
> because people like to and should write code that relies on implicit
> scope level object destruction I very strongly argue against.

And why would you do that?  People rely very heavily in C++ on when
destructors will be called, and they are in fact encouraged to do so.
They are, in fact, encouraged to do so *so* much that constructs like
"finally" and "with" have been rejected by the C++ BDFL.  Instead, you
are told to use smart pointers, or what have you, to clean up your
allocated resources.

I see no reason not to make Python at least as expressive a programming
language as C++.

>> > Relying on the specific semantics of refcounting to give
>> > certain lifetimes is a logic error.
>> >
>> > For example:
>> >
>> > f = some_file() #maybe it's the file store for a database implementation
>> > f.write('a bunch of stuff')
>> > del f
>> > #insert code that assumes f is closed.

>> That's not a logic error if you are coding in CPython, though I agree
>> that in this particular case the explicit use of "with" would be
>> preferable due to its clarity.

> I stand by my statement. I feel that writing code in this manner is
> like writing C code that assumes uninitialized pointers are 0 -
> regardless of whether it works, it's erroneous and bad practice at
> best, and actively harmful at worst.

That's a poor analogy.  C doesn't guarantee that pointers will be
initialized to 0, and in fact, they typically are not.  CPython, on
the other hand, guarantees that the refcounter behaves a certain
way.

There are languages other than C that guarantee that values are
initialized in certain ways.  Are you going to also assert that in
those languages you should not rely on the initialization rules?

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-09 Thread Douglas Alan
"Chris Mellon" <[EMAIL PROTECTED]> writes:

>> And why would you do that?  People rely very heavily in C++ on when
>> destructors will be called, and they are in fact encouraged to do so.
>> They are, in fact, encouraged to do so *so* much that constructs like
>> "finally" and "with" have been rejected by the C++ BDFL.  Instead, you
>> are told to use smart pointers, or what have you, to clean up your
>> allocated resources.

> For the record, C++ doesn't have a BDFL.

Yes, I know.

   http://dictionary.reference.com/browse/analogy

> And yes, I know that it's used all the time in C++ and is heavily
> encouraged. However, C++ has totally different object semantics than
> Python,

That would depend on how you program in C++.  If you use a framework
based on refcounted smart pointers, then it is rather similar.
Especially if you back that up in your application with a conservative
garbage collector, or what have you.

>> I see no reason not to make Python at least as expressive a programming
>> language as C++.

> I have an overwhelming urge to say something vulgar here. I'm going
> to restrain myself and point out that this isn't a discussion about
> expressiveness.

Says who?

>> That's a poor analogy.  C doesn't guarantee that pointers will be
>> initialized to 0, and in fact, they typically are not.  CPython, on
>> the other hand, guarantees that the refcounter behaves a certain
>> way.

> It's a perfect analogy, because the value of an uninitialized pointer
> in C is *implementation dependent*.

Name one common C implementation that guarantees that uninitialized
pointers will be initialized to null.  None that I have *ever* used
make such a guarantee.  In fact, uninitialized values have always been
garbage with every C compiler I have ever used.

If gcc guaranteed that uninitialized variables would always be zeroed,
and you knew that your code would always be compiled with gcc, then
you would be perfectly justified in coding in a style that assumed
null values for uninitialized variables.  Those are some big if's,
though.

> The Python language reference explicitly does *not* guarantee the
> behavior of the refcounter.

Are you suggesting that it is likely to change?  If so, I think you
will find a huge uproar about it.

> By relying on it, you are relying on an implementation specific,
> non-specified behavior.

I'm relying on a feature that has worked fine since the early '90s,
and if it is ever changed in the future, I'm sure that plenty of other
language changes will come along with it that will make adapting code
that relies on this feature to be the least of my porting worries.

> Exactly like you'd be doing if you rely on the value of
> uninitialized variables in C.

Exactly like I'd be doing if I made Unix system calls in my C code.
After all, system calls are implementation dependent, aren't they?
That doesn't mean that I don't rely on them every day.

>> There are languages other than C that guarantee that values are
>> initialized in certain ways.  Are you going to also assert that in
>> those languages you should not rely on the initialization rules?

> Of course not. Because they *do* guarantee and specify that. C
> doesn't, and neither does Python.

CPython does by tradition *and* by popular will.

Also the language reference manual specifically indicates that CPython
uses a refcounter and documents that it collects objects as soon as
they become unreachable (with the appropriate caveats about circular
references, tracing, debugging, and stored tracebacks).

|>oug


Re: Python's "only one way to do it" philosophy isn't good?

2007-07-09 Thread Douglas Alan
Steve Holden <[EMAIL PROTECTED]> writes:

>> I'm relying on a feature that has worked fine since the early '90s,
>> and if it is ever changed in the future, I'm sure that plenty of other
>> language changes will come along with it that will make adapting code
>> that relies on this feature to be the least of my porting worries.

> Damn, it seems to be broken on my Jython/IronPython installations,
> maybe I should complain. Oh no, I can't, because it *isn't* *part*
> *of* *the* *language*. ...

As I have mentioned *many* times, I'm coding in CPython 2.5, and I
typically make extensive use of Unix-specific calls.  Consequently, I
have absolutely no interest in making my code compatible with Jython
or IronPython, since Jython is stuck at 2.2, IronPython at 2.4, and
neither provide full support for the Python Standard Library or access
to Unix-specific functionality.

I might at some point want to write some Jython code to make use of
Java libraries, but when I code in Jython, I will have absolutely no
interest in trying to make that code compatible with CPython, since
that cannot be if my Jython code calls libraries that are not
available to CPython.

>>> Exactly like you'd be doing if you rely on the value of
>>> uninitialized variables in C.

>> Exactly like I'd be doing if I made Unix system calls in my C code.
>> After all, system calls are implementation dependent, aren't they?
>> That doesn't mean that I don't rely on them every day.

> That depends on whether you program to a specific standard or not.

What standard would that be?  Posix is too restrictive.
BSD/OSX/Linux/Solaris are all different.  I make my program work on
the platform I'm writing it for (keeping in mind what platforms I
might want to port to in the future, in order to avoid obvious
portability pitfalls), and then if the program needs to be ported
eventually to another platforms, I figure out how to do that when the
time comes.

>>> Of course not. Because they *do* guarantee and specify that. C
>>> doesn't, and neither does Python.

>> CPython does by tradition *and* by popular will.

> But you make the mistake of assuming that Python is CPython, which it isn't.

I do not make that mistake.  I refer to CPython as "Python" as does
99% of the Python community.  When I talk about Jython, I call it
"Jython" and when I talk about "IronPython" I refer to it as
"IronPython".  None of this implies that I don't understand that
CPython has features in it that a more strict interpretation of the
word "Python" doesn't necessarily have, just as when I call a tomato a
"vegetable" that doesn't mean that I don't understand that it is
really a fruit.

>> Also the language reference manual specifically indicates that
>> CPython uses a refcounter and documents that it collects objects as
>> soon as they become unreachable (with the appropriate caveats about
>> circular references, tracing, debugging, and stored tracebacks).

> Indeed, but that *is* implementation dependent. As long as you stick
> to CPython you'll be fine. That's allowed. Just be careful about the
> discussions you get into :-)

I've stated over and over again that all I typically care about is
CPython, and what I'm criticized for is for my choice to program for
CPython, rather than for a more generalized least-common-denominator
"Python".

When I program for C++, I also program for the compilers and OS'es
that I will be using, as trying to write C++ code that will compile
under all C++ compilers and OS'es is an utterly losing proposition.

|>oug


Re: The best platform and editor for Python

2007-07-10 Thread Douglas Alan
[EMAIL PROTECTED] (Alex Martelli) writes:

> Kay Schluehr <[EMAIL PROTECTED]> wrote:

>> half of the community is happy with Emacs and the other half wants to
>> program in a VS-like environment, neither consensus nor progress has

> Calling all vi/vim users (and we'll heartily appreciate the support
> of TextMate fans, BBEdit ones, etc, etc) -- we're at risk being
> defined out of existence, since we're neither happy with Emacs

That's because the Emacs users are the only ones who matter.

|>oug


Re: Finding the instance reference of an object

2008-10-27 Thread Douglas Alan
Steven D'Aprano <[EMAIL PROTECTED]> writes:

> I understand that Python's object and calling semantics are exactly the 
> same as Emerald (and likely other languages as well), and that both 
> Emerald and Python are explicitly based on those of CLU, as described by 
> by Barbara Liskov in 1979:
>
> "In particular it is not call by value because mutations of
> arguments performed by the called routine will be visible to the
> caller. And it is not call by reference because access is not
> given to the variables of the caller, but merely to certain
> objects."

It is quite true that Python's calling semantics are the same as
CLU's, and that CLU called these semantics "call by sharing".  It's
not quite true that CLU invented these calling semantics, however.
They were the calling semantics of Lisp long before CLU existed.  I'm
not sure that Lisp had a name for it, however.  Lisp people tended to
refer to all variable assignment as "binding".

I agree with those who object to calling the Python/CLU/Lisp calling
semantics as either "call by value" or "call by reference", which is
why I've been a bit dismayed over the years that the term "call by
sharing" hasn't caught on.  When this terminology is used, the meaning is
quite unambiguous.

It should be noted that many other mainstream languages now use call
by sharing.  It's not at all the least peculiar to Python.  Such
languages include Java, JavaScript, Ruby, C#, and ActionScript.
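The behavior is easy to demonstrate: mutations made through the shared
object are visible to the caller, while rebinding the formal parameter
is not:

```python
def mutate(seq):
    seq.append(42)   # mutate the shared object: visible to the caller

def rebind(seq):
    seq = [99]       # rebind the local name only: invisible to the caller

items = [1, 2, 3]
mutate(items)
assert items == [1, 2, 3, 42]
rebind(items)
assert items == [1, 2, 3, 42]   # the caller's binding is untouched
```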

|>oug


Re: Finding the instance reference of an object

2008-10-27 Thread Douglas Alan
greg <[EMAIL PROTECTED]> writes:

> Seems to me that (1) describes exactly how parameter passing
> works in Python. So why insist that it's *not* call by value?

Because there's an important distinction to be made, and the
distinction has been written up in the Computer Science literature
since Lisp first starting using the same argument passing semantics as
Python back in 1958.  The semantics are called "call by sharing".

Many mainstream programming languages other than Python now use call
by sharing.  They include Java, JavaScript, Ruby, ActionScript, and C#.

|>oug

P.S. Lisp didn't call it "call by sharing" -- Lisp called it
"binding".  The designers of CLU invented the term "call by sharing"
back in the 70s.  (Or at least I believe they invented the term.  They
certainly did use the term.)


Re: Finding the instance reference of an object

2008-10-28 Thread Douglas Alan
Joe Strout <[EMAIL PROTECTED]> writes:

> There are only the two cases, which Greg quite succinctly and
> accurately described above.  One is by value, the other is by
> reference.  Python quite clearly uses by value.

You make a grave error in asserting that there are only two cases.
Algol, for instance, used call-by-name, which is neither call-by-value
or call-by-reference.  There are a number of other evaluation
strategies.  For a primer on the subject see the following Wikipedia
page:

   http://en.wikipedia.org/wiki/Evaluation_strategy
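For comparison, Algol-style call-by-name can be simulated in Python by
passing thunks (zero-argument functions) that re-evaluate the caller's
argument expression at each use.  This is just a sketch of the foreign
strategy, not how Python itself passes arguments:

```python
def double(x):
    # Call-by-name "parameter": each use of x() re-evaluates the
    # caller's argument expression, as in Algol 60.
    return x() + x()

calls = {"n": 0}

def expr():
    calls["n"] += 1
    return calls["n"]

result = double(lambda: expr())
assert result == 1 + 2     # the argument expression ran twice
assert calls["n"] == 2
```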

CLU used the term "call-by-sharing" for the evaluation strategy
shared by Python, Lisp, CLU, Java, Ruby, and JavaScript, etc.

It should be noted that the Wikipedia page does not document
"call-by-sharing" specifically, and refers to Python's strategy as a
type of call-by-value.  It also notes that call-by-value is not a
single evaluation strategy, but rather a family of evaluation
strategies, and that the version of the strategy used by Java (and
hence Python) shares features with call-by-reference strategies.

Consequently, many people prefer to use a different name from
"call-by-value" for Python/Java/Lisp's strategy in order to avoid
confusion.  In any case, no one can disagree with the fact that the
evaluation strategy used by Python et al. differs significantly from
the call-by-value evaluation strategy used by C and the like, whatever
you wish to call it.

|>oug


Re: Finding the instance reference of an object

2008-10-30 Thread Douglas Alan
greg <[EMAIL PROTECTED]> writes:

> Douglas Alan wrote:
>> greg <[EMAIL PROTECTED]> writes:
>>
>>>Seems to me that (1) describes exactly how parameter passing
>>>works in Python. So why insist that it's *not* call by value?
>> Because there's an important distinction to be made,
>
> The distinction isn't about parameter passing, though, it's about
> the semantics of *assignment*. Once you understand how assignment
> works in Python, all you need to know then is that parameters are
> passed by assigning the actual parameter to the formal
> parameter. All else follows from that.
>
> This holds for *all* languages that I know about, both
> static and dynamic.

Then you don't know about all that many languages.  There are
languages that use call-by-name, and those that use
call-by-value-return.  Some use call-by-need and others do
call-by-macro-expansion.  Etc.  These languages generally don't use
these same semantics for assignment.

All languages that I know of that use call-by-sharing also do
assignment-by-sharing.  Not all languages that do
assignment-by-sharing always do only call-by-sharing, however.  For
instance, most dialects of Lisp have procedural macros.  The calling
semantics of procedural macros are quite different from the calling
semantics of normal functions, even though procedural macros are Lisp
functions.  Other dialects of Lisp provide the ability to state that
certain function arguments won't be evaluated at call time.  All
dialects of Lisp, however, do assignment-by-sharing, or "binding" as
it is called in the Lisp community.

Also, one could certainly invent additional languages that do not
behave in the typical manner.

If you are merely asserting, however, that understanding how Python
does assignment will help you understand how Python does argument
passing, then you are certainly correct.  This, however, does not
imply that there is not a pre-existing precise terminology to describe
Python's calling semantics, and that this term can be useful in
describing how Python works.

If I tell you, for instance, that Java, Python, Ruby, JavaScript,
Lisp, and CLU all use call-by-sharing, then I have said something that
makes a similarity among these languages easier to state and easier to
grasp.

> Once you know how assignment works in the language concerned, then
> you know how parameter passing works as well. There is no need for
> new terms.

This is simply not true.

>> and the distinction has been written up in the Computer Science
>> literature since Lisp first starting using the same argument
>> passing semantics as Python back in 1958.  The semantics are called
>> "call by sharing".
>
> I still think it's an unnecessary term, resulting from confusion on
> the part of the authors about the meanings of the existing terms.

Trust me, Barbara Liskov was not one bit confused when she invented
the term "call-by-sharing". And her language CLU was one of the most
prescient to have ever been designed and implemented.

> If there's any need for a new term, it would be "assignment by
> sharing". Although there's already a term in use for that, too --
> it's known as reference assignment.

Reference assignment doesn't imply that the object is allocated on a
heap, and "call-by-sharing" does.

>> Many mainstream programming languages other than Python now use call
>> by sharing.  They include Java, JavaScript, Ruby, ActionScript, and C#.
>
> I would say they use assignment by sharing, and call by value.

We can also argue about how many angels can dance on the head of a
pin, but the term "call-by-sharing" has been around since the 70s, and
it is already well-defined and well understood.

|>oug
--
http://mail.python.org/mailman/listinfo/python-list


Re: Finding the instance reference of an object

2008-10-31 Thread Douglas Alan
greg <[EMAIL PROTECTED]> writes:

> Douglas Alan wrote:
>> greg <[EMAIL PROTECTED]> writes:
>
>>> This holds for *all* languages that I know about, both static and
>>> dynamic.
>
>> Then you don't know about all that many languages.  There are
>> languages that use call-by-name, and those that use
>> call-by-value-return.  Some use call-by-need and others do
>> call-by-macro-expansion.  Etc.
>
> I didn't mean that these are the only two parameter passing
> mechanisms in existence -- I know there are others.

I don't follow you.  You stated that once you understand how
assignment works, you understand the calling mechanism.  That's just
not true.  Algol, for instance, did assignment-by-value but
call-by-name.

>> If I tell you, for instance, that Java, Python, Ruby, JavaScript,
>> Lisp, and CLU all use call-by-sharing, then I have said something that
>> makes a similarity among these languages easier to state and easier to
>> grasp.
>
> If you told me they use "assignment by sharing", that would tell me
> a lot *more* about the language than just talking about parameter
> passing.

Not really.  Call-by-sharing virtually implies that the language does
assignment-by-sharing.  (I know of no counter-examples, and it is
difficult to see how a violation of this rule-of-thumb would be useful
in any new language.)  Stating that a language does
assignment-by-sharing does not imply that it does call-by-sharing.  Or
at least not exclusively so.  Cf. certain dialects of Lisp.  Also C#,
which supports a variety of argument passing strategies.

|>oug
--
http://mail.python.org/mailman/listinfo/python-list


Re: Finding the instance reference of an object

2008-11-07 Thread Douglas Alan
Joe Strout <[EMAIL PROTECTED]> writes:

> As for where I get my definitions from, I draw from several sources:
>
> 1. Dead-tree textbooks

You've been reading the wrong textbooks.  Read Liskov -- she's called
CLU (and hence Python's) calling strategy "call-by-sharing" since the
70s.

> 2. Wikipedia [2] (and yes, I know that has to be taken with a grain of
> salt, but it's so darned convenient)
> 3. My wife, who is a computer science professor and does compiler
> research
> 4. http://javadude.com/articles/passbyvalue.htm (a brief but excellent
> article)
> 5. Observations of the "ByVal" (default) mode in RB and VB.NET
> 6. My own experience implementing the RB compiler (not that
> implementation details matter, but it forced me to think very
> carefully about references and parameter passing for a very long time)
>
> Not that I'm trying to argue from authority; I'm trying to argue from
> logic.  I suspect, though, that your last comment gets to the crux of
> the matter, and reinforces my guess above: you don't think c-b-v means
> what most people think it means.  Indeed, you don't think any of the
> languages shown at [1] are, in fact, c-b-v languages.  If so, then we
> should focus on that and see if we can find a definitive answer.

I'll give you the definitive answer from a position of authority,
then.  I took Barbara Liskov's graduate-level class in programming
language theory at MIT, and she called what Python does
"call-by-sharing".

|>oug
--
http://mail.python.org/mailman/listinfo/python-list


Re: Finding the instance reference of an object

2008-11-07 Thread Douglas Alan
Joe Strout <[EMAIL PROTECTED]> writes:

> Yes, OK, that's great.  But there are several standard pass-by-
> somethings that are defined by the CS community, and which are simple
> and clear and apply to a wide variety of languages.  "Pass by object"
> isn't one of them.

"Call-by-sharing" *is* one of them, and the term has been around since
the 70s:

   http://hopl.murdoch.edu.au/showlanguage.prx?exp=637

> I guess if you want to campaign for it as a shorthand for "object
> reference passed by value," you could do that, and it's not
> outrageous.

There's no need for a campaign.  The term has already been used in the
academic literature for 34 years.

> But to anybody new to the term, you should explain it as exactly
> that, rather than try to claim that Python is somehow different from
> other OOP languages where everybody calls it simply pass by value.

It's not true that "everybody calls it simply pass by value".

> OK, if there were such a thing as "pass-by-object" in the standard
> lexicon of evaluation strategies, I would be perfectly happy saying
> that a system has it if it behaves as though it has it, regardless of
> the underpinnings.

There is "call-by-sharing" in the standard lexicon of evaluation
strategies, and it's been in the lexicon since 1974.

> However, if you really think the term is that handy, and we want to
> agree to say "Python uses pass by object" and answer the inevitable
> "huh?" question with "that's shorthand for object references passed by
> value," then I'd be OK with that.

Excellent.  We can all agree to get along then!

|>oug
--
http://mail.python.org/mailman/listinfo/python-list


Re: Finding the instance reference of an object

2008-11-19 Thread Douglas Alan
greg <[EMAIL PROTECTED]> writes:

> Steven D'Aprano wrote:

>> At least some sections of the Java community seem to prefer a
>> misleading and confusing use of the word "value" over clarity and
>> simplicity, but I for one do not agree with them.

> I don't see anything inherently confusing or misleading
> about it. Confusion only arises when some people jump up
> and say that it's wrong to use the terms that way, because
> it might cause confusion...

Personally, I find this whole debate kind of silly, as it is based on
a completely fallacious either/or dichotomy.

(1) It is unarguably true that Python and Java use a type of
call-by-value.  This follows from the standard definition of
call-by-value, and common usage in, for example, the Scheme and
Java communities, etc.

(2) It is also unarguably true that saying that Python or Java use
"call-by-value", and saying nothing more is going to be profoundly
confusing to anyone who is learning these languages.

It's like the difference between

   Q. What is a giraffe?

   A. A giraffe is a type of animal.

and

   Q. What is Greg?

   A. Greg is a type of animal.

In both cases, the answers are strictly correct, but in the second
case, the answer is also deeply misleading.

Q. How do we generally solve this problem when speaking?

A. We invent more specific terms and then generally stick to the more
specific terms when the more general terms would be misleading.

I.e.,

   Q. What is Greg?

   A. Greg is a human being.

and

   Q. What type of calling semantics do Python and Java use?

   A. Call-by-sharing.

I assert that anyone who does not understand all of the above, is
helping to spread confusion.

|>oug
--
http://mail.python.org/mailman/listinfo/python-list


Re: Finding the instance reference of an object

2008-11-19 Thread Douglas Alan
Joe Strout <[EMAIL PROTECTED]> writes:

>>   Q. What type of calling semantics do Python and Java use?
>>
>>   A. Call-by-sharing.
>
> Fair enough, but if the questioner then says "WTF is call-by-sharing,"
> we should answer "call-by-sharing is the term we prefer for call-by-
> value in the case where the value is an object reference (as is always
> the case in Python)."

Personally, I think that it is much preferable to leave
"call-by-value" completely out of any such discussion, as it provably
leads to a great deal of confusion and endless, pointless debate.
It's better to just start from a clean slate and explain how
call-by-sharing works, and to assert that it is quite different from
the calling semantics of languages such as C or Pascal or Fortran, so
the student must set aside any preconceptions about how argument
passing works.
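For instance, a student coming from C might expect the callee to work on a fresh copy of the argument; under call-by-sharing only the reference is shared, and value-style behavior has to be requested explicitly with a copy. A sketch with hypothetical names:

```python
import copy

def double_in_place(values):
    # Call-by-sharing: `values` is the caller's own object, not a copy.
    for i, v in enumerate(values):
        values[i] = v * 2

def double_a_copy(values):
    # Simulating copying semantics requires an explicit copy.
    values = copy.copy(values)
    for i, v in enumerate(values):
        values[i] = v * 2
    return values

data = [1, 2, 3]
result = double_a_copy(data)
print(data, result)   # [1, 2, 3] [2, 4, 6] -- caller's list untouched
double_in_place(data)
print(data)           # [2, 4, 6] -- mutation visible to the caller
```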

Call-by-sharing is technically a type of call-by-value only for those
who are devotees of academic programming language zoology.  For
everyone else, call-by-sharing is its own beast.  One might point all
of this out in the discussion, however, if it will help the other
person understand.  You never know -- they might be a fan of academic
programming language zoology.

|>oug
--
http://mail.python.org/mailman/listinfo/python-list


Re: Any advantage in LISPs having simpler grammars than Python?

2006-03-07 Thread Douglas Alan
Terry Hancock <[EMAIL PROTECTED]> writes:

> I think experienced Lisp programmers must learn to visually parse
> the *words* in the Lisp program to determine the structure, but I
> find that really unhelpful, myself.

Experienced Lisp programmers use indentation to visually parse the
program structure, just like Python programmers do for Python.
Experienced Lisp programmers learn to not see the parentheses when
they don't need to.

When I did a lot of Lisp programming, I often felt that it would be
kind of nice to have a version of Lisp that would infer many of the
parentheses from indentation, so that you could elide most of them.
But then again, the parentheses are very easy for an experienced Lisp
programmer to ignore, so such a change would have been a very hard
sell.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Any advantage in LISPs having simpler grammars than Python?

2006-03-07 Thread Douglas Alan
Grant Edwards <[EMAIL PROTECTED]> writes:

> On 2006-03-07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

>> Is there any advantage to a language having a nice mathematically
>> compact grammar like LISP does? (or at least used to?)

Yes, Lisp's syntax allows for a very powerful macro mechanism that is
extremely useful.

> Yes.  Grammars like LISP's make it easy for programs to generate and
> read code. Grammars like Python's make it easy for humans to
> generate and read code.

I find Lisp to be perfectly readable.  In fact, in some ways I find it
to be more readable than any other language, including Python.  I
like, for instance, the prefix notation, since then I can identify the
sort of expression that I am looking at without having to look ahead
into the expression.

For instance, if Python were to have been designed so that you would
write:

   let myVeryLongVariableName = 3

I would have preferred this over

   myVeryLongVariableName = 3

With the latter, I have to scan down the line to see that this line is
in an assignment statement.

(This problem is significantly worse in C++, where variable
declarations can be rather difficult to visually parse, unless all
classes begin with capital letters and nothing else does.)

>> Many have admired the mathematically simple grammar of LISP
>> in which much of the language is built up from conses IIRC.

>> Python's grammar seems complicated by comparison.

>> Is this anything to worry about?

No, not really.  Not unless you want a powerful macro facility or you
want to programmatically analyze or manipulate your software.

|>oug
-- 
http://mail.python.org/mailman/listinfo/python-list

