[Python-Dev] python-dev Summary for 2005-03-16 through 2005-03-31 [draft]

2005-04-01 Thread Brett C.
OK, so here is my final Summary.  I'd like to send it out some time this
weekend, so please get corrections in ASAP.



=====================
Summary Announcements
=====================

---------------
My last summary
---------------
So, after nearly 2.5 years, this is my final python-dev Summary.  Steve
Bethard, Tim Lesher, and Tony Meyer will be taking over for me starting with
the April 1 - April 15 summary (and no, this is not an elaborate April Fool's).
 I have learned a ton during my time doing the Summaries and I appreciate
python-dev allowing me to do them all this time.  Hopefully I will be able to
contribute more now in a programming capacity thanks to having more free time.


--------------------
PyCon was fantastic!
--------------------

For those of you who missed PyCon, you missed a great one!  It is actually my
favorite PyCon to date.  Already looking forward to next year.


--------------------
Python fireside chat
--------------------

Scott David Daniels requested a short little blurb from me expounding on my
thoughts on Python.  Not one to pass on an opportunity to just open myself and
possibly shoot myself in the foot, I figured I would take up the idea.  So hear
we go.

First, I suspect Python 3000 features will start to make their way into Python.
Changes that don't break backwards compatibility will most likely start to be
implemented as we head toward the Python 2.9 barrier (Guido has stated several
times that there will never be a Python 2.10).  Things that are not
backwards-compatible will most likely end up being hashed out in various PEPs.
 All of this will allow the features in Python 3000 to be worked in over time
so there is not a huge culture shock.

As for things behind the scenes, work on the back-end will move forward.  Guido
himself has suggested that JIT work should be looked into (according to an
interview at http://www.devsource.com/article2/0,1759,1778272,00.asp).  I know
I plan to fiddle with the back-end to see if the compiler can be made to do
more work.

Otherwise I expect changes to be made, flame wars to come and go, and for
someone else to write the python-dev Summaries.  =)


=========
Summaries
=========


----------------
Python 2.4.1 out
----------------

Anthony Baxter, on behalf of python-dev, has released `Python 2.4.1`_.

.. _Python 2.4.1: http://www.python.org/2.4.1/

Contributing threads:
  - `RELEASED Python 2.4.1, release candidate 1
`__
  - `RELEASED Python 2.4.1, release candidate 2
`__
  - `BRANCH FREEZE for 2.4.1 final, 2005-03-30 00:00 UTC
`__
  - `RELEASED Python 2.4.1 (final)
`__


-----------------
AST branch update
-----------------
I, along with some other people, sprinted on the AST branch at PyCon.  This led
to a much more fleshed-out design document (found in Python/compile.txt in the
AST branch), the ability to build on Windows, and the application of Nick
Coghlan's fix for hex numbers.

Nick also did some more patch work and asked how AST work should be tagged.
There is now an AST category on SourceForge that people should use to flag
items as AST-related.  They should also, by default, assign such items to me
("bcannon" on SF).  We have also taken to flagging AST threads with "[AST]"
at the start of the subject line.

There was also a slight discussion/clarification on the functions named
marshal_write_*() that output a byte format for the AST that is supposed to be
agnostic of implementation.  This will most likely end up being used as the way
to pass AST objects back and forth between C and Python code.  But with the
name collision of the word "marshal" with the actual 'marshal' module, it needs
to be changed.  I have suggested

- byte_encode
- linear_form
- zephyr_encoding
- flat_form
- flat_prefix
- prefix_form

while Nick Coghlan suggested

- linear_ast
- bytestream_ast

Obviously I prefer "form" and Nick prefers "ast".  Since Nick's reply was
independent of mine, the name will most likely contain "linear" or "byte".

With the patches for descriptors and generator expressions sitting on SF,
syntactic support for all of Python 2.4 should get applied shortly.  After that
it will come down to bug hunting and such.  There is a todo list in the design
doc for those interested in helping out.

Contributing threads:
  - `Procedure for AST Branch patches
`__
  - `[AST] A somewhat less trivial patch than the last one. . .
`__
  - `[AST] question about marshal_write_*() fxns
`__


---
Putting docst

Re: [Python-Dev] Pickling instances of nested classes

2005-04-01 Thread Walter Dörwald
Samuele Pedroni wrote:
[...]
And having the full name of the class available would certainly help 
in debugging.
that's probably the only plus point but the names would be confusing wrt
 modules vs. classes.
You'd probably need a different separator in repr. XIST does this:
>>> from ll.xist.ns import html
>>> html.a.Attrs.href

My point was that enabling reduce hooks at the metaclass level has
probably other interesting applications, is far less complicated than
your proposal to implement, it does not further complicate the notion of
what happens at class creation time, and indeed avoids the
implementation costs (for all python impls) of your proposal and still
allows fairly generic solutions to the problem at hand because the
solution can be formulated at the metaclass level.
Pickling classes like objects (i.e. by using the pickling methods in 
their (meta-)classes) solves only the second part of the problem: 
Finding the nested classes in the module on unpickling. The other 
problem is to add additional info to the inner class, which gets pickled 
and makes it findable on unpickling.

If pickle.py is patched along these lines [*] (strawman impl, not much
tested but test_pickle.py still passes, needs further work to support
__reduce_ex__ and cPickle would need similar changes) then this example 
works:

class HierarchMeta(type):
    """metaclass such that inner classes know their outer class, with
    pickling support"""
    def __new__(cls, name, bases, dic):
        sub = [x for x in dic.values() if isinstance(x, HierarchMeta)]
I did something similar to this in XIST, but the problem with this 
approach is that in:

class Foo(Elm):
    pass

class Bar(Elm):
    Baz = Foo

the class Foo will get its _outer_ set to Bar although it shouldn't.
[...]
    def __reduce__(cls):
        if hasattr(cls, '_outer_'):
            return getattr, (cls._outer_, cls.__name__)
        else:
            return cls.__name__
I like this approach: Instead of hardcoding how references to classes 
are pickled (pickle the __name__), delegate it to the metaclass.
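[Editorial aside: in modern Python this idea can be sketched without patching pickle.py, because copyreg lets a reduction function be registered for the metaclass itself, so it applies to every class the metaclass creates. The sketch below uses Python 3 syntax; the names HierarchMeta, Elm, and _outer_ follow the thread, but the copyreg registration is an illustration of the idea, not what the discussed patch actually did.]

```python
import copyreg
import pickle

class HierarchMeta(type):
    """Metaclass such that inner classes know their outer class."""
    def __new__(mcls, name, bases, dic):
        sub = [v for v in dic.values() if isinstance(v, HierarchMeta)]
        newtype = super().__new__(mcls, name, bases, dic)
        for inner in sub:
            if not hasattr(inner, '_outer_'):
                inner._outer_ = newtype   # remember the enclosing class
        return newtype

def _reduce_class(cls):
    # Inner classes are rebuilt as getattr(outer, name);
    # top-level classes fall back to pickling by global name.
    if hasattr(cls, '_outer_'):
        return getattr, (cls._outer_, cls.__name__)
    return cls.__name__

# Register the reducer for every class whose metaclass is HierarchMeta.
copyreg.pickle(HierarchMeta, _reduce_class)

class Elm(metaclass=HierarchMeta):
    pass

class Outer(Elm):
    class Inner(Elm):
        pass

# The nested class now round-trips through pickle.
restored = pickle.loads(pickle.dumps(Outer.Inner))
```

(Python 3 later made nested classes picklable out of the box via __qualname__; this sketch only demonstrates the metaclass-hook mechanism being discussed.)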

BTW, if classes and functions are picklable, why aren't modules:
>>> import urllib, cPickle
>>> cPickle.dumps(urllib.URLopener)
'curllib\nURLopener\np1\n.'
>>> cPickle.dumps(urllib.splitport)
'curllib\nsplitport\np1\n.'
>>> cPickle.dumps(urllib)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/local/lib/python2.4/copy_reg.py", line 69, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle module objects
We'd just have to pickle the module name.
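[Editorial aside: for what it's worth, "just pickle the module name" can be sketched with copyreg, without touching pickle itself. The reducer name below is invented for illustration; stdlib pickle still refuses module objects, which is exactly why the registration is needed. Python 3 syntax.]

```python
import copyreg
import importlib
import pickle
import types

def _pickle_module_by_name(mod):
    # Rebuild the module on unpickling by importing it again by name.
    return importlib.import_module, (mod.__name__,)

# Teach pickle how to handle any module object.
copyreg.pickle(types.ModuleType, _pickle_module_by_name)

import urllib.parse
restored = pickle.loads(pickle.dumps(urllib.parse))
```

Note that this registers the reducer process-wide, which is fine for a demonstration but would be a policy decision for a library.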
Bye,
   Walter Dörwald
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Pickling instances of nested classes

2005-04-01 Thread Samuele Pedroni
Walter Dörwald wrote:
Samuele Pedroni wrote:
[...]
And having the full name of the class available would certainly help 
in debugging.

that's probably the only plus point but the names would be confusing wrt
 modules vs. classes.

You'd probably need a different separator in repr. XIST does this:
>>> from ll.xist.ns import html
>>> html.a.Attrs.href

My point was that enabling reduce hooks at the metaclass level has
probably other interesting applications, is far less complicated than
your proposal to implement, it does not further complicate the notion of
what happens at class creation time, and indeed avoids the
implementation costs (for all python impls) of your proposal and still
allows fairly generic solutions to the problem at hand because the
solution can be formulated at the metaclass level.

Pickling classes like objects (i.e. by using the pickling methods in 
their (meta-)classes) solves only the second part of the problem: 
Finding the nested classes in the module on unpickling. The other 
problem is to add additional info to the inner class, which gets 
pickled and makes it findable on unpickling.

If pickle.py is patched along these lines [*] (strawman impl, not much
tested but test_pickle.py still passes, needs further work to support
__reduce_ex__ and cPickle would need similar changes) then this 
example works:

class HierarchMeta(type):
    """metaclass such that inner classes know their outer class, with
    pickling support"""
    def __new__(cls, name, bases, dic):
        sub = [x for x in dic.values() if isinstance(x, HierarchMeta)]

I did something similar to this in XIST, but the problem with this 
approach is that in:

class Foo(Elm):
    pass

class Bar(Elm):
    Baz = Foo

the class Foo will get its _outer_ set to Bar although it shouldn't.
this should approximate that behavior better: [not tested]

import sys

def __new__(cls, name, bases, dic):
    sub = [x for x in dic.values() if isinstance(x, HierarchMeta)]
    newtype = type.__new__(cls, name, bases, dic)
    for x in sub:
        if (not hasattr(x, '_outer_') and
                getattr(sys.modules.get(x.__module__), x.__name__, None) is not x):
            x._outer_ = newtype
    return newtype

we don't set _outer_ if a way to pickle the class is already there


[Python-Dev] Re: python-dev Summary for 2005-03-16 through 2005-03-31 [draft]

2005-04-01 Thread Terry Reedy
>This led to a much more fleshed out design document
> (found in Python/compile.txt in the AST branch),

The directory URL

http://cvs.sourceforge.net/viewcvs.py/python/python/dist/src/Python/?only_with_tag=ast-branch

or even the file URL

http://cvs.sourceforge.net/viewcvs.py/python/python/dist/src/Python/Attic/compile.txt?rev=1.1.2.10&only_with_tag=ast-branch&view=auto

would be helpful to people not fully familiar with the repository and the 
required prefix to 'Python' (versus 'python').  I initially found the 
two-year-old

http://cvs.sourceforge.net/viewcvs.py/python/python/nondist/sandbox/ast/


>The idea of moving docstrings after a 'def' was proposed

/after/before/






[Python-Dev] Re: python-dev Summary for 2005-03-16 through 2005-03-31 [draft]

2005-04-01 Thread Scott David Daniels
Brett C. wrote:
... I figured I would take up the idea.  So hear
 ^^   here  ^^
we go.


Re: [Python-Dev] Pickling instances of nested classes

2005-04-01 Thread Walter Dörwald
Samuele Pedroni wrote:
[...]
this should approximate that behavior better: [not tested]

import sys

def __new__(cls, name, bases, dic):
    sub = [x for x in dic.values() if isinstance(x, HierarchMeta)]
    newtype = type.__new__(cls, name, bases, dic)
    for x in sub:
        if (not hasattr(x, '_outer_') and
                getattr(sys.modules.get(x.__module__), x.__name__, None) is not x):
            x._outer_ = newtype
    return newtype

we don't set _outer_ if a way to pickle the class is already there
This doesn't fix

class Foo:
    class Bar:
        pass

class Baz:
    Bar = Foo.Bar

but this should be a simple fix.
Bye,
   Walter Dörwald


[Python-Dev] Unicode byte order mark decoding

2005-04-01 Thread Evan Jones
I recently rediscovered this strange behaviour in Python's Unicode 
handling. I *think* it is a bug, but before I go and try to hack 
together a patch, I figure I should run it by the experts here on 
Python-Dev. If you understand Unicode, please let me know if there are 
problems with making these minor changes.

>>> import codecs
>>> codecs.BOM_UTF8.decode( "utf8" )
u'\ufeff'
>>> codecs.BOM_UTF16.decode( "utf16" )
u''
Why does the UTF-16 decoder discard the BOM, while the UTF-8 decoder 
turns it into a character? The UTF-16 decoder contains logic to 
correctly handle the BOM. It even handles byte swapping, if necessary. 
I propose that  the UTF-8 decoder should have the same logic: it should 
remove the BOM if it is detected at the beginning of a string. This 
will remove a bit of manual work for Python programs that deal with 
UTF-8 files created on Windows, which frequently have the BOM at the 
beginning. The Unicode standard is unclear about how it should be 
handled (version 4, section 15.9):

Although there are never any questions of byte order with UTF-8 text, 
this sequence can serve as signature for UTF-8 encoded text where the 
character set is unmarked. [...] Systems that use the byte order mark 
must recognize when an initial U+FEFF signals the byte order. In those 
cases, it is not part of the textual content and should be removed 
before processing, because otherwise it may be mistaken for a 
legitimate zero width no-break space.
At the very least, it would be nice to add a note about this to the 
documentation, and possibly add this example function that implements 
the "UTF-8 or ASCII?" logic:

def autodecode( s ):
    if s.startswith( codecs.BOM_UTF8 ):
        # The byte string s is UTF-8
        out = s.decode( "utf8" )
        return out[1:]
    else: return s.decode( "ascii" )
As a second issue, the UTF-16LE and UTF-16BE decoders almost do the 
right thing: they turn the BOM into a character, just like the Unicode 
specification says they should.

>>> codecs.BOM_UTF16_LE.decode( "utf-16le" )
u'\ufeff'
>>> codecs.BOM_UTF16_BE.decode( "utf-16be" )
u'\ufeff'
However, they also *incorrectly* handle the reversed byte order mark:
>>> codecs.BOM_UTF16_BE.decode( "utf-16le" )
u'\ufffe'
This is *not* a valid Unicode character. The Unicode specification 
(version 4, section 15.8) says the following about non-characters:

Applications are free to use any of these noncharacter code points 
internally but should never attempt to exchange them. If a 
noncharacter is received in open interchange, an application is not 
required to interpret it in any way. It is good practice, however, to 
recognize it as a noncharacter and to take appropriate action, such as 
removing it from the text. Note that Unicode conformance freely allows 
the removal of these characters. (See C10 in Section 3.2, Conformance 
Requirements.)
My interpretation of the specification is that Python should 
silently remove the character, resulting in a zero length Unicode 
string. Similarly, both of the following lines should also result in a 
zero length Unicode string:

>>> '\xff\xfe\xfe\xff'.decode( "utf16" )
u'\ufffe'
>>> '\xff\xfe\xff\xff'.decode( "utf16" )
u'\uffff'
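[Editorial aside: the behaviors described above are easy to verify interactively. The checks below use Python 3 syntax, where the encoded values are byte strings; the outcomes match what the thread reports.]

```python
import codecs

# The UTF-8 decoder keeps the BOM as the character U+FEFF...
assert codecs.BOM_UTF8.decode("utf-8") == "\ufeff"
# ...while the "utf-16" codec detects and strips it.
assert codecs.BOM_UTF16.decode("utf-16") == ""

# The endian-specific codecs decode a matching BOM to U+FEFF...
assert codecs.BOM_UTF16_LE.decode("utf-16-le") == "\ufeff"
assert codecs.BOM_UTF16_BE.decode("utf-16-be") == "\ufeff"
# ...but a byte-swapped BOM becomes the noncharacter U+FFFE.
assert codecs.BOM_UTF16_BE.decode("utf-16-le") == "\ufffe"

# A reversed BOM following a real BOM likewise survives decoding.
assert b"\xff\xfe\xfe\xff".decode("utf-16") == "\ufffe"
```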
Thanks for your feedback,
Evan Jones


Re: [Python-Dev] Unicode byte order mark decoding

2005-04-01 Thread M.-A. Lemburg
Evan Jones wrote:
> I recently rediscovered this strange behaviour in Python's Unicode
> handling. I *think* it is a bug, but before I go and try to hack
> together a patch, I figure I should run it by the experts here on
> Python-Dev. If you understand Unicode, please let me know if there are
> problems with making these minor changes.
> 
> 
> >>> import codecs
> >>> codecs.BOM_UTF8.decode( "utf8" )
> u'\ufeff'
> >>> codecs.BOM_UTF16.decode( "utf16" )
> u''
> 
> Why does the UTF-16 decoder discard the BOM, while the UTF-8 decoder
> turns it into a character? 

The BOM (byte order mark) was a non-standard Microsoft invention
to detect Unicode text data as such (MS always uses UTF-16-LE for
Unicode text files).

It is not needed for UTF-8 because that format doesn't rely on
the byte order and the BOM character at the beginning of a stream is
a legitimate ZWNBSP (zero width non breakable space) code point.

The "utf-16" codec detects and removes the mark, while the
two others "utf-16-le" (little endian byte order) and "utf-16-be"
(big endian byte order) don't.

> The UTF-16 decoder contains logic to
> correctly handle the BOM. It even handles byte swapping, if necessary. I
> propose that  the UTF-8 decoder should have the same logic: it should
> remove the BOM if it is detected at the beginning of a string. 

-1; there's no standard for UTF-8 BOMs - adding it to the
codecs module was probably a mistake to begin with. You usually
only get UTF-8 files with BOM marks as the result of recoding
UTF-16 files into UTF-8.

> This will
> remove a bit of manual work for Python programs that deal with UTF-8
> files created on Windows, which frequently have the BOM at the
> beginning. The Unicode standard is unclear about how it should be
> handled (version 4, section 15.9):
> 
>> Although there are never any questions of byte order with UTF-8 text,
>> this sequence can serve as signature for UTF-8 encoded text where the
>> character set is unmarked. [...] Systems that use the byte order mark
>> must recognize when an initial U+FEFF signals the byte order. In those
>> cases, it is not part of the textual content and should be removed
>> before processing, because otherwise it may be mistaken for a
>> legitimate zero width no-break space.
> 
> 
> At the very least, it would be nice to add a note about this to the
> documentation, and possibly add this example function that implements
> the "UTF-8 or ASCII?" logic:
> 
> def autodecode( s ):
>     if s.startswith( codecs.BOM_UTF8 ):
>         # The byte string s is UTF-8
>         out = s.decode( "utf8" )
>         return out[1:]
>     else: return s.decode( "ascii" )

Well, I'd say that's a very English way of dealing with encoded
text ;-)

BTW, how do you know that s came from the start of a file
and not from slicing some already loaded file somewhere
in the middle ?

> As a second issue, the UTF-16LE and UTF-16BE decoders almost do the
> right thing: they turn the BOM into a character, just like the Unicode
> specification says they should.
> 
> >>> codecs.BOM_UTF16_LE.decode( "utf-16le" )
> u'\ufeff'
> >>> codecs.BOM_UTF16_BE.decode( "utf-16be" )
> u'\ufeff'
> 
> However, they also *incorrectly* handle the reversed byte order mark:
> 
> >>> codecs.BOM_UTF16_BE.decode( "utf-16le" )
> u'\ufffe'
> 
> This is *not* a valid Unicode character. The Unicode specification
> (version 4, section 15.8) says the following about non-characters:
> 
>> Applications are free to use any of these noncharacter code points
>> internally but should never attempt to exchange them. If a
>> noncharacter is received in open interchange, an application is not
>> required to interpret it in any way. It is good practice, however, to
>> recognize it as a noncharacter and to take appropriate action, such as
>> removing it from the text. Note that Unicode conformance freely allows
>> the removal of these characters. (See C10 in Section 3.2, Conformance
>> Requirements.)
> 
> 
> My interpretation of the specification means that Python should silently
> remove the character, resulting in a zero length Unicode string.
> Similarly, both of the following lines should also result in a zero
> length Unicode string:
> 
> >>> '\xff\xfe\xfe\xff'.decode( "utf16" )
> u'\ufffe'
> >>> '\xff\xfe\xff\xff'.decode( "utf16" )
> u'\uffff'

Hmm, wouldn't it be better to raise an error ? After all,
a reversed BOM mark in the stream looks a lot like you're
trying to decode a UTF-16 stream assuming the wrong
byte order ?!

Other than that: +1 on fixing this case.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 01 2005)
>>> Python/Zope Consulting and Support ...http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! 

Re: [Python-Dev] Re: python-dev Summary for 2005-03-16 through 2005-03-31 [draft]

2005-04-01 Thread Brett C.
Terry Reedy wrote:
>>This led to a much more fleshed out design document
>>(found in Python/compile.txt in the AST branch),
> 
> 
> The directory URL
> 
> http://cvs.sourceforge.net/viewcvs.py/python/python/dist/src/Python/?only_with_tag=ast-branch
> 
> or even the file URL
> 
> http://cvs.sourceforge.net/viewcvs.py/python/python/dist/src/Python/Attic/compile.txt?rev=1.1.2.10&only_with_tag=ast-branch&view=auto
> 
> would be helpful to people not fully familiar with the repository and the 
> required prefix to 'Python' (versus 'python').  I initially found the 
> two-year-old
> 
> http://cvs.sourceforge.net/viewcvs.py/python/python/nondist/sandbox/ast/
> 

Yeah, that has become a popular suggestion.  It has been fixed.  Just didn't
think about it.  One of those instances where I have been neck-deep in
python-dev for so long I forgot that not everyone has a CVS checkout.  =)

> 
> 
>>The idea of moving docstrings after a 'def' was proposed
> 
> 
> /after/before/
> 

Fixed.

Thanks, Terry.

-Brett


Re: [Python-Dev] Unicode byte order mark decoding

2005-04-01 Thread Evan Jones
On Apr 1, 2005, at 15:19, M.-A. Lemburg wrote:
The BOM (byte order mark) was a non-standard Microsoft invention
to detect Unicode text data as such (MS always uses UTF-16-LE for
Unicode text files).
Well, its origins do not really matter, since at this point the BOM is 
firmly encoded in the Unicode standard. It seems to me that it is in 
everyone's best interest to support it.

It is not needed for the UTF-8 because that format doesn't rely on
the byte order and the BOM character at the beginning of a stream is
a legitimate ZWNBSP (zero width non breakable space) code point.
You are correct: it is a legitimate character. However, its use as a 
ZWNBSP character has been deprecated:

The overloading of semantics for this code point has caused problems 
for programs and protocols. The new character U+2060 WORD JOINER has 
the same semantics in all cases as U+FEFF, except that it cannot be 
used as a signature. Implementers are strongly encouraged to use word 
joiner in those circumstances whenever word joining semantics is 
intended.
Also, the Unicode specification is ambiguous on what an implementation 
should do about a leading ZWNBSP that is encoded in UTF-16. Like I 
mentioned, if you look at the Unicode standard, version 4, section 
15.9, it says:

2. Unmarked Character Set. In some circumstances, the character set 
information for a stream of coded characters (such as a file) is not 
available. The only information available is that the stream contains 
text, but the precise character set is not known.
This seems to indicate that it is permitted to strip the BOM from the 
beginning of UTF-8 text.

-1; there's no standard for UTF-8 BOMs - adding it to the
codecs module was probably a mistake to begin with. You usually
only get UTF-8 files with BOM marks as the result of recoding
UTF-16 files into UTF-8.
This is clearly incorrect. The UTF-8 BOM is specified in the Unicode 
standard version 4, section 15.9:

In UTF-8, the BOM corresponds to the byte sequence <EF BB BF>.
I normally find files with UTF-8 BOMs from many Windows applications 
when you save a text file as UTF8. I think that Notepad or WordPad does 
this, for example. I think UltraEdit also does the same thing. I know 
that Scintilla definitely does.

At the very least, it would be nice to add a note about this to the
documentation, and possibly add this example function that implements
the "UTF-8 or ASCII?" logic.
Well, I'd say that's a very English way of dealing with encoded
text ;-)
Please note I am saying only that something like this may want to be 
considered for addition to the documentation, and not to the Python 
standard library. This example function more closely replicates the 
logic that is used in those Windows applications when opening ".txt" 
files. It uses the default locale if there is no BOM:

def autodecode( s ):
    if s.startswith( codecs.BOM_UTF8 ):
        # The byte string s is UTF-8
        out = s.decode( "utf8" )
        return out[1:]
    else: return s.decode()
BTW, how do you know that s came from the start of a file
and not from slicing some already loaded file somewhere
in the middle ?
Well, the same argument could be applied to the UTF-16 decoder: how does 
it know that the string came from the start of a file, and not from 
slicing some already loaded file? The standard states that:

In the UTF-16 encoding scheme, U+FEFF at the very beginning of a file 
or stream explicitly signals the byte order.
So it is perfectly permissible to perform this type of processing if 
you consider a string to be equivalent to a stream.

My interpretation of the specification means that Python should 
silently
remove the character, resulting in a zero length Unicode string.
Hmm, wouldn't it be better to raise an error ? After all,
a reversed BOM mark in the stream looks a lot like you're
trying to decode a UTF-16 stream assuming the wrong
byte order ?!
Well, either one is possible, however the Unicode standard suggests, 
but does not require, silently removing them:

It is good practice, however, to recognize it as a noncharacter and to 
take appropriate action, such as removing it from the text. Note that 
Unicode conformance freely allows the removal of these characters.
I would prefer silently ignoring them from the str.decode() function, 
since I believe in "be strict in what you emit, but liberal in what you 
accept." I think that this only applies to str.decode(). Any other 
attempt to create non-characters, such as unichr( 0x ), *should* 
raise an exception because clearly the programmer is making a mistake.

Other than that: +1 on fixing this case.
Cool!
Evan Jones