[ python-Bugs-1523610 ] PyArg_ParseTupleAndKeywords potential core dump

2006-08-09 Thread SourceForge.net
Bugs item #1523610, was opened at 2006-07-17 02:23
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1523610&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.4
>Status: Closed
Resolution: Fixed
Priority: 8
Submitted By: Eric Huss (ehuss)
Assigned to: Nobody/Anonymous (nobody)
Summary: PyArg_ParseTupleAndKeywords potential core dump

Initial Comment:
After getting bitten by bug 893549, I noticed that I was
sometimes getting a core dump instead of a TypeError when
PyArg_ParseTupleAndKeywords was skipping over a type that
the "skipitem" code does not understand.

There are about four problems I will describe (though they
are related, so it's probably not worth filing separate
bugs).

The problem is that the "levels" variable is passed to
the seterror function uninitialized.  If levels does
not happen to contain any 0 elements, then the
iteration code in seterror will go crazy (I will get to
this in a moment).

In the one place where "skipitem" is called, you will
notice that it immediately calls seterror() if skipitem
returned an error message.  However, "levels" is not set by
the skipitem function, and thus seterror iterates over an
uninitialized value.  I suggest setting levels[0] = 0 near
the beginning of the code, since the expectations around
setting "levels" seem awfully delicate.

(As a side note, there's no bounds checking on the
levels variable, either.  It seems unlikely that
someone will have 32 levels of nested variables, but I
think it points to a general problem with how the
variable is passed around.)

A second fix is to set levels[0] = 0 if skipitem fails,
before calling seterror().

Now, returning to the "seterror goes crazy" problem I
mentioned earlier, the following code in the seterror
function:

while (levels[i] > 0 && (int)(p-buf) < 220) {

should be:

while (levels[i] > 0 && (int)(p-buf) > 220) {

At least, that's what I'm assuming it is supposed to
be.  I think it should be clear why this is bad.

But wait, there's more!  The PyOS_snprintf call in seterror
uses an incorrect size for buf.  The following line:

PyOS_snprintf(p, sizeof(buf) - (buf - p),

should be:

PyOS_snprintf(p, sizeof(buf) - (p - buf),

My particular platform (FreeBSD) puts a NUL character
at the end of the buffer.  However, since the size of
the buffer is computed incorrectly, this line of code
stomps on the stack (overwriting the levels value in
my case).

Let me know if you have any questions, or want any
sample code.


--

>Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 07:08

Message:
Logged In: YES 
user_id=849994

Fixed the "levels overflow" problem by introducing an upper
bound on the tuple nesting depth in rev. 51158.

--

Comment By: Georg Brandl (gbrandl)
Date: 2006-07-26 08:04

Message:
Logged In: YES 
user_id=849994

Fixed the "levels[0] = 0" and the "p-buf" issue in rev.
50843.  Still waiting for input on python-dev about the
levels overflow, though I think it can be ignored.

--

Comment By: Eric Huss (ehuss)
Date: 2006-07-17 02:28

Message:
Logged In: YES 
user_id=393416

Oops, skip the section about <220 being >220.  I've been
staring at it too long.  The rest of the issues should be
valid, though.


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1523610&group_id=5470



[ python-Bugs-1537195 ] Missing platform information in subprocess documentation

2006-08-09 Thread SourceForge.net
Bugs item #1537195, was opened at 2006-08-09 07:28
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537195&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Aaron Bingham (adbingham)
Assigned to: Nobody/Anonymous (nobody)
Summary: Missing platform information in subprocess documentation

Initial Comment:
In the Python 2.4 documentation for the
subprocess.Popen class
(http://www.python.org/doc/current/lib/node235.html),
many of the platform differences are documented
clearly.  However, the preexec_fn and close_fds keyword
arguments are not supported on Windows, and this is not
mentioned anywhere obvious.
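
For illustration, a hedged sketch of the kind of guard this
implies for portable code (the command strings are placeholders,
not taken from the report):

import subprocess
import sys

if sys.platform != "win32":
    # per the report, preexec_fn/close_fds only work on POSIX platforms
    proc = subprocess.Popen(["ls", "-l"], close_fds=True)
else:
    proc = subprocess.Popen("dir", shell=True)
proc.wait()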

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537195&group_id=5470



[ python-Bugs-1537195 ] Missing platform information in subprocess documentation

2006-08-09 Thread SourceForge.net
Bugs item #1537195, was opened at 2006-08-09 07:28
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537195&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: Python 2.4
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Aaron Bingham (adbingham)
Assigned to: Nobody/Anonymous (nobody)
Summary: Missing platform information in subprocess documentation

Initial Comment:
In the Python 2.4 documentation for the
subprocess.Popen class
(http://www.python.org/doc/current/lib/node235.html),
many of the platform differences are documented
clearly.  However, the preexec_fn and close_fds keyword
arguments are not supported on Windows, and this is not
mentioned anywhere obvious.

--

>Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 07:32

Message:
Logged In: YES 
user_id=849994

This was already fixed in SVN and will be in the next docs
release.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537195&group_id=5470



[ python-Bugs-1536021 ] hash(method) sometimes raises OverflowError

2006-08-09 Thread SourceForge.net
Bugs item #1536021, was opened at 2006-08-07 16:21
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536021&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Christian Tanzer (tanzer)
Assigned to: Nobody/Anonymous (nobody)
Summary: hash(method) sometimes raises OverflowError

Initial Comment:
I've run into a problem with a big application that I
wasn't able to
reproduce with a small example.

The code (exception handler added to demonstrate and
work around the
problem): 

try:
    h = hash(p)
except OverflowError, e:
    print type(p), p, id(p), e
    h = id(p) & 0x0FFF

prints the following output:


>
   3066797028 long int too large to convert to int

This happens with Python 2.5b3, but didn't happen with
Python 2.4.3.

I assume that the hash-function for function/methods
returns the `id`
of the function. The following code demonstrates the
same problem with
a Python class whose `__hash__` returns the `id` of the
object:

$ python2.4
Python 2.4.3 (#1, Jun 30 2006, 10:02:59) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
-1211078036
$ python2.5 
Python 2.5b3 (r25b3:51041, Aug  7 2006, 15:35:35) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
Traceback (most recent call last):
  File "", line 1, in 
OverflowError: long int too large to convert to int



--

>Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 09:58

Message:
Logged In: YES 
user_id=21627

Thanks for the report. Fixed in r51160

--

Comment By: Armin Rigo (arigo)
Date: 2006-08-08 12:25

Message:
Logged In: YES 
user_id=4771

The hash of instance methods changed, and id() changed
to return non-negative numbers (so that id() is not
the default hash any more).  But I cannot see how your
problem shows up.  The only thing I could imagine is
that the Script_Category class has a custom __hash__()
method which returns a value that is sometimes a long,
as it would be if it were based on id().  (It has
always been documented that just returning id() in
custom __hash__() methods doesn't work because of
this, but on 32-bit machines the problem only became
apparent with the change in id() in Python 2.5.)

--

Comment By: Nick Coghlan (ncoghlan)
Date: 2006-08-07 17:03

Message:
Logged In: YES 
user_id=1038590

MvL diagnosed the problem on python-dev as being due to
id(obj) now always returning positive values (which may
sometimes be a long).

This seems like sufficient justification to change the
hashing implementation to tolerate long values being
returned from __hash__ methods (e.g. by using the hash of a
returned long value, instead of trying to convert it to a C
int directly).
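
As a tiny illustration of that suggestion (plain interpreter
behaviour, not the proposed patch): hashing the long value
directly always yields a machine-sized int.

big_id = 3066797028L                # the id() value quoted in the report
print type(big_id)                  # <type 'long'>
print type(hash(big_id))            # <type 'int'>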

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536021&group_id=5470



[ python-Bugs-1536021 ] hash(method) sometimes raises OverflowError

2006-08-09 Thread SourceForge.net
Bugs item #1536021, was opened at 2006-08-07 14:21
Message generated for change (Comment added) made by tanzer
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536021&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: Python 2.5
>Status: Open
>Resolution: None
Priority: 5
Submitted By: Christian Tanzer (tanzer)
Assigned to: Nobody/Anonymous (nobody)
Summary: hash(method) sometimes raises OverflowError

Initial Comment:
I've run into a problem with a big application that I
wasn't able to
reproduce with a small example.

The code (exception handler added to demonstrate and
work around the
problem): 

try:
    h = hash(p)
except OverflowError, e:
    print type(p), p, id(p), e
    h = id(p) & 0x0FFF

prints the following output:


>
   3066797028 long int too large to convert to int

This happens with Python 2.5b3, but didn't happen with
Python 2.4.3.

I assume that the hash-function for function/methods
returns the `id`
of the function. The following code demonstrates the
same problem with
a Python class whose `__hash__` returns the `id` of the
object:

$ python2.4
Python 2.4.3 (#1, Jun 30 2006, 10:02:59) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
-1211078036
$ python2.5 
Python 2.5b3 (r25b3:51041, Aug  7 2006, 15:35:35) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
Traceback (most recent call last):
  File "", line 1, in 
OverflowError: long int too large to convert to int



--

>Comment By: Christian Tanzer (tanzer)
Date: 2006-08-09 09:24

Message:
Logged In: YES 
user_id=2402

> The only thing I could imagine is that the Script_Category 
> class has a custom __hash__() method which returns a value 
> that is sometimes a long, as it would be if it were 
> based on id(). 

That was indeed the problem in my code (returning `id(self)`).

> It has always been documented that just returning id() 
> in custom __hash__() methods doesn't work because of
> this 

AFAIR, it was once documented that the default hash value is
the id of an object. And I just found a message by the BDFL
himself proclaiming so:
http://python.project.cwi.nl/search/hypermail/python-recent/0168.html.

OTOH, I don't remember seeing anything about this in AMK's
`What's new in Python 2.x` documents (but found an entry in
NEWS.txt for some 2.5 alpha).

I've now changed all my broken `__hash__` methods (not that
many fortunately) but it might be a good idea to document
this change in a more visible way.
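
For reference, a minimal hedged sketch of the kind of fix
described above (the class name is illustrative): wrapping id()
in hash() keeps __hash__ returning a plain int even when id()
returns a long.

class Node(object):
    def __hash__(self):
        # id(self) may be a long on 2.5; hash() folds it to an int
        return hash(id(self))

print hash(Node())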

--

Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 07:58

Message:
Logged In: YES 
user_id=21627

Thanks for the report. Fixed in r51160

--

Comment By: Armin Rigo (arigo)
Date: 2006-08-08 10:25

Message:
Logged In: YES 
user_id=4771

The hash of instance methods changed, and id() changed
to return non-negative numbers (so that id() is not
the default hash any more).  But I cannot see how your
problem shows up.  The only thing I could imagine is
that the Script_Category class has a custom __hash__()
method which returns a value that is sometimes a long,
as it would be if it were based on id().  (It has
always been documented that just returning id() in
custom __hash__() methods doesn't work because of
this, but on 32-bit machines the problem only became
apparent with the change in id() in Python 2.5.)

--

Comment By: Nick Coghlan (ncoghlan)
Date: 2006-08-07 15:03

Message:
Logged In: YES 
user_id=1038590

MvL diagnosed the problem on python-dev as being due to
id(obj) now always returning positive values (which may
sometimes be a long).

This seems like sufficient justification to change the
hashing implementation to tolerate long values being
returned from __hash__ methods (e.g. by using the hash of a
returned long value, instead of trying to convert it to a C
int directly).

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536021&group_id=5470

[ python-Bugs-1536021 ] hash(method) sometimes raises OverflowError

2006-08-09 Thread SourceForge.net
Bugs item #1536021, was opened at 2006-08-07 14:21
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536021&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
>Category: Documentation
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Christian Tanzer (tanzer)
>Assigned to: A.M. Kuchling (akuchling)
Summary: hash(method) sometimes raises OverflowError

Initial Comment:
I've run into a problem with a big application that I
wasn't able to
reproduce with a small example.

The code (exception handler added to demonstrate and
work around the
problem): 

try:
    h = hash(p)
except OverflowError, e:
    print type(p), p, id(p), e
    h = id(p) & 0x0FFF

prints the following output:


>
   3066797028 long int too large to convert to int

This happens with Python 2.5b3, but didn't happen with
Python 2.4.3.

I assume that the hash-function for function/methods
returns the `id`
of the function. The following code demonstrates the
same problem with
a Python class whose `__hash__` returns the `id` of the
object:

$ python2.4
Python 2.4.3 (#1, Jun 30 2006, 10:02:59) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
-1211078036
$ python2.5 
Python 2.5b3 (r25b3:51041, Aug  7 2006, 15:35:35) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
Traceback (most recent call last):
  File "", line 1, in 
OverflowError: long int too large to convert to int



--

>Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 09:47

Message:
Logged In: YES 
user_id=849994

Andrew, do you want to add a whatsnew entry?

--

Comment By: Christian Tanzer (tanzer)
Date: 2006-08-09 09:24

Message:
Logged In: YES 
user_id=2402

> The only thing I could imagine is that the Script_Category 
> class has a custom __hash__() method which returns a value 
> that is sometimes a long, as it would be if it were 
> based on id(). 

That was indeed the problem in my code (returning `id(self)`).

> It has always been documented that just returning id() 
> in custom __hash__() methods doesn't work because of
> this 

AFAIR, it was once documented that the default hash value is
the id of an object. And I just found a message by the BDFL
himself proclaiming so:
http://python.project.cwi.nl/search/hypermail/python-recent/0168.html.

OTOH, I don't remember seeing anything about this in AMK's
`What's new in Python 2.x` documents (but found an entry in
NEWS.txt for some 2.5 alpha).

I've now changed all my broken `__hash__` methods (not that
many fortunately) but it might be a good idea to document
this change in a more visible way.

--

Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 07:58

Message:
Logged In: YES 
user_id=21627

Thanks for the report. Fixed in r51160

--

Comment By: Armin Rigo (arigo)
Date: 2006-08-08 10:25

Message:
Logged In: YES 
user_id=4771

The hash of instance methods changed, and id() changed
to return non-negative numbers (so that id() is not
the default hash any more).  But I cannot see how your
problem shows up.  The only thing I could imagine is
that the Script_Category class has a custom __hash__()
method which returns a value that is sometimes a long,
as it would be if it were based on id().  (It has
always been documented that just returning id() in
custom __hash__() methods doesn't work because of
this, but on 32-bit machines the problem only became
apparent with the change in id() in Python 2.5.)

--

Comment By: Nick Coghlan (ncoghlan)
Date: 2006-08-07 15:03

Message:
Logged In: YES 
user_id=1038590

MvL diagnosed the problem on python-dev as being due to
id(obj) now always returning positive values (which may
sometimes be a long).

This seems like sufficient justification to change the
hashing implementation to tolerate long values being
returned from __hash__ methods (e.g. by using the hash of a
returned long value, instead of trying to convert it to a C
int directly).

--

You can respond by v

[ python-Bugs-1533105 ] NetBSD build with --with-pydebug causes SIGSEGV

2006-08-09 Thread SourceForge.net
Bugs item #1533105, was opened at 2006-08-02 13:00
Message generated for change (Comment added) made by splitscreen
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1533105&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Matt Fleming (splitscreen)
Assigned to: Nobody/Anonymous (nobody)
Summary: NetBSD build with --with-pydebug causes SIGSEGV

Initial Comment:
The testInfiniteRecursion test in
Lib/test/test_exceptions.py causes Python to segfault
if it has been compiled on NetBSD with --with-pydebug.
This is due to the fact that the default stack size on
NetBSD is 2MB and Python tries to allocate memory for
debugging information on the stack.

The documentation (README under 'Setting the
optimization/debugging options'?) should be updated to
state that if you want to run the test suite with
debugging enabled in the interpreter, you are advised
to increase the stack size, probably to 4096 KB.
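
For example, the suggested note might read something like this
(the 4096 KB figure is the reporter's suggestion; the exact
command depends on the shell):

$ ulimit -s          # default soft stack limit, in KB
2048
$ ulimit -s 4096     # raise it before running the debug build's tests
$ ./python Lib/test/regrtest.py test_exceptions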

This issue is also in release24-maint.

Matt

--

>Comment By: Matt Fleming (splitscreen)
Date: 2006-08-09 10:23

Message:
Logged In: YES 
user_id=1126061

Patches for trunk and 2.4 attached.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-08-09 05:40

Message:
Logged In: YES 
user_id=33168

Matt, can you make a patch?  Doc changes are fine to go into
2.5 (and 2.4).

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1533105&group_id=5470



[ python-Bugs-1537167 ] 2nd issue with need for speed patch

2006-08-09 Thread SourceForge.net
Bugs item #1537167, was opened at 2006-08-09 08:01
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Robin Bryce (robinbryce2)
>Assigned to: Phillip J. Eby (pje)
Summary: 2nd issue with need for speed patch

Initial Comment:
This is not a duplicate of the "realease manager
pronouncement on 302 Fix needed" issue raised on pydev.

If a custom importer is present, import.c skips the
builtin import machinery if the find_module method of
that  importer returns None. For python 2.4.3 if
find_module returns none the normal builtin machinery
gets a lookin. The relevent change was the addition of
a continue statement with svn commit r46372 (at around
line 1283 of import.c on the trunk). 

I don't understand, in the face of this change, how pep
302 importers are expected to cascade. returning None
from find_module is the way an importer says "no I
can't load this module but I cant say for certain this
means ImportError" isnt it ?

One (unintended?) consequence of this change is the
following corner case:

As __import__ allows non dotted module names
__import__('fakemod.a/b') *will* succede on python
2.4.3 provided b is a directory under the package a
that contains an __init__.py. In python 2.5b3 this fails.

I've atatched a detailed repro case of this particular
corner case.
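
To make the expected cascade concrete, here is a hedged sketch
(not the attached repro; the report concerns per-path importers,
but the same find_module contract applies to the meta_path hook
used here) of an importer that declines by returning None and
relies on the builtin machinery to take over:

import sys

class DecliningFinder(object):
    def find_module(self, fullname, path=None):
        # "I can't load this module, but that need not mean ImportError"
        return None

sys.meta_path.append(DecliningFinder())
import ftplib                  # should still be found by the builtin machinery
print ftplib.__name__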




--

>Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 14:27

Message:
Logged In: YES 
user_id=21627

I'd say it is a bug that Python 2.4 allows non-dotted module
names for __import__. Can you come up with a change in
behaviour for "regular" module names?

As for cascading: path importers do not cascade. Per
sys.path item, there can be at most one path importer. They
"cascade" in the sense that search continues with other
sys.path items if it wasn't found in one sys.path entry.
This cascading continues to work with 2.5.

Phillip, can you take a look?
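
For reference, a hedged sketch of the per-path-entry mechanism
described above: a sys.path_hooks entry declines a path item by
raising ImportError, and the search then falls back to the
default handling for that entry (the module name is arbitrary):

import sys

def declining_hook(path_entry):
    # refuse every sys.path entry; the default machinery handles it instead
    raise ImportError('not handled: %r' % (path_entry,))

sys.path_hooks.append(declining_hook)
sys.path_importer_cache.clear()     # force the hooks to be consulted again
import ftplib                       # still importable the normal way
print ftplib.__name__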

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470



[ python-Bugs-1536021 ] hash(method) sometimes raises OverflowError

2006-08-09 Thread SourceForge.net
Bugs item #1536021, was opened at 2006-08-07 10:21
Message generated for change (Comment added) made by akuchling
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536021&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Christian Tanzer (tanzer)
Assigned to: A.M. Kuchling (akuchling)
Summary: hash(method) sometimes raises OverflowError

Initial Comment:
I've run into a problem with a big application that I
wasn't able to
reproduce with a small example.

The code (exception handler added to demonstrate and
work around the
problem): 

try:
    h = hash(p)
except OverflowError, e:
    print type(p), p, id(p), e
    h = id(p) & 0x0FFF

prints the following output:


>
   3066797028 long int too large to convert to int

This happens with Python 2.5b3, but didn't happen with
Python 2.4.3.

I assume that the hash-function for function/methods
returns the `id`
of the function. The following code demonstrates the
same problem with
a Python class whose `__hash__` returns the `id` of the
object:

$ python2.4
Python 2.4.3 (#1, Jun 30 2006, 10:02:59) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
-1211078036
$ python2.5 
Python 2.5b3 (r25b3:51041, Aug  7 2006, 15:35:35) 
[GCC 3.4.6 (Gentoo 3.4.6-r1, ssp-3.4.5-1.0,
pie-8.7.9)] on linux2
Type "help", "copyright", "credits" or "license"
for more information.
>>> class X(object):
...   def __hash__(self): return id(self)
... 
>>> hash (X())
Traceback (most recent call last):
  File "", line 1, in 
OverflowError: long int too large to convert to int



--

>Comment By: A.M. Kuchling (akuchling)
Date: 2006-08-09 09:20

Message:
Logged In: YES 
user_id=11375

Added.  Closing this bug.

--

Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 05:47

Message:
Logged In: YES 
user_id=849994

Andrew, do you want to add a whatsnew entry?

--

Comment By: Christian Tanzer (tanzer)
Date: 2006-08-09 05:24

Message:
Logged In: YES 
user_id=2402

> The only thing I could imagine is that the Script_Category 
> class has a custom __hash__() method which returns a value 
> that is sometimes a long, as it would be if it were 
> based on id(). 

That was indeed the problem in my code (returning `id(self)`).

> It has always been documented that just returning id() 
> in custom __hash__() methods doesn't work because of
> this 

AFAIR, it was once documented that the default hash value is
the id of an object. And I just found a message by the BDFL
himself proclaiming so:
http://python.project.cwi.nl/search/hypermail/python-recent/0168.html.

OTOH, I don't remember seeing anything about this in AMK's
`What's new in Python 2.x` documents (but found an entry in
NEWS.txt for some 2.5 alpha).

I've now changed all my broken `__hash__` methods (not that
many fortunately) but it might be a good idea to document
this change in a more visible way.

--

Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 03:58

Message:
Logged In: YES 
user_id=21627

Thanks for the report. Fixed in r51160

--

Comment By: Armin Rigo (arigo)
Date: 2006-08-08 06:25

Message:
Logged In: YES 
user_id=4771

The hash of instance methods changed, and id() changed
to return non-negative numbers (so that id() is not
the default hash any more).  But I cannot see how your
problem shows up.  The only thing I could imagine is
that the Script_Category class has a custom __hash__()
method which returns a value that is sometimes a long,
as it would be if it were based on id().  (It has
always been documented that just returning id() in
custom __hash__() methods doesn't work because of
this, but on 32-bit machines the problem only became
apparent with the change in id() in Python 2.5.)

--

Comment By: Nick Coghlan (ncoghlan)
Date: 2006-08-07 11:03

Message:
Logged In: YES 
user_id=1038590

MvL diagnosed the problem on python-dev as being due to
id(obj) now always returning positive values (which may
sometimes be a long).

This seems like sufficient justification to change the
hashing implementation to tolerate long values being
returned from __hash__

[ python-Bugs-1537167 ] 2nd issue with need for speed patch

2006-08-09 Thread SourceForge.net
Bugs item #1537167, was opened at 2006-08-09 06:01
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Robin Bryce (robinbryce2)
Assigned to: Phillip J. Eby (pje)
Summary: 2nd issue with need for speed patch

Initial Comment:
This is not a duplicate of the "realease manager
pronouncement on 302 Fix needed" issue raised on pydev.

If a custom importer is present, import.c skips the
builtin import machinery if the find_module method of
that  importer returns None. For python 2.4.3 if
find_module returns none the normal builtin machinery
gets a lookin. The relevent change was the addition of
a continue statement with svn commit r46372 (at around
line 1283 of import.c on the trunk). 

I don't understand, in the face of this change, how pep
302 importers are expected to cascade. returning None
from find_module is the way an importer says "no I
can't load this module but I cant say for certain this
means ImportError" isnt it ?

One (unintended?) consequence of this change is the
following corner case:

As __import__ allows non dotted module names
__import__('fakemod.a/b') *will* succede on python
2.4.3 provided b is a directory under the package a
that contains an __init__.py. In python 2.5b3 this fails.

I've atatched a detailed repro case of this particular
corner case.




--

>Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 13:28

Message:
Logged In: YES 
user_id=849994

Guido agreed that the 2.4 behavior is to be regarded as a
bug:
http://mail.python.org/pipermail/python-dev/2006-May/065174.html

--

Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 12:27

Message:
Logged In: YES 
user_id=21627

I'd say it is a bug that Python 2.4 allows non-dotted module
names for __import__. Can you come up with a change in
behaviour for "regular" module names?

As for cascading: path importers do not cascade. Per
sys.path item, there can be at most one path importer. They
"cascade" in the sense that search continues with other
sys.path items if it wasn't found in one sys.path entry.
This cascading continues to work with 2.5.

Phillip, can you take a look?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470



[ python-Bugs-1534765 ] logging's fileConfig causes KeyError on shutdown

2006-08-09 Thread SourceForge.net
Bugs item #1534765, was opened at 2006-08-04 19:58
Message generated for change (Comment added) made by splitscreen
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1534765&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: mdbeachy (mdbeachy)
Assigned to: Vinay Sajip (vsajip)
Summary: logging's fileConfig causes KeyError on shutdown

Initial Comment:
If logging.config.fileConfig() is called after logging
handlers already exist, a KeyError is thrown in the
atexit call to logging.shutdown().

This looks like it's fixed in the 2.5 branch but since
I've bothered to figure out what was going on I'm
sending this in anyway. There still might be a 2.4.4,
right? (Also, my fix looks better than what was done
for 2.5, but I suppose the flush/close I added may not
be necessary.)

Attached is a demo and a patch against 2.4.3.

Thanks,
Mike

--

Comment By: Matt Fleming (splitscreen)
Date: 2006-08-09 14:10

Message:
Logged In: YES 
user_id=1126061

Bug confirmed in release24-maint.  Patch looks good to me,
although I think the developers prefer unified diffs over
context diffs; just something to keep in mind for the future.
Also, I had to manually patch the Lib/logging/config.py file
because, for some reason, the paths in your patch are all
lowercase.

Thanks for the patch.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1534765&group_id=5470



[ python-Bugs-1537167 ] 2nd issue with need for speed patch

2006-08-09 Thread SourceForge.net
Bugs item #1537167, was opened at 2006-08-09 07:01
Message generated for change (Comment added) made by robinbryce2
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Robin Bryce (robinbryce2)
Assigned to: Phillip J. Eby (pje)
Summary: 2nd issue with need for speed patch

Initial Comment:
This is not a duplicate of the "realease manager
pronouncement on 302 Fix needed" issue raised on pydev.

If a custom importer is present, import.c skips the
builtin import machinery if the find_module method of
that  importer returns None. For python 2.4.3 if
find_module returns none the normal builtin machinery
gets a lookin. The relevent change was the addition of
a continue statement with svn commit r46372 (at around
line 1283 of import.c on the trunk). 

I don't understand, in the face of this change, how pep
302 importers are expected to cascade. returning None
from find_module is the way an importer says "no I
can't load this module but I cant say for certain this
means ImportError" isnt it ?

One (unintended?) consequence of this change is the
following corner case:

As __import__ allows non dotted module names
__import__('fakemod.a/b') *will* succede on python
2.4.3 provided b is a directory under the package a
that contains an __init__.py. In python 2.5b3 this fails.

I've atatched a detailed repro case of this particular
corner case.




--

>Comment By: Robin Bryce (robinbryce2)
Date: 2006-08-09 15:33

Message:
Logged In: YES 
user_id=1547259

The 'illegal' module name is a red herring.  The problem
exists with legal paths too:

Python 2.5b3 (trunk:51136, Aug  9 2006, 15:17:14)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more
information.
>>> from fakemod import urlpathimport
>>> urlpathimport.install()
>>> m=__import__('fakemod.a')
*** fullname='fakemod.a' initpath='fakemod' ***
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named a
>>>
[EMAIL PROTECTED]:~/devel/blackmile$ python
Python 2.4.3 (#2, Apr 27 2006, 14:43:58)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more
information.
>>> from fakemod import urlpathimport
>>> urlpathimport.install()
>>> m=__import__('fakemod.a')
*** fullname='fakemod.a' initpath='fakemod' ***
>>>

Working on a test case.  At present I think it is impossible
for a 2.5 custom importer to decline a standard Python module
by returning None from find_module, because if it returns None
the standard import is skipped as well.

gbrandl, I think it was your commit that added the
'continue' statement; what is the reasoning behind that
optimisation?

Cheers,
Robin

--

Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 14:28

Message:
Logged In: YES 
user_id=849994

Guido agreed that the 2.4 behavior is to be regarded as a
bug:
http://mail.python.org/pipermail/python-dev/2006-May/065174.html

--

Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 13:27

Message:
Logged In: YES 
user_id=21627

I'd say it is a bug that Python 2.4 allows non-dotted module
names for __import__. Can you come up with a change in
behaviour for "regular" module names?

As for cascading: path importers do not cascade. Per
sys.path item, there can be at most one path importer. They
"cascade" in the sense that search continues with other
sys.path items if it wasn't found in one sys.path entry.
This cascading continues to work with 2.5.

Phillip, can you take a look?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470



[ python-Bugs-1535502 ] Python 2.5 windows builds should link hashlib with OpenSSL

2006-08-09 Thread SourceForge.net
Bugs item #1535502, was opened at 2006-08-06 21:38
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1535502&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Windows
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Gregory P. Smith (greg)
>Assigned to: Anthony Baxter (anthonybaxter)
Summary: Python 2.5 windows builds should link hashlib with OpenSSL

Initial Comment:
The Windows builds of Python 2.5 need to be updated to
build and link the hashlib modules with OpenSSL 0.9.8.  

The OpenSSL implementations of the hash algorithms are
*much* faster (often 2-3x) than the fallback C
implementations that python includes for use when
OpenSSL isn't available.

I just tested the Python 2.5b3 installer on Windows;
it's using the fallback versions rather than OpenSSL:

here's a simple way to check from a running python:

Without OpenSSL:

>>> import hashlib
>>> hashlib.sha1


With OpenSSL:

>>> import hashlib
>>> hashlib.sha1



(please use openssl 0.9.8; older versions don't include
sha256 and sha512 implementations)
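
A rough, hedged way to make the same check from a script (keying
off the constructor's repr is a heuristic based on the behaviour
described here, not an official API):

import hashlib

if 'openssl' in repr(hashlib.sha1):
    print 'hashlib is using the OpenSSL-backed constructors'
else:
    print 'hashlib is using the fallback C implementations'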

--

>Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 16:34

Message:
Logged In: YES 
user_id=21627

I changed the patch to support the Win64 build process, and
added packaging support (msi.py). It looks fine to me now.

Anthony, is this ok to apply?

--

Comment By: Gregory P. Smith (greg)
Date: 2006-08-08 09:48

Message:
Logged In: YES 
user_id=413

attached is a patch that works for me on Win XP with MSVS
2003 (vc++ 7.1):  build_hashlib_with_ssl-01.patch

It does several things:

build_ssl.py  --  this is fixed to use do_masm.bat instead
of a modified 32all.bat to build OpenSSL with x86 asm
optimizations on Win32.  It is also fixed to work when under
a directory tree with spaces in the directory names.

_ssl.mak  --  since both _ssl and _hashlib depend on OpenSSL
it made the most sense for both to be built by the same
makefile.  I added _hashlib's build here.

_ssl.vcproj  --  adds the dependency on Modules/_hashopenssl.c


Things still TODO - make sure _hashlib.pyd is added to the
MSI installer.

--

Comment By: Gregory P. Smith (greg)
Date: 2006-08-08 08:02

Message:
Logged In: YES 
user_id=413

I've attached a patch to PCbuild/build_ssl.py that should
build the assembly-optimized OpenSSL on Windows by default.

Still to do: a _hashlib.vcproj file is needed, though
wouldn't it be easier for me to just build _hashlib.pyd from
within the _ssl.mak file?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1535502&group_id=5470



[ python-Bugs-1537445 ] urllib2 httplib _read_chunked timeout

2006-08-09 Thread SourceForge.net
Bugs item #1537445, was opened at 2006-08-09 17:00
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537445&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: devloop (devloop)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib2 httplib _read_chunked timeout

Initial Comment:
Hello !

In some code of mine I have the lines:

try:
  req = urllib2.Request(url)
  u = urllib2.urlopen(req)
  data = u.read()
except urllib2.URLError, err:
  # error handling here

urllib2.URLError normally catches socket.timeout, but
someone sent me a bug report with a timeout error:

File "/usr/lib/python2.4/socket.py", line 285, in read
data = self._sock.recv(recv_size)
File "/usr/lib/python2.4/httplib.py", line 460, in read
return self._read_chunked(amt)
File "/usr/lib/python2.4/httplib.py", line 495, in
_read_chunked
line = self.fp.readline()
File "/usr/lib/python2.4/socket.py", line 325, in
readline
data = recv(1)
socket.timeout: timed out

Is it a bug in httplib with the 'Transfer-Encoding:
chunked' header?
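
A hedged workaround sketch for the traceback above: a
socket.timeout raised while reading the response body is not
wrapped in URLError, so it can be caught explicitly alongside
it (the URL is a placeholder):

import socket
import urllib2

url = 'http://example.com/'
try:
    req = urllib2.Request(url)
    u = urllib2.urlopen(req)
    data = u.read()
except urllib2.URLError, err:
    print 'urllib2 error:', err
except socket.timeout, err:
    print 'timed out while reading the response:', err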

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537445&group_id=5470



[ python-Bugs-1536825 ] distutils.sysconfig.get_config_h_filename gets useless file

2006-08-09 Thread SourceForge.net
Bugs item #1536825, was opened at 2006-08-08 18:48
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536825&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Distutils
>Group: 3rd Party
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Parzival Herzog (walt-kelly)
Assigned to: Nobody/Anonymous (nobody)
Summary: distutils.sysconfig.get_config_h_filename gets useless file

Initial Comment:
python -v:   
Python 2.4.1 (#2, Aug 25 2005, 18:20:57)   
[GCC 4.0.1 (4.0.1-2mdk for Mandriva Linux release   
2006.0)] on linux2   
   
While attempting to install cElementTree, the setup
build failed because the setup.py attempted to parse
the pyconfig.h file, using the filename provided by
distutils.sysconfig.get_config_h_filename(). The file
retrieved was "/usr/include/python2.4/pyconfig.h",
which contains the text:
--  
#define _MULTIARCH_HEADER python2.4/pyconfig.h  
#include <multi-arch-dispatch.h>
--  
  
The cElementTree setup.py script then parsed this
file, and attempted to configure itself as follows:

# determine suitable defines (based on Python's setup.py file)
config_h = sysconfig.get_config_h_filename()
config_h_vars = sysconfig.parse_config_h(open(config_h))
for feature_macro in ["HAVE_MEMMOVE", "HAVE_BCOPY"]:
    if config_h_vars.has_key(feature_macro):
        defines.append((feature_macro, "1"))
  
  
Since the file with the useful information is in
"/usr/include/multiarch-i386-linux/python2.4/pyconfig.h",
the subsequent build failed because no HAVE_MEMMOVE or
HAVE_BCOPY macro was defined.
  
So, either
1) the cElementTree setup.py script is too clever for
its own good, or is not clever enough (tell that to
F. Lundh), or
2) sysconfig.get_config_h_filename is returning
the wrong filename (i.e. a file that does not
actually contain the needed configuration
information), or
3) sysconfig.parse_config_h should, but does not,
follow the multi-arch-dispatch.h include to get at
the real pyconfig.h defines, or
4) Mandriva 2006 has messed up the Python
distribution with that multi-arch-dispatch thing.

I'm hoping that a solution is found rectifying either
(2) or (3).
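
For comparison, a hedged sketch of what the cElementTree-style
probe expects on an unmodified installation, where
get_config_h_filename() points at the real pyconfig.h:

from distutils import sysconfig

config_h = sysconfig.get_config_h_filename()
config_vars = sysconfig.parse_config_h(open(config_h))
print config_h
print 'HAVE_MEMMOVE' in config_vars, 'HAVE_BCOPY' in config_vars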
  
  
  
  

--

>Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 16:38

Message:
Logged In: YES 
user_id=21627

My analysis is that it is (4). If Mandriva thinks it can
change Python header files just like that and rearrange the
layout of a Python installation, they also ought to fix
distutils correspondingly.

Before any code is added to distutils that supports this
kind of installation, I'd like to see a PEP first stating
the problem that is solved with that multi-arch-dispatch
thing, and suggests a solution that will survive possible
upcoming changes to that mechanism.

Closing as a third-party bug.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1536825&group_id=5470



[ python-Bugs-1517990 ] IDLE on macosx keybindings need work

2006-08-09 Thread SourceForge.net
Bugs item #1517990, was opened at 2006-07-06 04:26
Message generated for change (Comment added) made by kbk
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1517990&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: IDLE
Group: Python 2.5
Status: Closed
Resolution: Fixed
Priority: 7
Submitted By: Ronald Oussoren (ronaldoussoren)
Assigned to: Ronald Oussoren (ronaldoussoren)
Summary: IDLE on macosx keybindings need work

Initial Comment:
There is a serious issue with the keybindings for IDLE on OSX: a lot of 
them don't work correctly. One example of a not-working key-binding is 
'CMD-W', this should close the current window but doesn't.  'CMD-N' does 
create a new window though, so at least some keybindings work.


--

>Comment By: Kurt B. Kaiser (kbk)
Date: 2006-08-09 11:22

Message:
Logged In: YES 
user_id=149084

I see you added a comment to Python's NEWS.

IDLE has its own NEWS file in idlelib: NEWS.txt!

--

Comment By: Ronald Oussoren (ronaldoussoren)
Date: 2006-07-25 16:36

Message:
Logged In: YES 
user_id=580910

I've checked in OSX specific keybindings in revision 50833

This seems to fix all issues I had with the key bindings. The bindings are 
still 
not entirely compatible with that of Cocoa's textview[1], but that can't be 
helped.

I'm therefore closing this issue.

P.S. thanks for prodding me on this one, I might have let this slip beyond 2.5 
otherwise.

[1] See http://hcs.harvard.edu/~jrus/Site/System%20Bindings.html

--

Comment By: Ronald Oussoren (ronaldoussoren)
Date: 2006-07-25 15:26

Message:
Logged In: YES 
user_id=580910

Cmd-W is fixed. I'm currently working my way through the keybindings to (a) check 
that they are correct for OSX and (b) check that they actually work.

Sorry about the missing NEWS items, I'll commit those soon.

--

Comment By: Kurt B. Kaiser (kbk)
Date: 2006-07-24 13:29

Message:
Logged In: YES 
user_id=149084

I see you made a change yesterday to EditorWindow
which appears to address the cmd-w bug. Could you
make an entry in NEWS.txt when you modify IDLE's
functionality?

--

Comment By: Ronald Oussoren (ronaldoussoren)
Date: 2006-07-18 08:09

Message:
Logged In: YES 
user_id=580910

The keybinding definition itself seems to be correct (although I haven't 
reviewed 
it completely yet). The problem at this point is that IDLE doesn't respond to 
some (or even most) of them. I suspect that AquaTk is at fault here, it is 
really 
lousy at times.

--

Comment By: Kurt B. Kaiser (kbk)
Date: 2006-07-17 14:06

Message:
Logged In: YES 
user_id=149084

Unfortunately, I don't have a Mac to work with.

The current Mac keybindings were devised by Tony
Lownds (tonylownds) during the transition to OSX.

Would you like to create a new section in 
config-keys.def named OSX and work up some new
bindings?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1517990&group_id=5470



[ python-Bugs-1526585 ] Concatenation on a long string breaks

2006-08-09 Thread SourceForge.net
Bugs item #1526585, was opened at 2006-07-21 17:18
Message generated for change (Settings changed) made by arigo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1526585&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 8
Submitted By: Jp Calderone (kuran)
Assigned to: Armin Rigo (arigo)
Summary: Concatenation on a long string breaks

Initial Comment:
Consider this transcript:

[EMAIL PROTECTED]:~/Projects/python/trunk$ ./python
Python 2.5b2 (trunk:50698, Jul 18 2006, 10:08:36)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for
more information.
>>> x = 'x' * (2 ** 31 - 1)
>>> x = x + 'x'
Traceback (most recent call last):
  File "", line 1, in 
SystemError: Objects/stringobject.c:4103: bad argument
to internal function
>>> len(x)
Traceback (most recent call last):
  File "", line 1, in 
NameError: name 'x' is not defined
>>> 


I would expect some exception other than SystemError
and for the locals namespace to not become corrupted.


--

>Comment By: Armin Rigo (arigo)
Date: 2006-08-09 15:39

Message:
Logged In: YES 
user_id=4771

Committed in rev 51178.

Closing this report; the repetition problem is in
another tracker and is mentioned in PEP 356
(the 2.5 release schedule).

--

Comment By: Armin Rigo (arigo)
Date: 2006-08-08 09:24

Message:
Logged In: YES 
user_id=4771

I was away.  I will try to get around to it before
release candidate one.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-08-04 05:27

Message:
Logged In: YES 
user_id=33168

Armin, yes that sounds reasonable.  Please checkin as soon
as possible now that the trunk is not frozen.

--

Comment By: Armin Rigo (arigo)
Date: 2006-07-27 08:48

Message:
Logged In: YES 
user_id=4771

Almost missed kuran's note.  Kuran: I suppose you meant to
use 2**31 instead of 2**32, but you've found another
important bug:

>>> s = 'x' * (2**32-2)
>>> N = len(s)
>>> N
2147483647
>>> 2**32
4294967296L

Argh!  Another check is missing somewhere.

--

Comment By: Armin Rigo (arigo)
Date: 2006-07-27 08:38

Message:
Logged In: YES 
user_id=4771

We could reuse the --memlimit option of regrtest in the
following way:

At the moment it makes no sense to specify a --memlimit
larger than Py_ssize_t, like 3GB on 32-bit systems.  At
least test_bigmem fails completely in this case.  From this
it seems that the --memlimit actually tells, more precisely,
how much of its *address space* the Python test process is
allowed to consume.  So the value should be clamped to a
maximum of MAX(Py_ssize_t).  This would solve the current
test_bigmem issue.

If we do so, then the condition "--memlimit >=
MAX(Py_ssize_t)" is precisely what should be checked to know
if we can run the test for the bug in the present tracker,
and other tests of the same kind, which check what occurs
when the *address space* is exhausted.

In this way, specifying --memlimit=3G would enable either
test_bigmem (on 64-bit systems) or some new
test_filladdressspace (on 32-bit systems), as appropriate.

Sounds reasonable?
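
For example, the invocation being discussed would look something
like this (flag and value spelling taken from the comment above,
so treat it as illustrative):

$ ./python Lib/test/regrtest.py --memlimit=3G test_bigmem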

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-07-26 15:50

Message:
Logged In: YES 
user_id=33168

You're correct that bigmem is primarily for testing
int/Py_ssize_t.  But it doesn't have to be.  It has support
for machines with largish amounts of memory (and limiting
test runs).  I didn't know where else to put such a test.  I
agree that this bug would only occur on 32-bit platforms. 
Most machines can't run it, so about the only other option I
can think of would be to put it in its own file and add a
-u option.  That seemed like even more work.

I'm not tied to bigmem at all, but it would be nice to have
a test for this somewhere.  I'm sure there are a bunch of
other places we have this sort of overflow and it would be
good to test them somewhere.  Do whatever you think is best.

--

Comment By: Armin Rigo (arigo)
Date: 2006-07-26 09:01

Message:
Logged In: YES 
user_id=4771

I'm unsure about how the bigmem tests should be used.
I think that I am not supposed to set a >2G limit on
a 32-bit machine, right?  When I do, I get 9 failures:
8 OverflowErrors and a strange AssertionError in
test_hash.  I think that these tests are meant to
test the int/Py_ssize_t difference on 64-bit 
machines instead.  The bug th

[ python-Bugs-1526585 ] Concatenation on a long string breaks

2006-08-09 Thread SourceForge.net
Bugs item #1526585, was opened at 2006-07-21 17:18
Message generated for change (Comment added) made by arigo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1526585&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.5
Status: Closed
Resolution: Fixed
Priority: 8
Submitted By: Jp Calderone (kuran)
Assigned to: Armin Rigo (arigo)
Summary: Concatenation on a long string breaks

Initial Comment:
Consider this transcript:

[EMAIL PROTECTED]:~/Projects/python/trunk$ ./python
Python 2.5b2 (trunk:50698, Jul 18 2006, 10:08:36)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for
more information.
>>> x = 'x' * (2 ** 31 - 1)
>>> x = x + 'x'
Traceback (most recent call last):
  File "", line 1, in 
SystemError: Objects/stringobject.c:4103: bad argument
to internal function
>>> len(x)
Traceback (most recent call last):
  File "", line 1, in 
NameError: name 'x' is not defined
>>> 


I would expect some exception other than SystemError
and for the locals namespace to not become corrupted.


--

>Comment By: Armin Rigo (arigo)
Date: 2006-08-09 15:42

Message:
Logged In: YES 
user_id=4771

Committed in rev 51178.

Closing this report; the repetition problem is in
another tracker and is mentioned in PEP 356
(the 2.5 release schedule).

--

Comment By: Armin Rigo (arigo)
Date: 2006-08-08 09:24

Message:
Logged In: YES 
user_id=4771

I was away.  I will try to get around to it before
release candidate one.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-08-04 05:27

Message:
Logged In: YES 
user_id=33168

Armin, yes that sounds reasonable.  Please checkin as soon
as possible now that the trunk is not frozen.

--

Comment By: Armin Rigo (arigo)
Date: 2006-07-27 08:48

Message:
Logged In: YES 
user_id=4771

Almost missed kuran's note.  Kuran: I suppose you meant to
use 2**31 instead of 2**32, but you've found another
important bug:

>>> s = 'x' * (2**32-2)
>>> N = len(s)
>>> N
2147483647
>>> 2**32
4294967296L

Argh!  Another check is missing somewhere.

--

Comment By: Armin Rigo (arigo)
Date: 2006-07-27 08:38

Message:
Logged In: YES 
user_id=4771

We could reuse the --memlimit option of regrtest in the
following way:

At the moment it makes no sense to specify a --memlimit
larger than Py_ssize_t, like 3GB on 32-bit systems.  At
least test_bigmem fails completely in this case.  From this
it seems that the --memlimit actually tells, more precisely,
how much of its *address space* the Python test process is
allowed to consume.  So the value should be clamped to a
maximum of MAX(Py_ssize_t).  This would solve the current
test_bigmem issue.

If we do so, then the condition "--memlimit >=
MAX(Py_ssize_t)" is precisely what should be checked to know
if we can run the test for the bug in the present tracker,
and other tests of the same kind, which check what occurs
when the *address space* is exhausted.

In this way, specifying --memlimit=3G would enable either
test_bigmem (on 64-bit systems) or some new
test_filladdressspace (on 32-bit systems), as appropriate.

Sounds reasonable?

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-07-26 15:50

Message:
Logged In: YES 
user_id=33168

You're correct that bigmem is primarily for testing
int/Py_ssize_t.  But it doesn't have to be.  It has support
for machines with largish amounts of memory (and limiting
test runs).  I didn't know where else to put such a test.  I
agree that this bug would only occur on 32-bit platforms. 
Most machines can't run it, so about the only other option I
can think of would be to put it in its own file and add a
-u option.  That seemed like even more work.

I'm not tied to bigmem at all, but it would be nice to have
a test for this somewhere.  I'm sure there are a bunch of
other places we have this sort of overflow and it would be
good to test them somewhere.  Do whatever you think is best.

--

Comment By: Armin Rigo (arigo)
Date: 2006-07-26 09:01

Message:
Logged In: YES 
user_id=4771

I'm unsure about how the big

[ python-Bugs-1537167 ] 2nd issue with need for speed patch

2006-08-09 Thread SourceForge.net
Bugs item #1537167, was opened at 2006-08-09 07:01
Message generated for change (Comment added) made by robinbryce2
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Robin Bryce (robinbryce2)
Assigned to: Phillip J. Eby (pje)
Summary: 2nd issue with need for speed patch

Initial Comment:
This is not a duplicate of the "release manager
pronouncement on 302 Fix needed" issue raised on pydev.

If a custom importer is present, import.c skips the
builtin import machinery if the find_module method of
that importer returns None.  For Python 2.4.3, if
find_module returns None, the normal builtin machinery
gets a look-in.  The relevant change was the addition of
a continue statement in svn commit r46372 (at around
line 1283 of import.c on the trunk).

I don't understand, in the face of this change, how PEP
302 importers are expected to cascade.  Returning None
from find_module is the way an importer says "no, I
can't load this module, but I can't say for certain this
means ImportError", isn't it?

One (unintended?) consequence of this change is the
following corner case:

As __import__ allows non-dotted module names,
__import__('fakemod.a/b') *will* succeed on Python
2.4.3 provided b is a directory under the package a
that contains an __init__.py.  In Python 2.5b3 this fails.

I've attached a detailed repro case of this particular
corner case.




--

>Comment By: Robin Bryce (robinbryce2)
Date: 2006-08-09 16:50

Message:
Logged In: YES 
user_id=1547259

I've tried the attached test case patch against
release24-maint and it passes.

--

Comment By: Robin Bryce (robinbryce2)
Date: 2006-08-09 15:33

Message:
Logged In: YES 
user_id=1547259

The 'illegal' module name is a red herring.  The problem
exists with legal paths also:

Python 2.5b3 (trunk:51136, Aug  9 2006, 15:17:14)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more
information.
>>> from fakemod import urlpathimport
>>> urlpathimport.install()
>>> m=__import__('fakemod.a')
*** fullname='fakemod.a' initpath='fakemod' ***
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named a
>>>
[EMAIL PROTECTED]:~/devel/blackmile$ python
Python 2.4.3 (#2, Apr 27 2006, 14:43:58)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more
information.
>>> from fakemod import urlpathimport
>>> urlpathimport.install()
>>> m=__import__('fakemod.a')
*** fullname='fakemod.a' initpath='fakemod' ***
>>>

Working on a test case.  At present I think it is impossible
for a 2.5 custom importer to choose *not* to import a
standard Python module by returning None from find_module,
because if it returns None the standard import is skipped.

gbrandl, I think it was your commit that added the
'continue' statement; what is the reasoning behind making
that optimisation?

Cheers,
Robin
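
To make the behaviour under discussion concrete, here is a minimal,
hypothetical PEP 302 path-hook importer whose find_module always returns
None; on 2.4.3 the builtin machinery then handled the import, while after
r46372 the sys.path entry is simply skipped:

import sys

class DecliningImporter(object):
    # Hypothetical importer: accepts every sys.path entry, loads nothing.
    def __init__(self, path):
        pass
    def find_module(self, fullname, path=None):
        # "I can't load this module" -- the case whose meaning changed.
        return None

sys.path_hooks.append(DecliningImporter)
sys.path_importer_cache.clear()
# Any import after this point exercises the code path in import.c
# that r46372 changed.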

--

Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 14:28

Message:
Logged In: YES 
user_id=849994

Guido agreed that the 2.4 behavior is to be regarded as a
bug:
http://mail.python.org/pipermail/python-dev/2006-May/065174.html

--

Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 13:27

Message:
Logged In: YES 
user_id=21627

I'd say it is a bug that Python 2.4 allows non-dotted module
names for __import__. Can you come up with a change in
behaviour for "regular" module names?

As for cascading: path importers do not cascade.  Per
sys.path item, there can be at most one path importer.  They
"cascade" only in the sense that the search continues with
other sys.path items if the module wasn't found in one
sys.path entry.  This cascading continues to work with 2.5.

Phillip, can you take a look?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1488934 ] file.write + closed pipe = no error

2006-08-09 Thread SourceForge.net
Bugs item #1488934, was opened at 2006-05-15 12:10
Message generated for change (Comment added) made by edemaine
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1488934&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Erik Demaine (edemaine)
Assigned to: A.M. Kuchling (akuchling)
Summary: file.write + closed pipe = no error

Initial Comment:
I am writing a Python script on Linux that gets called
via ssh (ssh hostname script.py) and I would like it to
know when its stdout gets closed because the ssh
connection gets killed.  I assumed that it would
suffice to write to stdout, and that I would get an
error if stdout was no longer connected to anything. 
This is not the case, however.  I believe it is because
of incorrect error checking in Objects/fileobject.c's
file_write.

Consider this example:

while True:
__print 'Hello'
__time.sleep (1)

If this program is run via ssh and then the ssh
connection dies, the program continues running forever
(or at least, over 10 hours).  No exceptions are thrown.

In contrast, this example does die as soon as the ssh
connection dies (within one second):

while True:
__os.write (1, 'Hello')
__time.sleep (1)

I claim that this is because os.write does proper error
checking, but file.write seems not to.  I was surprised
to find this intricacy in fwrite().  Consider the
attached C program, test.c.  (Warning: If you run it,
it will create a file /tmp/hello, and it will keep
running until you kill it.)  While the ssh connection
remains open, fwrite() reports a length of 6 bytes
written, ferror() reports no error, and errno remains
0.  Once the ssh connection dies, fwrite() still
reports a length of 6 bytes written (surprise!), but
ferror(stdout) reports an error, and errno changes to 5
(EIO).  So apparently one cannot tell from the return
value of fwrite() alone whether the write actually
succeeded; it seems necessary to call ferror() to
determine whether the write caused an error.

I think the only change necessary is on line 2443 of
file_write() in Objects/fileobject.c (in svn version
46003):

2441    n2 = fwrite(s, 1, n, f->f_fp);
2442    Py_END_ALLOW_THREADS
2443    if (n2 != n) {
2444        PyErr_SetFromErrno(PyExc_IOError);
2445        clearerr(f->f_fp);

I am not totally sure whether the "n2 != n" condition
should be changed to "n2 != n || ferror (f->f_fp)" or
simply "ferror (f->f_fp)", but I believe that the
condition should be changed to one of these
possibilities.  The current behavior is wrong.

Incidentally, you'll notice that the C code has to turn
off signal SIGPIPE (like Python does) in order to not
die right away.  However, I could not get Python to die
by re-enabling SIGPIPE.  I tried "signal.signal
(signal.SIGPIPE, signal.SIG_DFL)" and "signal.signal
(signal.SIGPIPE, lambda x, y: sys.exit ())" and neither
one caused death of the script when the ssh connection
died.  Perhaps I'm not using the signal module correctly?
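
For reference, a minimal sketch of the two attempts described above (my
reconstruction, not the attached test.c):

import signal, sys, time

# Attempt 1: restore the default handler so a broken pipe kills the process.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
# Attempt 2 (alternative): exit from a Python-level handler instead.
# signal.signal(signal.SIGPIPE, lambda signum, frame: sys.exit())

while True:
    print 'Hello'
    sys.stdout.flush()
    time.sleep(1)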

I am on Linux 2.6.11 on a two-CPU Intel Pentium 4, and
I am running the latest Subversion version of Python,
but my guess is that this error transcends most if not
all versions of Python.

--

>Comment By: Erik Demaine (edemaine)
Date: 2006-08-09 12:13

Message:
Logged In: YES 
user_id=265183

Just to clarify (as I reread your question): I'm killing the
ssh via UNIX (or Cygwin) 'kill' command, not via CTRL-C.  I
didn't try, but it may be that CTRL-C works fine.

--

Comment By: Erik Demaine (edemaine)
Date: 2006-07-02 08:35

Message:
Logged In: YES 
user_id=265183

A simple test case is this Python script (fleshed out from
previous example), also attached:

import sys, time
while True:
__print 'Hello'
__sys.stdout.flush ()
__time.sleep (1)

Save as blah.py on machine foo, run 'ssh foo python blah.py'
on machine bar--you will see 'Hello' every second--then, in
another shell on bar, kill the ssh process on bar.  blah.py
should still be running on foo.  ('foo' and 'bar' can
actually be the same machine.)

The example from the original bug report that uses
os.write() instead of print was an example that *does* work.


--

Comment By: A.M. Kuchling (akuchling)
Date: 2006-06-03 16:16

Message:
Logged In: YES 
user_id=11375

I agree with your analysis, and think your suggested fixes are correct.

However, I'm unable to construct a small test case that exercises this bug.  I 
can't even replicate the problem with SSH; when I run a remote script with 
SSH and then kill SSH with Ctrl-C, the write() gets a -1.  Are you terminating 
SSH in some other way? 

[ python-Feature Requests-1534942 ] Print identical floats consistently

2006-08-09 Thread SourceForge.net
Feature Requests item #1534942, was opened at 2006-08-04 23:19
Message generated for change (Comment added) made by josiahcarlson
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1534942&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.6
Status: Open
Resolution: None
Priority: 5
Submitted By: Marc W. Abel (gihon)
Assigned to: Nobody/Anonymous (nobody)
Summary: Print identical floats consistently

Initial Comment:
Hello again and thank you,

This is a rewrite of now-closed bug #1534769.

As you know, 

>>> print .1
>>> print (.1,)

give different results because the __str__ call from
print becomes a __repr__ call on the tuple, and it
stays a __repr__ beneath that point in any recursion. 
From the previous discussion, we need behavior like
this so that strings are quoted inside tuples.

I suggest that print use a third builtin that is
neither __str__ nor __repr__.  The name isn't
important, but suppose we call it __strep__ in this
feature request.  __strep__ would pass __strep__ down
in the recursion, printing floats with __str__ and
everything else with __repr__.

This would then

>>> print .1

and

>>> print (.1,)

with the same precision.

Marc
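
A rough sketch of the proposed dispatch, written here as a plain helper
function rather than a new special method (the name and the exact rules
are only illustrative):

def strep(obj):
    # Floats use str(), tuples recurse, everything else uses repr()
    # so that strings stay quoted inside containers.
    if isinstance(obj, float):
        return str(obj)
    if isinstance(obj, tuple):
        parts = [strep(x) for x in obj]
        if len(parts) == 1:
            return '(%s,)' % parts[0]
        return '(%s)' % ', '.join(parts)
    return repr(obj)

print strep(.1)     # 0.1
print strep((.1,))  # (0.1,)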


--

Comment By: Josiah Carlson (josiahcarlson)
Date: 2006-08-09 09:14

Message:
Logged In: YES 
user_id=341410

Please note that 'print non_string' is a convenience.  Its
output is neither part of the language spec, nor is the
propagation of str/repr calls.

If you want to control how items are formatted during print,
you should use the built-in string formatting mechanisms. 
The standard 'print "%.1f" % (.1,)' and 'print
"%(x).1f" % {'x': .1}' work with all Pythons, and there is an
updated templating mechanism available in more recent Python
versions.

I'm not the last word on this, but I don't see an actual
use-case that isn't satisfied by using built-in
string-formatting.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1534942&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1537601 ] Installation on Windows Longhorn

2006-08-09 Thread SourceForge.net
Bugs item #1537601, was opened at 2006-08-10 00:42
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537601&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Build
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: O.R.Senthil Kumaran (orsenthil)
Assigned to: Nobody/Anonymous (nobody)
Summary: Installation on Windows Longhorn

Initial Comment:
Windows Longhorn is the next version of Microsoft
Windows.  We have beta builds of Longhorn in our labs.
I tried installing Python 2.4.3 on Windows Longhorn;
the installation halts at a setup dialog box
which reads:
"Please wait while the installer finishes determining
your disk space requirements."
I observed this on Python 2.4.3 as well as Python 2.5b3.

ActivePython 2.4, however, installs fine.

Please refer to the attached screenshots.

Thanks,
Senthil


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537601&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Feature Requests-1534942 ] Print identical floats consistently

2006-08-09 Thread SourceForge.net
Feature Requests item #1534942, was opened at 2006-08-05 06:19
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1534942&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: Python 2.6
Status: Open
Resolution: None
Priority: 5
Submitted By: Marc W. Abel (gihon)
Assigned to: Nobody/Anonymous (nobody)
Summary: Print identical floats consistently

Initial Comment:
Hello again and thank you,

This is a rewrite of now-closed bug #1534769.

As you know, 

>>> print .1
>>> print (.1,)

give different results because the __str__ call from
print becomes a __repr__ call on the tuple, and it
stays a __repr__ beneath that point in any recursion. 
From the previous discussion, we need behavior like
this so that strings are quoted inside tuples.

I suggest that print use a third builtin that is
neither __str__ nor __repr__.  The name isn't
important, but suppose we call it __strep__ in this
feature request.  __strep__ would pass __strep__ down
in the recursion, printing floats with __str__ and
everything else with __repr__.

This would then

>>> print .1

and

>>> print (.1,)

with the same precision.

Marc


--

>Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 20:35

Message:
Logged In: YES 
user_id=849994

I recommend closing this. Introducing yet another to-string
magic function is not going to make things simpler, and who
knows if the str/repr distinction is going to make it into
3.0 anyway.

--

Comment By: Josiah Carlson (josiahcarlson)
Date: 2006-08-09 16:14

Message:
Logged In: YES 
user_id=341410

Please note that 'print non_string' is a convenience.  Its
output is neither part of the language spec, nor is the
propagation of str/repr calls.

If you want to control how items are formatted during print,
you should use the built-in string formatting mechanisms. 
The standard 'print "%.1f" % (.1,)' and 'print
"%(x).1f" % {'x': .1}' work with all Pythons, and there is an
updated templating mechanism available in more recent Python
versions.

I'm not the last word on this, but I don't see an actual
use-case that isn't satisfied by using built-in
string-formatting.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1534942&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1533491 ] C/API sec 10 is clipped

2006-08-09 Thread SourceForge.net
Bugs item #1533491, was opened at 2006-08-02 22:21
Message generated for change (Comment added) made by gbrandl
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1533491&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Documentation
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Jim Jewett (jimjjewett)
Assigned to: Nobody/Anonymous (nobody)
Summary: C/API sec 10 is clipped

Initial Comment:
As of 2.5b2, section 10 of the C/API reference manual 
seems clipped.  Sections 10.4, 10.5, and 10.6 are at 
best placeholders, and 10.8 isn't even that.  (It 
looks like they could be either on their own as 
sections, or inlined to an earlier section, and both 
places are TBD, but the section doesn't make this as 
obvious.)

--

>Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 20:40

Message:
Logged In: YES 
user_id=849994

Note that "as of 2.5b2" is misleading. It seems like the
mentioned sections have never been written at all.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1533491&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1537685 ] import on cElementTree on Windows

2006-08-09 Thread SourceForge.net
Bugs item #1537685, was opened at 2006-08-09 17:13
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537685&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Thomas B Hickey (thomasbhickey)
Assigned to: Nobody/Anonymous (nobody)
Summary: import on cElementTree on Windows

Initial Comment:
run this one-line file on Windows 2000:

import xml.etree.cElementTree

It generates the message:
usage: copy.py inputFile outputFile

If you give it a couple of file names it at least reads
in the first.

--Th

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537685&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1112549 ] cgi.FieldStorage memory usage can spike in line-oriented ops

2006-08-09 Thread SourceForge.net
Bugs item #1112549, was opened at 2005-01-30 08:40
Message generated for change (Comment added) made by gvanrossum
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1112549&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 8
Submitted By: Chris McDonough (chrism)
Assigned to: Nobody/Anonymous (nobody)
Summary: cgi.FieldStorage memory usage can spike in line-oriented ops

Initial Comment:
Various parts of cgi.FieldStorage call its
"read_lines_to_outerboundary", "read_lines" and
"skip_lines" methods.These methods use the
"readline" method of the file object that represents an
input stream.  The input stream is typically data
supplied by an untrusted source (such as a user
uploading a file from a web browser).  The input data
is not required by the RFC 822/1521/1522/1867
specifications to contain any newline characters.  For
example, it is within the bounds of the specification
to supply a multipart/form-data input stream with a
"file-data" part that consists of a 2GB string composed
entirely of "x" characters (which happens to be
something I did that led me to noticing this bug).

The simplest fix is to make use of the "size" argument
of the readline method of the file object where it is
used within all parts of FieldStorage that make use of
it.  A patch against the Python 2.3.4 cgi.py module
that does this is attached.
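
The attached patch is not reproduced here, but the idea can be sketched as
follows (the helper name and the 64KB bound are illustrative, not taken
from the patch):

def copy_body_bounded(stream, out, bufsize=1 << 16):
    # Read the untrusted body in bounded chunks: readline(size) returns
    # at most `size` bytes even if the input contains no newline at all.
    while 1:
        chunk = stream.readline(bufsize)
        if not chunk:
            break
        out.write(chunk)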

--

>Comment By: Guido van Rossum (gvanrossum)
Date: 2006-08-09 17:58

Message:
Logged In: YES 
user_id=6380

+1.

minor nits:

in the main patch: instead of

+    if line.endswith('\n'):
+        last_line_lfend = True
+    else:
+        last_line_lfend = False

you can just use

  last_line_lfend = line.endswith('\n')

in the unit test: instead of

  if type(a) != type(0):

use

  if not isinstance(a, int):

so that if some future release changes file.closed to return
a bool (as it should :-) this test won't break.

Is there a reason why you're not patching the fp.readline()
call in parse_multipart()?  It would seem to have the same
issue (even if it isn't used in Zope :-).

--

Comment By: Chris McDonough (chrism)
Date: 2006-08-07 13:51

Message:
Logged In: YES 
user_id=32974

Yup, test/output/test_cgi did need fixing.  Apologies, I did not understand
the test regime.  A new patch file named
test_output_test_cgi-svn-50879.patch has been uploaded with the required
change.  regrtest.py now passes.

As far as verify vs. vereq, the test_cgi module uses verify all over the
place.  I'm apt not to change all of those places, so changing it in just
that one place is probably ineffective.  Same for type comparison vs.
isinstance.  I'm trying to make the smallest change possible as opposed to
refactoring the test module.

I've uploaded a patch which contains a) the fix to cgi.py, b) the fix to
test_cgi.py, and c) the fix to output/test_cgi.

The stylistic change wrt last_line_lfend is fine with me, but I'll leave
that judgment to someone else.

I'm not sure how to ensure the fix doesn't create other problems beyond
what has already been done: proving it still passes the tests it was
subjected to in the "old" test suite and adding new tests that prove it no
longer has the denial of service problem.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-08-07 01:50

Message:
Logged In: YES 
user_id=33168

Doesn't this require a change to test/output/test_cgi or
something like that?  Does this test pass when run under
regrtest.py ?

The verify(x == y) should be vereq(x, y).
type(a) != type(0), should be not isinstance(a, int).

The last chunk of the patch in cgi.py should be:
last_line_lfend = line.endswith('\n')

rather than the 4 lines of if/else.

I don't know if this patch really addresses the problem or
creates other problems.  However, if someone more
knowledgeable is confident about this patch, I'm fine with
this going in.  At this point, it might be better to wait
for 2.5.1 though.


--

Comment By: Chris McDonough (chrism)
Date: 2006-07-27 17:42

Message:
Logged In: YES 
user_id=32974

The files I've just uploaded are revisions to the cgi and test_cgi modules
for the current state of the SVN trunk.  If someone could apply these, it
would be appreciated, or give me access and I'll be happy to.

FTR, this is a bug which exposes systems which use the cgi.FieldStorage class 
(most Python web frameworks do) to a denial of service potential.

--

Comment B

[ python-Feature Requests-1537721 ] csv module: add header row to DictWriter

2006-08-09 Thread SourceForge.net
Feature Requests item #1537721, was opened at 2006-08-10 10:20
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1537721&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: ed_abraham (ed_abraham)
Assigned to: Nobody/Anonymous (nobody)
Summary: csv module: add header row to DictWriter

Initial Comment:
I use the DictWriter class from the csv module, and
have to manually write the header row. A mindless chore
which I would like to see eliminated. Can we have a
writeheader method added to the class? Something like
the following:

def writeheader(self, headernames = {}):
    """Write a header row"""
    if not headernames:
        headernames = dict(zip(self.fieldnames,
                               self.fieldnames))
    self.writerow(headernames)

This would let you either use the fieldnames directly,
or supply your own pretty header names. 

Would be nice to have another keyword argument to
DictWriter, 'header = False'. If header was true, then
the __init__ method could call writeheader(). 


At the moment I have to write things like

fields = ['a','b','c']
w = csv.DictWriter(fid, fields)
w.writerow(dict(zip(fields, fields)))
for row in rows:
    w.writerow(row)

The proposed changes would let me write the simpler

w = csv.DictWriter(fid, ['a','b','c'], header = True)
for row in rows:
    w.writerow(row)


A problem is that including a new keyword argument
would break code which used position to fill the
keyword arguments and to supply arguments through *args
to the writer class.
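
Until something like this lands, the same effect can be sketched with a
small subclass (illustrative only; writeheader is not part of the csv
module here):

import csv

class HeaderDictWriter(csv.DictWriter):
    def writeheader(self, headernames=None):
        # Write a header row, defaulting to the field names themselves.
        if not headernames:
            headernames = dict(zip(self.fieldnames, self.fieldnames))
        self.writerow(headernames)

fid = open('out.csv', 'wb')
w = HeaderDictWriter(fid, ['a', 'b', 'c'])
w.writeheader()
for row in [{'a': 1, 'b': 2, 'c': 3}]:
    w.writerow(row)
fid.close()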

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1537721&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1112549 ] cgi.FieldStorage memory usage can spike in line-oriented ops

2006-08-09 Thread SourceForge.net
Bugs item #1112549, was opened at 2005-01-30 08:40
Message generated for change (Comment added) made by gvanrossum
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1112549&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 8
Submitted By: Chris McDonough (chrism)
Assigned to: Nobody/Anonymous (nobody)
Summary: cgi.FieldStorage memory usage can spike in line-oriented ops

Initial Comment:
Various parts of cgi.FieldStorage call its
"read_lines_to_outerboundary", "read_lines" and
"skip_lines" methods.These methods use the
"readline" method of the file object that represents an
input stream.  The input stream is typically data
supplied by an untrusted source (such as a user
uploading a file from a web browser).  The input data
is not required by the RFC 822/1521/1522/1867
specifications to contain any newline characters.  For
example, it is within the bounds of the specification
to supply a multipart/form-data input stream with a
"file-data" part that consists of a 2GB string composed
entirely of "x" characters (which happens to be
something I did that led me to noticing this bug).

The simplest fix is to make use of the "size" argument
of the readline method of the file object where it is
used within all parts of FieldStorage that make use of
it.  A patch against the Python 2.3.4 cgi.py module
that does this is attached.

--

>Comment By: Guido van Rossum (gvanrossum)
Date: 2006-08-09 18:23

Message:
Logged In: YES 
user_id=6380

BTW it would be better if all patches were in a single file
-- then you can delete the older patches (if SF lets you do
that).

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2006-08-09 17:58

Message:
Logged In: YES 
user_id=6380

+1.

minor nits:

in the main patch: instead of

+    if line.endswith('\n'):
+        last_line_lfend = True
+    else:
+        last_line_lfend = False

you can just use

  last_line_lfend = line.endswith('\n')

in the unit test: instead of

  if type(a) != type(0):

use

  if not isinstance(a, int):

so that if some future release changes file.closed to return
a bool (as it should :-) this test won't break.

Is there a reason why you're not patching the fp.readline()
call in parse_multipart()?  It would seem to have the same
issue (even if it isn't used in Zope :-).

--

Comment By: Chris McDonough (chrism)
Date: 2006-08-07 13:51

Message:
Logged In: YES 
user_id=32974

Yup, test/output/test_cgi did need fixing.  Apologies, I did not understand
the test regime.  A new patch file named
test_output_test_cgi-svn-50879.patch has been uploaded with the required
change.  regrtest.py now passes.

As far as verify vs. vereq, the test_cgi module uses verify all over the
place.  I'm apt not to change all of those places, so changing it in just
that one place is probably ineffective.  Same for type comparison vs.
isinstance.  I'm trying to make the smallest change possible as opposed to
refactoring the test module.

I've uploaded a patch which contains a) the fix to cgi.py, b) the fix to
test_cgi.py, and c) the fix to output/test_cgi.

The stylistic change wrt last_line_lfend is fine with me, but I'll leave
that judgment to someone else.

I'm not sure how to ensure the fix doesn't create other problems beyond
what has already been done: proving it still passes the tests it was
subjected to in the "old" test suite and adding new tests that prove it no
longer has the denial of service problem.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-08-07 01:50

Message:
Logged In: YES 
user_id=33168

Doesn't this require a change to test/output/test_cgi or
something like that?  Does this test pass when run under
regrtest.py ?

The verify(x == y) should be vereq(x, y).
type(a) != type(0), should be not isinstance(a, int).

The last chunk of the patch in cgi.py should be:
last_line_lfend = line.endswith('\n')

rather than the 4 lines of if/else.

I don't know if this patch really addresses the problem or
creates other problems.  However, if someone more
knowledgeable is confident about this patch, I'm fine with
this going in.  At this point, it might be better to wait
for 2.5.1 though.


--

Comment By: Chris McDonough (chrism)
Date: 2006-07-27 17:42

Message:
Logged In: YES 
user_id=32974

The files I've just uploaded are revisions to the cgi and test_cgi modules
for the current state of the SVN trunk.  If someone could ap

[ python-Bugs-1537167 ] 2nd issue with need for speed patch

2006-08-09 Thread SourceForge.net
Bugs item #1537167, was opened at 2006-08-09 08:01
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537167&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Robin Bryce (robinbryce2)
Assigned to: Phillip J. Eby (pje)
Summary: 2nd issue with need for speed patch

Initial Comment:
This is not a duplicate of the "release manager
pronouncement on 302 Fix needed" issue raised on pydev.

If a custom importer is present, import.c skips the
builtin import machinery if the find_module method of
that importer returns None.  For Python 2.4.3, if
find_module returns None, the normal builtin machinery
gets a look-in.  The relevant change was the addition of
a continue statement in svn commit r46372 (at around
line 1283 of import.c on the trunk).

I don't understand, in the face of this change, how PEP
302 importers are expected to cascade.  Returning None
from find_module is the way an importer says "no, I
can't load this module, but I can't say for certain this
means ImportError", isn't it?

One (unintended?) consequence of this change is the
following corner case:

As __import__ allows non-dotted module names,
__import__('fakemod.a/b') *will* succeed on Python
2.4.3 provided b is a directory under the package a
that contains an __init__.py.  In Python 2.5b3 this fails.

I've attached a detailed repro case of this particular
corner case.




--

>Comment By: Martin v. Löwis (loewis)
Date: 2006-08-10 01:33

Message:
Logged In: YES 
user_id=21627

The patch is originally mine; it does not have to do much
with the need-for-speed sprint. The rationale is to reduce
the number of stat/open calls when loading a module and the
directory doesn't even exist (e.g. for the first sys.path
entry, which is python25.zip). It originally put True/False
on sys.path_importer_cache.

Phillip Eby changed it to put the NullImporter on
path_importer_cache, and not fall back to the builtin import
if the path importer returns None.

It never was the intention of the entire machinery that such
a fallback is implemented. Instead, it always should have
continued with the next sys.path entry instead.

If a path importer claims responsibility for a sys.path entry,
and then finds it cannot fulfill that responsibility, and
wants to fall back to the traditional file-based lookup, it
needs to implement that itself.  I would advise against doing
so, though, and instead make the path importer reject
responsibility for the sys.path entry in the first place if
that entry really is an on-disk directory.
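
A minimal sketch of that advice (the importer name is hypothetical, not
code from this tracker): reject the sys.path entry in the hook itself
rather than returning None from find_module:

import os, sys

class NonDirectoryImporter(object):
    # Hypothetical path hook that declines ordinary on-disk directories.
    def __init__(self, path):
        if os.path.isdir(path):
            # Raising ImportError here means "not my entry"; the builtin
            # file-based import then handles it, no fallback needed.
            raise ImportError(path)
    def find_module(self, fullname, path=None):
        return None

sys.path_hooks.append(NonDirectoryImporter)
sys.path_importer_cache.clear()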

--

Comment By: Robin Bryce (robinbryce2)
Date: 2006-08-09 17:50

Message:
Logged In: YES 
user_id=1547259

I've tried the attached test case patch against
release24-maint and it passes.

--

Comment By: Robin Bryce (robinbryce2)
Date: 2006-08-09 16:33

Message:
Logged In: YES 
user_id=1547259

The 'illegal' module name is a red herring.  The problem
exists with legal paths also:

Python 2.5b3 (trunk:51136, Aug  9 2006, 15:17:14)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more
information.
>>> from fakemod import urlpathimport
>>> urlpathimport.install()
>>> m=__import__('fakemod.a')
*** fullname='fakemod.a' initpath='fakemod' ***
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named a
>>>
[EMAIL PROTECTED]:~/devel/blackmile$ python
Python 2.4.3 (#2, Apr 27 2006, 14:43:58)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more
information.
>>> from fakemod import urlpathimport
>>> urlpathimport.install()
>>> m=__import__('fakemod.a')
*** fullname='fakemod.a' initpath='fakemod' ***
>>>

Working on a test case.  At present I think it is impossible
for a 2.5 custom importer to choose *not* to import a
standard Python module by returning None from find_module,
because if it returns None the standard import is skipped.

gbrandl, I think it was your commit that added the
'continue' statement; what is the reasoning behind making
that optimisation?

Cheers,
Robin

--

Comment By: Georg Brandl (gbrandl)
Date: 2006-08-09 15:28

Message:
Logged In: YES 
user_id=849994

Guido agreed that the 2.4 behavior is to be regarded as a
bug:
http://mail.python.org/pipermail/python-dev/2006-May/065174.html

--

Comment By: Martin v. Löwis (loewis)
Date: 2006-08-09 14:27

Message:
Logged In: YES 
user_

[ python-Bugs-1537601 ] Installation on Windows Longhorn

2006-08-09 Thread SourceForge.net
Bugs item #1537601, was opened at 2006-08-09 21:12
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537601&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Build
Group: Python 2.4
>Status: Closed
>Resolution: Duplicate
Priority: 5
Submitted By: O.R.Senthil Kumaran (orsenthil)
Assigned to: Nobody/Anonymous (nobody)
Summary: Installation on Windows Longhorn

Initial Comment:
Windows Longhorn is the next version of Microsoft
Windows.  We have beta builds of Longhorn in our labs.
I tried installing Python 2.4.3 on Windows Longhorn;
the installation halts at a setup dialog box
which reads:
"Please wait while the installer finishes determining
your disk space requirements."
I observed this on Python 2.4.3 as well as Python 2.5b3.

ActivePython 2.4, however, installs fine.

Please refer to the attached screenshots.

Thanks,
Senthil


--

>Comment By: Martin v. Löwis (loewis)
Date: 2006-08-10 01:37

Message:
Logged In: YES 
user_id=21627

This is a duplicate of http://python.org/sf/1512604
Please comment there if you have further remarks.

If you are a beta tester of Vista/Longhorn Server, please
report this as a bug to Microsoft; I believe it is a bug in
Installer 4.0.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1537601&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1224621 ] tokenize module does not detect inconsistent dedents

2006-08-09 Thread SourceForge.net
Bugs item #1224621, was opened at 2005-06-21 02:10
Message generated for change (Comment added) made by kbk
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1224621&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
>Priority: 7
Submitted By: Danny Yoo (dyoo)
>Assigned to: Raymond Hettinger (rhettinger)
Summary: tokenize module does not detect inconsistent dedents

Initial Comment:
The attached code snippet 'testcase.py' should produce an 
IndentationError, but does not.  The code in tokenize.py is too 
trusting, and needs to add a check against bad indentation as it 
yields DEDENT tokens.

I'm including a diff to tokenize.py that should at least raise an 
exception on bad indentation like this.

Just in case, I'm including testcase.py here too:
--
import tokenize
from StringIO import StringIO
sampleBadText = """
def foo():
    bar
  baz
"""
print list(tokenize.generate_tokens(
StringIO(sampleBadText).readline))

--

>Comment By: Kurt B. Kaiser (kbk)
Date: 2006-08-09 21:40

Message:
Logged In: YES 
user_id=149084

Tokenize Rev 39046 21Jun05 breaks tabnanny.

tabnanny doesn't handle the IndentationError exception
when tokenize detects an inconsistent dedent.

I patched up ScriptBinding.py in IDLE.  The
IndentationError probably should pass the same parameters as
TokenError, and tabnanny should catch it.
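
A sketch of the kind of handling described (the wrapper function is
hypothetical; only TokenError, IndentationError and tabnanny's NannyNag
come from the modules involved):

import tokenize
import tabnanny

def check_indentation(readline):
    # Treat the IndentationError now raised by tokenize the same way
    # as its existing TokenError.
    try:
        tabnanny.process_tokens(tokenize.generate_tokens(readline))
    except tokenize.TokenError, msg:
        return "Token error: %s" % (msg,)
    except IndentationError, msg:
        return "Indentation error: %s" % (msg,)
    except tabnanny.NannyNag, nag:
        return "Whitespace problem on line %d" % nag.get_lineno()
    return None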

--

Comment By: Armin Rigo (arigo)
Date: 2005-09-02 08:40

Message:
Logged In: YES 
user_id=4771

Here is a proposed patch.  It relaxes the dedent policy a
bit.  It assumes that the first line may already have some
initial indentation, as is the case when tokenizing from the
middle of a file (as inspect.getsource() does).

It should also be back-ported to 2.4, given that the
previous patch was.  For 2.4, only the non-test part of the
patch applies cleanly; I suggest to ignore the test part and
just apply it, given that there are much more tests in 2.5
for inspect.getsource() anyway.

Since the whole issue of inspect.getsource() is muddy anyway, I
will go ahead and check this patch in unless someone spots a
problem.  For now, the previously-applied patch makes parts
of PyPy break with an uncaught IndentationError.
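
To illustrate the case being relaxed (example of mine, not from the
patch): tokenizing source whose first line is already indented, as
inspect.getsource() yields for a method.  With the stricter check this
raises IndentationError; with the proposed patch it tokenizes cleanly:

import tokenize
from StringIO import StringIO

indented_source = (
    "    def method(self):\n"
    "        return 1\n"
)
for tok in tokenize.generate_tokens(StringIO(indented_source).readline):
    print tok[0], repr(tok[1])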

--

Comment By: Armin Rigo (arigo)
Date: 2005-09-02 08:10

Message:
Logged In: YES 
user_id=4771

Reopening this bug report: this might fix the problem at
hand, but it breaks inspect.getsource() on cases where it
used to work.  See attached example.

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2005-06-21 03:54

Message:
Logged In: YES 
user_id=80475

Fixed.  
See Lib/tokenize.py 1.38 and 1.36.4.1


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1224621&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com