[ python-Bugs-1174606 ] Reading /dev/zero causes SystemError

2005-04-06 Thread SourceForge.net
Bugs item #1174606, was opened at 2005-04-01 06:48
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Adam Olsen (rhamphoryncus)
Assigned to: Nobody/Anonymous (nobody)
Summary: Reading /dev/zero causes SystemError

Initial Comment:
$ python -c 'open("/dev/zero").read()'
Traceback (most recent call last):
  File "", line 1, in ?
SystemError: ../Objects/stringobject.c:3316: bad
argument to internal function

Compare with these two variants:

$ python -c 'open("/dev/zero").read(2**31-1)'
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError

$ python -c 'open("/dev/zero").read(2**31)'
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: long int too large to convert to int

The unsized read should produce either MemoryError or
OverflowError instead of SystemError.

Tested with Python 2.2, 2.3, and 2.4.

--

>Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 09:06

Message:
Logged In: YES 
user_id=21627

The surprising aspect is that the memory *is* being used.
Python allocates 2GB of memory, and then passes this to
read(2) (through stdio) to fill it with the contents of
/dev/zero. This should cause a write operation to the memory
pages, which in turn should cause them to consume actual
memory. For some reason, this doesn't happen.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 08:52

Message:
Logged In: YES 
user_id=81797

Linux can do a very fast allocation if it has swap
available.  It reserves space, but does not actually assign
the memory until you try to use it.  In my case, I have 1GB
of RAM, around 700MB free, and another 2GB in swap.  So, I
have plenty unless I use it.  In C I can malloc 1GB and
unless I write every page in that block the system doesn't
really give the pages to the process.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 08:40

Message:
Logged In: YES 
user_id=21627

The problem is different. Instead, _PyString_Resize
complains that the new buffersize of the string is negative.
This in turn happens because the string manages to get
larger than 2GB, which in turn happens because buffersize is
size_t, yet _PyString_Resize expects int.

I don't know how Linux manages to allocate such a large
string without thrashing.

There is a minor confusion with stat() as well:
new_buffersize tries to find out how many bytes are left to
the end of the file. In the case of /dev/zero, both fstat
and lseek are "lying" by returning 0. As lseek returns 0,
ftell is invoked and returns non-zero. Then, newbuffer does
not trust the values, and just adds BIGCHUNK.
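
As a small illustration of that behaviour (this session is not part of the original report), neither fstat() nor lseek() gives a usable size hint for /dev/zero:

import os

fd = os.open("/dev/zero", os.O_RDONLY)
print(os.fstat(fd).st_size)    # 0 -- fstat reports a zero size
print(os.lseek(fd, 0, 2))      # 0 -- seeking to the end (SEEK_END) also reports 0
os.close(fd)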

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 05:39

Message:
Logged In: YES 
user_id=81797

I am able to reproduce this on a Fedora Core 3 Linux system:

>>> fp = open('/dev/zero', 'rb')
>>> d = fp.read()
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError
>>> print os.stat('/dev/zero').st_size
0

What about only trusting st_size if the file is a regular
file, not a directory or other type of special file?

Sean
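
A minimal sketch of that suggestion (illustration only, not the actual fileobject.c logic): trust st_size as a read-ahead hint only for regular files, and fall back to incremental reads otherwise.

import os
import stat

def size_hint(fileobj):
    # Trust st_size only when the descriptor refers to a regular file;
    # for devices, FIFOs and the like return None so the caller grows
    # its buffer incrementally instead of preallocating a huge one.
    st = os.fstat(fileobj.fileno())
    if stat.S_ISREG(st.st_mode):
        return st.st_size
    return None

size_hint(open("/dev/zero"))   # None -- character device, so no hint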

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-02 14:31

Message:
Logged In: YES 
user_id=4771

os.stat() doesn't always give consistent results on dev files.  On my machine 
for some reason os.stat('/dev/null') appears to be random (and extremely 
large).  I suspect that on the OP's machine os.stat('/dev/zero') is not 0 
either, but a random number that turns out to be negative, hence a "bad 
argument" SystemError.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-01 23:42

Message:
Logged In: YES 
user_id=21627

I think it should trust the stat result, and then find that
it cannot allocate that much memory.

Actually, os.stat("/dev/zero").st_size is 0, so something
else must be going on.

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-01 11:58

Message:
Logged In: YES 
user_id=4771

I think that file.read() with no argument needs to be more conservative.  
Currently it asks and trusts the stat() to get the file size, but this can lead 
to just plain wrong results on special devices.  (I had the problem that 
open('/dev/null').read() would give a MemoryError!)

We can argue whether plain read() on special devices is a good idea or not, but 
I guess that not blindly trusting stat() if it returns huge values could be a 
good idea.


[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-06 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-06 03:03
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

>Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 09:11

Message:
Logged In: YES 
user_id=21627

To add robustness, it would be possible to catch read errors
from _urandomfd, and try reopening it if it got somehow closed.
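
A rough sketch of that idea (not the actual os module code; the function name and the module-level cache are invented for illustration): keep a cached descriptor, and if a read fails, reopen /dev/urandom once and retry.

import os

_urandom_fd = None

def robust_urandom(n):
    # Treat a read error as a sign that the cached fd was closed
    # (e.g. by a daemonizing parent) and reopen it once.
    global _urandom_fd
    for attempt in range(2):
        if _urandom_fd is None:
            _urandom_fd = os.open("/dev/urandom", os.O_RDONLY)
        try:
            return os.read(_urandom_fd, n)
        except OSError:
            _urandom_fd = None   # stale descriptor; retry with a fresh one
    raise OSError("unable to read from /dev/urandom")

As a later follow-up in this thread points out, this only helps when the stale descriptor actually raises an error; a descriptor that was silently reassigned to another file would not be caught.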

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 05:11

Message:
Logged In: YES 
user_id=81797

Just providing some feedback:

I'm able to reproduce this.  Importing random will cause
this file descriptor to be opened.  Opening urandom on every
call could lead to unacceptable syscall overhead for some. 
Perhaps there should be a "urandomcleanup" method that
closes the file descriptor, and then random could get the
bytes from urandom(), and clean up after itself?

Personally, I only clean up the file descriptors I have
allocated when I fork a new process.  On the one hand I
agree with you about sucking up a fd in the standard
library, but on the other hand I'm thinking that you just
shouldn't be closing file descriptors for stuff you'll be
needing.  That's my two cents on this bug.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-06 03:06

Message:
Logged In: YES 
user_id=110477

There are many modules that have a dependency on random, for
instance os.tempnam(), and a program could well
inadvertently use it before closing file descriptors.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470



[ python-Bugs-1174606 ] Reading /dev/zero causes SystemError

2005-04-06 Thread SourceForge.net
Bugs item #1174606, was opened at 2005-04-01 04:48
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1174606&group_id=5470

Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Adam Olsen (rhamphoryncus)
Assigned to: Nobody/Anonymous (nobody)
Summary: Reading /dev/zero causes SystemError

Initial Comment:
$ python -c 'open("/dev/zero").read()'
Traceback (most recent call last):
  File "", line 1, in ?
SystemError: ../Objects/stringobject.c:3316: bad
argument to internal function

Compare with these two variants:

$ python -c 'open("/dev/zero").read(2**31-1)'
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError

$ python -c 'open("/dev/zero").read(2**31)'
Traceback (most recent call last):
  File "", line 1, in ?
OverflowError: long int too large to convert to int

The unsized read should produce either MemoryError or
OverflowError instead of SystemError.

Tested with Python 2.2, 2.3, and 2.4.

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 07:17

Message:
Logged In: YES 
user_id=81797

I'm quite sure that the 2GB is not getting filled when doing
this.  After running the first command, and checking
/proc/meminfo, I see that only 46MB is shown as free, which
means that there was no more than this amount of RAM consumed.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 07:06

Message:
Logged In: YES 
user_id=21627

The surprising aspect is that the memory *is* being used.
Python allocates 2GB of memory, and then passes this to
read(2) (through stdio) to fill it with the contents of
/dev/zero. This should cause a write operation to the memory
pages, which in turn should cause them to consume actual
memory. For some reason, this doesn't happen.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 06:52

Message:
Logged In: YES 
user_id=81797

Linux can do a very fast allocation if it has swap
available.  It reserves space, but does not actually assign
the memory until you try to use it.  In my case, I have 1GB
of RAM, around 700MB free, and another 2GB in swap.  So, I
have plenty unless I use it.  In C I can malloc 1GB and
unless I write every page in that block the system doesn't
really give the pages to the process.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 06:40

Message:
Logged In: YES 
user_id=21627

The problem is different. Instead, _PyString_Resize
complains that the new buffersize of the string is negative.
This in turn happens because the string manages to get
larger than 2GB, which in turn happens because buffersize is
size_t, yet _PyString_Resize expects int.

I don't know how Linux manages to allocate such a large
string without thrashing.

There is a minor confusion with stat() as well:
new_buffersize tries to find out how many bytes are left to
the end of the file. In the case of /dev/zero, both fstat
and lseek are "lying" by returning 0. As lseek returns 0,
ftell is invoked and returns non-zero. Then, newbuffer does
not trust the values, and just adds BIGCHUNK.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:39

Message:
Logged In: YES 
user_id=81797

I am able to reproduce this on a Fedora Core 3 Linux system:

>>> fp = open('/dev/zero', 'rb')
>>> d = fp.read()
Traceback (most recent call last):
  File "", line 1, in ?
MemoryError
>>> print os.stat('/dev/zero').st_size
0

What about only trusting st_size if the file is a regular
file, not a directory or other type of special file?

Sean

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-02 12:31

Message:
Logged In: YES 
user_id=4771

os.stat() doesn't always give consistent results on dev files.  On my machine 
for some reason os.stat('/dev/null') appears to be random (and extremely 
large).  I suspect that on the OP's machine os.stat('/dev/zero') is not 0 
either, but a random number that turns out to be negative, hence a "bad 
argument" SystemError.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-01 21:42

Message:
Logged In: YES 
user_id=21627

I think it should trust the stat result, and then find that
it cannot allocate that much memory.

Actually, os.stat("/dev/zero").st_size is 0, so something
else must be going on.

--

Comment By: Armin Rigo (arigo)
Date: 2005-04-01 09:58

Message:
Logged In: YES 
user_id=4771

I think that file.read() with no argument needs to be more conservative.  
Currently it asks and trusts the stat() to get the file size, but this can lead 
to just plain wrong results on special devices.  (I had the problem that 
open('/dev/null').read() would give a MemoryError!)

We can argue whether plain read() on special devices is a good idea or not, but 
I guess that not blindly trusting stat() if it returns huge values could be a 
good idea.

[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-06 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-06 01:03
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 07:20

Message:
Logged In: YES 
user_id=81797

Yeah, I was thinking the same thing.  It doesn't address the
consumed file handle, but it does address the "robustness"
issue.  It complicates the code, but should work.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 07:11

Message:
Logged In: YES 
user_id=21627

To add robustness, it would be possible to catch read errors
from _urandomfd, and try reopening it if it got somehow closed.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:11

Message:
Logged In: YES 
user_id=81797

Just providing some feedback:

I'm able to reproduce this.  Importing random will cause
this file descriptor to be opened.  Opening urandom on every
call could lead to unacceptable syscall overhead for some. 
Perhaps there should be a "urandomcleanup" method that
closes the file descriptor, and then random could get the
bytes from urandom(), and clean up after itself?

Personally, I only clean up the file descriptors I have
allocated when I fork a new process.  On the one hand I
agree with you about sucking up a fd in the standard
library, but on the other hand I'm thinking that you just
shouldn't be closing file descriptors for stuff you'll be
needing.  That's my two cents on this bug.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-06 01:06

Message:
Logged In: YES 
user_id=110477

There are many modules that have a dependency on random, for
instance os.tempnam(), and a program could well
inadvertently use it before closing file descriptors.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470



[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-06 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-05 18:03
Message generated for change (Comment added) made by majid
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

>Comment By: Fazal Majid (majid)
Date: 2005-04-06 00:27

Message:
Logged In: YES 
user_id=110477

Unfortunately, catching exceptions is not sufficient - the
file descriptor may have been reassigned. Fortunately in my
case, to a socket which raised ENOSYS, but if it had been a
normal file, this would have been much harder to trace
because reading from it would cause weird errors for readers
of the reassigned fd without triggering an exception in
os.urandom() itself.

As for not closing file descriptors you haven't opened
yourself, if the process is the result of a vfork/exec (in
my case Python processes started by a cluster manager, sort
of like init), the child process has no clue what file
descriptors, sockets or the like it has inherited from its
parent process, and the safest course is to close them all.
Indeed, that's what W. Richard Stevens recommends in
"Advanced Programming for the UNIX environment".

As far as I can tell, os.urandom() is used mostly to seed
the RNG in random.py, and thus is not a high-frequency
operation. It is going to be very hard to document this
adequately for coders to defend against - in my case, the
problem was being triggered by os.tempnam() from within
Webware's PSP compiler. There are so many functions that
depend on random (sometimes in non-obvious ways), you can't
flag them all so their users know they should use
urandomcleanup.

One possible solution would be for os.py to offer a
go_daemon() function that implements the fd closing, signal
masking, process group and terminal disassociation required
by true daemons. This function could take care of internal
book-keeping like calling urandomcleanup.
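
A sketch of the fd-closing half of that proposal (go_daemon() and urandomcleanup() are hypothetical names from this thread, not existing os functions):

import os
import resource

def close_inherited_fds(keep=(0, 1, 2)):
    # Close every descriptor the process may have inherited, except the
    # standard streams.  A real go_daemon() helper would run library
    # cleanup hooks (such as the proposed urandomcleanup) before this.
    maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
    if maxfd == resource.RLIM_INFINITY:
        maxfd = 1024
    for fd in range(int(maxfd)):
        if fd in keep:
            continue
        try:
            os.close(fd)
        except OSError:
            pass   # descriptor was not open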

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 00:20

Message:
Logged In: YES 
user_id=81797

Yeah, I was thinking the same thing.  It doesn't address the
consumed file handle, but it does address the "robustness"
issue.  It complicates the code, but should work.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 00:11

Message:
Logged In: YES 
user_id=21627

To add robustness, it would be possible to catch read errors
from _urandomfd, and try reopening it if it got somehow closed.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-05 20:11

Message:
Logged In: YES 
user_id=81797

Just providing some feedback:

I'm able to reproduce this.  Importing random will cause
this file descriptor to be opened.  Opening urandom on every
call could lead to unacceptable syscall overhead for some. 
Perhaps there should be a "urandomcleanup" method that
closes the file descriptor, and then random could get the
bytes from urandom(), and clean up after itself?

Personally, I only clean up the file descriptors I have
allocated when I fork a new process.  On the one hand I
agree with you about sucking up a fd in the standard
library, but on the other hand I'm thinking that you just
shouldn't be closing file descriptors for stuff you'll be
needing.  That's my two cents on this bug.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-05 18:06

Message:
Logged In: YES 
user_id=110477

There are many modules that have a dependency on random, for
instance os.tempnam(), and a program could well
inadvertently use it before closing file descriptors.

[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-06 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-06 01:03
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 09:04

Message:
Logged In: YES 
user_id=81797

The child is a copy of the parent.  Therefore, if in the
parent you open a few file descriptors, those are the ones
you should close in the child.  That is exactly what I've
done in the past when I forked a child, and it has worked
very well.

I suspect Stevens would make an exception to his guideline
in the event that closing a file descriptor results in
library routine failures.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-06 07:27

Message:
Logged In: YES 
user_id=110477

Unfortunately, catching exceptions is not sufficient - the
file descriptor may have been reassigned. Fortunately in my
case, to a socket which raised ENOSYS, but if it had been a
normal file, this would have been much harder to trace
because reading from it would cause weird errors for readers
of the reassigned fd without triggering an exception in
os.urandom() itself.

As for not closing file descriptors you haven't opened
yourself, if the process is the result of a vfork/exec (in
my case Python processes started by a cluster manager, sort
of like init), the child process has no clue what file
descriptors, sockets or the like it has inherited from its
parent process, and the safest course is to close them all.
Indeed, that's what W. Richard Stevens recommends in
"Advanced Programming for the UNIX environment".

As far as I can tell, os.urandom() is used mostly to seed
the RNG in random.py, and thus is not a high-frequency
operation. It is going to be very hard to document this
adequately for coders to defend against - in my case, the
problem was being triggered by os.tempnam() from within
Webware's PSP compiler. There are so many functions that
depend on random (sometimes in non-obvious ways), you can't
flag them all so their users know they should use
urandomcleanup.

One possible solution would be for os.py to offer a
go_daemon() function that implements the fd closing, signal
masking, process group and terminal disassociation required
by true daemons. This function could take care of internal
book-keeping like calling urandomcleanup.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 07:20

Message:
Logged In: YES 
user_id=81797

Yeah, I was thinking the same thing.  It doesn't address the
consumed file handle, but it does address the "robustness"
issue.  It complicates the code, but should work.

--

Comment By: Martin v. Löwis (loewis)
Date: 2005-04-06 07:11

Message:
Logged In: YES 
user_id=21627

To add robustness, it would be possible to catch read errors
from _urandomfd, and try reopening it if it got somehow closed.

--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 03:11

Message:
Logged In: YES 
user_id=81797

Just providing some feedback:

I'm able to reproduce this.  Importing random will cause
this file descriptor to be opened.  Opening urandom on every
call could lead to unacceptable syscall overhead for some. 
Perhaps there should be a "urandomcleanup" method that
closes the file descriptor, and then random could get the
bytes from urandom(), and clean up after itself?

Personally, I only clean up the file descriptors I have
allocated when I fork a new process.  On the one hand I
agree with you about sucking up a fd in the standard
library, but on the other hand I'm thinking that you just
shouldn't be closing file descriptors for stuff you'll be
needing.  That's my two cents on this bug.

[ python-Bugs-1177674 ] error locale.getlocale() with LANGUAGE=eu_ES

2005-04-06 Thread SourceForge.net
Bugs item #1177674, was opened at 2005-04-06 10:59
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177674&group_id=5470

Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Zunbeltz Izaola (zunbeltz)
Assigned to: Nobody/Anonymous (nobody)
Summary: error locale.getlocale() with LANGUAGE=eu_ES

Initial Comment:
My LANGUAGE is set to eu_ES but the getlocale()
output is (None, None).
This is checked with the command

python -c "import locale; print locale.getlocale()"

Python version: 2.4
OS: Linux (Ubuntu)

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177674&group_id=5470



[ python-Bugs-1168135 ] Python 2.5a0 Tutorial errors and observations

2005-04-06 Thread SourceForge.net
Bugs item #1168135, was opened at 2005-03-21 22:58
Message generated for change (Comment added) made by mrbax
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1168135&group_id=5470

Category: Documentation
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Michael R Bax (mrbax)
Assigned to: Raymond Hettinger (rhettinger)
Summary: Python 2.5a0 Tutorial errors and observations

Initial Comment:
Please find attached my updated list of errors and 
observations in response to Python 2.5a0.

--

>Comment By: Michael R Bax (mrbax)
Date: 2005-04-06 05:39

Message:
Logged In: YES 
user_id=1055057

Thanks for your comments.  A handful of meta-observations:

Front Matter

> Your first correction is wrong.  -ly adverbs are never hyphenated
> (Chicago Manual of Style, Table 6.1, for instance).

"Wrong" is wrong.  In fact, the CMS Q&A explicitly states 
that "it has long been the practice elsewhere -- among British 
writers, for example -- to hyphenate ly + participle/adjective 
compounds. ... So it is a matter not of who is right or wrong 
but of whose rule you are going to follow."

But I'm happy to leave it as is, given that it is a regional 
preference.


3.1.2
> > >>> word[:1] = 'Splat'
> > -- This is trying to assign 5 letters to one?
>
> Slice assignment is replacement, not overwriting.  This is
> replacing 1 byte with 5, which *is* valid, and perhaps the point
> being made.  Perhaps you would recommend another change to
> be clearer?

I'm not recommending a change per se; I'm showing what a 
newbie reader thinks!  :-)


5.2
> > There is a way to remove an item from a list given its index
> > instead of its value: the del statement.
> > -- How is this different to pop?
>
> pop, builtin on top of del, also saves and returns the deleted value
> so it can be bound to something, which takes longer.  ie
> def pop(self, i):
>   ret = self[i]
>   del self[i]
>   return ret

Again, this is a question that the newbie reader will pose.  I 
may know the answer, but I am not asking the question for 
myself.  I think the question should be answered
preemptively in the tutorial!


9.2
> > Otherwise, all variables found outside of the innermost scope
> > are read-only.
> > -- Explain what happens when you try to assign to a
> > read-only variable?
>
> You create a new local of the same name and will not be able to
> read the masked variable.

Right -- again, this is for the benefit of the newbie.  Let's put 
that in the tutorial!  :-)
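
A short example of the answer given above (added for illustration; not taken from the tutorial): assigning to a name from an enclosing scope simply creates a new local binding.

def outer():
    x = "outer"
    def inner():
        x = "inner"      # new local binding; outer's x is untouched
        return x
    inner()
    return x

print(outer())           # prints: outer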

--

Comment By: Terry J. Reedy (tjreedy)
Date: 2005-03-30 13:01

Message:
Logged In: YES 
user_id=593130

Subject to reading that particular version (2.5a0), I generally 
agree with your suggestions.  Here are some specific comments 
on your comments.  Feel free to incorporate them into a revised 
suggestion list if you wish.

---
Your first correction is wrong.  -ly adverbs are never hyphenated 
(Chicago Manual of Style, Table 6.1, for instance).

---
3.1.2
>>> word[:1] = 'Splat'
-- This is trying to assign 5 letters to one?

Slice assignment is replacement, not overwriting.  This is 
replacing 1 byte with 5, which *is* valid, and perhaps the point 
being made.  Perhaps you would recommend another change to 
be clearer?

---
##5.1.3
##Combining these two special cases, we see that "map(None,
##list1, list2)" is a convenient way of turning a pair of lists into a list
##of pairs
#   -- Shouldn't one rather use zip()?

I would agree that 'convenient' should be removed and a note 
added that this has been superseded by zip unless one wants 
the different behavior of extending shorter sequences.

--
5.1.3
filter(function, sequence)" returns a sequence (of the same type, 
if possible)
-- How could this ever be impossible?

I suppose a broken class, but then what would filter do?  If 
filter 'works' for all builtins, I agree that we should just say so.  
Perhaps 'returns a sequence of the same type (for all builtins 
and sensible user classes)' -- if that is true


5.2
There is a way to remove an item from a list given its index 
instead of its value: the del statement. 
-- How is this different to pop?

pop, builtin on top of del, also saves and returns the deleted value 
so it can be bound to something, which takes longer.  ie
def pop(self, i):
  ret = self[i]
  del self[i]
  return ret


5.3
Sequence unpacking requires that the list of variables on the left 
have the same number of elements as the length of the sequence
-- requires that the list of variables on the left has (grammar)
-- requires the list of variables on the left to have (alternate)

Since the code sequence on the left is not a Python list but only 
a common-meaning list, I think even better would be

[ python-Bugs-1177674 ] error locale.getlocale() with LANGUAGE=eu_ES

2005-04-06 Thread SourceForge.net
Bugs item #1177674, was opened at 2005-04-06 19:59
Message generated for change (Comment added) made by perky
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177674&group_id=5470

Category: Python Library
Group: None
>Status: Closed
>Resolution: Wont Fix
Priority: 5
Submitted By: Zunbeltz Izaola (zunbeltz)
Assigned to: Nobody/Anonymous (nobody)
Summary: error locale.getlocale() with LANGUAGE=eu_ES

Initial Comment:
My LANGUAGE is set to eu_ES but the getlocale()
output is (None, None).
This is checked with the command

python -c "import locale; print locale.getlocale()"

Python version: 2.4
OS: Linux (Ubuntu)

--

>Comment By: Hye-Shik Chang (perky)
Date: 2005-04-06 22:08

Message:
Logged In: YES 
user_id=55188

It's intended behavior and conforms to the POSIX standard.
The locale is not set until the program calls locale.setlocale()
explicitly.

miffy(perky):~% LC_ALL=ko_KR.UTF-8 python
Python 2.4 (#2, Feb  4 2005, 12:07:54)
[GCC 3.4.2 [FreeBSD] 20040728] on freebsd5
Type "help", "copyright", "credits" or "license" for more
information.
>>> import locale
>>> locale.getlocale()
(None, None)
>>> locale.setlocale(locale.LC_ALL, '')
'ko_KR.UTF-8'
>>> locale.getlocale()
['ko_KR', 'utf']


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177674&group_id=5470



[ python-Bugs-1177674 ] error locale.getlocale() with LANGUAGE=eu_ES

2005-04-06 Thread SourceForge.net
Bugs item #1177674, was opened at 2005-04-06 10:59
Message generated for change (Comment added) made by zunbeltz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177674&group_id=5470

Category: Python Library
Group: None
Status: Closed
Resolution: Wont Fix
Priority: 5
Submitted By: Zunbeltz Izaola (zunbeltz)
Assigned to: Nobody/Anonymous (nobody)
Summary: error locale.getlocale() with LANGUAGE=eu_ES

Initial Comment:
My LANGUAGE is set to eu_ES but the getlocale()
output is (None, None).
This is checked with the command

python -c "import locale; print locale.getlocale()"

Python version: 2.4
OS: Linux (Ubuntu)

--

>Comment By: Zunbeltz Izaola (zunbeltz)
Date: 2005-04-06 13:42

Message:
Logged In: YES 
user_id=1139116

Sorry. The example is wrong. If I execute the commands you show
(setlocale() and then getlocale()) I get the following error:

Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/locale.py", line 365, in getlocale
return _parse_localename(localename)
  File "/usr/lib/python2.4/locale.py", line 278, in
_parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: eu_ES

(I find this error when I want to use pybliographic, which
uses setlocale before getlocale.)

--

Comment By: Hye-Shik Chang (perky)
Date: 2005-04-06 13:08

Message:
Logged In: YES 
user_id=55188

It's intended behavior and conforms to the POSIX standard.
The locale is not set until the program calls locale.setlocale()
explicitly.

miffy(perky):~% LC_ALL=ko_KR.UTF-8 python
Python 2.4 (#2, Feb  4 2005, 12:07:54)
[GCC 3.4.2 [FreeBSD] 20040728] on freebsd5
Type "help", "copyright", "credits" or "license" for more
information.
>>> import locale
>>> locale.getlocale()
(None, None)
>>> locale.setlocale(locale.LC_ALL, '')
'ko_KR.UTF-8'
>>> locale.getlocale()
['ko_KR', 'utf']


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177674&group_id=5470



[ python-Bugs-1177811 ] Exec Inside A Function

2005-04-06 Thread SourceForge.net
Bugs item #1177811, was opened at 2005-04-06 14:30
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177811&group_id=5470

Category: Python Interpreter Core
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Andrew Wilkinson (andrew_j_w)
Assigned to: Nobody/Anonymous (nobody)
Summary: Exec Inside A Function

Initial Comment:
When 'exec'ing code that creates a function inside a function, the 
defined function (fact in the example below) is created with the 
module-level namespace as its parent scope. 
 
The following should return 2 however it raises a NameError as fact 
is not defined. 
 
def f(): 
    exec """ 
def fact(x): 
    if x==1: 
        return 1 
    else: 
        return x*fact(x-1) 
""" 
    return fact 
 
f()(2) 
 
If you run the following code... 
 
def f(): 
    exec """ 
def fact(x): 
    if x==1: 
        return 1 
    else: 
        return x*fact(x-1) 
""" in locals() 
    return fact 
 
... it works as expected. 
 
The documentation states that "In all cases, if the optional parts 
are omitted, the code is executed in the current scope." That is 
clearly not the case here, as the 'fact' function is created with the 
module-level scope as its parent scope. 

It would appear to me that either this is a documentation bug or a 
flaw in exec. I sincerely hope this is a bug in exec and not the 
desired behaviour, as it doesn't make any sense to me... 
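
For comparison, a third variant (added here for illustration, not part of the report) exec's into an explicit dictionary and fetches the function out of it; because the exec'd code then uses that dictionary as its globals, the recursive reference to fact also resolves correctly:

def f():
    ns = {}
    exec """
def fact(x):
    if x == 1:
        return 1
    else:
        return x * fact(x - 1)
""" in ns
    return ns['fact']

print(f()(2))   # prints 2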
 

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177811&group_id=5470



[ python-Bugs-1177831 ] (?(id)yes|no) only works when referencing the first group

2005-04-06 Thread SourceForge.net
Bugs item #1177831, was opened at 2005-04-06 17:06
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177831&group_id=5470

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: André Malo (ndparker)
Assigned to: Nobody/Anonymous (nobody)
Summary: (?(id)yes|no) only works when referencing the first group

Initial Comment:
(?(id)yes|no) only works when referencing the first group

Referencing other marked groups may lead to weird results.

The problem is that for a GROUPREF_EXISTS opcode the compiler
stores the group reference doubled, i.e. (group - 1) * 2, while
the matcher expects the plain zero-based group index (group - 1).
The two agree only for group 1, which is why conditionals that
reference any other group misbehave.

This is the problematic code in sre_compile.py (1.57):

   168  elif op is GROUPREF_EXISTS:
   169  emit(OPCODES[op])
   170  emit((av[0]-1)*2)
   171  skipyes = _len(code); emit(0)
   172  _compile(code, av[1], flags)

changing line 170 to

emit(av[0]-1)

fixes the bug.
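
For reference, a small example (not from the report) of what the construct should do once the fix is applied; note that the conditional references group 2, not group 1:

import re

pat = re.compile(r"(a)?(b)?(?(2)c|d)")
print(pat.match("abc").group(0))   # 'abc' -- group 2 matched, so 'c' is required
print(pat.match("ad").group(0))    # 'ad'  -- group 2 is absent, so 'd' is required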

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177831&group_id=5470



[ python-Bugs-1177964 ] Iterator on Fileobject gives no MemoryError

2005-04-06 Thread SourceForge.net
Bugs item #1177964, was opened at 2005-04-06 19:55
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177964&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Folke Lemaitre (zypher)
Assigned to: Nobody/Anonymous (nobody)
Summary: Iterator on Fileobject gives no MemoryError

Initial Comment:
The following problem has only been tested on linux.
Suppose at a certain time that your machine can
allocate a maximum of X megabytes of memory. Allocating
more than X should result in python MemoryErrors. Also
suppose you have a file containing one big line taking
more than X bytes (Large binary file for example).
In this case, if you read lines from such a file through the
file object's iterator, you do NOT get the expected
MemoryError as a result, but an empty iteration (no lines at all).

To reproduce, create a file twice as big as your
machine's memory and disable the swap.

If you run the following code:

import os.path

def test(input):
    print "Testing %s (%sMB)" % (repr(input),
        os.path.getsize(input) / (1024.0 * 1024.0))
    count = 0
    for line in open(input):
        count = count + 1
    print "  >> Total Number of Lines: %s" % count

if __name__ == "__main__":
    test('test.small')
    test('test.big')

you'll get something like:
# [EMAIL PROTECTED] devel $ python2.4 bug.py
# Testing 'test.small' (20.0MB)
#   >> Total Number of Lines: 1
# Testing 'test.big' (2000.0MB)
#   >> Total Number of Lines: 0



--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177964&group_id=5470



[ python-Feature Requests-1177998 ] Add a settimeout to ftplib.FTP object

2005-04-06 Thread SourceForge.net
Feature Requests item #1177998, was opened at 2005-04-06 20:52
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1177998&group_id=5470

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Juan Antonio Valiño García (juanval)
Assigned to: Nobody/Anonymous (nobody)
Summary: Add a settimeout to ftplib.FTP object

Initial Comment:
It would be useful if the FTP object of ftplib had a 
settimeout method to set up a timeout for the connection, 
because the only way of doing this is to use the 
socket.setdefaulttimeout method, and this is very 
dangerous when you are using threads. 
 
Thanks and keep up the work ! 
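
A sketch of the current workaround and of the kind of per-connection control being requested; note that ftp.sock is an internal attribute of ftplib.FTP rather than a documented API, and the host name is made up:

import ftplib
import socket

socket.setdefaulttimeout(30)          # process-wide default -- the only documented option
ftp = ftplib.FTP("ftp.example.com")   # hypothetical server
ftp.sock.settimeout(10)               # per-connection timeout via the internal socket
ftp.login()
ftp.quit()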

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1177998&group_id=5470



[ python-Feature Requests-1177998 ] Add a settimeout to ftplib.FTP object

2005-04-06 Thread SourceForge.net
Feature Requests item #1177998, was opened at 2005-04-06 20:52
Message generated for change (Settings changed) made by juanval
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1177998&group_id=5470

>Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Juan Antonio Valiño García (juanval)
Assigned to: Nobody/Anonymous (nobody)
Summary: Add a settimeout to ftplib.FTP object

Initial Comment:
It would be useful if the FTP object of ftplib had a 
settimeout method to set up a timeout for the connection, 
because the only way of doing this is to use the 
socket.setdefaulttimeout method, and this is very 
dangerous when you are using threads. 
 
Thanks and keep up the work ! 

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1177998&group_id=5470



[ python-Bugs-1161595 ] Minor error in section 3.2

2005-04-06 Thread SourceForge.net
Bugs item #1161595, was opened at 2005-03-11 19:29
Message generated for change (Settings changed) made by jyby
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161595&group_id=5470

Category: Documentation
Group: None
>Status: Deleted
Resolution: Invalid
Priority: 1
Submitted By: Jeremy Barbay (jyby)
Assigned to: Nobody/Anonymous (nobody)
Summary: Minor error in section 3.2

Initial Comment:
In the section "3.2 First Steps Towards Programming " of 
the Python tutorial (http://docs.python.org/tut/node5.html), 
the output of both implementations of the Fibonacci 
sequence computation is incorrect. 
 
As written, only one 1 should be output.  
You should either remove one 1 from the input,  
or replace the lines "print b" and "print b,"  
by "print a" and "print a,". 
 
This is minor but might unnecessarily confuse beginners. 

--

Comment By: Ilya Sandler (isandler)
Date: 2005-04-06 02:52

Message:
Logged In: YES 
user_id=971153

It indeed seems that the output in the tutorial is correct.

could you close or delete the bug then?

Thanks


--

Comment By: Jeremy Barbay (jyby)
Date: 2005-03-11 19:40

Message:
Logged In: YES 
user_id=149696

All my apologies: I didn't check my code correctly: 
as the algorithm initializes a with 0 instead of 1, the output is 
correct. 

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161595&group_id=5470



[ python-Bugs-1178136 ] cgitb.py support for frozen images

2005-04-06 Thread SourceForge.net
Bugs item #1178136, was opened at 2005-04-06 23:45
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178136&group_id=5470

Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Barry Alan Scott (barry-scott)
Assigned to: Nobody/Anonymous (nobody)
Summary: cgitb.py support for frozen images

Initial Comment:
cgitb.py does not report the line numbers in the stack trace
if the Python program is frozen. This is because the source
code is not available.

The attached patch fixes this problem.


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178136&group_id=5470



[ python-Bugs-1178141 ] urllib.py overwrite HTTPError code with 200

2005-04-06 Thread SourceForge.net
Bugs item #1178141, was opened at 2005-04-06 23:48
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178141&group_id=5470

Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Barry Alan Scott (barry-scott)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib.py overwrite HTTPError code with 200

Initial Comment:
I found this bug while trying to understand why a 404
Not Found error was reported as a 200 Not Found.

It turns out that the HTTPError's self.code is overwritten with
200 after the 404 was correctly assigned.

The attached patch fixes the problem.


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178141&group_id=5470



[ python-Bugs-1178145 ] urllib2.py assumes 206 is an error

2005-04-06 Thread SourceForge.net
Bugs item #1178145, was opened at 2005-04-06 23:52
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178145&group_id=5470

Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Barry Alan Scott (barry-scott)
Assigned to: Nobody/Anonymous (nobody)
Summary: urllib2.py assumes 206 is an error

Initial Comment:
I'm writing code that uses the Range header. The
correct response is 206, but urllib2.py is coded to
treat any code that is not 200 as an error. The correct
code needs to treat 200 to 299 as success.

The attached patch fixes the problem.
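
The check described above amounts to something like the following sketch (the attached patch itself is not shown here):

def is_success(code):
    # Any 2xx status, including 206 Partial Content, counts as success.
    return 200 <= code < 300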


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178145&group_id=5470



[ python-Bugs-1178148 ] cgitb.py report wrong line number

2005-04-06 Thread SourceForge.net
Bugs item #1178148, was opened at 2005-04-07 00:04
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178148&group_id=5470

Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Barry Alan Scott (barry-scott)
Assigned to: Nobody/Anonymous (nobody)
Summary: cgitb.py report wrong line number

Initial Comment:
Given code like

try:
   raise 'bug'
except ValueError:
   pass # cgitb.py thinks 'bug' is here

cgitb.py will report that the exception 'bug' is at the
pass line.

This is a time waster until you figure out that this
problem exists.


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178148&group_id=5470



[ python-Bugs-1178255 ] 256 should read 255 in operator module docs

2005-04-06 Thread SourceForge.net
Bugs item #1178255, was opened at 2005-04-06 21:09
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178255&group_id=5470

Category: Documentation
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Dan Everhart (dieseldann)
Assigned to: Nobody/Anonymous (nobody)
Summary: 256 should read 255 in operator module docs

Initial Comment:
In section 3.10 of the Python Library Reference, in the
text near the bottom of the page which reads:

Example: Build a dictionary that maps the ordinals from
0 to 256 to their character equivalents.

the 256 should be replaced with 255, to match the code
given.


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178255&group_id=5470



[ python-Bugs-1178255 ] 256 should read 255 in operator module docs

2005-04-06 Thread SourceForge.net
Bugs item #1178255, was opened at 2005-04-06 23:09
Message generated for change (Comment added) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178255&group_id=5470

Category: Documentation
Group: Python 2.4
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Dan Everhart (dieseldann)
Assigned to: Nobody/Anonymous (nobody)
Summary: 256 should read 255 in operator module docs

Initial Comment:
In section 3.10 of the Python Library Reference, in the
text near the bottom of the page which reads:

Example: Build a dictionary that maps the ordinals from
0 to 256 to their character equivalents.

the 256 should be replaced with 255, to match the code
given.


--

>Comment By: Raymond Hettinger (rhettinger)
Date: 2005-04-06 23:40

Message:
Logged In: YES 
user_id=80475

Fixed.
Thanks for the report.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178255&group_id=5470



[ python-Bugs-1178269 ] operator.isMappingType and isSequenceType on instances

2005-04-06 Thread SourceForge.net
Bugs item #1178269, was opened at 2005-04-06 21:54
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178269&group_id=5470

Category: Documentation
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Dan Everhart (dieseldann)
Assigned to: Nobody/Anonymous (nobody)
Summary: operator.isMappingType and isSequenceType on instances

Initial Comment:
Python Library Reference section 3.10 (module operator)
claims that isMappingType() and isSequenceType() return
true for instance objects.  Yet:

ActivePython 2.4 Build 243 (ActiveState Corp.) based on
Python 2.4 (#60, Nov 30 2004, 09:34:21) [MSC v.1310 32
bit (Intel)]
Type "help", "copyright", "credits" or "license" for
more informatio
>>> class c: pass
...
>>> x = c()
>>> from operator import *
>>> isMappingType(x)
False
>>> isSequenceType(x)
False
>>> x
<__main__.c instance at 0x009EA7B0>


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178269&group_id=5470



[ python-Bugs-1178269 ] operator.isMappingType and isSequenceType on instances

2005-04-06 Thread SourceForge.net
Bugs item #1178269, was opened at 2005-04-06 23:54
Message generated for change (Settings changed) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178269&group_id=5470

Category: Documentation
Group: Python 2.4
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Dan Everhart (dieseldann)
Assigned to: Nobody/Anonymous (nobody)
Summary: operator.isMappingType and isSequenceType on instances

Initial Comment:
Python Library Reference section 3.10 (module operator)
claims that isMappingType() and isSequenceType() return
true for instance objects.  Yet:

ActivePython 2.4 Build 243 (ActiveState Corp.) based on
Python 2.4 (#60, Nov 30 2004, 09:34:21) [MSC v.1310 32
bit (Intel)]
Type "help", "copyright", "credits" or "license" for
more informatio
>>> class c: pass
...
>>> x = c()
>>> from operator import *
>>> isMappingType(x)
False
>>> isSequenceType(x)
False
>>> x
<__main__.c instance at 0x009EA7B0>


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1178269&group_id=5470