[ python-Bugs-1092701 ] os.remove fails on win32 with read-only file

2005-01-01 Thread SourceForge.net
Bugs item #1092701, was opened at 2004-12-29 14:30
Message generated for change (Comment added) made by aminusfu
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092701&group_id=5470

Category: Python Interpreter Core
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Joshua Weage (jpweage)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.remove fails on win32 with read-only file

Initial Comment:
On Windows XP SP2 with Python 2.3.3 or 2.4, a call to
os.remove returns errno 13 (permission denied) on a
read-only file.  On Linux, Python will delete a
read-only file.



--

Comment By: Robert Brewer (aminusfu)
Date: 2005-01-01 09:13

Message:
Logged In: YES 
user_id=967320

Yup. I can reproduce that on Win2k.

Seems posixmodule.c uses _unlink, _wunlink, which are
documented at MS as failing on readonly:

"Each of these functions returns 0 if successful. Otherwise,
the function returns -1 and sets errno to EACCES, which
means the path specifies a read-only file, or to ENOENT,
which means the file or path is not found or the path
specified a directory."

Seems others have "fixed" it by just changing the mode and
trying again:
http://sources.redhat.com/ml/cygwin/2001-05/msg01209.html
https://www.cvshome.org/cyclic/cvs/unoff-watcom.txt
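In Python, that chmod-and-retry workaround might look like the following sketch (force_remove is a hypothetical helper, not part of the stdlib):

```python
import os
import stat

def force_remove(path):
    """Remove a file even if it is read-only on win32."""
    try:
        os.remove(path)
    except OSError:
        # EACCES on win32 usually means the read-only attribute is set;
        # clear it and retry, mirroring the cygwin/CVS fix linked above.
        os.chmod(path, stat.S_IWRITE)
        os.remove(path)
```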


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092701&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1092225 ] IDLE hangs due to subprocess

2005-01-01 Thread SourceForge.net
Bugs item #1092225, was opened at 2004-12-28 10:31
Message generated for change (Comment added) made by kbk
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092225&group_id=5470

Category: IDLE
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: ZACK (kitanek)
>Assigned to: Kurt B. Kaiser (kbk)
Summary: IDLE hangs due to subprocess

Initial Comment:
IDLE GUI hangs after some time when launched in the
default mode (i.e. with the subprocess).
I have noticed that the subprocess generates a fast,
continuous stream of system calls even if the GUI is
not used and not visible (moved to another desktop).
Example output from strace (strace -f idle):

...
[pid  5359] <... select resumed> )  = 0 (Timeout)
[pid  5359] futex(0x81fb798, FUTEX_WAKE, 1) = 0
[pid  5359] futex(0x81fb798, FUTEX_WAKE, 1) = 0
[pid  5359] futex(0x81fb798, FUTEX_WAKE, 1) = 0
[pid  5359] select(4, [3], [], [], {0, 5}

[pid  5355] <... futex resumed> )   = -1 ETIMEDOUT
(Connection timed out)
[pid  5355] write(7, "\0", 1 
[pid  5356] <... select resumed> )  = 1 (in [6])
[pid  5355] <... write resumed> )   = 1
[pid  5356] futex(0x81c7250, FUTEX_WAIT, 2, NULL

[pid  5355] futex(0x81c7250, FUTEX_WAKE, 1 
[pid  5356] <... futex resumed> )   = -1 EAGAIN
(Resource temporarily unavailable)
[pid  5355] <... futex resumed> )   = 0
[pid  5356] futex(0x81c7250, FUTEX_WAKE, 1 
[pid  5355] gettimeofday( 
[pid  5356] <... futex resumed> )   = 0
[pid  5355] <... gettimeofday resumed> {1104246902,
467914}, {4294967236, 0}) = 0
[pid  5356] read(6,  
[pid  5355] gettimeofday( 
[pid  5356] <... read resumed> "\0", 1) = 1
[pid  5355] <... gettimeofday resumed> {1104246902,
468040}, {4294967236, 0}) = 0
[pid  5356] select(7, [6], [], [], NULL 
[pid  5355] select(6, [5], [], [], {0, 5}

[pid  5357] <... select resumed> )  = 0 (Timeout)
[pid  5357] futex(0x81fb798, FUTEX_WAKE, 1) = 0
[pid  5357] futex(0x81fb798, FUTEX_WAKE, 1) = 0
[pid  5357] select(0, NULL, NULL, NULL, {0, 5}

[pid  5359] <... select resumed> )  = 0 (Timeout)
[pid  5359] futex(0x81fb798, FUTEX_WAKE, 1) = 0
[pid  5359] futex(0x81fb798, FUTEX_WAKE, 1) = 0
[pid  5359] futex(0x81fb798, FUTEX_WAKE, 1) = 0
...

If IDLE is launched without the subprocess (idle -n)
then it seems to run just fine, without the syscall storm:

futex(0x83d1fa0, FUTEX_WAIT, 200, NULL



--

>Comment By: Kurt B. Kaiser (kbk)
Date: 2005-01-01 12:17

Message:
Logged In: YES 
user_id=149084

The socket is polled by the GUI and the subprocess.  What you 
are seeing are the normal delays and polls.  Without the 
subprocess there is no socket and therefore no polling.

Further information is needed.  When IDLE "hangs", does the
GUI become inactive?  Can the subprocess be restarted?  Is
there any evidence (via ps etc.) that the system is running out
of some resource?  Does the problem occur if you send a 
continuous stream of characters to stdout?  Is the interval
to a "hang" always the same?
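The polling described above can be sketched roughly like this (illustrative only; this is not IDLE's actual code, and poll_once is a made-up name):

```python
import select

def poll_once(sock, timeout=0.05):
    # Wake up after at most `timeout` seconds; each call shows up in
    # strace as one select() with a short timeout, which is what the
    # "syscall storm" looks like when the subprocess socket is polled.
    ready, _, _ = select.select([sock], [], [], timeout)
    return bool(ready)
```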

--

Comment By: ZACK (kitanek)
Date: 2004-12-28 10:38

Message:
Logged In: YES 
user_id=1159448

Sorry, I forgot the system specs:

Debian Linux, unstable branch (sid)
Kernel 2.6.9-1-686
Python 2.3.4 (#2, Dec  3 2004, 13:53:17)
[GCC 3.3.5 (Debian 1:3.3.5-2)] on linux2
glib-config --version = 1.2.10

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092225&group_id=5470



[ python-Feature Requests-846560 ] Slicing infinity

2005-01-01 Thread SourceForge.net
Feature Requests item #846560, was opened at 2003-11-21 13:02
Message generated for change (Comment added) made by apb_4
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=846560&group_id=5470

Category: Python Interpreter Core
Group: None
Status: Closed
Resolution: Rejected
Priority: 5
Submitted By: Alexander Rødseth (alexanro)
Assigned to: Michael Hudson (mwh)
Summary: Slicing infinity

Initial Comment:
It would be great to be able to use extended slices
instead of range.

Here's an example:

>>> for i in [0:10:2]:
...     print i
...
0
2
4
6
8

A more explicit (but longer) way to write this could be:

for i in infinity[0:10:2]: print i

One could alternatively write something like:

infinity = range(1000)
(but this range is too small)

or

infinity = range(sys.maxint)
(but this gives me a memory-error)

or

infinity = xrange(sys.maxint)
(but xrange cannot be sliced)


I've also tried experimenting with iterators and
generators,
but that would exclude slicing "in thin air" like:

for i in [0:10:2]: print i
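For what it's worth, the itertools module can already express a lazy, unbounded-range slice without new syntax (a sketch of an alternative, not part of the proposal):

```python
from itertools import count, islice

# islice(count(), start, stop, step) walks an unbounded counter
# lazily, so no list is materialized in memory.
for i in islice(count(), 0, 10, 2):
    print(i)
```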

--

Comment By: Adam (apb_4)
Date: 2005-01-01 20:28

Message:
Logged In: YES 
user_id=1188305

Well, you can use a range of 20,000,000 and probably somewhat more, 
although it takes a while. 2,000,000 didn't take too long.

--

Comment By: Alexander Rødseth (alexanro)
Date: 2003-11-21 13:30

Message:
Logged In: YES 
user_id=679374

Okay, thanks for the checkup! :-)

--

Comment By: Michael Hudson (mwh)
Date: 2003-11-21 13:25

Message:
Logged In: YES 
user_id=6656

This is basically PEP 204:

http://www.python.org/peps/pep-0204.html

which has been rejected.  I'm not aware of any compelling
reasons to restart the discussion.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=846560&group_id=5470



[ python-Bugs-1092502 ] Memory leak in socket.py on Mac OS X 10.3

2005-01-01 Thread SourceForge.net
Bugs item #1092502, was opened at 2004-12-28 21:09
Message generated for change (Comment added) made by bacchusrx
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092502&group_id=5470

Category: Python Library
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: bacchusrx (bacchusrx)
Assigned to: Nobody/Anonymous (nobody)
Summary: Memory leak in socket.py on Mac OS X 10.3

Initial Comment:
Some part of socket.py leaks memory on Mac OS X 10.3 (both with 
the python 2.3 that ships with the OS and with python 2.4).

I encountered the problem in John Goerzen's offlineimap. 
Transfers of messages over a certain size would cause the program 
to bail with malloc errors, eg

*** malloc: vm_allocate(size=5459968) failed (error code=3)
*** malloc[13730]: error: Can't allocate region

Inspecting the process as it runs shows that python's total memory
size grows wildly during such transfers.

The bug manifests in _fileobject.read() in socket.py. You can 
replicate the problem easily using the attached example with "nc -l 
-p 9330 < /dev/zero" running on some remote host.

The way _fileobject.read() is written, socket.recv is called with the 
larger of the minimum rbuf size or whatever's left to be read. 
Whatever is received is then appended to a buffer which is joined 
and returned at the end of the function.

It looks like each time through the loop, space for recv_size is 
allocated but not freed, so if the loop runs for enough iterations, 
python exhausts the memory available to it.

You can sidestep the condition if recv_size is small (like 
_fileobject.default_bufsize small).

I can't replicate this problem with python 2.3 on FreeBSD 4.9 or  
FreeBSD 5.2, nor on Mac OS X 10.3 if the logic from 
_fileobject.read() is re-written in Perl (for example).
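The loop being described can be paraphrased roughly as follows (an illustrative sketch, not the actual socket.py source; read_exact and min_bufsize are made-up names):

```python
def read_exact(sock, size, min_bufsize=8192):
    # recv is asked for the larger of the minimum buffer size and
    # whatever is still outstanding; chunks are collected in a list
    # and joined at the end, as described above.
    buffers = []
    left = size
    while left > 0:
        data = sock.recv(max(min_bufsize, left))
        if not data:
            break
        buffers.append(data)
        left -= len(data)
    return b"".join(buffers)
```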

--

>Comment By: bacchusrx (bacchusrx)
Date: 2005-01-01 18:01

Message:
Logged In: YES 
user_id=646321

I've been able to replicate the problem reliably on both 10.3.5 and 
10.3.7. I've attached two more examples to demonstrate:

Try this: Do, "dd if=/dev/zero of=./data bs=1024 count=10240" and save 
server.pl wherever you put "data". Have three terminals open. In one, 
run "perl server.pl -s0.25". In another, run "top -ovsize" and in the third 
run "python example2.py". 

After about 100 iterations, python's vsize is +1GB (just about the value 
of cumulative_req in example2.py) and if left running will cause a 
malloc error at around 360 iterations with a vsize over 3.6GB (again, just 
about what cumulative_req reports). Mind you, we've only received 
~512kbytes.

server.pl differs from the netcat method in that it defaults to sending 
only 1492 bytes at a time (configurable with the -b switch) and sleeps for 
however many seconds are specified with the -s switch. This guarantees 
enough iterations to raise the error each time. When omitting 
the -s switch to server.pl, I don't get the error, but throughput is good 
enough that the loop in readFromSockUntil() only runs a few times.

--

Comment By: Bob Ippolito (etrepum)
Date: 2005-01-01 01:27

Message:
Logged In: YES 
user_id=139309

I just played with it a bit more.  If I catch the MemoryError and try again, 
most of the time it will work (sometimes on the second try).  These 
malloc faults seem to be some kind of temporary condition.

--

Comment By: Bob Ippolito (etrepum)
Date: 2005-01-01 01:18

Message:
Logged In: YES 
user_id=139309

I can't reproduce this on either version of Python on a 10.3.7 machine w/ 
1GB RAM.  Python's total memory usage seems stable to me even if the 
read is in a while loop.

I can't see anything in sock_recv or _fileobject.read that will in any way 
leak memory.

With a really large buffer size (always >17mb, but it does vary with each 
run) it will get a memory error but the Python process doesn't grow 
beyond 50mb at the samples I looked at.  That's pretty much the amount 
of RAM I'd expect it to use.  

It is kind of surprising it doesn't want to allocate a buffer of that size, 
because I have the RAM for it.. but I don't think this is a bug.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092502&group_id=5470



[ python-Feature Requests-985094 ] getattr(object, name) accepts only strings

2005-01-01 Thread SourceForge.net
Feature Requests item #985094, was opened at 2004-07-05 01:21
Message generated for change (Comment added) made by complex
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=985094&group_id=5470

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Viktor Ferenczi (complex)
Assigned to: Nobody/Anonymous (nobody)
Summary: getattr(object,name) accepts only strings

Initial Comment:
The getattr(object,name) function accepts only strings. This behavior rules out 
some interesting uses of callables as names in database column and field 
references.

For example:

Someone references a database field:
value=record.field_name

Programmatically, when field_name holds the name of the field:
value=getattr(record,field_name)

Calculated fields could be implemented by passing a callable as a field name:

def fn(record): return '%s (%s)'%(record.name,record.number)

value=getattr(record,fn)

The database backend checks whether the name is callable and, if so, calls it with 
the record.

But this cannot be implemented in the simple way if getattr checks whether the name 
is a string or not. This is an unnecessary check in getattr, setattr and 
delattr, since it prevents interesting solutions.

Temporary workaround:

value=record.__getattr__(fn)

There can be many unnecessary type checks and limitations in core and library 
functions. They should be removed to allow free usage.
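The dispatch being asked for can also be layered on top of getattr without loosening its type check (resolve_field is a hypothetical helper name):

```python
def resolve_field(record, name):
    # A callable "name" acts as a calculated field: call it with the
    # record.  Anything else is treated as an ordinary attribute name.
    if callable(name):
        return name(record)
    return getattr(record, name)
```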

--

>Comment By: Viktor Ferenczi (complex)
Date: 2005-01-02 01:29

Message:
Logged In: YES 
user_id=142612

Thanks for your comment. I know about property, of course. I
had to change an old application where reimplementing
everything with properties could be difficult. The problem
has been solved for now. I use SQLObject (www.sqlobject.org)
for new applications, wherever possible. Usage of other
Object-Relational mappings (or even ZODB) is under
consideration. - Viktor

--

Comment By: Armin Rigo (arigo)
Date: 2004-12-23 23:55

Message:
Logged In: YES 
user_id=4771

This is in part due to historical reasons.  I guess you know 
about "property"?  This is exactly what database people 
usually call calculated fields.
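As a reference point, a calculated field via property might look like this minimal sketch (the Record class and its fields are made up):

```python
class Record(object):
    def __init__(self, name, number):
        self.name = name
        self.number = number

    @property
    def label(self):
        # Computed on attribute access, like a calculated field.
        return "%s (%s)" % (self.name, self.number)
```

Here `Record('widget', 7).label` evaluates the format string on each access.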

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=985094&group_id=5470



[ python-Bugs-711268 ] A large block of commands after an "if" cannot be

2005-01-01 Thread SourceForge.net
Bugs item #711268, was opened at 2003-03-28 04:47
Message generated for change (Comment added) made by jepler
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=711268&group_id=5470

Category: Parser/Compiler
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Bram Moolenaar (vimboss)
Assigned to: Nobody/Anonymous (nobody)
Summary: A large block of commands after an "if" cannot be 

Initial Comment:
A generated Python script contains the code:
if 1:
  file = bugreport.vim
  ... long list of commands 

Executing this code with:
  exec cmds in globals(), globals()

Results in the error:
  SystemError: com_backpatch: offset too large

Looking into the code for com_backpatch() it appears
that the code is more than what can be jumped over.

Possible solutions:
1.  When there is too much code, use another jump
statement that allows for a larger offset.
2.  Always use a jump statement with a large offset
3.  When "if 1" is used, don't generate a jump
statement (not a real fix, but works for the situation
where I ran into the bug).

It looks like this bug exists in all versions of Python.

--

Comment By: Jeff Epler (jepler)
Date: 2005-01-01 18:33

Message:
Logged In: YES 
user_id=2772

Please see my (unsuitable for inclusion) patch at
http://mail.python.org/pipermail/python-list/2004-November/249827.html
I think that message suggests some steps that might result
in an acceptable patch.

--

Comment By: Facundo Batista (facundobatista)
Date: 2004-12-27 10:10

Message:
Logged In: YES 
user_id=752496

Also happens in 2.4.

I'm reopening the bug, in group 2.4.

--

Comment By: Bram Moolenaar (vimboss)
Date: 2004-12-27 10:04

Message:
Logged In: YES 
user_id=57665

It appears between Python 2.2 and 2.3 the efficiency of the
produced bytecode was improved.  You now need to repeat the
command 10923 times to produce the error.  Thus the problem
remains, it's just further away.

You can reproduce the problem with this program:
cmds = "if 1:\n"
for i in xrange(1, 10923):
    cmds = cmds + " a = 'a'\n"
exec cmds in globals(), globals()

I verified with Python 2.3.3, don't have a newer version
right now.

--

Comment By: Brett Cannon (bcannon)
Date: 2004-12-26 13:11

Message:
Logged In: YES 
user_id=357491

I can't reproduce it with 2.3, 2.3 maintenance, 2.4 maintenance, or 2.5 in 
CVS using 8000 lines.

Closing as out of date.

--

Comment By: Facundo Batista (facundobatista)
Date: 2004-12-26 09:00

Message:
Logged In: YES 
user_id=752496

Cannot reproduce the problem in Py2.3.4 using the method
posted by vimboss. Has it already been fixed?

--

Comment By: Facundo Batista (facundobatista)
Date: 2004-12-26 09:00

Message:
Logged In: YES 
user_id=752496

Please, could you verify if this problem persists in Python 2.3.4
or 2.4?

If yes, in which version? Can you provide a test case?

If the problem is solved, from which version?

Note that if you fail to answer in one month, I'll close this bug
as "Won't fix".

Thank you! 

.Facundo

--

Comment By: Bram Moolenaar (vimboss)
Date: 2003-03-28 14:03

Message:
Logged In: YES 
user_id=57665

I can reproduce the problem with this text:
if 1:
   a = "a"

Repeat the assignment 7282 times.  Feed this text to "exec".
With 7281 assignments you do not get the error.
Looks like 9 bytes are produced per assignment.

Good luck fixing this!
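Those figures line up with a 16-bit jump offset (a quick sanity check, assuming the 9-bytes-per-assignment estimate above):

```python
MAX_OFFSET = 2 ** 16 - 1            # largest value in a 16-bit operand
bytes_per_assignment = 9

print(7281 * bytes_per_assignment)  # 65529 -- still fits in 16 bits
print(7282 * bytes_per_assignment)  # 65538 -- just past 65535
```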

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2003-03-28 13:18

Message:
Logged In: YES 
user_id=6380

Hm, the 32-bit argument doesn't work because of what
backpatch does. It would require a totally different
approach to allow backpatching a larger offset, or we'd
always have to reserve 4 bytes for the offset. :-(

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2003-03-28 13:13

Message:
Logged In: YES 
user_id=6380

Just curious. How big was the block of code?

Also, I wonder if the error message is bogus; opcode
arguments can now be 32 bits AFAIK.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=711268&group_id=5470



[ python-Bugs-1092502 ] Memory leak in socket.py on Mac OS X 10.3

2005-01-01 Thread SourceForge.net
Bugs item #1092502, was opened at 2004-12-28 21:09
Message generated for change (Comment added) made by etrepum
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092502&group_id=5470

Category: Python Library
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: bacchusrx (bacchusrx)
Assigned to: Nobody/Anonymous (nobody)
Summary: Memory leak in socket.py on Mac OS X 10.3

--

Comment By: Bob Ippolito (etrepum)
Date: 2005-01-01 21:22

Message:
Logged In: YES 
user_id=139309

Ok.  I've tracked it down.  realloc(...) on Darwin doesn't actually resize 
memory unless it *has* to.  For shrinking an allocation, it does not have 
to, therefore realloc(...) with a smaller size is a no-op.

It seems that this may be a misunderstanding by Python.  The man page 
for realloc(...) does not say that it will EVER free memory, EXCEPT in the 
case where it has to allocate a larger region.

I'll attach an example that demonstrates this outside of Python.

--


[ python-Bugs-1092502 ] Memory leak in socket.py on Mac OS X 10.3

2005-01-01 Thread SourceForge.net
Bugs item #1092502, was opened at 2004-12-28 21:09
Message generated for change (Comment added) made by etrepum
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092502&group_id=5470

Category: Python Library
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: bacchusrx (bacchusrx)
Assigned to: Nobody/Anonymous (nobody)
Summary: Memory leak in socket.py on Mac OS X 10.3

--

Comment By: Bob Ippolito (etrepum)
Date: 2005-01-01 21:23

Message:
Logged In: YES 
user_id=139309

#include <stdio.h>
#include <stdlib.h>
#include <malloc/malloc.h>  /* malloc_size() lives here on Darwin */

#define NUM_ALLOCATIONS 10
#define ALLOC_SIZE 10485760
#define ALLOC_RESIZE 1492

int main(int argc, char **argv) {
/* exiting will free all this leaked memory */
for (i = 0; i < NUM_ALLOCATIONS; i++) {
void *orig_ptr, *new_ptr;
size_t new_size, orig_size;
orig_ptr = malloc(ALLOC_SIZE);
orig_size = malloc_size(orig_ptr);

if (orig_ptr == NULL) {
printf("failure to malloc %d\n", i);
abort();
}
new_ptr = realloc(orig_ptr, ALLOC_RESIZE);
new_size = malloc_size(new_ptr);
printf("resized %d[%p] -> %d[%p]\n",
orig_size, orig_ptr, new_size, new_ptr);
if (new_ptr == NULL) {
printf("failure to realloc %d\n", i);
abort();
}
}
return 0;
}

--


[ python-Bugs-1092502 ] Memory leak in socket.py on Mac OS X 10.3

2005-01-01 Thread SourceForge.net
Bugs item #1092502, was opened at 2004-12-28 21:09
Message generated for change (Comment added) made by etrepum
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1092502&group_id=5470

Category: Python Library
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: bacchusrx (bacchusrx)
Assigned to: Nobody/Anonymous (nobody)
Summary: Memory leak in socket.py on Mac OS X 10.3

--

Comment By: Bob Ippolito (etrepum)
Date: 2005-01-01 21:25

Message:
Logged In: YES 
user_id=139309

that code paste is missing an "int i" at the beginning of main.

--

Comment By: Bob Ippolito (etrepum)
Date: 2005-01-01 21:23

Message:
Logged In: YES 
user_id=139309

#include 

#define NUM_ALLOCATIONS 10
#define ALLOC_SIZE 10485760
#define ALLOC_RESIZE 1492

int main(int argc, char **argv) {
/* exiting will free all this leaked memory */
for (i = 0; i < NUM_ALLOCATIONS; i++) {
void *orig_ptr, *new_ptr;
size_t new_size, orig_size;
orig_ptr = malloc(ALLOC_SIZE);
orig_size = malloc_size(orig_ptr);

if (orig_ptr == NULL) {
printf("failure to malloc %d\n", i);
abort();
}
new_ptr = realloc(orig_ptr, ALLOC_RESIZE);
new_size = malloc_size(new_ptr);
printf("resized %d[%p] -> %d[%p]\n",
orig_size, orig_ptr, new_size, new_ptr);
if (new_ptr == NULL) {
printf("failure to realloc %d\n", i);
abort();
}
}
return 0;
}

--

Comment By: Bob Ippolito (etrepum)
Date: 2005-01-01 21:22

Message:
Logged In: YES 
user_id=139309

Ok.  I've tracked it down.  realloc(...) on Darwin doesn't actually resize 
memory unless it *has* to.  For shrinking an allocation, it does not have 
to, therefore realloc(...) with a smaller size is a no-op.

It seems that this may be a misunderstanding by Python.  The man page 
for realloc(...) does not say that it will EVER free memory, EXCEPT in the 
case where it has to allocate a larger region.

I'll attach an example that demonstrates this outside of Python.

--

Comment By: bacchusrx (bacchusrx)
Date: 2005-01-01 18:01

Message:
Logged In: YES 
user_id=646321

I've been able to replicate the problem reliably on both 10.3.5 and 
10.3.7. I've attached two more examples to demonstrate:

Try this: Do, "dd if=/dev/zero of=./data bs=1024 count=10240" and save 
server.pl wherever you put "data". Have three terminals open. In one, 
run "perl server.pl -s0.25". In another, run "top -ovsize" and in the third 
run "python example2.py". 

After about 100 iterations, python's vsize is +1GB (just about the value 
of cumulative_req in example2.py) and if left running will cause a 
malloc error at around 360 iterations with a vsize over 3.6GB (again, just 
about what cumulative_req reports). Mind you, we've only received 
~512kbytes.

server.pl differs from the netcat method in that it (defaults) to sending 
only 1492 bytes at a time (configurable with the -b switch) and sleeps for 
however many seconds specified with the -s switch. This guarantees 
enough iterations to raise the error each time around. When omitting 
the -s switch to server.pl, I don't get the error, but throughput is good 
enough that the loop in readFromSockUntil() only runs a few times.
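The reader loop described above might look like the following (a hypothetical reconstruction; readFromSockUntil() itself is not shown in the thread). The key point is that each recv() of a large requested size makes the interpreter allocate a buffer of that full size and then shrink it with realloc() to the bytes actually received; when the shrink is a no-op, vsize tracks the cumulative requested bytes, not the ~512 kbytes received:

```python
import socket

def read_from_sock_until(sock, total, chunk=10 * 1024 * 1024):
    """Hypothetical sketch of the reader loop from the report.

    Each recv(chunk) allocates a chunk-sized buffer internally and
    shrinks it to len(data) afterwards; if the allocator never
    releases memory on shrink (as on Darwin here), memory use grows
    with the cumulative requested size.
    """
    pieces = []
    received = 0
    while received < total:
        data = sock.recv(chunk)
        if not data:
            break
        pieces.append(data)
        received += len(data)
    return b"".join(pieces)
```

With a slow sender (the -s switch above), the loop runs many times, so the over-allocation is repeated on every iteration.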


[ python-Feature Requests-711268 ] A large block of commands after an "if" cannot be

2005-01-01 Thread SourceForge.net
Feature Requests item #711268, was opened at 2003-03-28 05:47
Message generated for change (Comment added) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=711268&group_id=5470

>Category: Parser/Compiler
>Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Bram Moolenaar (vimboss)
Assigned to: Nobody/Anonymous (nobody)
Summary: A large block of commands after an "if" cannot be 

Initial Comment:
A generated Python script contains the code:
if 1:
  file = bugreport.vim
  ... long list of commands 

Executing this code with:
  exec cmds in globals(), globals()

Results in the error:
  SystemError: com_backpatch: offset too large

Looking into the code for com_backpatch() it appears
that the code is more than what can be jumped over.

Possible solutions:
1.  When there is too much code, use another jump
statement that allows for a larger offset.
2.  Always use a jump statement with a large offset.
3.  When "if 1" is used, don't generate a jump
statement (not a real fix, but works for the situation
where I ran into the bug).

It looks like this bug exists in all versions of Python.

--

>Comment By: Raymond Hettinger (rhettinger)
Date: 2005-01-01 21:40

Message:
Logged In: YES 
user_id=80475

Re-classifying this as a feature request for CPython
implementation to be expanded to handle larger relative
jumps.  The current behavior of raising a SystemError is
correct, non-buggy behavior.

One solution is to introduce a new bytecode for indirect
jumps based on an entry into the constants table.  Whenever
the distance is too large to backpatch a JUMP_FORWARD, that
opcode can be replaced with JUMP_INDIRECT and given an
offset into the constant table.  This solution is easy to
implement.  (Reminder, the peepholer should skip any code
containing the new opcode.)
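Both thresholds reported in this thread land just past 2**16 - 1 (7282 assignments at ~9 bytes each is 65,538 bytes of 2.2-era bytecode; 10,923 at ~6 bytes each is 65,538 on 2.3), which fits a 16-bit jump argument that com_backpatch could not exceed. A sketch of the same repro on a current interpreter (hedged: modern CPython emits EXTENDED_ARG for long jumps, so this now compiles cleanly):

```python
# Build an "if" body large enough to have overflowed the old 16-bit
# backpatch offset. A real variable is used as the condition so the
# compiler must emit a forward conditional jump over the whole body.
body_lines = 10923
src = "cond = 1\nif cond:\n" + "    a = 'a'\n" * body_lines
ns = {}
exec(compile(src, "<repro>", "exec"), ns)
print(ns["a"])
```

On interpreters with the limitation, compiling this source raises SystemError: com_backpatch: offset too large instead.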

--

Comment By: Jeff Epler (jepler)
Date: 2005-01-01 19:33

Message:
Logged In: YES 
user_id=2772

Please see my (unsuitable for inclusion) patch at
http://mail.python.org/pipermail/python-list/2004-November/249827.html
I think that message suggests some steps that might result
in an acceptable patch.

--

Comment By: Facundo Batista (facundobatista)
Date: 2004-12-27 11:10

Message:
Logged In: YES 
user_id=752496

Also happens in 2.4.

I'm reopening the bug, in group 2.4.

--

Comment By: Bram Moolenaar (vimboss)
Date: 2004-12-27 11:04

Message:
Logged In: YES 
user_id=57665

It appears between Python 2.2 and 2.3 the efficiency of the
produced bytecode was improved.  You now need to repeat the
command 10923 times to produce the error.  Thus the problem
remains; it's just further away.

You can reproduce the problem with this program:
cmds = "if 1:\n"
for i in xrange(1, 10923):
    cmds = cmds + " a = 'a'\n"
exec cmds in globals(), globals()

I verified with Python 2.3.3, don't have a newer version
right now.

--

Comment By: Brett Cannon (bcannon)
Date: 2004-12-26 14:11

Message:
Logged In: YES 
user_id=357491

I can't reproduce it with 2.3, 2.3 maintenance, 2.4 maintenance, or 2.5 in 
CVS using 8000 lines.

Closing as out of date.

--

Comment By: Facundo Batista (facundobatista)
Date: 2004-12-26 10:00

Message:
Logged In: YES 
user_id=752496

Cannot reproduce the problem in Py2.3.4 using the method
posted by vimboss. Is it already fixed?

--

Comment By: Facundo Batista (facundobatista)
Date: 2004-12-26 10:00

Message:
Logged In: YES 
user_id=752496

Please, could you verify if this problem persists in Python 2.3.4
or 2.4?

If yes, in which version? Can you provide a test case?

If the problem is solved, from which version?

Note that if you fail to answer in one month, I'll close this bug
as "Won't fix".

Thank you! 

.Facundo

--

Comment By: Bram Moolenaar (vimboss)
Date: 2003-03-28 15:03

Message:
Logged In: YES 
user_id=57665

I can reproduce the problem with this text:
if 1:
   a = "a"

Repeat the assignment 7282 times.  Feed this text to "exec".
With 7281 assignments you do not get the error.
Looks like 9 bytes are produced per assignment.

Good luck fixing this!

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2003-03-28 14:18

Message:
Logged In: YES 
user_id=6380

Hm, the 32-bit argument doesn't work because of what
backpatch does. It would require a totally different
approach to allow backpatching a larger offset, or we'd
always have