[ python-Feature Requests-1185121 ] itertools.imerge: merge sequences

2005-04-19 Thread SourceForge.net
Feature Requests item #1185121, was opened at 2005-04-18 12:11
Message generated for change (Comment added) made by jneb
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1185121&group_id=5470

Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Jurjen N.E. Bos (jneb)
Assigned to: Raymond Hettinger (rhettinger)
Summary: itertools.imerge: merge sequences

Initial Comment:
(For the itertools library, so Python 2.2 and up)
This is a suggested addition to itertools, proposed name imerge.
usage: imerge(seq0, seq1, ..., [key=])
result: imerge assumes the sequences are all in sorted order, and
produces an iterator that returns pairs of the form (value, index),
where value is a value of one of the sequences, and index is the
index number of the given sequence.
The output of imerge is in sorted order (taking into account the
key function), so that identical values in the sequences will be
produced from left to right.
The code is surprisingly short, making use of the built-in heapq
module.
(You may disagree with my style of argument handling; feel free to
optimize it.)
def imerge(*iterlist, **key):
    """Merge a sequence of sorted iterables.

    Returns pairs [value, index] where each value comes from
    iterlist[index], and the pairs are sorted if each of the
    iterators is sorted.
    Hint: use groupby(imerge(...), operator.itemgetter(0)) to get
    the items one by one.
    """
    if key.keys() not in ([], ["key"]):
        raise TypeError, "Excess keyword arguments for imerge"
    key = key.get("key", lambda x: x)
    from heapq import heapreplace, heappop
    # Initialize the heap containing (inited, value, index,
    # currentItem, iterator) tuples; this automatically makes sure
    # all iterators are initialized, then run, and finally emptied.
    heap = [(False, None, index, None, iter(iterator))
            for index, iterator in enumerate(iterlist)]
    while heap:
        inited, item, index, value, iterator = heap[0]
        if inited:
            yield value, index
        try:
            item = iterator.next()
        except StopIteration:
            heappop(heap)
        else:
            heapreplace(heap, (True, key(item), index, item, iterator))
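For readers on current Pythons, here is a minimal Python 3 adaptation of the same heap trick (the simplified argument handling is illustrative, not part of the original submission):

```python
from heapq import heapify, heapreplace, heappop

def imerge(*iterables, key=lambda x: x):
    """Yield (value, index) pairs merged from sorted iterables.

    Heap entries are (inited, key, index, value, iterator); since
    False sorts before True, every iterator is primed exactly once
    before any real values are compared.
    """
    heap = [(False, None, index, None, iter(it))
            for index, it in enumerate(iterables)]
    heapify(heap)
    while heap:
        inited, _, index, value, iterator = heap[0]
        if inited:
            yield value, index
        try:
            item = next(iterator)    # advance the front iterator
        except StopIteration:
            heappop(heap)            # this source is exhausted
        else:
            heapreplace(heap, (True, key(item), index, item, iterator))

# Equal values come out left to right, as the proposal promises:
pairs = list(imerge([1, 3, 5], [2, 3, 6]))
```

With the pairs in hand, the docstring's groupby hint collapses runs of equal values; plain merged values are simply `[v for v, i in pairs]`.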

If you find this little routine worth its size, please put it into 
itertools.

- Jurjen

--

>Comment By: Jurjen N.E. Bos (jneb)
Date: 2005-04-19 08:19

Message:
Logged In: YES 
user_id=446428

Well, I was optimizing a piece of code with reasonably long sorted lists (in 
memory, I agree) that were modified in all kinds of ways. I did not want 
the n log n behaviour of sort, so I started writing a merge routine.
I found out that the boundary cases of a merge implementation are a 
mess, until I discovered the heap trick. Then I decided to clean it up 
and put it up for a library routine.
The fact that it uses iterators is only to make it more general, not 
specifically for the "lazy" properties.
- Jurjen

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2005-04-18 22:43

Message:
Logged In: YES 
user_id=80475

I had previously looked at an imerge() utility and found
that it had only a single application (isomorphic to lazy
mergesorting) and that the use cases were dominated by the
in-memory alternative:  sorted(chain(*iterlist)).

Short of writing an external mergesort, what applications
did you have in mind?  What situations have you encountered
where you have multiple sources of sorted data being
generated on the fly (as opposed to already being
in-memory), have needed one element at a time sequential
access to a combined sort of that data, needed that combined
sort only once, and could not afford to have the dataset
in-memory?
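The in-memory alternative Raymond mentions is a one-liner; a quick sketch for comparison:

```python
from itertools import chain

# Merge by concatenating and re-sorting: O(n log n) rather than a
# lazy O(n) merge, but entirely adequate when all the data already
# fits in memory -- which dominated the use cases he surveyed.
iterlist = [[1, 4, 9], [2, 3, 10]]
merged = sorted(chain(*iterlist))
```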

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1185121&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1108992 ] idle freezes when run over ssh

2005-04-19 Thread SourceForge.net
Bugs item #1108992, was opened at 2005-01-25 10:44
Message generated for change (Comment added) made by mgpoolman
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1108992&group_id=5470

Category: IDLE
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Mark Poolman (mgpoolman)
Assigned to: Nobody/Anonymous (nobody)
Summary: idle freezes when run over ssh

Initial Comment:
Python 2.3.4 (#2, Aug 13 2004, 00:36:58) 
[GCC 3.3.4 (Debian 1:3.3.4-5ubuntu2)] on linux2
IDLE 1.0.3  

When running idle over an ssh link, idle freezes after
an unpredictable length of time. Over 3 days the
longest it has stayed alive for is ~4hrs, but a few
minutes before freezing is the norm. Neither idle nor
python is consuming cpu time once frozen. I can find
no definite recipe to bring about the freeze, although
(I think) there has always been at least one editor
window open when it happens. There is no output on
stderr, or other diagnostics that I can see.

ssh server(ubuntu warty):
OpenSSH_3.8.1p1 Debian 1:3.8.1p1-11ubuntu3.1, OpenSSL
0.9.7d 17 Mar 2004

ssh client (RH9):
OpenSSH_3.5p1, SSH protocols 1.5/2.0, OpenSSL 0x0090701f

/best/*

Mark

--

>Comment By: Mark Poolman (mgpoolman)
Date: 2005-04-19 11:05

Message:
Logged In: YES 
user_id=993923

Haven't been using the machine in question for a while. I
can't reproduce the problem on an opteron with an equivalent
setup. (The machine I reported this on is a celeron.) I've
attached a stack trace from gdb, having interrupted with
ctrl-C at the gdb prompt when idle froze.

--

Comment By: Kurt B. Kaiser (kbk)
Date: 2005-04-14 21:01

Message:
Logged In: YES 
user_id=149084

Any update on this?

--

Comment By: SUZUKI Hisao (suzuki_hisao)
Date: 2005-03-19 01:46

Message:
Logged In: YES 
user_id=495142

If
1) your IDLE freezes when you close an editor window 
2) which has been editing a file whose path contains a
   non-ASCII character, and
3) you do not call sys.setdefaultencoding() in your
   sitecustomize.py (thus leaving the default encoding as
   'ascii'),
then 
my patch 'idlelib.diff' in Python Patch ID 1162825 
"EditorWindow's title with non-ASCII chars." 
may help you.

More precisely, IDLE freezes when updating the
"Recent Files" menu if an implicit conversion of
unicode to ASCII string occurs.  The patch fixes it.

Sorry if it is irrelevant.

--

Comment By: Kurt B. Kaiser (kbk)
Date: 2005-03-04 00:28

Message:
Logged In: YES 
user_id=149084

There have been recent reports on idle-dev regarding
IDLE freezing on Debian Sid.  Since ubuntu is Debian
derived, I assume there may be a relationship.

--

Comment By: Mark Poolman (mgpoolman)
Date: 2005-02-02 18:50

Message:
Logged In: YES 
user_id=993923

>0.8 doesn't have the problem.  Are you sure?

Can't be certain as haven't used it for extended periods on
that box, but I'll look into it. I've used IDLE daily for
about 4 years on various RH and Suse, and never seen a
problem until now. 

> What else is the ubuntu box doing?  Is the load heavy?
Almost nothing, it's there to evaluate ubuntu as a desktop
w/s, and my main activity is getting some in-house python
s/w ported to it.

gdb results to follow. 

--

Comment By: Kurt B. Kaiser (kbk)
Date: 2005-02-02 17:21

Message:
Logged In: YES 
user_id=149084

To keep things simple, please start IDLE with the
-n switch directly on your ubuntu box for the
purposes of future investigations on this issue.

Can you attach to the frozen IDLE with gdb and then
use 'bt' to get a backtrace and find out what was
going on at the time of the freeze?  If so, is it
repeatable?

It's occasionally reported that IDLE freezes.  I've never
seen it myself, running IDLE for many days on OpenBSD,
Debian, RH, and Arch Linux, WindowsXP, W2K, and W98,
over a period of many years, so it's hard for me to figure
out what's going on.  But it's peculiar that 0.8 doesn't have
the problem.  Are you sure?

What else is the ubuntu box doing?  Is the load heavy?

--

Comment By: Mark Poolman (mgpoolman)
Date: 2005-02-01 13:35

Message:
Logged In: YES 
user_id=993923

> So if I follow correctly,  IDLE -n freezes on your
ubuntu box without using ssh tunneling.

That is correct. The problem appears exactly the same when
run over ssh though, which, I guess, rules out any
gnome/metacity/X weirdness.


> I suspect a hardware problem
I'm sceptical about that. There's nothing in dmesg or
/var/log to suggest it, and absolutely no other problems
with the machine in question.

 >Are you startin

[ python-Bugs-1185883 ] PyObject_Realloc bug in obmalloc.c

2005-04-19 Thread SourceForge.net
Bugs item #1185883, was opened at 2005-04-19 12:07
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470

Category: Python Interpreter Core
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Kristján Valur (krisvale)
Assigned to: Nobody/Anonymous (nobody)
Summary: PyObject_Realloc bug in obmalloc.c

Initial Comment:
obmalloc.c:835
If the previous block was not handled by obmalloc, and
the realloc is for growing the block, this memcpy may
cross a page boundary and cause a segmentation
fault.  This scenario can happen if a previous allocation
failed to allocate from the obmalloc pools, due to
memory starvation or other reasons, but was
successfully satisfied by the C runtime.

The solution is to query the actual size of the allocated
block and copy only that much memory.  Most modern
platforms provide size-query functions complementing
the malloc()/free() calls; on Windows, this is the
_msize() function.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470



[ python-Bugs-1185931 ] python socketmodule dies on ^c

2005-04-19 Thread SourceForge.net
Bugs item #1185931, was opened at 2005-04-19 13:01
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470

Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: nodata (nodata101)
Assigned to: Nobody/Anonymous (nobody)
Summary: python socketmodule dies on ^c

Initial Comment:
I'm using yum on FC4T2 to apply updates to my computer.
When I press ^c, yum does not exit, but switches mirror.

I reported this to the fedora-test-list, and the
maintainer of yum believes this problem to be inside
the python socketmodule - in the C code.

The thread is here:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01372.html

Hopefully this is the right place to report the bug.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470



[ python-Bugs-1185931 ] python socketmodule dies on ^c

2005-04-19 Thread SourceForge.net
Bugs item #1185931, was opened at 2005-04-19 13:01
Message generated for change (Comment added) made by nodata101
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470

Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: nodata (nodata101)
Assigned to: Nobody/Anonymous (nobody)
Summary: python socketmodule dies on ^c

Initial Comment:
I'm using yum on FC4T2 to apply updates to my computer.
When I press ^c, yum does not exit, but switches mirror.

I reported this to the fedora-test-list, and the
maintainer of yum believes this problem to be inside
the python socketmodule - in the C code.

The thread is here:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01372.html

Hopefully this is the right place to report the bug.

--

>Comment By: nodata (nodata101)
Date: 2005-04-19 13:03

Message:
Logged In: YES 
user_id=960750

Correct ref is:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01545.html

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470



[ python-Bugs-1185931 ] python socketmodule dies on ^c

2005-04-19 Thread SourceForge.net
Bugs item #1185931, was opened at 2005-04-19 14:01
Message generated for change (Comment added) made by mwh
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470

Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: nodata (nodata101)
Assigned to: Nobody/Anonymous (nobody)
Summary: python socketmodule dies on ^c

Initial Comment:
I'm using yum on FC4T2 to apply updates to my computer.
When I press ^c, yum does not exit, but switches mirror.

I reported this to the fedora-test-list, and the
maintainer of yum believes this problem to be inside
the python socketmodule - in the C code.

The thread is here:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01372.html

Hopefully this is the right place to report the bug.

--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-19 15:21

Message:
Logged In: YES 
user_id=6656

Maybe you can persuade the yum maintainers to tell us what the problem 
actually is?  I don't see anything useful in that thread, and don't 
particularly 
want to read the yum sources to find out.

--

Comment By: nodata (nodata101)
Date: 2005-04-19 14:03

Message:
Logged In: YES 
user_id=960750

Correct ref is:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01545.html

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470



[ python-Bugs-1185883 ] PyObject_Realloc bug in obmalloc.c

2005-04-19 Thread SourceForge.net
Bugs item #1185883, was opened at 2005-04-19 13:07
Message generated for change (Comment added) made by mwh
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470

Category: Python Interpreter Core
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Kristján Valur (krisvale)
>Assigned to: Tim Peters (tim_one)
Summary: PyObject_Realloc bug in obmalloc.c

Initial Comment:
obmalloc.c:835
If the previous block was not handled by obmalloc, and
the realloc is for growing the block, this memcpy may
cross a page boundary and cause a segmentation
fault.  This scenario can happen if a previous allocation
failed to allocate from the obmalloc pools, due to
memory starvation or other reasons, but was
successfully satisfied by the C runtime.

The solution is to query the actual size of the allocated
block and copy only that much memory.  Most modern
platforms provide size-query functions complementing
the malloc()/free() calls; on Windows, this is the
_msize() function.

--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-19 15:30

Message:
Logged In: YES 
user_id=6656

Tim, what do you think?

This is a pretty unlikely scenario, it seems to me.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470



[ python-Bugs-1181619 ] Bad sys.executable value for bdist_wininst install script

2005-04-19 Thread SourceForge.net
Bugs item #1181619, was opened at 2005-04-12 17:49
Message generated for change (Settings changed) made by mwh
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1181619&group_id=5470

Category: Distutils
Group: Platform-specific
Status: Open
Resolution: None
Priority: 5
Submitted By: follower (xfollower)
>Assigned to: Thomas Heller (theller)
Summary: Bad sys.executable value for bdist_wininst install script

Initial Comment:
Description copied from:
 


From the Python docs, sys.executable is:

executable
A string giving the name of the executable binary
for the Python interpreter, on systems where this makes
sense.

However, during the execution of a post-install script,
this string actually resolves to the name of the binary
installer!  It should instead resolve, I think, to the
name of the Python executable for which the installer
is running (a value selectable at the start of the
installation, if more than one Python is detected).
Having this value available allows you to generate
shortcuts with the proper full path to the python
executable.

I resorted to using sys.prefix+r'\python.exe', which
will most likely work, but I'd rather see
sys.executable give me a more sensible answer.
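The reported workaround can be written portably with os.path.join (a sketch; the .exe suffix applies only on Windows):

```python
import os
import sys

# Build the interpreter path from sys.prefix instead of trusting
# sys.executable inside a bdist_wininst post-install script.
suffix = "python.exe" if os.name == "nt" else "python"
interpreter = os.path.join(sys.prefix, suffix)
```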


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1181619&group_id=5470



[ python-Bugs-1175967 ] StringIO and cStringIO don't provide 'name' attribute

2005-04-19 Thread SourceForge.net
Bugs item #1175967, was opened at 2005-04-03 21:20
Message generated for change (Comment added) made by mwh
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1175967&group_id=5470

Category: None
Group: None
>Status: Closed
>Resolution: Invalid
Priority: 5
Submitted By: logistix (logistix)
Assigned to: Nobody/Anonymous (nobody)
Summary: StringIO and cStringIO don't provide 'name' attribute

Initial Comment:
Documentation explicitly states that file-like objects 
should return a repr-style pseudo-name.  Patch is 
attached.


--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-19 15:37

Message:
Logged In: YES 
user_id=6656

Agree with Just here.

--

Comment By: Just van Rossum (jvr)
Date: 2005-04-07 09:54

Message:
Logged In: YES 
user_id=92689

The documentation also says "This is a read-only attribute and
may not be present on all file-like objects.", so I'm inclined to close
as "won't fix". I'm sure many in-the-wild file-like objects don't support
it, either, so depending on its existence is bad style at best.
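Since the attribute "may not be present on all file-like objects", portable callers can probe for it rather than depend on it; a small sketch:

```python
import io

buf = io.StringIO("some data")
# StringIO objects have no .name attribute; fall back to a
# placeholder instead of assuming the attribute exists.
name = getattr(buf, "name", "<string buffer>")
```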

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1175967&group_id=5470



[ python-Bugs-1185883 ] PyObject_Realloc bug in obmalloc.c

2005-04-19 Thread SourceForge.net
Bugs item #1185883, was opened at 2005-04-19 12:07
Message generated for change (Comment added) made by krisvale
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470

Category: Python Interpreter Core
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Kristján Valur (krisvale)
Assigned to: Tim Peters (tim_one)
Summary: PyObject_Realloc bug in obmalloc.c

Initial Comment:
obmalloc.c:835
If the previous block was not handled by obmalloc, and
the realloc is for growing the block, this memcpy may
cross a page boundary and cause a segmentation
fault.  This scenario can happen if a previous allocation
failed to allocate from the obmalloc pools, due to
memory starvation or other reasons, but was
successfully satisfied by the C runtime.

The solution is to query the actual size of the allocated
block and copy only that much memory.  Most modern
platforms provide size-query functions complementing
the malloc()/free() calls; on Windows, this is the
_msize() function.

--

>Comment By: Kristján Valur (krisvale)
Date: 2005-04-19 14:39

Message:
Logged In: YES 
user_id=1262199

I can only say that I've been seeing this happening with our 
software.  Admittedly it's because we are eating up all 
memory due to other reasons, but we would like to deal with 
that via a MemoryError rather than a crash.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 14:30

Message:
Logged In: YES 
user_id=6656

Tim, what do you think?

This is a pretty unlikely scenario, it seems to me.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470



[ python-Bugs-1122301 ] marshal may crash on truncated input

2005-04-19 Thread SourceForge.net
Bugs item #1122301, was opened at 2005-02-14 11:14
Message generated for change (Comment added) made by mwh
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1122301&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fredrik Lundh (effbot)
>Assigned to: Fredrik Lundh (effbot)
Summary: marshal may crash on truncated input

Initial Comment:
marshal doesn't behave well on truncated or otherwise 
malformed input.  here's a short demo script, from a 
recent comp.lang.python thread:

:::

the problem is that the following may or may not reach 
the "done!" statement, somewhat depending on python 
version, memory allocator, and what data you pass to 
dumps.

import marshal

data = marshal.dumps((1, 2, 3, "hello", 4, 5, 6))

for i in range(len(data), -1, -1):
    try:
        print marshal.loads(data[:i])
    except EOFError:
        print "EOFError"
    except ValueError:
        print "ValueError"

print "done!"

(try different data combinations, to see how far you get 
on your platform...)

fixing this should be relatively easy, and should result in 
a safe unmarshaller (your application will still have to 
limit the amount of data fed into load/loads, of course).

:::

(also note that marshal may raise either EOFError or 
ValueError exceptions, again somewhat depending on 
how the file is damaged.  a little consistency wouldn't 
hurt, but I'm not sure if/how this can be fixed...)
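Until the exceptions are made consistent, callers can normalize both failure modes themselves; a defensive sketch (the helper name is illustrative):

```python
import marshal

def safe_loads(data):
    """Unmarshal bytes, mapping both documented failure modes
    (EOFError for truncation, ValueError for corruption) to None."""
    try:
        return marshal.loads(data)
    except (EOFError, ValueError):
        return None

data = marshal.dumps((1, 2, 3, "hello", 4, 5, 6))
ok = safe_loads(data)        # full data round-trips
bad = safe_loads(data[:5])   # truncated mid-object
```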


--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-19 15:58

Message:
Logged In: YES 
user_id=6656

I think the attached fixes this example, and another involving marshalled 
sets.

I spent a while feeding random data to marshal a few days ago and found 
that the commonest problem was attempting to allocate really huge 
sequences.  Also, the TYPE_STRINGREF is horribly fragile, but I'm 
hoping Martin's going to fix that (he has a bug filed against him, anyway).

Can you test/check it in?  My marshal.c has rather a lot of local changes.

Also, a test suite entry would be nice...

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1122301&group_id=5470



[ python-Bugs-1185883 ] PyObject_Realloc bug in obmalloc.c

2005-04-19 Thread SourceForge.net
Bugs item #1185883, was opened at 2005-04-19 08:07
Message generated for change (Comment added) made by tim_one
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470

Category: Python Interpreter Core
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Kristján Valur (krisvale)
>Assigned to: Nobody/Anonymous (nobody)
Summary: PyObject_Realloc bug in obmalloc.c

Initial Comment:
obmalloc.c:835
If the previous block was not handled by obmalloc, and
the realloc is for growing the block, this memcpy may
cross a page boundary and cause a segmentation
fault.  This scenario can happen if a previous allocation
failed to allocate from the obmalloc pools, due to
memory starvation or other reasons, but was
successfully satisfied by the C runtime.

The solution is to query the actual size of the allocated
block and copy only that much memory.  Most modern
platforms provide size-query functions complementing
the malloc()/free() calls; on Windows, this is the
_msize() function.

--

>Comment By: Tim Peters (tim_one)
Date: 2005-04-19 11:00

Message:
Logged In: YES 
user_id=31435

mwh:  Umm ... I don't understand what the claim is.  For 
example, what HW does Python run on where memcpy 
segfaults just because the address range crosses a page 
boundary?  If that's what's happening, sounds more like a 
bug in the platform memcpy.  I can memcpy blocks spanning 
thousands of pages on my box -- and so can you.

krisvale:  which OS and which C are you using?

It is true that this code may try to access a bit of memory 
that wasn't allocated.  If that's at the end of the address 
space, then I could see a segfault happening.  If it is, I doubt 
there's any portable way to fix it short of PyObject_Realloc 
never trying to take over small blocks it didn't control to begin 
with.  Then the platform realloc() will segfault instead.


--

Comment By: Kristján Valur (krisvale)
Date: 2005-04-19 10:39

Message:
Logged In: YES 
user_id=1262199

I can only say that I've been seeing this happening with our 
software.  Admittedly it's because we are eating up all 
memory due to other reasons, but we would like to deal with 
that via a MemoryError rather than a crash.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 10:30

Message:
Logged In: YES 
user_id=6656

Tim, what do you think?

This is a pretty unlikely scenario, it seems to me.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470



[ python-Bugs-1100673 ] Python Interpreter shell is crashed

2005-04-19 Thread SourceForge.net
Bugs item #1100673, was opened at 2005-01-12 05:49
Message generated for change (Settings changed) made by mwh
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1100673&group_id=5470

Category: Python Interpreter Core
Group: Python 2.2.2
>Status: Closed
>Resolution: Rejected
Priority: 5
Submitted By: abhishek (abhishekkabra)
Assigned to: Nobody/Anonymous (nobody)
Summary: Python Interpreter shell is crashed 

Initial Comment:
I faced this problem about 50% of the time when I hit the
following commands on the python shell.

But I think a crash of the interpreter is not expected
behaviour. It should throw some error even if I am
wrong / hitting wrong commands.


1. On a linux shell, start python.
2. On the python shell, hit _doc__
   (underscore doc underscore underscore).

So Python shell is crashed with following crash dump 


darwin{akabra}6: python
Python 2.2.2 (#1, Feb 24 2003, 19:13:11)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-4)] on linux2
Type "help", "copyright", "credits" or "license" for
more information.
>>> _doc__
Segmentation fault (core dumped)
darwin{akabra}7:



--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-19 16:02

Message:
Logged In: YES 
user_id=6656

Closing for want of activity.

--

Comment By: Facundo Batista (facundobatista)
Date: 2005-01-15 20:55

Message:
Logged In: YES 
user_id=752496

Works OK for me.

--

Comment By: Facundo Batista (facundobatista)
Date: 2005-01-15 20:55

Message:
Logged In: YES 
user_id=752496

Please, could you verify if this problem persists in Python 2.3.4
or 2.4?

If yes, in which version? Can you provide a test case?

If the problem is solved, from which version?

Note that if you fail to answer in one month, I'll close this bug
as "Won't fix".

Thank you! 

.Facundo

--

Comment By: Puneet (puneet_mnitian)
Date: 2005-01-13 05:46

Message:
Logged In: YES 
user_id=1196195

Not reproducible

--

Comment By: Michael Hudson (mwh)
Date: 2005-01-12 19:16

Message:
Logged In: YES 
user_id=6656

That's certainly not expected behaviour; however, I think
it's unlikely to be a python bug -- I've not heard of this
behaviour before.

Is python using readline?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1100673&group_id=5470



[ python-Bugs-1185931 ] python socketmodule dies on ^c

2005-04-19 Thread SourceForge.net
Bugs item #1185931, was opened at 2005-04-19 13:01
Message generated for change (Comment added) made by nodata101
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470

Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: nodata (nodata101)
Assigned to: Nobody/Anonymous (nobody)
Summary: python socketmodule dies on ^c

Initial Comment:
I'm using yum on FC4T2 to apply updates to my computer.
When I press ^c, yum does not exit, but switches mirror.

I reported this to the fedora-test-list, and the
maintainer of yum believes this problem to be inside
the python socketmodule - in the C code.

The thread is here:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01372.html

Hopefully this is the right place to report the bug.

--

>Comment By: nodata (nodata101)
Date: 2005-04-19 15:14

Message:
Logged In: YES 
user_id=960750

Sorry - already reported upstream by yum maintainer.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 14:21

Message:
Logged In: YES 
user_id=6656

Maybe you can persuade the yum maintainers to tell us what the problem 
actually is?  I don't see anything useful in that thread, and don't 
particularly 
want to read the yum sources to find out.

--

Comment By: nodata (nodata101)
Date: 2005-04-19 13:03

Message:
Logged In: YES 
user_id=960750

Correct ref is:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01545.html

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470



[ python-Bugs-1185931 ] python socketmodule dies on ^c

2005-04-19 Thread SourceForge.net
Bugs item #1185931, was opened at 2005-04-19 14:01
Message generated for change (Comment added) made by mwh
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470

Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: nodata (nodata101)
Assigned to: Nobody/Anonymous (nobody)
Summary: python socketmodule dies on ^c

Initial Comment:
I'm using yum on FC4T2 to apply updates to my computer.
When I press ^c, yum does not exit, but switches mirror.

I reported this to the fedora-test-list, and the
maintainer of yum believes this problem to be inside
the python socketmodule - in the C code.

The thread is here:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01372.html

Hopefully this is the right place to report the bug.

--

>Comment By: Michael Hudson (mwh)
Date: 2005-04-19 16:20

Message:
Logged In: YES 
user_id=6656

Huh?  Where?  Should this be closed as a duplicate?

--

Comment By: nodata (nodata101)
Date: 2005-04-19 16:14

Message:
Logged In: YES 
user_id=960750

Sorry - already reported upstream by yum maintainer.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 15:21

Message:
Logged In: YES 
user_id=6656

Maybe you can persuade the yum maintainers to tell us what the problem 
actually is?  I don't see anything useful in that thread, and don't 
particularly 
want to read the yum sources to find out.

--

Comment By: nodata (nodata101)
Date: 2005-04-19 14:03

Message:
Logged In: YES 
user_id=960750

Correct ref is:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01545.html

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1185883 ] PyObject_Realloc bug in obmalloc.c

2005-04-19 Thread SourceForge.net
Bugs item #1185883, was opened at 2005-04-19 12:07
Message generated for change (Comment added) made by krisvale
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470

Category: Python Interpreter Core
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Kristján Valur (krisvale)
Assigned to: Nobody/Anonymous (nobody)
Summary: PyObject_Realloc bug in obmalloc.c

Initial Comment:
obmalloc.c:835
If the previous block was not handled by obmalloc, and 
the realloc is for growing the block, this memcpy may 
cross a page boundary and cause a segmentation 
fault.  This scenario can happen if a previous allocation 
failed to successfully allocate from the obmalloc pools, 
due to memory starvation or other reasons, but was 
successfully allocated by the c runtime.

The solution is to query the actual size of the allocated 
block, and copy only so much memory.  Most modern 
platforms provide size query functions complementing 
the malloc()/free() calls.  on Windows, this is the _msize
() function.

--

>Comment By: Kristján Valur (krisvale)
Date: 2005-04-19 15:22

Message:
Logged In: YES 
user_id=1262199

The platform is Windows 2000/2003 Server, single-threaded C 
runtime.  I have only had the chance to do postmortem 
debugging on this, but it would appear to be as you describe: 
the following page is not mapped in.  Windows doesn't use 
the sbrk() method of heap management and doesn't 
automatically move the break.  Rather, the (multiple) 
heaps request pages as required.  A malloc may have 
succeeded from a different page, and copying too much from 
the old block close to the boundary caused an exception 
_at_ the page boundary.
FYI, the old block was 68 bytes at 0x6d85efb8.  This block ends 
at -effc.  The new size requested was 108 bytes.  Reading 
108 bytes from this address caused an exception at address 
0x6d85f000.  As you know, reading past a malloc block 
results in undefined behaviour, and sometimes this can mean 
a crash.
I have patched Python locally to use MIN(nbytes, _msize(p)) 
instead, and we are about to run the modified version on our 
server cluster.  Nodes were dying quite regularly because of 
this.  I'll let you know if this changes anything in that aspect.

Btw, I work for ccp games, and we are running the MMORPG 
eve online (www.eveonline.com)

--

Comment By: Tim Peters (tim_one)
Date: 2005-04-19 15:00

Message:
Logged In: YES 
user_id=31435

mwh:  Umm ... I don't understand what the claim is.  For 
example, what HW does Python run on where memcpy 
segfaults just because the address range crosses a page 
boundary?  If that's what's happening, sounds more like a 
bug in the platform memcpy.  I can memcpy blocks spanning 
thousands of pages on my box -- and so can you .

krisvale:  which OS and which C are you using?

It is true that this code may try to access a bit of memory 
that wasn't allocated.  If that's at the end of the address 
space, then I could see a segfault happening.  If it is, I doubt 
there's any portable way to fix it short of PyObject_Realloc 
never trying to take over small blocks it didn't control to begin 
with.  Then the platform realloc() will segfault instead .


--

Comment By: Kristján Valur (krisvale)
Date: 2005-04-19 14:39

Message:
Logged In: YES 
user_id=1262199

I can only say that I've been seeing this happening with our 
software.  Admittedly it's because we are eating up all 
memory due to other reasons, but we would like to deal with 
that with a MemoryError rather than a crash.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 14:30

Message:
Logged In: YES 
user_id=6656

Tim, what do you think?

This is a pretty unlikely scenario, it seems to me.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1185883 ] PyObject_Realloc bug in obmalloc.c

2005-04-19 Thread SourceForge.net
Bugs item #1185883, was opened at 2005-04-19 08:07
Message generated for change (Comment added) made by tim_one
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470

Category: Python Interpreter Core
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Kristján Valur (krisvale)
>Assigned to: Tim Peters (tim_one)
Summary: PyObject_Realloc bug in obmalloc.c

Initial Comment:
obmalloc.c:835
If the previous block was not handled by obmalloc, and 
the realloc is for growing the block, this memcpy may 
cross a page boundary and cause a segmentation 
fault.  This scenario can happen if a previous allocation 
failed to successfully allocate from the obmalloc pools, 
due to memory starvation or other reasons, but was 
successfully allocated by the c runtime.

The solution is to query the actual size of the allocated 
block, and copy only so much memory.  Most modern 
platforms provide size query functions complementing 
the malloc()/free() calls.  on Windows, this is the _msize
() function.

--

>Comment By: Tim Peters (tim_one)
Date: 2005-04-19 11:34

Message:
Logged In: YES 
user_id=31435

krisvale:  Thank you for the very clear explanation.  Even I 
understand this now .

We won't use _msize here -- Python has to run under dozens 
of compilers and C libraries, and it's saner to give up on 
this "optimization" completely than to introduce a rat's nest 
of #ifdefs here.  IOW, I expect the entire "if (nbytes <= 
SMALL_REQUEST_THRESHOLD)" block will go away, so 
that the platform realloc() gets called in every case obmalloc 
doesn't control the incoming block.

BTW, note that there's no plan to do another release in the 
Python 2.3 line.

--

Comment By: Kristján Valur (krisvale)
Date: 2005-04-19 11:22

Message:
Logged In: YES 
user_id=1262199

The platform is Windows 2000/2003 Server, single-threaded C 
runtime.  I have only had the chance to do postmortem 
debugging on this, but it would appear to be as you describe: 
the following page is not mapped in.  Windows doesn't use 
the sbrk() method of heap management and doesn't 
automatically move the break.  Rather, the (multiple) 
heaps request pages as required.  A malloc may have 
succeeded from a different page, and copying too much from 
the old block close to the boundary caused an exception 
_at_ the page boundary.
FYI, the old block was 68 bytes at 0x6d85efb8.  This block ends 
at -effc.  The new size requested was 108 bytes.  Reading 
108 bytes from this address caused an exception at address 
0x6d85f000.  As you know, reading past a malloc block 
results in undefined behaviour, and sometimes this can mean 
a crash.
I have patched Python locally to use MIN(nbytes, _msize(p)) 
instead, and we are about to run the modified version on our 
server cluster.  Nodes were dying quite regularly because of 
this.  I'll let you know if this changes anything in that aspect.

Btw, I work for ccp games, and we are running the MMORPG 
eve online (www.eveonline.com)

--

Comment By: Tim Peters (tim_one)
Date: 2005-04-19 11:00

Message:
Logged In: YES 
user_id=31435

mwh:  Umm ... I don't understand what the claim is.  For 
example, what HW does Python run on where memcpy 
segfaults just because the address range crosses a page 
boundary?  If that's what's happening, sounds more like a 
bug in the platform memcpy.  I can memcpy blocks spanning 
thousands of pages on my box -- and so can you .

krisvale:  which OS and which C are you using?

It is true that this code may try to access a bit of memory 
that wasn't allocated.  If that's at the end of the address 
space, then I could see a segfault happening.  If it is, I doubt 
there's any portable way to fix it short of PyObject_Realloc 
never trying to take over small blocks it didn't control to begin 
with.  Then the platform realloc() will segfault instead .


--

Comment By: Kristján Valur (krisvale)
Date: 2005-04-19 10:39

Message:
Logged In: YES 
user_id=1262199

I can only say that I've been seeing this happening with our 
software.  Admittedly it's because we are eating up all 
memory due to other reasons, but we would like to deal with 
that with a MemoryError rather than a crash.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 10:30

Message:
Logged In: YES 
user_id=6656

Tim, what do you think?

This is a pretty unlikely scenario, it seems to me.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185883&group_id=5470
___
Python-bugs-list mailing list

[ python-Bugs-1186072 ] tempnam doc doesn't include link to tmpfile

2005-04-19 Thread SourceForge.net
Bugs item #1186072, was opened at 2005-04-19 10:49
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186072&group_id=5470

Category: Documentation
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Ian Bicking (ianbicking)
Assigned to: Nobody/Anonymous (nobody)
Summary: tempnam doc doesn't include link to tmpfile

Initial Comment:
Both tmpnam and tempnam include references to tmpfile
(as a preferred way of using temporary files). 
However, they don't include a link to the page where
tmpfile is documented, and it is documented in a
different (non-obvious) section of the ``os`` page.  A
link to the section containing tmpfile would be helpful.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186072&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-548661 ] os.popen w/o using the shell

2005-04-19 Thread SourceForge.net
Bugs item #548661, was opened at 2002-04-25 10:30
Message generated for change (Comment added) made by ianbicking
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=548661&group_id=5470

Category: Python Library
Group: Feature Request
Status: Open
Resolution: None
Priority: 5
Submitted By: Ian Bicking (ianbicking)
Assigned to: Nobody/Anonymous (nobody)
Summary: os.popen w/o using the shell

Initial Comment:
I heard that there was an undocumented feature to the
os.popen family, where instead of passing a command
string as the first argument you could pass a list, as
[path, arg1, arg2, ...], and circumvent any shell
interpretation.  I was disappointed to see that this was
not so ("popen() argument 1 must be string, not list";
ditto tuple).

I believe this would be an excellent feature -- using
the shell is a significant source of errors due to
quoting, as well as a serious security concern.  95% of
the time the shell is not required.  The shell also
introduces portability concerns (e.g., bug #512433) --
creating a Python shell is not necessary when the shell
is usually superfluous anyway.



--

>Comment By: Ian Bicking (ianbicking)
Date: 2005-04-19 11:45

Message:
Logged In: YES 
user_id=210337

This may not matter enough to resolve now, with the advent
of the subprocess module.
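
The subprocess module does resolve this request: passing an argument list runs the program directly, with no shell involved. A minimal sketch using the modern Python 3 `subprocess.run` API (not available in the 2.x era of this report):

```python
import subprocess

# Passing a list (not a string) bypasses the shell entirely, so shell
# metacharacters in arguments need no quoting; shell=False is the default.
result = subprocess.run(["echo", "hello; rm -rf /"],
                        capture_output=True, text=True)
print(result.stdout.strip())  # the argument is printed literally, not executed
```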

--

Comment By: Peter Åstrand (astrand)
Date: 2003-11-01 10:29

Message:
Logged In: YES 
user_id=344921

popen5 never uses the shell
(http://www.lysator.liu.se/~astrand/popen5/)

--

Comment By: Julián Muñoz (julian69)
Date: 2003-01-08 10:15

Message:
Logged In: YES 
user_id=77756

Does this mean that giving a list to popen2 frees us from
taking care of the dangerous characters that could be
interpreted/escaped by the shell???

I don't find any documentation about this feature !!!



--

Comment By: Ian Bicking (ianbicking)
Date: 2002-04-25 13:18

Message:
Logged In: YES 
user_id=210337

I see you are correct.  It would be nice if this feature was
consistent across all popen*, and was also documented (and
so  also committed to with clear semantics)

--

Comment By: Martin v. Löwis (loewis)
Date: 2002-04-25 13:13

Message:
Logged In: YES 
user_id=21627

This feature is indeed available in popen2.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=548661&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-19 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-05 21:03
Message generated for change (Comment added) made by lcaamano
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.
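
The reporter's suggestion can be sketched as follows. This is a hypothetical reimplementation for illustration, not the stdlib code: the device is opened on every call, so a daemon that closes inherited descriptors cannot invalidate a cached one:

```python
import os

def urandom_each_call(n):
    # Hypothetical sketch of the suggestion: open /dev/urandom per call
    # instead of caching a module-level descriptor (os._urandomfd).
    fd = os.open("/dev/urandom", os.O_RDONLY)
    try:
        data = b""
        while len(data) < n:
            chunk = os.read(fd, n - len(data))
            if not chunk:
                raise OSError("unexpected EOF on /dev/urandom")
            data += chunk
        return data
    finally:
        os.close(fd)
```

The cost is one open/close syscall pair per call, which matters little if, as the reporter notes, the function is typically called once per process.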


--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-19 12:53

Message:
Logged In: YES 
user_id=279987

We're facing this problem.  We're thinking of patching our os.py 
module to always open /dev/urandom on every call.  Does 
anybody know if this would have any bad consequences other 
than the obvious system call overhead?

BTW, here's the traceback we get.  As you probably can guess, 
something called os.urandom before we closed all file descriptors 
in the daemonizing code and it then failed when os.urandom 
tried to use the cached fd.

 Traceback (most recent call last):
   File  "/opt/race/share/sw/common/bin/dpmd", line 27, in ?
 dpmd().run()
   File  "Linux/CommandLineApp.py", line 336, in run
   File  "Linux/daemonbase.py", line 324, in main
   File  "Linux/server.py", line 61, in addServices
   File  "Linux/dpmd.py", line 293, in __init__
   File  "Linux/threadutils.py", line 44, in start
   File  "Linux/xmlrpcd.py", line 165, in createThread
   File  "Linux/threadutils.py", line 126, in __init__
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 423, in NamedTemporaryFile
 dir = gettempdir()
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 262, in gettempdir
 tempdir =  _get_default_tempdir()
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 185, in _get_default_tempdir
 namer =  _RandomNameSequence()
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 121, in __init__
 self.rng = _Random()
   
File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/r
andom.py", line 96, in __init__
 self.seed(x)
   
File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/r
andom.py", line 110, in seed

 a =  long(_hexlify(_urandom(16)), 16)

   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/
os.py", line  728, in urandom 
 bytes +=  read(_urandomfd, n - len(bytes))

 OSError : [Errno 9] Bad file  descriptor



--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 05:04

Message:
Logged In: YES 
user_id=81797

The child is a copy of the parent.  Therefore, if in the
parent you open a few file descriptors, those are the ones
you should close in the child.  That is exactly what I've
done in the past when I forked a child, and it has worked
very well.

I suspect Stevens would make an exception to his guideline
in the event that closing a file descriptor results in
library routine failures.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-06 03:27

Message:
Logged In: YES 
user_id=110477

Unfortunately, catching exceptions is not sufficient - the
file descriptor may have been reassigned. Fortunately in my
case, to a socket which raised ENOSYS, but if it had been a
normal file, this would have been much harder to trace
because reading from it would cause weird errors for readers
of the reassigned fd without triggering an exception in
os.urandom() itself.

As for not closing file descriptors you haven't opened
yourself, if the process is the result of a vfork/exec (in
my case Python processes started by a cluster manager, sort
of like init), the child process has no clue what file
descriptors, sockets or the like it has inherited from its
parent process, 

[ python-Bugs-1185124 ] pydoc doesn't find all module doc strings

2005-04-19 Thread SourceForge.net
Bugs item #1185124, was opened at 2005-04-18 08:18
Message generated for change (Comment added) made by brianvanden
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185124&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Kent Johnson (kjohnson)
Assigned to: Ka-Ping Yee (ping)
Summary: pydoc doesn't find all module doc strings

Initial Comment:
pydoc.synopsis() attempts to find a module's doc string
by parsing the module text. But the parser only
recognizes strings created with """ and r""". Any other
docstring is ignored.

I've attached a patch against Python 2.4.1 that fixes
pydoc to recognize ''' and r''' strings but really it
should recognize any allowable string format.

--

Comment By: Brian vdB (brianvanden)
Date: 2005-04-19 13:11

Message:
Logged In: YES 
user_id=1015686

I started the thread to which Kent referred. I am aware of
PEP 257's recommendation of triple-double quotes. My
(perhaps wrong-headed) construal of that PEP is that it
isn't sufficiently rule-giving that I would have expected
other tools to reject triple-single quotes. 
At any rate, since triple-single are syntactically
acceptable, it would seem better if they were accepted on
equal footing with triple-double. I can well understand that
this would be a v. low priority issue, though. Call it a
RFE. :-)

--

Comment By: Ka-Ping Yee (ping)
Date: 2005-04-18 16:28

Message:
Logged In: YES 
user_id=45338

I think you're right that if it works for the module summary
(using __doc__) then it should work with synopsis(). 
However, the patch you've added doesn't address the problem
properly; instead of handling """ correctly and ignoring
''', it handles both kinds of docstrings incorrectly because
it will accept ''' as a match for """ or """ as a match for '''.

I'll look at fixing this soon, but feel free to keep
prodding me until it gets fixed.

--

Comment By: Kent Johnson (kjohnson)
Date: 2005-04-18 16:04

Message:
Logged In: YES 
user_id=49695

I don't know if there are a large number of modules with
triple-single-quoted docstrings. Pydoc will search any
module in site-packages at least, so you have to consider
third-party modules.

At best pydoc is inconsistent - the web browser display uses
the __doc__ attribute but search and apropos use synopsis().
It's a pretty simple change to recognize any triple-quoted
string; it seems like a good idea to me...

I have attached a revised patch that uses a regex match so
it works with e.g. uR""" and other variations of triple-quoting.

FWIW this bug report was motivated by this thread on
comp.lang.python:
http://groups-beta.google.com/group/comp.lang.python/browse_frm/thread/e5cfccb7c9a168d7/1c1702e71e1939b0?q=triple&rnum=1#1c1702e71e1939b0
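
Any legal triple-quoted docstring opening can be matched with a short regex. A sketch (not the actual attached patch) of what synopsis() would need to recognize for Python 2.x string prefixes:

```python
import re

# Matches an optional u/U and/or r/R string prefix followed by either
# triple-double or triple-single quotes.
TRIPLE_QUOTE = re.compile(r"^[uU]?[rR]?('''|\"\"\")")

for sample in ('"""doc"""', "'''doc'''", 'r"""doc"""', "uR'''doc'''"):
    assert TRIPLE_QUOTE.match(sample)
```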


--

Comment By: Ka-Ping Yee (ping)
Date: 2005-04-18 14:23

Message:
Logged In: YES 
user_id=45338

PEP 257 recommends: "For consistency, always use """triple
double quotes""" around docstrings."  I think that's why
this was originally written to only look for triple
double-quotes.

Are there a large number of modules written using
triple-single quotes for the module docstring?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185124&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-19 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-05 18:03
Message generated for change (Comment added) made by majid
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

>Comment By: Fazal Majid (majid)
Date: 2005-04-19 10:49

Message:
Logged In: YES 
user_id=110477

Modifying the system os.py is not a good idea. A better
work-around is to skip the /dev/urandom fd when you are
closing all fds. This is the code we use:

def close_fd():
  # close all inherited file descriptors
  start_fd = 3
  # Python 2.4.1 and later use /dev/urandom to seed the random module's RNG;
  # a file descriptor is kept internally as os._urandomfd (created on demand
  # the first time os.urandom() is called), and should not be closed
  try:
    os.urandom(4)
    urandom_fd = getattr(os, '_urandomfd', None)
  except AttributeError:
    urandom_fd = None
  if '-close_fd' in sys.argv:
    start_fd = int(sys.argv[sys.argv.index('-close_fd') + 1])
  for fd in range(start_fd, 256):
    if fd == urandom_fd:
      continue
    try:
      os.close(fd)
    except OSError:
      pass


--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-19 09:53

Message:
Logged In: YES 
user_id=279987

We're facing this problem.  We're thinking of patching our os.py 
module to always open /dev/urandom on every call.  Does 
anybody know if this would have any bad consequences other 
than the obvious system call overhead?

BTW, here's the traceback we get.  As you probably can guess, 
something called os.urandom before we closed all file descriptors 
in the daemonizing code and it then failed when os.urandom 
tried to use the cached fd.

 Traceback (most recent call last):
   File  "/opt/race/share/sw/common/bin/dpmd", line 27, in ?
 dpmd().run()
   File  "Linux/CommandLineApp.py", line 336, in run
   File  "Linux/daemonbase.py", line 324, in main
   File  "Linux/server.py", line 61, in addServices
   File  "Linux/dpmd.py", line 293, in __init__
   File  "Linux/threadutils.py", line 44, in start
   File  "Linux/xmlrpcd.py", line 165, in createThread
   File  "Linux/threadutils.py", line 126, in __init__
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 423, in NamedTemporaryFile
 dir = gettempdir()
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 262, in gettempdir
 tempdir =  _get_default_tempdir()
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 185, in _get_default_tempdir
 namer =  _RandomNameSequence()
   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/t
empfile.py", line 121, in __init__
 self.rng = _Random()
   
File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/r
andom.py", line 96, in __init__
 self.seed(x)
   
File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/r
andom.py", line 110, in seed

 a =  long(_hexlify(_urandom(16)), 16)

   
File  "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/
os.py", line  728, in urandom 
 bytes +=  read(_urandomfd, n - len(bytes))

 OSError : [Errno 9] Bad file  descriptor



--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 02:04

Message:
Logged In: YES 
user_id=81797

The child is a copy of the parent.  Therefore, if in the
parent you open a few file descriptors, those are the ones
you should close in the child.  That is exactly what I've
done in the past when I forked a child, and it has worked
very well.

I suspect Stevens would

[ python-Feature Requests-1185121 ] itertools.imerge: merge sequences

2005-04-19 Thread SourceForge.net
Feature Requests item #1185121, was opened at 2005-04-18 07:11
Message generated for change (Comment added) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1185121&group_id=5470

Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Jurjen N.E. Bos (jneb)
Assigned to: Raymond Hettinger (rhettinger)
Summary: itertools.imerge: merge sequences

Initial Comment:
(For the itertools library, so Python 2.2 and up)
This is a suggested addition to itertools, proposed name imerge.
usage: imerge(seq0, seq1, ..., [key=])
result: imerge assumes the sequences are all in sorted order, and 
produces an iterator that returns pairs of the form (value, index),
where value is a value of one of the sequences, and index is the 
index number of the given sequence.
The output of imerge is in sorted order (taking into account the 
key function), so that identical values in the sequences will be 
produced from left to right.
The code is surprisingly short, making use of the built-in heapq 
module.
(You may disagree with my style of argument handling; feel free to 
optimize it.)
def imerge(*iterlist, **key):
    """Merge a sequence of sorted iterables.

    Returns pairs [value, index] where each value comes from
    iterlist[index], and the pairs are sorted if each of the
    iterators is sorted.
    Hint: use groupby(imerge(...), operator.itemgetter(0)) to get
    the items one by one.
    """
    if key.keys() not in ([], ["key"]):
        raise TypeError, "Excess keyword arguments for imerge"
    key = key.get("key", lambda x: x)
    from heapq import heapreplace, heappop
    # initialize the heap containing (inited, value, index, currentItem, iterator)
    # this automatically makes sure all iterators are initialized,
    # then run, and finally emptied
    heap = [(False, None, index, None, iter(iterator))
            for index, iterator in enumerate(iterlist)]
    while heap:
        inited, item, index, value, iterator = heap[0]
        if inited:
            yield value, index
        try:
            item = iterator.next()
        except StopIteration:
            heappop(heap)
        else:
            heapreplace(heap, (True, key(item), index, item, iterator))

If you find this little routine worth its size, please put it into 
itertools.

- Jurjen

--

>Comment By: Raymond Hettinger (rhettinger)
Date: 2005-04-19 13:58

Message:
Logged In: YES 
user_id=80475

For your specific application, it is better to use sorted().
 When the underlying data consists of long runs of
previously ordered data, sorted() will take advantage of
that ordering and run in O(n) time.  In contrast, using a
heap will unnecessarily introduce O(n log n) behavior and
not exploit the underlying data order.

Recommend that you close this request.  This discussion thus
far confirms the original conclusion that imerge() use cases
are dominated by sorted(chain(*iterlist)) which gives code
that is shorter, faster, and easier to understand.
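
The in-memory alternative Raymond describes is one line; Timsort detects the pre-sorted runs, so concatenating and sorting such data avoids the full O(n log n) cost:

```python
from itertools import chain

runs = [[1, 4, 9], [2, 3, 10], [0, 7]]
merged = sorted(chain(*runs))   # Timsort exploits the existing sorted runs
```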

--

Comment By: Jurjen N.E. Bos (jneb)
Date: 2005-04-19 03:19

Message:
Logged In: YES 
user_id=446428

Well, I was optimizing a piece of code with reasonably long sorted lists (in 
memory, I agree) that were modified in all kinds of ways. I did not want 
the n log n behaviour of sort, so I started writing a merge routine.
I found out that the boundary cases of a merge implementation are a 
mess, until I discovered the heap trick. Then I decided to clean it up 
and put it up for a library routine.
The fact that it uses iterators is only to make it more general, not 
specifically for the "lazy" properties.
- Jurjen

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2005-04-18 17:43

Message:
Logged In: YES 
user_id=80475

I had previously looked at an imerge() utility and found
that it had only a single application (isomorphic to lazy
mergesorting) and that the use cases were dominated by the
in-memory alternative:  sorted(chain(*iterlist)).

Short of writing an external mergesort, what applications
did you have in mind?  What situations have you encountered
where you have multiple sources of sorted data being
generated on the fly (as opposed to already being
in-memory), have needed one element at a time sequential
access to a combined sort of that data, needed that combined
sort only once, and could not afford to have the dataset
in-memory?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1185121&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1186195 ] [AST] genexps get scoping wrong

2005-04-19 Thread SourceForge.net
Bugs item #1186195, was opened at 2005-04-19 12:02
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470

Category: Parser/Compiler
Group: AST
Status: Open
Resolution: None
Priority: 5
Submitted By: Brett Cannon (bcannon)
Assigned to: Nick Coghlan (ncoghlan)
Summary: [AST] genexps get scoping wrong

Initial Comment:
test_genexps is failing because it is unable to find a
global defined in a genexp that is returned.  Here is
the problem simplified:

def f(n): return (i for i in xrange(n))
list(f(10))

Leads to ``SystemError: no locals when loading 'xrange'``.

Comparing Python 2.4 bytecode:

  1           0 LOAD_CONST               1 (<code object <genexpr> at 0x3931e0, file "<string>", line 1>)
              3 MAKE_FUNCTION            0
              6 LOAD_GLOBAL              0 (xrange)
              9 LOAD_FAST                0 (n)
             12 CALL_FUNCTION            1
             15 GET_ITER
             16 CALL_FUNCTION            1
             19 RETURN_VALUE
             20 LOAD_CONST               0 (None)
             23 RETURN_VALUE

to AST bytecode:

  1           0 LOAD_CLOSURE             0 (n)
              3 BUILD_TUPLE              1
              6 LOAD_CONST               1 (<code object <genexpr> at 0x5212e8, file "<string>", line 1>)
              9 MAKE_CLOSURE             0
             12 LOAD_NAME                0 (xrange)
             15 LOAD_DEREF               0 (n)
             18 CALL_FUNCTION            1
             21 GET_ITER
             22 CALL_FUNCTION            1
             25 RETURN_VALUE
             26 LOAD_CONST               0 (None)
             29 RETURN_VALUE

makes it obvious something is off (no peepholer; turned
it off in my build of 2.4).

Looks like extraneous work is being done in making a
closure.  Seems like it will still work, though.

Plus the usage of LOAD_NAME is wrong in the AST;
LOAD_NAME gets an object from the local namespace based
on its name instead of offset.
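(For context, a sketch of why correct compilation needs no closure cell here, written in modern Python with range standing in for xrange: the outermost iterable is evaluated eagerly in f's scope, so the generator receives an already-built iterator and has no free variable 'n'.)

```python
def f(n):
    # the range(n) expression is evaluated immediately, in f's frame
    return (i for i in range(n))

# roughly the desugaring the compiler performs:
def f_expanded(n):
    _outer = range(n)            # evaluated now, in f's scope
    def _genexpr(it):            # no free variable 'n' needed
        for i in it:
            yield i
    return _genexpr(_outer)

print(list(f(10)) == list(f_expanded(10)))  # True
```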


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470



[ python-Bugs-1186195 ] [AST] genexps get scoping wrong

2005-04-19 Thread SourceForge.net
Bugs item #1186195, was opened at 2005-04-19 12:02
Message generated for change (Comment added) made by bcannon
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470

Category: Parser/Compiler
Group: AST
Status: Open
Resolution: None
Priority: 5
Submitted By: Brett Cannon (bcannon)
Assigned to: Nick Coghlan (ncoghlan)
Summary: [AST] genexps get scoping wrong

Initial Comment:
test_genexps is failing because it is unable to find a
global defined in a genexp that is returned.  Here is
the problem simplified:

def f(n): return (i for i in xrange(n))
list(f(10))

Leads to ``SystemError: no locals when loading 'xrange'``.

Comparing Python 2.4 bytecode:

  1           0 LOAD_CONST               1 (<code object <genexpr> at 0x3931e0, file "<string>", line 1>)
              3 MAKE_FUNCTION            0
              6 LOAD_GLOBAL              0 (xrange)
              9 LOAD_FAST                0 (n)
             12 CALL_FUNCTION            1
             15 GET_ITER
             16 CALL_FUNCTION            1
             19 RETURN_VALUE
             20 LOAD_CONST               0 (None)
             23 RETURN_VALUE

to AST bytecode:

  1           0 LOAD_CLOSURE             0 (n)
              3 BUILD_TUPLE              1
              6 LOAD_CONST               1 (<code object <genexpr> at 0x5212e8, file "<string>", line 1>)
              9 MAKE_CLOSURE             0
             12 LOAD_NAME                0 (xrange)
             15 LOAD_DEREF               0 (n)
             18 CALL_FUNCTION            1
             21 GET_ITER
             22 CALL_FUNCTION            1
             25 RETURN_VALUE
             26 LOAD_CONST               0 (None)
             29 RETURN_VALUE

makes it obvious something is off (no peepholer; turned
it off in my build of 2.4).

Looks like extraneous work is being done in making a
closure.  Seems like it will still work, though.

Plus the usage of LOAD_NAME is wrong in the AST;
LOAD_NAME gets an object from the local namespace based
on its name instead of offset.


--

>Comment By: Brett Cannon (bcannon)
Date: 2005-04-19 12:03

Message:
Logged In: YES 
user_id=357491

Initially assigned to Nick since he did the genexp patch and
in hopes he might know what is going on off the top of his
head.  Otherwise assign to me.

I have a sneaking suspicion that the symtable code overall
is slightly busted.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470



[ python-Bugs-1181619 ] Bad sys.executable value for bdist_wininst install script

2005-04-19 Thread SourceForge.net
Bugs item #1181619, was opened at 2005-04-12 18:49
Message generated for change (Comment added) made by theller
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1181619&group_id=5470

Category: Distutils
Group: Platform-specific
>Status: Closed
>Resolution: Rejected
Priority: 5
Submitted By: follower (xfollower)
Assigned to: Thomas Heller (theller)
Summary: Bad sys.executable value for bdist_wininst install script

Initial Comment:
Description copied from:
 


From the Python docs, sys.executable is:

executable
A string giving the name of the executable binary
for the Python interpreter, on systems where this makes
sense.

However, during the execution of a post-install script,
this string actually resolves to the name of the binary
installer!  This name should resolve, I think, to the
name of the Python executable for which the installer
is running (a value selectable at the start of the
installation, if more than one Python is detected). 
Having this value available allows you to properly
generate shortcuts with the proper full path to the
python executable.

I resorted to using sys.prefix+r'\python.exe', which
will most likely work, but I'd rather see
sys.executable give me a more sensible answer.
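(The reporter's workaround, sketched: derive the interpreter path from sys.prefix instead of trusting sys.executable inside the installer. The Windows layout is assumed from the original; os.path.join is used here for illustration.)

```python
import os
import sys

# workaround from the report: guess the interpreter location from
# sys.prefix rather than sys.executable (which, inside a bdist_wininst
# post-install script, pointed at the installer binary instead)
guessed = os.path.join(sys.prefix, 'python.exe')

# in a normally running interpreter, sys.executable points at the
# binary actually executing this code
print(os.path.basename(sys.executable))
```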


--

>Comment By: Thomas Heller (theller)
Date: 2005-04-19 21:46

Message:
Logged In: YES 
user_id=11105

I interpreted the docs you quote as 'the Python interpreter
that is currently running' and not 'the Python interpreter
that is normally used'.

OTOH, it's too late to change this for 2.4 anyway.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1181619&group_id=5470



[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-19 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-05 21:03
Message generated for change (Comment added) made by gvanrossum
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.
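(The failure mode is not specific to os.urandom: any module-level cached descriptor breaks the same way once a daemonizer closes it behind the module's back. A minimal reproduction of the stale-fd hazard, using a hypothetical cache rather than the real os internals, and assuming a POSIX /dev/zero:)

```python
import os

_cached_fd = None

def read4():
    # mimic os.urandom's caching: open once, then reuse the descriptor
    global _cached_fd
    if _cached_fd is None:
        _cached_fd = os.open('/dev/zero', os.O_RDONLY)
    return os.read(_cached_fd, 4)

read4()                  # works: descriptor opened and cached
os.close(_cached_fd)     # a daemonizer closing "all" inherited fds
try:
    read4()              # the module is unaware and reuses the dead fd
    stale_error = None
except OSError as e:
    stale_error = e
print(stale_error)
```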


--

>Comment By: Guido van Rossum (gvanrossum)
Date: 2005-04-19 16:05

Message:
Logged In: YES 
user_id=6380

I recommend closing this as invalid. The daemonization code
is clearly broken.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-19 13:49

Message:
Logged In: YES 
user_id=110477

Modifying the system os.py is not a good idea. A better
work-around is to skip the /dev/urandom fd when you are
closing all fds. This is the code we use:

def close_fd():
  # close all inherited file descriptors
  start_fd = 3
  # Python 2.4.1 and later use /dev/urandom to seed the random module's RNG;
  # a file descriptor is kept internally as os._urandomfd (created on demand
  # the first time os.urandom() is called), and should not be closed
  try:
    os.urandom(4)
    urandom_fd = getattr(os, '_urandomfd', None)
  except AttributeError:
    urandom_fd = None
  if '-close_fd' in sys.argv:
    start_fd = int(sys.argv[sys.argv.index('-close_fd') + 1])
  for fd in range(start_fd, 256):
    if fd == urandom_fd:
      continue
    try:
      os.close(fd)
    except OSError:
      pass


--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-19 12:53

Message:
Logged In: YES 
user_id=279987

We're facing this problem.  We're thinking of patching our os.py 
module to always open /dev/urandom on every call.  Does 
anybody know if this would have any bad consequences other 
than the obvious system call overhead?

BTW, here's the traceback we get.  As you probably can guess, 
something called os.urandom before we closed all file descriptors 
in the daemonizing code and it then failed when os.urandom 
tried to use the cached fd.

 Traceback (most recent call last):
   File "/opt/race/share/sw/common/bin/dpmd", line 27, in ?
     dpmd().run()
   File "Linux/CommandLineApp.py", line 336, in run
   File "Linux/daemonbase.py", line 324, in main
   File "Linux/server.py", line 61, in addServices
   File "Linux/dpmd.py", line 293, in __init__
   File "Linux/threadutils.py", line 44, in start
   File "Linux/xmlrpcd.py", line 165, in createThread
   File "Linux/threadutils.py", line 126, in __init__
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 423, in NamedTemporaryFile
     dir = gettempdir()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 262, in gettempdir
     tempdir = _get_default_tempdir()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 185, in _get_default_tempdir
     namer = _RandomNameSequence()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 121, in __init__
     self.rng = _Random()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/random.py", line 96, in __init__
     self.seed(x)
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/random.py", line 110, in seed
     a = long(_hexlify(_urandom(16)), 16)
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/os.py", line 728, in urandom
     bytes += read(_urandomfd, n - len(bytes))
 OSError: [Errno 9] Bad file descriptor



--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-06 05:04

Message:
Logged In: YES 
user_id=81797

The 

[ python-Bugs-1185931 ] python socketmodule dies on ^c

2005-04-19 Thread SourceForge.net
Bugs item #1185931, was opened at 2005-04-19 13:01
Message generated for change (Comment added) made by nodata101
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470

Category: Extension Modules
Group: Python 2.4
>Status: Closed
>Resolution: Duplicate
Priority: 5
Submitted By: nodata (nodata101)
Assigned to: Nobody/Anonymous (nobody)
Summary: python socketmodule dies on ^c

Initial Comment:
I'm using yum on FC4T2 to apply updates to my computer.
When I press ^c, yum does not exit, but switches mirror.

I reported this to the fedora-test-list, and the
maintainer of yum believes this problem to be inside
the python socketmodule - in the C code.

The thread is here:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01372.html

Hopefully this is the right place to report the bug.

--

>Comment By: nodata (nodata101)
Date: 2005-04-19 21:04

Message:
Logged In: YES 
user_id=960750

Closing as dupe. Do not have bug ref no.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 15:20

Message:
Logged In: YES 
user_id=6656

Huh?  Where?  Should this be closed as a duplicate?

--

Comment By: nodata (nodata101)
Date: 2005-04-19 15:14

Message:
Logged In: YES 
user_id=960750

Sorry - already reported upstream by yum maintainer.

--

Comment By: Michael Hudson (mwh)
Date: 2005-04-19 14:21

Message:
Logged In: YES 
user_id=6656

Maybe you can persuade the yum maintainers to tell us what the problem 
actually is?  I don't see anything useful in that thread, and don't 
particularly 
want to read the yum sources to find out.

--

Comment By: nodata (nodata101)
Date: 2005-04-19 13:03

Message:
Logged In: YES 
user_id=960750

Correct ref is:
 https://www.redhat.com/archives/fedora-test-list/2005-April/msg01545.html

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1185931&group_id=5470



[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-19 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-06 01:03
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-19 22:27

Message:
Logged In: YES 
user_id=81797

Perhaps the best way to resolve this would be for the
standard library to provide code that either does the
daemonize process, or at least does the closing of the
sockets that may be done as part of the daemonize, that way
it's clear what the "right" way is to do it.  Thoughts?
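(A sketch of what such a helper could look like. Both the function name and the keep-list parameter are hypothetical, not an actual stdlib API: close inherited descriptors while honoring an explicit keep-list, so modules like os could register descriptors they still need.)

```python
import os

def close_inherited_fds(keep=(), start=3, end=256):
    # Close descriptors in [start, end) except those the caller (or the
    # standard library) declares still in use.  Hypothetical helper
    # illustrating the suggestion above, not a real stdlib function.
    for fd in range(start, end):
        if fd in keep:
            continue
        try:
            os.close(fd)
        except OSError:
            pass  # fd was not open to begin with
```

With something like this in the standard library, os.py could register its cached _urandomfd in the keep-list instead of every application having to rediscover it.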

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2005-04-19 20:05

Message:
Logged In: YES 
user_id=6380

I recommend closing this as invalid. The daemonization code
is clearly broken.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-19 17:49

Message:
Logged In: YES 
user_id=110477

Modifying the system os.py is not a good idea. A better
work-around is to skip the /dev/urandom fd when you are
closing all fds. This is the code we use:

def close_fd():
  # close all inherited file descriptors
  start_fd = 3
  # Python 2.4.1 and later use /dev/urandom to seed the random module's RNG;
  # a file descriptor is kept internally as os._urandomfd (created on demand
  # the first time os.urandom() is called), and should not be closed
  try:
    os.urandom(4)
    urandom_fd = getattr(os, '_urandomfd', None)
  except AttributeError:
    urandom_fd = None
  if '-close_fd' in sys.argv:
    start_fd = int(sys.argv[sys.argv.index('-close_fd') + 1])
  for fd in range(start_fd, 256):
    if fd == urandom_fd:
      continue
    try:
      os.close(fd)
    except OSError:
      pass


--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-19 16:53

Message:
Logged In: YES 
user_id=279987

We're facing this problem.  We're thinking of patching our os.py 
module to always open /dev/urandom on every call.  Does 
anybody know if this would have any bad consequences other 
than the obvious system call overhead?

BTW, here's the traceback we get.  As you probably can guess, 
something called os.urandom before we closed all file descriptors 
in the daemonizing code and it then failed when os.urandom 
tried to use the cached fd.

 Traceback (most recent call last):
   File "/opt/race/share/sw/common/bin/dpmd", line 27, in ?
     dpmd().run()
   File "Linux/CommandLineApp.py", line 336, in run
   File "Linux/daemonbase.py", line 324, in main
   File "Linux/server.py", line 61, in addServices
   File "Linux/dpmd.py", line 293, in __init__
   File "Linux/threadutils.py", line 44, in start
   File "Linux/xmlrpcd.py", line 165, in createThread
   File "Linux/threadutils.py", line 126, in __init__
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 423, in NamedTemporaryFile
     dir = gettempdir()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 262, in gettempdir
     tempdir = _get_default_tempdir()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 185, in _get_default_tempdir
     namer = _RandomNameSequence()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 121, in __init__
     self.rng = _Random()
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/random.py", line 96, in __init__
     self.seed(x)
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/pyt

[ python-Bugs-1186345 ] [AST] assert failure on ``eval("u'\Ufffffffe'")``

2005-04-19 Thread SourceForge.net
Bugs item #1186345, was opened at 2005-04-19 16:24
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186345&group_id=5470

Category: Parser/Compiler
Group: AST
Status: Open
Resolution: None
Priority: 5
Submitted By: Brett Cannon (bcannon)
Assigned to: Nobody/Anonymous (nobody)
Summary: [AST] assert failure on ``eval("u'\Ufffe'")``

Initial Comment:
Isolated the failure of test_unicode to be because of
the test of ``eval("u'\Ufffe'")``.  What is odd is
that the Unicode string works fine as a literal at the
interpreter prompt.  Somehow eval() is triggering this
problem.
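(In later CPython releases the same out-of-range escape is rejected with an ordinary SyntaxError rather than an assert failure; a quick check, Python 3 shown:)

```python
# \Ufffffffe names a code point above the U+10FFFF ceiling, so compiling
# the literal fails with a clean SyntaxError in modern CPython instead
# of tripping a C-level assert.
try:
    eval(r"'\Ufffffffe'")
    outcome = 'accepted'
except SyntaxError:
    outcome = 'rejected with SyntaxError'
print(outcome)
```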

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186345&group_id=5470



[ python-Bugs-1186353 ] [AST] automatic unpacking of arguments broken

2005-04-19 Thread SourceForge.net
Bugs item #1186353, was opened at 2005-04-19 16:37
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186353&group_id=5470

Category: Parser/Compiler
Group: AST
Status: Open
Resolution: None
Priority: 5
Submitted By: Brett Cannon (bcannon)
Assigned to: Nobody/Anonymous (nobody)
Summary: [AST] automatic unpacking of arguments broken

Initial Comment:
The code ``(lambda (x, y): x)((3, 5))`` fails because
the passed-in tuple is not unpacked into the arguments.
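(For reference, tuple parameter unpacking is a Python 2-only feature, later removed by PEP 3113; the behavior the compiler must implement is equivalent to explicit indexing, sketched here in Python 3 syntax:)

```python
# Python 2:  (lambda (x, y): x)((3, 5))  ->  3
# The compiler must unpack the single tuple argument implicitly,
# which is equivalent to indexing it explicitly:
first = lambda pair: pair[0]
print(first((3, 5)))  # 3
```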

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186353&group_id=5470



[ python-Bugs-1186195 ] [AST] genexps get scoping wrong

2005-04-19 Thread SourceForge.net
Bugs item #1186195, was opened at 2005-04-19 12:02
Message generated for change (Comment added) made by bcannon
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470

Category: Parser/Compiler
Group: AST
Status: Open
Resolution: None
Priority: 5
Submitted By: Brett Cannon (bcannon)
Assigned to: Nick Coghlan (ncoghlan)
Summary: [AST] genexps get scoping wrong

Initial Comment:
test_genexps is failing because it is unable to find a
global defined in a genexp that is returned.  Here is
the problem simplified:

def f(n): return (i for i in xrange(n))
list(f(10))

Leads to ``SystemError: no locals when loading 'xrange'``.

Comparing Python 2.4 bytecode:

  1           0 LOAD_CONST               1 (<code object <genexpr> at 0x3931e0, file "<string>", line 1>)
              3 MAKE_FUNCTION            0
              6 LOAD_GLOBAL              0 (xrange)
              9 LOAD_FAST                0 (n)
             12 CALL_FUNCTION            1
             15 GET_ITER
             16 CALL_FUNCTION            1
             19 RETURN_VALUE
             20 LOAD_CONST               0 (None)
             23 RETURN_VALUE

to AST bytecode:

  1           0 LOAD_CLOSURE             0 (n)
              3 BUILD_TUPLE              1
              6 LOAD_CONST               1 (<code object <genexpr> at 0x5212e8, file "<string>", line 1>)
              9 MAKE_CLOSURE             0
             12 LOAD_NAME                0 (xrange)
             15 LOAD_DEREF               0 (n)
             18 CALL_FUNCTION            1
             21 GET_ITER
             22 CALL_FUNCTION            1
             25 RETURN_VALUE
             26 LOAD_CONST               0 (None)
             29 RETURN_VALUE

makes it obvious something is off (no peepholer; turned
it off in my build of 2.4).

Looks like extraneous work is being done in making a
closure.  Seems like it will still work, though.

Plus the usage of LOAD_NAME is wrong in the AST;
LOAD_NAME gets an object from the local namespace based
on its name instead of offset.


--

>Comment By: Brett Cannon (bcannon)
Date: 2005-04-19 17:09

Message:
Logged In: YES 
user_id=357491

Some playing with gdb has turned up some clues.  So
LOAD_NAME is emitted by compiler_nameop().  How this
partially works is that it gets the scope for an argument
and then based on that emits the proper load, store, or delete.

So why is xrange() coming out at the NAME scope?  Well,
turns out it is not being found in any particular scope, so
PyST_GetScope() returns 0 by default.  This
means that NAME becomes the scope for xrange().

So it looks like the symtable creation is screwing up and
not making xrange() a global like it should.  This might be
a side-effect of the whole closure thing above.  Not sure,
though.
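(The expected outcome can be checked with the dis module: in a correctly compiled function body, a builtin such as range, xrange in 2.x, is loaded with LOAD_GLOBAL; the dynamic LOAD_NAME should not appear. Sketch in modern Python:)

```python
import dis

def f(n):
    return (i for i in range(n))

# A correct symbol table classifies range as a global inside f, so the
# body uses LOAD_GLOBAL rather than LOAD_NAME (dynamic lookup).
ops = {ins.opname for ins in dis.get_instructions(f)}
print(sorted(ops))
```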

--

Comment By: Brett Cannon (bcannon)
Date: 2005-04-19 12:03

Message:
Logged In: YES 
user_id=357491

Initially assigned to Nick since he did the genexp patch and
in hopes he might know what is going on off the top of his
head.  Otherwise assign to me.

I have a sneaking suspicion that the symtable code overall
is slightly busted.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470



[ python-Bugs-1186195 ] [AST] genexps get scoping wrong

2005-04-19 Thread SourceForge.net
Bugs item #1186195, was opened at 2005-04-19 12:02
Message generated for change (Comment added) made by bcannon
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470

Category: Parser/Compiler
Group: AST
Status: Open
Resolution: None
Priority: 5
Submitted By: Brett Cannon (bcannon)
Assigned to: Nick Coghlan (ncoghlan)
Summary: [AST] genexps get scoping wrong

Initial Comment:
test_genexps is failing because it is unable to find a
global defined in a genexp that is returned.  Here is
the problem simplified:

def f(n): return (i for i in xrange(n))
list(f(10))

Leads to ``SystemError: no locals when loading 'xrange'``.

Comparing Python 2.4 bytecode:

  1           0 LOAD_CONST               1 (<code object <genexpr> at 0x3931e0, file "<string>", line 1>)
              3 MAKE_FUNCTION            0
              6 LOAD_GLOBAL              0 (xrange)
              9 LOAD_FAST                0 (n)
             12 CALL_FUNCTION            1
             15 GET_ITER
             16 CALL_FUNCTION            1
             19 RETURN_VALUE
             20 LOAD_CONST               0 (None)
             23 RETURN_VALUE

to AST bytecode:

  1           0 LOAD_CLOSURE             0 (n)
              3 BUILD_TUPLE              1
              6 LOAD_CONST               1 (<code object <genexpr> at 0x5212e8, file "<string>", line 1>)
              9 MAKE_CLOSURE             0
             12 LOAD_NAME                0 (xrange)
             15 LOAD_DEREF               0 (n)
             18 CALL_FUNCTION            1
             21 GET_ITER
             22 CALL_FUNCTION            1
             25 RETURN_VALUE
             26 LOAD_CONST               0 (None)
             29 RETURN_VALUE

makes it obvious something is off (no peepholer; turned
it off in my build of 2.4).

Looks like extraneous work is being done in making a
closure.  Seems like it will still work, though.

Plus the usage of LOAD_NAME is wrong in the AST;
LOAD_NAME gets an object from the local namespace based
on its name instead of offset.


--

>Comment By: Brett Cannon (bcannon)
Date: 2005-04-19 17:30

Message:
Logged In: YES 
user_id=357491

OK, figured out why the closure thing is happening.  'n' is
being detected as a free variable.  This leads to
PyCode_GetNumFree() to return a non-0 value.  In
compiler_make_closure() this automatically triggers the
LOAD_CLOSURE/.../MAKE_CLOSURE chunk of bytecode instead of
LOAD_CONST/MAKE_FUNCTION.

So, how to make 'n' be detected as a local instead a free
variable ...

--

Comment By: Brett Cannon (bcannon)
Date: 2005-04-19 17:09

Message:
Logged In: YES 
user_id=357491

Some playing with gdb has turned up some clues.  So
LOAD_NAME is emitted by compiler_nameop().  How this
partially works is that it gets the scope for an argument
and then based on that emits the proper load, store, or delete.

So why is xrange() coming out at the NAME scope?  Well,
turns out it is not being found in any particular scope, so
PyST_GetScope() returns 0 by default.  This
means that NAME becomes the scope for xrange().

So it looks like the symtable creation is screwing up and
not making xrange() a global like it should.  This might be
a side-effect of the whole closure thing above.  Not sure,
though.

--

Comment By: Brett Cannon (bcannon)
Date: 2005-04-19 12:03

Message:
Logged In: YES 
user_id=357491

Initially assigned to Nick since he did the genexp patch and
in hopes he might know what is going on off the top of his
head.  Otherwise assign to me.

I have a sneaking suspicion that the symtable code overall
is slightly busted.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1186195&group_id=5470



[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-19 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-05 21:03
Message generated for change (Comment added) made by lcaamano
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-19 22:31

Message:
Logged In: YES 
user_id=279987

Clearly broken?  Hardly.

Daemonization code is not the only place where it is recommended
and standard practice to close file descriptors.

It's unreasonable to expect python programs to keep track of all 
the possible file descriptors the python library might cache to 
make sure it doesn't close them in all the daemonization 
routines ... btw, contrary to standard unix programming practices.

Are there any other file descriptors we should know about?



--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-19 18:27

Message:
Logged In: YES 
user_id=81797

Perhaps the best way to resolve this would be for the
standard library to provide code that either does the
daemonize process, or at least does the closing of the
sockets that may be done as part of the daemonize, that way
it's clear what the "right" way is to do it.  Thoughts?

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2005-04-19 16:05

Message:
Logged In: YES 
user_id=6380

I recommend closing this as invalid. The daemonization code
is clearly broken.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-19 13:49

Message:
Logged In: YES 
user_id=110477

Modifying the system os.py is not a good idea. A better
work-around is to skip the /dev/urandom fd when you are
closing all fds. This is the code we use:

def close_fd():
  # close all inherited file descriptors
  start_fd = 3
  # Python 2.4.1 and later use /dev/urandom to seed the random module's RNG;
  # a file descriptor is kept internally as os._urandomfd (created on demand
  # the first time os.urandom() is called), and should not be closed
  try:
    os.urandom(4)
    urandom_fd = getattr(os, '_urandomfd', None)
  except AttributeError:
    urandom_fd = None
  if '-close_fd' in sys.argv:
    start_fd = int(sys.argv[sys.argv.index('-close_fd') + 1])
  for fd in range(start_fd, 256):
    if fd == urandom_fd:
      continue
    try:
      os.close(fd)
    except OSError:
      pass


--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-19 12:53

Message:
Logged In: YES 
user_id=279987

We're facing this problem.  We're thinking of patching our os.py 
module to always open /dev/urandom on every call.  Does 
anybody know if this would have any bad consequences other 
than the obvious system call overhead?

BTW, here's the traceback we get.  As you probably can guess, 
something called os.urandom before we closed all file descriptors 
in the daemonizing code and it then failed when os.urandom 
tried to use the cached fd.

 Traceback (most recent call last):
   File "/opt/race/share/sw/common/bin/dpmd", line 27, in ?
     dpmd().run()
   File "Linux/CommandLineApp.py", line 336, in run
   File "Linux/daemonbase.py", line 324, in main
   File "Linux/server.py", line 61, in addServices
   File "Linux/dpmd.py", line 293, in __init__
   File "Linux/threadutils.py", line 44, in start
   File "Linux/xmlrpcd.py", line 165, in createThread
   File "Linux/threadutils.py", line 126, in __init__
   File "/opt/race/share/sw/os/Linux_2.4_i686/python/lib/python2.4/tempfile.py", line 423, in N

[ python-Bugs-1177468 ] random.py/os.urandom robustness

2005-04-19 Thread SourceForge.net
Bugs item #1177468, was opened at 2005-04-06 01:03
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1177468&group_id=5470

Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Fazal Majid (majid)
Assigned to: Nobody/Anonymous (nobody)
Summary: random.py/os.urandom robustness

Initial Comment:
Python 2.4.1 now uses os.urandom() to seed the random
number generator. This is mostly an improvement, but
can lead to subtle regression bugs.

os.urandom() will open /dev/urandom on demand, e.g.
when random.Random.seed() is called, and keep it alive
as os._urandomfd.

It is standard programming practice for a daemon
process to close file descriptors it has inherited from
its parent process, and if it closes the file
descriptor corresponding to os._urandomfd, the os
module is blissfully unaware and the next time
os.urandom() is called, it will try to read from a
closed file descriptor (or worse, a new one opened
since), with unpredictable results.

My recommendation would be to make os.urandom() open
/dev/urandom each time and not keep a persistent file
descriptor. This will be slightly slower, but more
robust. I am not sure how I feel about a standard
library function stealing a file descriptor slot forever,
especially when os.urandom() is probably going to be
called only once in the lifetime of a program, when the
random module is seeded.


--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-20 02:49

Message:
Logged In: YES 
user_id=81797

Conversely, I would say that it's unreasonable to expect
other things not to break if you go through and close file
descriptors that the standard library has opened.

--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-20 02:31

Message:
Logged In: YES 
user_id=279987

Clearly broken?  Hardly.

Daemonization code is not the only place where it is recommended
and standard practice to close file descriptors.

It's unreasonable to expect python programs to keep track of all 
the possible file descriptors the python library might cache to 
make sure it doesn't close them in all the daemonization 
routines ... btw, contrary to standard unix programming practices.

Are there any other file descriptors we should know about?



--

Comment By: Sean Reifschneider (jafo)
Date: 2005-04-19 22:27

Message:
Logged In: YES 
user_id=81797

Perhaps the best way to resolve this would be for the
standard library to provide code that either performs the
daemonization, or at least the closing of the descriptors
that is done as part of it, so that it's clear what the
"right" way is to do it.  Thoughts?
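
A descriptor-closing helper of the kind being suggested might look
like this (a sketch only; the name close_fds is illustrative, and
os._urandomfd is the cached descriptor discussed in this report):

```python
import os

def close_fds(fds, keep=()):
    """Close each descriptor in fds, sparing any listed in keep.

    A daemonize routine would typically pass range(3, MAXFD) as fds
    and put descriptors the standard library holds open into keep.
    """
    keep = set(keep)
    for fd in fds:
        if fd in keep:
            continue
        try:
            os.close(fd)
        except OSError:
            pass  # descriptor was not open

# e.g.: close_fds(range(3, 256), keep=[getattr(os, '_urandomfd', -1)])
```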

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2005-04-19 20:05

Message:
Logged In: YES 
user_id=6380

I recommend to close this as invalid. The daemonization code
is clearly broken.

--

Comment By: Fazal Majid (majid)
Date: 2005-04-19 17:49

Message:
Logged In: YES 
user_id=110477

Modifying the system os.py is not a good idea. A better
work-around is to skip the /dev/urandom fd when you are
closing all fds. This is the code we use:

def close_fd():
    # close all inherited file descriptors
    start_fd = 3
    # Python 2.4.1 and later use /dev/urandom to seed the random
    # module's RNG; a file descriptor is kept internally as
    # os._urandomfd (created on demand the first time os.urandom()
    # is called), and should not be closed
    try:
        os.urandom(4)
        urandom_fd = getattr(os, '_urandomfd', None)
    except AttributeError:
        urandom_fd = None
    if '-close_fd' in sys.argv:
        start_fd = int(sys.argv[sys.argv.index('-close_fd') + 1])
    for fd in range(start_fd, 256):
        if fd == urandom_fd:
            continue
        try:
            os.close(fd)
        except OSError:
            pass


--

Comment By: Luis P Caamano (lcaamano)
Date: 2005-04-19 16:53

Message:
Logged In: YES 
user_id=279987

We're facing this problem.  We're thinking of patching our os.py 
module to always open /dev/urandom on every call.  Does 
anybody know if this would have any bad consequences other 
than the obvious system call overhead?

BTW, here's the traceback we get.  As you probably can guess, 
something called os.urandom before we closed all file descriptors 
in the daemonizing code and it then failed when os.urandom 
tried to use the cached fd.

 Traceback (most recent call last):
   File  "/opt/race/share/sw/common/bin/dpmd", line 27, in ?
 dpmd().run()
   File  "Linux/CommandLineApp.py", line 336, in run
   File  "Linux/daemonbase.py", line 324, in main
   File  "Linux/serve

[ python-Bugs-1120777 ] bug in unichr() documentation

2005-04-19 Thread SourceForge.net
Bugs item #1120777, was opened at 2005-02-11 13:54
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1120777&group_id=5470

Category: Documentation
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Marko Kreen (mkz)
>Assigned to: Fred L. Drake, Jr. (fdrake)
Summary: bug in unichr() documentation

Initial Comment:
http://www.python.org/doc/2.4/lib/built-in-funcs.html:

> Return the Unicode string of one character whose
Unicode
> code is the integer i.
> [...]
> The argument must be in the range [0..65535], inclusive.

unichr.__doc__ says:
> Return a Unicode string of one character with ordinal
> i; 0 <= i <= 0x10FFFF.

Which is correct?


--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-20 02:57

Message:
Logged In: YES 
user_id=81797

Fred: The attached patch looks good to me.

--

Comment By: Marko Kreen (mkz)
Date: 2005-02-11 14:38

Message:
Logged In: YES 
user_id=894541

The main problem for me was that the 65535 hints that unichr()
may want UTF-16 values, not Unicode code points.  That was
rather confusing.

Ok, the attached patch clarifies unichr()'s range.

--

Comment By: M.-A. Lemburg (lemburg)
Date: 2005-02-11 14:03

Message:
Logged In: YES 
user_id=38388

Whether unichr() handles the UCS2 or the UCS4 range depends
on the configuration option you set at Python compile time.
Perhaps we should extend the documentation to mention this
difference ?!

Doc patches are welcome :-)
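
The build-dependent limit is visible at runtime via sys.maxunicode
(a quick check for readers, not part of any proposed doc patch):

```python
import sys

# Narrow (UCS-2) builds report 0xFFFF (65535); wide (UCS-4) builds
# report 0x10FFFF (1114111), matching the unichr() upper bound.
print(sys.maxunicode)
```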

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1120777&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1119439 ] Python Programming FAQ should be updated for Python 2.4

2005-04-19 Thread SourceForge.net
Bugs item #1119439, was opened at 2005-02-09 17:17
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1119439&group_id=5470

Category: Documentation
Group: Python 2.4
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Michael Hoffman (hoffmanm)
Assigned to: Nobody/Anonymous (nobody)
Summary: Python Programming FAQ should be updated for Python 2.4

Initial Comment:
For example:

* "1.4.3 How do I iterate over a sequence in reverse
order?" should use reversed()
* "1.4.4   How do you remove duplicates from a list?"
should use set()
* "1.5.2   I want to do a complicated sort: can you do
a Schwartzian Transform in Python?" should use sort(key=)
* section 1.6 should use new-style classes

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-20 03:31

Message:
Logged In: YES 
user_id=81797

I've submitted this to the [EMAIL PROTECTED] list for
perusal.  Michael Hoffman: if you can make a patch of the
suggested changes, that would help the process along.  I'm
considering it closed here; please let me know if you think
that is inappropriate.

--

Comment By: Terry J. Reedy (tjreedy)
Date: 2005-02-16 22:49

Message:
Logged In: YES 
user_id=593130

I believe the FAQs are only maintained at www.python.org 
and not at SourceForge.  If so, you should email your 
comments to [EMAIL PROTECTED] and/or 
[EMAIL PROTECTED]

I strongly urge you to work your comments up into a set of 
specific patches (in text form).  In other words, give a 
specific suggested text to replace or augment the old one.  
Since there is no paid editor, this will make changes much 
easier and therefore more likely sooner.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1119439&group_id=5470



[ python-Bugs-1117048 ] Dictionary Parsing Problem

2005-04-19 Thread SourceForge.net
Bugs item #1117048, was opened at 2005-02-05 22:50
Message generated for change (Comment added) made by jafo
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1117048&group_id=5470

Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 2
Submitted By: WalterBrunswick (walterbrunswick)
Assigned to: Nobody/Anonymous (nobody)
Summary: Dictionary Parsing Problem

Initial Comment:
Python Version: Python 2.4
OS Version: Microsoft Windows XP [Version 5.1.2600]

There may or may not be (according to some people on 
#python on irc.freenode.net) a fault, if you will, in 
the way that Python evaluates dictionaries (in this 
case, globals()).

 apparently printing of dicts are optimized 
not to produce the entire output string before 
printing it

This may be an advantage when it comes to processing 
speed and memory management, but when it comes to 
catching errors, this could very well be a problem.

(I am waltz.)

...
 you think it should print a dict with an error 
in it?
 ha! caught ya there!
 well, if there's an error, it should break.
 and it did.
 It shouldn't print AT ALL.
 why not?
 Because of the ERROR!
...

I'm saying it should raise an abstract exception 
(something like DictError) BEFORE printing, as to 
allow the programmer to handle the error.

Some people don't agree with my argument, but I'll let 
you (the core developers) decide what's best. I just 
thought I should mention this issue, to help improve 
Python to the max!

--

>Comment By: Sean Reifschneider (jafo)
Date: 2005-04-20 03:48

Message:
Logged In: YES 
user_id=81797

Just to chime in, I am ok with the current behavior.

--

Comment By: Terry J. Reedy (tjreedy)
Date: 2005-02-17 04:30

Message:
Logged In: YES 
user_id=593130

The Python interpreter does not print blank lines except as 
requested.  While a starting linefeed would look nicer in this 
particular circumstance (error while outputting), I would not 
want it for two reasons: 1) the circumstance is extremely 
rare, should usually only result from a program bug, and 
can hardly be detected, so a starting linefeed would 
nearly always produce an extraneous blank line; 2) without 
the linefeed, it is completely clear that the error occurred in 
mid-sentence, so to speak.

Looking at the informative output, which I don't see as 
nasty, I see the following:
* the error occurred while trying to get the repr of the object 
(a module) associated with the name 'const'.
* in const, you defined a function with the special name 
__getattr__, which apparently gets treated as a method that 
overrides the default.  (I was not aware you could do this for 
modules.)
* because of the presence of __getattr__, or perhaps 
because of something else in const, repr() asked const, via 
__getattr__, whether it has a method __repr__.
* your __getattr__ raised proprietary const.ConstError
* since this is something other than the expected "return the 
(computed) attribute value or raise an AttributeError 
exception" (Ref Man 3.3.2 Customizing attribute access), 
this ended the globals() call and triggered a traceback.

Without seeing the code for const, I wonder whether you 
really need a custom __setattr__ and __getattr__ (once 
fixed) or should get rid of them and use the default 
mechanism.
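
The contract cited here, that __getattr__ must raise AttributeError
and nothing else, can be seen with a small class (an illustration
only; the const module itself is not shown in this report):

```python
class Noisy(object):
    def __getattr__(self, name):
        # Raising anything other than AttributeError here violates
        # the protocol in Ref Man 3.3.2: callers such as getattr()
        # cannot treat it as "attribute missing", so it propagates,
        # which is exactly how const.ConstError ended the globals()
        # repr in the traceback above.
        raise RuntimeError("no such attribute: %s" % name)

n = Noisy()
```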

Unless you see a real bug that I missed, please close this 
report.


--

Comment By: Scott Baldwin (linkmastersab)
Date: 2005-02-17 00:08

Message:
Logged In: YES 
user_id=1135743

My opinion on the most feasible solution is to simply insert
a linebreak before the traceback begins, so that the
traceback stands out more. An exception for this kind of
trivial bug is too expensive for any good that would come
out of it.

--

Comment By: WalterBrunswick (walterbrunswick)
Date: 2005-02-16 23:05

Message:
Logged In: YES 
user_id=1164416

The interpreter begins parsing the dictionary immediately, 
without buffering and checking for errors beforehand. In 
the case that an error occurs, such as a faulty key, the 
interpreter outputs a nasty-looking error - including the 
error message as part of the dictionary, as you can see 
from the log (ErrorLog2.log). There is no way that this can 
be prevented without locating the source of the problem. 
What I'm saying is that the int

[ python-Feature Requests-1185121 ] itertools.imerge: merge sequences

2005-04-19 Thread SourceForge.net
Feature Requests item #1185121, was opened at 2005-04-18 12:11
Message generated for change (Settings changed) made by jneb
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1185121&group_id=5470

Category: Python Library
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Jurjen N.E. Bos (jneb)
Assigned to: Raymond Hettinger (rhettinger)
Summary: itertools.imerge: merge sequences

Initial Comment:
(For the itertools library, so Python 2.2 and up)
This is a suggested addition to itertools, proposed name imerge.
usage: imerge(seq0, seq1, ..., [key=])
result: imerge assumes the sequences are all in sorted order, and 
produces an iterator that returns pairs of the form (value, index),
where value is a value of one of the sequences, and index is the 
index number of the given sequence.
The output of imerge is in sorted order (taking into account the 
key function), so that identical values in the sequences will be 
produced from left to right.
The code is surprisingly short, making use of the built-in heapq 
module.
(You may disagree with my style of argument handling; feel free to 
optimize it.)
def imerge(*iterlist, **key):
    """Merge a sequence of sorted iterables.

    Returns pairs [value, index] where each value comes from
    iterlist[index], and the pairs are sorted if each of the
    iterators is sorted.
    Hint: use groupby(imerge(...), operator.itemgetter(0)) to get
    the items one by one.
    """
    if key.keys() not in ([], ["key"]):
        raise TypeError, "Excess keyword arguments for imerge"
    key = key.get("key", lambda x: x)
    from heapq import heapreplace, heappop
    # initialize the heap of (inited, key, index, item, iterator) tuples;
    # this automatically makes sure all iterators are initialized,
    # then run, and finally emptied
    heap = [(False, None, index, None, iter(iterator))
            for index, iterator in enumerate(iterlist)]
    while heap:
        inited, keyed, index, value, iterator = heap[0]
        if inited:
            yield value, index
        try:
            item = iterator.next()
        except StopIteration:
            heappop(heap)
        else:
            heapreplace(heap, (True, key(item), index, item, iterator))

If you find this little routine worth its size, please put it into 
itertools.

- Jurjen
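
For readers re-running this today, the same heap technique can be
rendered in modern syntax (a hypothetical re-implementation of the
proposal, not stdlib code; Python 2.6 later added the closely related
heapq.merge()):

```python
from heapq import heapify, heappop, heapreplace

def imerge_sketch(*iterables, **kw):
    """Yield (value, index) pairs in sorted order.

    Assumes each input iterable is itself sorted; equal values
    come out left to right because index breaks ties in the heap.
    """
    key = kw.get("key", lambda x: x)
    heap = []
    for index, it in enumerate(iterables):
        it = iter(it)
        for first in it:          # prime the heap with one item each
            heap.append((key(first), index, first, it))
            break
    heapify(heap)
    while heap:
        keyed, index, value, it = heap[0]
        yield value, index
        for nxt in it:            # replace the head with its successor
            heapreplace(heap, (key(nxt), index, nxt, it))
            break
        else:                     # iterator exhausted: drop it
            heappop(heap)
```

For example, list(imerge_sketch([1, 3, 5], [2, 3, 6])) yields
(1, 0), (2, 1), (3, 0), (3, 1), (5, 0), (6, 1).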

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2005-04-19 18:58

Message:
Logged In: YES 
user_id=80475

For your specific application, it is better to use sorted().
 When the underlying data consists of long runs of
previously ordered data, sorted() will take advantage of
that ordering and run in O(n) time.  In contrast, using a
heap will unnecessarily introduce O(n log n) behavior and
not exploit the underlying data order.

Recommend that you close this request.  This discussion thus
far confirms the original conclusion that imerge() use cases
are dominated by sorted(chain(*iterlist)) which gives code
that is shorter, faster, and easier to understand.
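
The in-memory alternative being recommended is short enough to show
inline:

```python
from itertools import chain

runs = [[1, 4, 9], [2, 3, 10], [5, 8]]
# sorted() exploits pre-existing runs (Timsort's galloping merge),
# so on already-ordered input this is close to O(n).
merged = sorted(chain(*runs))
print(merged)  # [1, 2, 3, 4, 5, 8, 9, 10]
```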

--

Comment By: Jurjen N.E. Bos (jneb)
Date: 2005-04-19 08:19

Message:
Logged In: YES 
user_id=446428

Well, I was optimizing a piece of code with reasonably long sorted lists (in 
memory, I agree) that were modified in all kinds of ways. I did not want 
the n log n behaviour of sort, so I started writing a merge routine.
I found out that the boundary cases of a merge implementation are a 
mess, until I discovered the heap trick. Then I decided to clean it up 
and put it up for a library routine.
The fact that it uses iterators is only to make it more general, not 
specifically for the "lazy" properties.
- Jurjen

--

Comment By: Raymond Hettinger (rhettinger)
Date: 2005-04-18 22:43

Message:
Logged In: YES 
user_id=80475

I had previously looked at an imerge() utility and found
that it had only a single application (isomorphic to lazy
mergesorting) and that the use cases were dominated by the
in-memory alternative:  sorted(chain(*iterlist)).

Short of writing an external mergesort, what applications
did you have in mind?  What situations have you encountered
where you have multiple sources of sorted data being
generated on the fly (as opposed to already being
in-memory), have needed one element at a time sequential
access to a combined sort of that data, needed that combined
sort only once, and could not afford to have the dataset
in-memory?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1185121&group_id=5470