[ python-Bugs-1227955 ] shelve/bsddb crash on db close

2006-01-24 Thread SourceForge.net
Bugs item #1227955, was opened at 2005-06-26 16:38
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1227955&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Scott (ses4j)
Assigned to: Nobody/Anonymous (nobody)
Summary: shelve/bsddb crash on db close

Initial Comment:
I have a 300 meg bsddb/hash db created and accessed by shelve.  No
problems when running python only.  But I started accessing the code
that opens it via a Windows DLL, opening and closing the DB on
PROCESS_ATTACH and DETACH.  All of a sudden, it would crash in the
bsddb module on closing/del'ing the db.

Found a workaround by opening the db with
shelve.BsddbShelf(..) instead of shelve.open(..) - then
it closed fine when the DLL unloaded, no crash.
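
The workaround amounts to controlling the shelf's lifetime explicitly
instead of leaving it to garbage collection during DLL/interpreter
teardown. A minimal sketch of that idea, using the modern shelve module
(bsddb is long gone from the stdlib, and the temp-file path here is
purely illustrative):

```python
import os
import shelve
import tempfile

# Illustrative path; any writable location works.
db_path = os.path.join(tempfile.mkdtemp(), "store")

# Open and close the shelf deterministically, rather than relying on
# __del__ firing during process or DLL teardown.
shelf = shelve.open(db_path, flag="c")
try:
    shelf["key"] = {"size_mb": 300}   # values go through pickle
finally:
    shelf.close()                     # explicit close; no teardown surprises

# Reopen read-only to confirm the data survived the close.
shelf = shelve.open(db_path, flag="r")
assert shelf["key"]["size_mb"] == 300
shelf.close()
```

The explicit try/finally is the same discipline the BsddbShelf
workaround happened to enforce: the handle is closed while the module
state is still intact.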

--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 00:03

Message:
Logged In: YES 
user_id=33168

Perhaps this is related to bug 1413192?  Could you try the
patch there and see if it fixes this problem?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1227955&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-788526 ] Closing dbenv first bsddb doesn't release locks & segfaults

2006-01-24 Thread SourceForge.net
Bugs item #788526, was opened at 2003-08-13 22:13
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Submitted By: Jane Austine (janeaustine50)
Assigned to: Gregory P. Smith (greg)
Summary: Closing dbenv first bsddb doesn't release locks & segfaults

Initial Comment:
There is test code named test_env_close in bsddb/test, but it doesn't
test the case thoroughly. There seems to be a bug in closing the db
environment first -- the lock is not released, and sometimes it
seg-faults.

Following is the code that shows this bug.


import os
from bsddb import db

dir, dbname = 'test_dbenv', 'test_db'

def getDbEnv(dir):
    try:
        os.mkdir(dir)
    except OSError:
        pass
    dbenv = db.DBEnv()
    dbenv.open(dir, db.DB_INIT_CDB | db.DB_CREATE | db.DB_INIT_MPOOL)
    return dbenv

def getDbHandler(db_env, db_name):
    d = db.DB(db_env)   # was db.DB(dbenv), which silently used the global
    d.open(db_name, db.DB_BTREE, db.DB_CREATE)
    return d

dbenv = getDbEnv(dir)
assert dbenv.lock_stat()['nlocks'] == 0
d = getDbHandler(dbenv, dbname)
assert dbenv.lock_stat()['nlocks'] == 1
try:
    dbenv.close()
except db.DBError:
    pass
else:
    assert 0

del d
import gc
gc.collect()
dbenv = getDbEnv(dir)
assert dbenv.lock_stat()['nlocks'] == 0, \
    'number of current locks should be 0'  # this fails


If you close dbenv before the db handler, the lock is not released.
Moreover, try this with dbshelve and it segfaults.

>>> from bsddb import dbshelve
>>> dbenv2=getDbEnv('test_dbenv2')
>>> d2=dbshelve.open(dbname,dbenv=dbenv2)
>>> try:
...     dbenv2.close()
... except db.DBError:
...     pass
... else:
...     assert 0
... 
>>> 
Exception bsddb._db.DBError: (0, 'DBEnv object has been closed') in
Segmentation fault


Tested on:
 1. linux with Python 2.3 final, Berkeley DB 4.1.25
 2. windows xp with Python 2.3 final (with the _bsddb that comes along)


--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 00:04

Message:
Logged In: YES 
user_id=33168

Jane, could you try the patch in bug 1413192 to see if it fixes
your problem?

--

Comment By: Gregory P. Smith (greg)
Date: 2004-06-16 15:18

Message:
Logged In: YES 
user_id=413

Yes, this bug is still there.  The "workaround" is just "don't do
that": don't close sleepycat DBEnv objects while there are still open
objects using them.  I believe we can prevent this...

One proposal: internally in _bsddb.c, DBEnv could be made to keep weak
references to all objects created using it (DB and DBLock objects) and
refuse to call the sleepycat close() method if any still exist
(overridable using a force=1 flag).
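
The proposal can be sketched in pure Python (names here are
hypothetical; the real change would live in Modules/_bsddb.c):

```python
import weakref

class EnvInUseError(Exception):
    """Raised when close() is refused because children are still open."""

class Env:
    """Toy stand-in for DBEnv that weakly tracks objects created from it."""
    def __init__(self):
        self._children = weakref.WeakSet()
        self.closed = False

    def register(self, child):
        self._children.add(child)

    def close(self, force=False):
        # Refuse to close while any DB/DBLock-like child is still open,
        # unless the caller overrides with force=True.
        if not force and any(not c.closed for c in self._children):
            raise EnvInUseError("open handles still use this environment")
        self.closed = True

class Handle:
    """Toy stand-in for a DB opened against an Env."""
    def __init__(self, env):
        env.register(self)
        self.closed = False

    def close(self):
        self.closed = True

env = Env()
h = Handle(env)
try:
    env.close()        # refused: h is still open
except EnvInUseError:
    pass
h.close()
env.close()            # succeeds once the handle is closed
assert env.closed
```

Because the tracking set holds only weak references, handles that have
already been garbage-collected no longer block the close.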


--

Comment By: Neal Norwitz (nnorwitz)
Date: 2004-06-15 20:14

Message:
Logged In: YES 
user_id=33168

Greg, do you know anything about this?  Is it still a problem?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-495682 ] cannot handle http_proxy with user:pass@

2006-01-24 Thread SourceForge.net
Bugs item #495682, was opened at 2001-12-21 01:22
Message generated for change (Comment added) made by loewis
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=495682&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Feature Request
>Status: Closed
>Resolution: Fixed
Priority: 3
Submitted By: Matthias Klose (doko)
Assigned to: Nobody/Anonymous (nobody)
Summary: cannot handle http_proxy with user:pass@

Initial Comment:
[please CC [EMAIL PROTECTED]; the original report can be found at
http://bugs.debian.org/120013 ]

I tried to use an http_proxy variable which looks like:
http://user:[EMAIL PROTECTED]:3128/

with pass like \jkIoPd{

And I got this error :

Traceback (most recent call last):
  File "/usr/bin/reportbug", line 1146, in ?
    main()
  File "/usr/bin/reportbug", line 628, in main
    http_proxy)
  File "/usr/lib/site-python/reportbug_ui_text.py", line 314, in handle_bts_query
    archived=archived)
  File "/usr/lib/site-python/debianbts.py", line 575, in get_reports
    result = get_cgi_reports(package, system, http_proxy, archived)
  File "/usr/lib/site-python/debianbts.py", line 494, in get_cgi_reports
    page = urlopen(url, proxies=proxies)
  File "/usr/lib/site-python/debianbts.py", line 382, in urlopen
    return _urlopener.open(url)
  File "/usr/lib/python2.1/urllib.py", line 176, in open
    return getattr(self, name)(url)
  File "/usr/lib/python2.1/urllib.py", line 277, in open_http
    h = httplib.HTTP(host)
  File "/usr/lib/python2.1/httplib.py", line 663, in __init__
    self._conn = self._connection_class(host, port)
  File "/usr/lib/python2.1/httplib.py", line 342, in __init__
    self._set_hostport(host, port)
  File "/usr/lib/python2.1/httplib.py", line 348, in _set_hostport
    port = int(host[i+1:])
ValueError: invalid literal for int(): \jkIoPd[EMAIL PROTECTED]:3128

But if I use http_proxy=http://10.0.0.1:3128/, it works well.


--

>Comment By: Martin v. Löwis (loewis)
Date: 2006-01-24 16:52

Message:
Logged In: YES 
user_id=21627

Fixed with said patch.

--

Comment By: Johannes Nicolai (jonico)
Date: 2005-11-05 18:09

Message:
Logged In: YES 
user_id=863272

I have proposed a patch for this in the patch section: 1349118 is the
number of the patch.
URL:
https://sourceforge.net/tracker/index.php?func=detail&aid=1349118&group_id=5470&atid=305470

The patch also solves some other issues with proxies (it now correctly
handles protocols where a proxy was specified but not supported, and
https proxies will also be used if a host requires www-authentication).

--

Comment By: Guido van Rossum (gvanrossum)
Date: 2001-12-28 23:28

Message:
Logged In: YES 
user_id=6380

This is a feature request. If someone submits a patch, we'll
happily apply it.

It looks like urllib2 already supports this feature; you
could try using that.
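
The failure above comes from splitting host:port on a ':' before the
user:pass@ credentials have been stripped. A sketch of the safer
parsing order, using the modern urllib.parse for illustration (the
helper name is hypothetical):

```python
from urllib.parse import unquote, urlsplit

def split_proxy(proxy_url):
    """Split an http_proxy value into (user, password, host, port).

    urlsplit separates any user:pass@ credentials *before* the
    host:port split -- doing it in the other order is exactly the
    ValueError reported here.
    """
    parts = urlsplit(proxy_url)
    user = unquote(parts.username) if parts.username else None
    password = unquote(parts.password) if parts.password else None
    return user, password, parts.hostname, parts.port

# A credentialed proxy URL parses cleanly...
assert split_proxy("http://user:secret@10.0.0.1:3128/") == \
    ("user", "secret", "10.0.0.1", 3128)
# ...and so does one without credentials.
assert split_proxy("http://10.0.0.1:3128/") == (None, None, "10.0.0.1", 3128)
```

Passwords containing URL-special characters (such as the "{" in this
report) should be percent-encoded in the proxy string; unquote then
restores them.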

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=495682&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1413790 ] zipfile: inserting some filenames produces corrupt .zips

2006-01-24 Thread SourceForge.net
Bugs item #1413790, was opened at 2006-01-24 09:57
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413790&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Grant Olson (logistix)
Assigned to: Nobody/Anonymous (nobody)
Summary: zipfile: inserting some filenames produces corrupt .zips

Initial Comment:
Running something like the following produces a corrupt
.zip file.  The builtin XP zip folder view won't show
any documents and attempting to extract via "right
click -> Extract files..." will set off an untrusted
file alert:

import zipfile
z = zipfile.ZipFile("c:\\foo.zip","w")
z.write("c:\\autoexec.bat", "\\autoexec.bat")
z.close()

zipfile should either throw an error when adding these
files or attempt to normalize the path.  I would prefer
that zipfile make the assumption that any files
starting with absolute or relative pathnames
("\\foo\\bar.txt" or ".\\foo\\bar.txt") should join in
at the root of the zipfile ("foo\\bar.txt" in this case).

Patch to accomplish the latter is attached.
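
The normalization the report asks for can be sketched as follows (the
helper name is hypothetical, and the actual attached patch may differ
in detail):

```python
import io
import ntpath
import zipfile

def normalized_arcname(name):
    """Strip drive letters and leading '\\' or '.\\' segments so the
    member joins in at the root of the archive, e.g.
    'c:\\foo\\bar.txt' -> 'foo/bar.txt'."""
    drive, tail = ntpath.splitdrive(name)
    tail = tail.replace("\\", "/")
    while tail.startswith(("/", "./")):
        tail = tail[2:] if tail.startswith("./") else tail[1:]
    return tail

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(normalized_arcname("\\autoexec.bat"), b"@echo off\r\n")

with zipfile.ZipFile(buf) as z:
    # No leading separator, so archive tools treat the entry as valid.
    assert z.namelist() == ["autoexec.bat"]
```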

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413790&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by rshura
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears that the associate call, necessary to create a secondary
index, segfaults when:
1. There is a large amount of data
2. The environment is transactional.

http://www.gramps-project.org/files/bsddb/testcase.tar.gz contains
the example code and two databases, pm.db and pm_ok.db -- both have the
same number of keys, and each data item is a pickled tuple with two
elements. The second index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second db has much
smaller data items in data[0].

If the environment is set up and opened without TXN then pm.db is also
fine.  This seems like a problem in the associate call in a TXN
environment that is only seen with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me, I would go out of
my way to help resolving this or finding a work-around.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.
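
For context, associate registers a callback that derives a secondary
key from each primary record, after which Berkeley DB maintains the
secondary index automatically. A toy model of that mechanism with
plain dicts (no bsddb), mirroring the report's pickled 2-tuples indexed
on data[1]:

```python
import pickle

primary = {}    # primary key -> pickled record
secondary = {}  # secondary key -> set of primary keys

def secondary_key(key, data):
    # The callback: records are pickled 2-tuples, and the secondary
    # index is built over the unpickled data[1].
    return pickle.loads(data)[1]

def put(key, value):
    data = pickle.dumps(value)
    primary[key] = data
    # What db.associate automates: keep the secondary index in sync.
    secondary.setdefault(secondary_key(key, data), set()).add(key)

put(b"rec1", ("a large data blob " * 4, "surname-a"))
put(b"rec2", ("small payload", "surname-b"))
assert secondary["surname-a"] == {b"rec1"}
```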

--

>Comment By: Alex Roitman (rshura)
Date: 2006-01-24 10:50

Message:
Logged In: YES 
user_id=498357

Thanks for a quick response!

OK, first things first: your simpler testcase seems to expose yet
another problem, not the one I had. In particular, your testcase
segfaults for me on python2.4.2/bsddb4.3.0 but *does not* segfault
with python2.3.5/bsddb4.2.0.5.

In my testcase, I can definitely blame the segfault on the associate
call, not on open. I can demonstrate it by either commenting out the
associate call (no segfault) or by inserting a print statement right
before the associate.

So your testcase does not seem to have the exact same problem as my
testcase. In my testcase nothing seems to depend on variable names (as
one would expect). I am rebuilding python2.4 with your patch; will
post results soon.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-23 23:03

Message:
Logged In: YES 
user_id=33168

I spoke too soon.  The attached patch works for me on your original
test case and on my pared-down version.  It also passes the tests, and
it fixes a potential memory leak.  Let me know if this works for you.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-23 22:45

Message:
Logged In: YES 
user_id=33168

I've got a much simpler test case.  The problem seems to be
triggered when the txn is deleted after the env (in
Modules/_bsddb.c 917 vs 966).  If I change the variable
names in python, I don't get the same behaviour (ie, it
doesn't crash).

I removed the original data file, but if you change the_txn
to txn, that might "fix" the problem.  If not, try playing
with different variable names and see if you can get it to
not crash.  Obviously there needs to be a real fix in C
code, but I'm not sure what needs to happen.  It doesn't
look like we keep enough info to do this properly.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-23 12:41

Message:
Logged In: YES 
user_id=498357

Attaching test3.py containing same code without
transactions. Works fine with either pm.db or pm_ok.db

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by rshura
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears, that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data
2. Environment is transactional.

The
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
 contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys and each
data item is a pickled tuple with two elements. The
second index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. Seems like a problem in
associate call in a TXN environment, that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me, I would go out of
my way to help resolving this or finding a work-around.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:31

Message:
Logged In: YES 
user_id=498357

OK, built and installed all kinds of python packages with
the patch. All tests are fine. Here goes:

1. Your testcase goes just fine, no segfault with the
patched version.
2. Mine still segfaults.
3. I ran mine under gdb with python2.4-dbg package, here's
the output (printed "shurafine" is my addition, to make sure
that the correct code is being run):

$ gdb python2.4-dbg
GNU gdb 6.4-debian
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public
License, and you are
welcome to change it and/or distribute copies of it under
certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show
warranty" for details.
This GDB was configured as "i486-linux-gnu"...Using host
libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210038592 (LWP 29629)]
shurafine

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210038592 (LWP 29629)]
0xb7b57f3e in DB_associate (self=0xb7db9f58, args=0xb7dbd3b4,
kwargs=0xb7db5e94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1219
1219        Py_DECREF(self->associateCallback);
(gdb)

Please let me know if I can be of further assistance.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 10:50

Message:
Logged In: YES 
user_id=498357

Thanks for a quick response!

OK, first thing first: your simpler testcase seems to expose
yet another problem, not the one I had. In particular, your
testcase segfaults for me on python2.4.2/bsddb4.3.0 but
*does not* segfault with python2.3.5/bsddb4.2.0.5

In my testcase, I can definitely blame the segfault on the
associate call, not on open. I can demonstrate it by either
commenting out the associate call (no segfault) or by
inserting a print statement right before the associate.

So your testcase does not seem to have an exact same problem
than my testcase. In my testcase nothing seems to depend on
variable names (as one would expect). I am rebuilding
python2.4 with your patch, will post results soon.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-23 23:03

Message:
Logged In: YES 
user_id=33168

I spoke too soon.  The attached patch works for me or your
original test case and my pared down version.  It also
passes the tests.  It also fixes a potential memory leak. 
Let me know if this works for you.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-23 22:45

Message:
Logged In: YES 
user_id=33168

I've got a much simpler test case.  The problem seems to be
triggered when the txn is deleted after the env (in
Modules/_bsddb.c 917 vs 966).  If I change the variable
names in python, I don't get the same behaviour (ie, it
doesn't crash).

I removed the original data file, but if you change the_txn to txn,
that might "fix" the problem.

[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by rshura
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears, that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data
2. Environment is transactional.

The
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
 contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys and each
data item is a pickled tuple with two elements. The
second index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. Seems like a problem in
associate call in a TXN environment, that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me, I would go out of
my way to help resolving this or finding a work-around.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:37

Message:
Logged In: YES 
user_id=498357

Ran the same tests on another Debian sid machine, with exactly the
same results (up to one line number, due to my extra fprintf
statement):

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210390848 (LWP 5865)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210390848 (LWP 5865)]
0xb7b01eb4 in DB_associate (self=0xb7d63df0, args=0xb7d67234,
kwargs=0xb7d5ee94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1218
1218        Py_DECREF(self->associateCallback);
(gdb) 

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:31

Message:
Logged In: YES 
user_id=498357

OK, built and installed all kinds of python packages with
the patch. All tests are fine. Here goes:

1. Your testcase goes just fine, no segfault with the
patched version.
2. Mine still segfaults.
3. I ran mine under gdb with python2.4-dbg package, here's
the output (printed "shurafine" is my addition, to make sure
that the correct code is being run):

$ gdb python2.4-dbg
GNU gdb 6.4-debian
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public
License, and you are
welcome to change it and/or distribute copies of it under
certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show
warranty" for details.
This GDB was configured as "i486-linux-gnu"...Using host
libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210038592 (LWP 29629)]
shurafine

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210038592 (LWP 29629)]
0xb7b57f3e in DB_associate (self=0xb7db9f58, args=0xb7dbd3b4,
kwargs=0xb7db5e94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1219
1219        Py_DECREF(self->associateCallback);
(gdb)

Please let me know if I can be of further assistance.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 10:50

Message:
Logged In: YES 
user_id=498357

Thanks for a quick response!

OK, first thing first: your simpler testcase seems to expose
yet another problem, not the one I had. In particular, your
testcase segfaults for me on python2.4.2/bsddb4.3.0 but
*does not* segfault with python2.3.5/bsddb4.2.0.5

In my testcase, I can definitely blame the segfault on the
associate call, not on open. I can demonstrate it by either
commenting out the associate call (no segfault) or by
inserting a print statement right before the associate.

So your testcase does not seem to have an exact same problem
than my testcase. In my testcase nothing seems to depend on
variable names (as one would expect). I am rebuilding
python2.4 with your patch, will post results soon.

--


[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears, that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data
2. Environment is transactional.

The
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
 contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys and each
data item is a pickled tuple with two elements. The
second index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. Seems like a problem in
associate call in a TXN environment, that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me, I would go out of
my way to help resolving this or finding a work-around.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 11:40

Message:
Logged In: YES 
user_id=33168

Could you pull the version of Modules/_bsddb.c out of SVN
and then apply my patch?  I believe your new problem was
fixed recently.  You can look in the Misc/NEWS file to find
the exact patch.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:37

Message:
Logged In: YES 
user_id=498357

Done same tests on another Debian sid machine, exact same
results (up to one line number, due to my extra fprintf
statement):

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210390848 (LWP 5865)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210390848 (LWP 5865)]
0xb7b01eb4 in DB_associate (self=0xb7d63df0, args=0xb7d67234,
kwargs=0xb7d5ee94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1218
1218        Py_DECREF(self->associateCallback);
(gdb) 

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:31

Message:
Logged In: YES 
user_id=498357

OK, built and installed all kinds of python packages with
the patch. All tests are fine. Here goes:

1. Your testcase goes just fine, no segfault with the
patched version.
2. Mine still segfaults.
3. I ran mine under gdb with python2.4-dbg package, here's
the output (printed "shurafine" is my addition, to make sure
that the correct code is being run):

$ gdb python2.4-dbg
GNU gdb 6.4-debian
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public
License, and you are
welcome to change it and/or distribute copies of it under
certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show
warranty" for details.
This GDB was configured as "i486-linux-gnu"...Using host
libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210038592 (LWP 29629)]
shurafine

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210038592 (LWP 29629)]
0xb7b57f3e in DB_associate (self=0xb7db9f58, args=0xb7dbd3b4,
kwargs=0xb7db5e94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1219
1219        Py_DECREF(self->associateCallback);
(gdb)

Please let me know if I can be of further assistance.

--


[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by greg
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears, that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data
2. Environment is transactional.

The
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
 contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys and each
data item is a pickled tuple with two elements. The
second index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. Seems like a problem in
associate call in a TXN environment, that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me, I would go out of
my way to help resolving this or finding a work-around.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 12:14

Message:
Logged In: YES 
user_id=413

FWIW, your patch looks good.  It makes sense for a DBTxn to hold a
reference to its DBEnv.

(I suspect there may still be problems if someone calls DBEnv.close
while there are outstanding DBTxns, but doing something about that
would be a lot more work, if it's an actual issue.)
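
The patch's core idea is that each DBTxn holds a strong reference to
the DBEnv that created it, so the environment cannot be deallocated
while a transaction still points into it. In pure-Python terms (class
names hypothetical; the real fix is in C):

```python
deallocated = []

class Env:
    """Stand-in for DBEnv; records when it is finalized."""
    def __del__(self):
        deallocated.append("env")

class Txn:
    """Stand-in for DBTxn; the strong reference pins the environment."""
    def __init__(self, env):
        self.env = env  # the reference the patch adds
    def __del__(self):
        deallocated.append("txn")

env = Env()
txn = Txn(env)

del env                     # drop the user's last name for the environment
assert deallocated == []    # the txn still pins the Env alive

del txn                     # the txn is finalized first...
assert deallocated == ["txn", "env"]   # ...then the env it was pinning
```

This destruction order falls out of CPython's reference counting; the
C-level patch achieves the same pinning by incref'ing the environment
when the transaction is created.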

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 11:40

Message:
Logged In: YES 
user_id=33168

Could you pull the version of Modules/_bsddb.c out of SVN
and then apply my patch?  I believe your new problem was
fixed recently.  You can look in the Misc/NEWS file to find
the exact patch.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:37

Message:
Logged In: YES 
user_id=498357

Done same tests on another Debian sid machine, exact same
results (up to one line number, due to my extra fprintf
statement):

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210390848 (LWP 5865)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210390848 (LWP 5865)]
0xb7b01eb4 in DB_associate (self=0xb7d63df0, args=0xb7d67234,
kwargs=0xb7d5ee94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1218
1218        Py_DECREF(self->associateCallback);
(gdb) 

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:31

Message:
Logged In: YES 
user_id=498357

OK, built and installed all kinds of python packages with
the patch. All tests are fine. Here goes:

1. Your testcase goes just fine, no segfault with the
patched version.
2. Mine still segfaults.
3. I ran mine under gdb with python2.4-dbg package, here's
the output (printed "shurafine" is my addition, to make sure
that the correct code is being run):

$ gdb python2.4-dbg
GNU gdb 6.4-debian
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public
License, and you are
welcome to change it and/or distribute copies of it under
certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show
warranty" for details.
This GDB was configured as "i486-linux-gnu"...Using host
libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210038592 (LWP 29629)]
shurafine

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210038592 (LWP 29629)]
0xb7b57f3e in DB_associate (self=0xb7db9f58, args=0xb7dbd3b4,
kwargs=0xb7db5e94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1219
1219            Py_DECREF(self->associateCallback);
(gdb)

Please let me know if I can be of further assistance.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 10:50

Message:
Logged In: YES 
user_

[ python-Bugs-1414018 ] email.Utils.py: UnicodeError in RFC2322 header

2006-01-24 Thread SourceForge.net
Bugs item #1414018, was opened at 2006-01-25 05:19
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1414018&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: A. Sagawa (qbin)
Assigned to: Nobody/Anonymous (nobody)
Summary: email.Utils.py: UnicodeError in RFC2322 header

Initial Comment:
Description:
collapse_rfc2231_value does not handle the UnicodeError
exception. Therefore a header like the one below can raise
UnicodeError while attempting the unicode conversion.

---
Content-Type: text/plain; charset="ISO-2022-JP"
Content-Disposition: attachment;
 filename*=iso-2022-jp''%1B%24BJs9p%3Dq%2D%21%1B%28B%2Etxt
---

Test script:
---
#! /usr/bin/env python
import sys
import email

msg = email.message_from_file(sys.stdin)
for part in msg.walk():
  print part.get_params()
  print part.get_filename()
---
run
% env LANG=ja_JP.eucJP ./test.py < attached_sample.eml

Background:
Character 0x2d21 is invalid in JIS X0208 but defined in
CP932 (Microsoft's superset of Shift_JIS).  Conversion
between Shift_JIS and ISO-2022-JP is possible
because both of them are based on JIS X0208, so
CP932 characters sometimes appear in ISO-2022-JP encoded
strings, typically those produced by a Windows MUA.
But Python's "ISO-2022-JP" codec means *pure* JIS X0208,
so the conversion fails.

Workaround:
Convert to fallback_charset and/or skip the invalid character.
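
The suggested workaround can be sketched as a small helper (written in modern Python 3 for clarity; the function name and fallback behaviour are hypothetical illustrations of the suggestion, not the actual email.Utils API):

```python
def decode_with_fallback(raw, charset, fallback_charset="us-ascii"):
    """Decode raw header bytes with the declared charset; on failure,
    retry while replacing invalid characters, then fall back to
    fallback_charset, instead of letting UnicodeError escape."""
    try:
        return raw.decode(charset)
    except (UnicodeError, LookupError):
        pass
    try:
        # skip/replace characters invalid in the declared charset
        return raw.decode(charset, "replace")
    except (UnicodeError, LookupError):
        return raw.decode(fallback_charset, "replace")
```

With a fallback like this, a CP932-contaminated ISO-2022-JP filename degrades to replacement characters instead of raising.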

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1414018&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1414018 ] email.Utils.py: UnicodeError in RFC2322 header

2006-01-24 Thread SourceForge.net
Bugs item #1414018, was opened at 2006-01-24 21:19
Message generated for change (Settings changed) made by birkenfeld
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1414018&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: A. Sagawa (qbin)
>Assigned to: Barry A. Warsaw (bwarsaw)
Summary: email.Utils.py: UnicodeError in RFC2322 header

Initial Comment:
Description:
collapse_rfc2231_value does not handle the UnicodeError
exception. Therefore a header like the one below can raise
UnicodeError while attempting the unicode conversion.

---
Content-Type: text/plain; charset="ISO-2022-JP"
Content-Disposition: attachment;
 filename*=iso-2022-jp''%1B%24BJs9p%3Dq%2D%21%1B%28B%2Etxt
---

Test script:
---
#! /usr/bin/env python
import sys
import email

msg = email.message_from_file(sys.stdin)
for part in msg.walk():
  print part.get_params()
  print part.get_filename()
---
run
% env LANG=ja_JP.eucJP ./test.py < attached_sample.eml

Background:
Character 0x2d21 is invalid in JIS X0208 but defined in
CP932 (Microsoft's superset of Shift_JIS).  Conversion
between Shift_JIS and ISO-2022-JP is possible
because both of them are based on JIS X0208, so
CP932 characters sometimes appear in ISO-2022-JP encoded
strings, typically those produced by a Windows MUA.
But Python's "ISO-2022-JP" codec means *pure* JIS X0208,
so the conversion fails.

Workaround:
Convert to fallback_charset and/or skip the invalid character.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1414018&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by rshura
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data.
2. The environment is transactional.

The tarball at
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys, and each
data item is a pickled tuple with two elements. The
secondary index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. It seems like a problem in the
associate call in a TXN environment that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me; I would go out of
my way to help resolve this or find a workaround.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.
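
The reported setup can be sketched roughly as follows (an illustration only, not the actual testcase from the tarball; it assumes the bsddb package, and find_surname stands in for whatever secondary-key callback the testcase uses):

```python
import pickle

try:
    from bsddb import db  # "bsddb3" on PyPI for modern Pythons
except ImportError:
    db = None

def find_surname(key, data):
    # Secondary-key callback: index over the unpickled data[1],
    # as described in the report.
    return pickle.loads(data)[1]

def open_with_secondary(envdir, txn_env=True):
    """Open pm.db in an (optionally transactional) environment and
    create the secondary index via associate -- the call that is
    reported to segfault with TXN and large data items."""
    flags = db.DB_CREATE | db.DB_INIT_MPOOL
    if txn_env:
        flags |= db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_TXN
    env = db.DBEnv()
    env.open(envdir, flags)
    txn = env.txn_begin() if txn_env else None
    person_map = db.DB(env)
    person_map.open("pm.db", dbtype=db.DB_HASH, flags=db.DB_CREATE, txn=txn)
    surnames = db.DB(env)
    surnames.open("surnames.db", dbtype=db.DB_BTREE,
                  flags=db.DB_CREATE, txn=txn)
    # The reported crash happens inside this call:
    person_map.associate(surnames, find_surname, db.DB_CREATE, txn=txn)
    if txn is not None:
        txn.commit()
    return person_map, surnames, env
```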

--

>Comment By: Alex Roitman (rshura)
Date: 2006-01-24 12:50

Message:
Logged In: YES 
user_id=498357

With the SVN version of _bsddb.c I no longer have segfault
with my test. Instead I have the following exception:

Traceback (most recent call last):
  File "test2.py", line 37, in ?
    person_map.associate(surnames,find_surname,db.DB_CREATE,txn=the_txn)
MemoryError: (12, 'Cannot allocate memory -- Lock table is
out of available locks')

Now, please bear with me here if you can. It's easy to shrug
it off by saying that I simply don't have enough locks for this
huge txn. But the exact same code works fine with the
pm_ok.db file from my testcase, and that file has the exact
same number of elements and the exact same structure for both
the data and the secondary-index computation. So one would
think it needs the exact same number of locks, and yet it
works while pm.db does not.

The only difference between the two data files is that in
each data item, data[0] is much larger in pm.db and smaller
in pm_ok.db.

Is it remotely possible that the actual error has nothing to
do with locks but rather with the data size? What can I do
to find out or fix this?

Thanks for your help!

--

Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 12:14

Message:
Logged In: YES 
user_id=413

fwiw your patch looks good.  it makes sense for a DBTxn to
hold a reference to its DBEnv.

(I suspect there may still be problems if someone calls
DBEnv.close while there are outstanding DBTxn's but doing
something about that would be a lot more work if it's an
actual issue)

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 11:40

Message:
Logged In: YES 
user_id=33168

Could you pull the version of Modules/_bsddb.c out of SVN
and then apply my patch?  I believe your new problem was
fixed recently.  You can look in the Misc/NEWS file to find
the exact patch.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:37

Message:
Logged In: YES 
user_id=498357

I ran the same tests on another Debian sid machine, with exactly
the same results (up to one line number, due to my extra fprintf
statement):

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210390848 (LWP 5865)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210390848 (LWP 5865)]
0xb7b01eb4 in DB_associate (self=0xb7d63df0, args=0xb7d67234,
kwargs=0xb7d5ee94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1218
1218            Py_DECREF(self->associateCallback);
(gdb) 

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:31

Message:
Logged In: YES 
user_id=498357

OK, built and installed all kinds of python packages with
the patch. All tests are fine. Here goes:

1. Your testcase goes just fine, no segfault with the
patched version.
2. Mine still segfaults.
3. I ran mine under gdb with python2.4-dbg package; here's
the output:

[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by rshura
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data.
2. The environment is transactional.

The tarball at
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys, and each
data item is a pickled tuple with two elements. The
secondary index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. It seems like a problem in the
associate call in a TXN environment that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me; I would go out of
my way to help resolve this or find a workaround.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Alex Roitman (rshura)
Date: 2006-01-24 13:12

Message:
Logged In: YES 
user_id=498357

Tried increasing locks, lockers, and locked objects to 1
each, and it seems to help. So I guess the number of locks is
data-size specific. I guess this is indeed a lock issue, so
it's my problem now and not yours :-)

Thanks for your help!
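
The tuning described above, raising the lock-table limits before the environment is opened, might look like this sketch (the limit of 10000 is an arbitrary stand-in, since the exact value used is garbled in the message):

```python
try:
    from bsddb import db  # "bsddb3" on PyPI for modern Pythons
except ImportError:
    db = None

def open_env_with_more_locks(envdir, n=10000):
    """Raise the lock-table limits before DBEnv.open; the defaults
    (typically 1000 in Berkeley DB 4.x) are easily exhausted by one
    large txn over a page-locked database with big data items."""
    env = db.DBEnv()
    # All three limits must be set before the environment is opened.
    env.set_lk_max_locks(n)
    env.set_lk_max_lockers(n)
    env.set_lk_max_objects(n)
    env.open(envdir,
             db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
             db.DB_INIT_LOG | db.DB_INIT_TXN)
    return env
```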

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 12:50

Message:
Logged In: YES 
user_id=498357

With the SVN version of _bsddb.c I no longer have segfault
with my test. Instead I have the following exception:

Traceback (most recent call last):
  File "test2.py", line 37, in ?
    person_map.associate(surnames,find_surname,db.DB_CREATE,txn=the_txn)
MemoryError: (12, 'Cannot allocate memory -- Lock table is
out of available locks')

Now, please bear with me here if you can. It's easy to shrug
it off by saying that I simply don't have enough locks for this
huge txn. But the exact same code works fine with the
pm_ok.db file from my testcase, and that file has the exact
same number of elements and the exact same structure for both
the data and the secondary-index computation. So one would
think it needs the exact same number of locks, and yet it
works while pm.db does not.

The only difference between the two data files is that in
each data item, data[0] is much larger in pm.db and smaller
in pm_ok.db.

Is it remotely possible that the actual error has nothing to
do with locks but rather with the data size? What can I do
to find out or fix this?

Thanks for your help!

--

Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 12:14

Message:
Logged In: YES 
user_id=413

fwiw your patch looks good.  it makes sense for a DBTxn to
hold a reference to its DBEnv.

(I suspect there may still be problems if someone calls
DBEnv.close while there are outstanding DBTxn's but doing
something about that would be a lot more work if it's an
actual issue)

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 11:40

Message:
Logged In: YES 
user_id=33168

Could you pull the version of Modules/_bsddb.c out of SVN
and then apply my patch?  I believe your new problem was
fixed recently.  You can look in the Misc/NEWS file to find
the exact patch.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:37

Message:
Logged In: YES 
user_id=498357

I ran the same tests on another Debian sid machine, with exactly
the same results (up to one line number, due to my extra fprintf
statement):

(gdb) run test2.py
Starting program: /usr/bin/python2.4-dbg test2.py
[Thread debugging using libthread_db enabled]
[New Thread -1210390848 (LWP 5865)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1210390848 (LWP 5865)]
0xb7b01eb4 in DB_associate (self=0xb7d63df0, args=0xb7d67234,
kwargs=0xb7d5ee94) at
/home/shura/src/python2.4-2.4.2/Modules/_bsddb.c:1218
1218            Py_DECREF(self->associateCallback);
(gdb) 

--

[ python-Bugs-1295808 ] expat symbols should be namespaced in pyexpat

2006-01-24 Thread SourceForge.net
Bugs item #1295808, was opened at 2005-09-19 21:44
Message generated for change (Comment added) made by tmick
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1295808&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Trent Mick (tmick)
Assigned to: Martin v. Löwis (loewis)
Summary: expat symbols should be namespaced in pyexpat

Initial Comment:
The Problem:
- you embed Python in some app
- the app dynamically loads libexpat of version X
- the embedded Python imports pyexpat (which was built
against
  libexpat version X+n)
--> pyexpat gets the expat symbols from the already
loaded and *older* libexpat: crash.  (Specifically, the
crash we observed was in getting an old XML_ErrorString
(from xmlparse.c) and then calling it with newer values
in the XML_Error enum:

  // pyexpat.c, line 1970
  ...
  // Added in Expat 1.95.7.
  MYCONST(XML_ERROR_UNBOUND_PREFIX);
  ...


The Solution:
Prefix all exported symbols with "PyExpat_". This is
similar to what Mozilla does for some common libs:
http://lxr.mozilla.org/seamonkey/source/modules/libimg/png/mozpngconf.h#115


I'll attach the gdb backtrace that we were getting and
a patch.

--

>Comment By: Trent Mick (tmick)
Date: 2006-01-24 23:34

Message:
Logged In: YES 
user_id=34892

> This seems to be a duplicate of bug #1075984. 

You are right.

> Trent, is this patch sufficient to meet your embedding
> needs so that nothing else needs to be done?

Yes.

> I do not think this patch can be applied to 2.4.

That's fine. Getting this into Python >=2.5 would be good
enough.

Martin,
Have you had a chance to review this?

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2005-09-30 06:01

Message:
Logged In: YES 
user_id=33168

This seems to be a duplicate of bug #1075984.  I like this
patch better, but perhaps both patches (the one here and the
other bug report) should be implemented?

I think Martin helps maintain pyexpat.  Maybe he has some
ideas about either or both of these bugs/patches.  Martin,
do you think these are safe to apply?  I can apply the
patch(es) if you think it's safe.

Trent, is this patch sufficient to meet your embedding needs
so that nothing else needs to be done?

I do not think this patch can be applied to 2.4.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1295808&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by greg
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data.
2. The environment is transactional.

The tarball at
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys, and each
data item is a pickled tuple with two elements. The
secondary index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. It seems like a problem in the
associate call in a TXN environment that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me; I would go out of
my way to help resolve this or find a workaround.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 16:35

Message:
Logged In: YES 
user_id=413

BerkeleyDB uses page locking so it makes sense that a
database with larger data objects in it would require more
locks assuming it is internally locking each page.  That
kind of tuning gets into BerkeleyDB internals where I
suspect people on the comp.databases.berkeleydb newsgroup
could answer things better.

Glad it's working for you now.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 13:12

Message:
Logged In: YES 
user_id=498357

Tried increasing locks, lockers, and locked objects to 1
each, and it seems to help. So I guess the number of locks is
data-size specific. I guess this is indeed a lock issue, so
it's my problem now and not yours :-)

Thanks for your help!

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 12:50

Message:
Logged In: YES 
user_id=498357

With the SVN version of _bsddb.c I no longer have segfault
with my test. Instead I have the following exception:

Traceback (most recent call last):
  File "test2.py", line 37, in ?
    person_map.associate(surnames,find_surname,db.DB_CREATE,txn=the_txn)
MemoryError: (12, 'Cannot allocate memory -- Lock table is
out of available locks')

Now, please bear with me here if you can. It's easy to shrug
it off by saying that I simply don't have enough locks for this
huge txn. But the exact same code works fine with the
pm_ok.db file from my testcase, and that file has the exact
same number of elements and the exact same structure for both
the data and the secondary-index computation. So one would
think it needs the exact same number of locks, and yet it
works while pm.db does not.

The only difference between the two data files is that in
each data item, data[0] is much larger in pm.db and smaller
in pm_ok.db.

Is it remotely possible that the actual error has nothing to
do with locks but rather with the data size? What can I do
to find out or fix this?

Thanks for your help!

--

Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 12:14

Message:
Logged In: YES 
user_id=413

fwiw your patch looks good.  it makes sense for a DBTxn to
hold a reference to its DBEnv.

(I suspect there may still be problems if someone calls
DBEnv.close while there are outstanding DBTxn's but doing
something about that would be a lot more work if it's an
actual issue)

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 11:40

Message:
Logged In: YES 
user_id=33168

Could you pull the version of Modules/_bsddb.c out of SVN
and then apply my patch?  I believe your new problem was
fixed recently.  You can look in the Misc/NEWS file to find
the exact patch.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 11:37

Message:
Logged In: YES 
user_id=498357

I ran the same tests on another Debian sid machine, with exactly
the same results (up to one line number, due to my extra fprintf
statement):

[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by rshura
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data.
2. The environment is transactional.

The tarball at
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys, and each
data item is a pickled tuple with two elements. The
secondary index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. It seems like a problem in the
associate call in a TXN environment that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me; I would go out of
my way to help resolve this or find a workaround.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Alex Roitman (rshura)
Date: 2006-01-24 18:21

Message:
Logged In: YES 
user_id=498357

While you guys are here, can I ask you if there's a way to
return to a checkpoint made in a Txn-aware database?
Specifically, is there a way to return to the latest
checkpoint from within Python?

My problem is that if my data import fails in the middle, I
want to undo some transactions that were already committed,
to get a clean import undo. A checkpoint seems like a nice
way to do that, if only I could get back to it :-)
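
For what it's worth, a Berkeley DB checkpoint is a recovery marker, not a rollback point; the usual way to get an all-or-nothing import from Python is one enclosing transaction that is aborted on failure. A hedged sketch (assuming the bsddb package; the function names are mine):

```python
try:
    from bsddb import db  # "bsddb3" on PyPI for modern Pythons
except ImportError:
    db = None

def run_import(env, do_import):
    """Run do_import(txn) under a single enclosing transaction so a
    mid-import failure undoes every write made so far, rather than
    trying to return to an earlier checkpoint after the fact."""
    txn = env.txn_begin()
    try:
        do_import(txn)        # all writes must pass txn= through
    except Exception:
        txn.abort()           # undo everything done under txn
        raise
    txn.commit()
    env.txn_checkpoint()      # flush the now-committed state to disk
```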

--

Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 16:35

Message:
Logged In: YES 
user_id=413

BerkeleyDB uses page locking so it makes sense that a
database with larger data objects in it would require more
locks assuming it is internally locking each page.  That
kind of tuning gets into BerkeleyDB internals where I
suspect people on the comp.databases.berkeleydb newsgroup
could answer things better.

Glad it's working for you now.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 13:12

Message:
Logged In: YES 
user_id=498357

Tried increasing locks, lockers, and locked objects to 1
each, and it seems to help. So I guess the number of locks is
data-size specific. I guess this is indeed a lock issue, so
it's my problem now and not yours :-)

Thanks for your help!

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 12:50

Message:
Logged In: YES 
user_id=498357

With the SVN version of _bsddb.c I no longer have segfault
with my test. Instead I have the following exception:

Traceback (most recent call last):
  File "test2.py", line 37, in ?
    person_map.associate(surnames,find_surname,db.DB_CREATE,txn=the_txn)
MemoryError: (12, 'Cannot allocate memory -- Lock table is
out of available locks')

Now, please bear with me here if you can. It's easy to shrug
it off by saying that I simply don't have enough locks for this
huge txn. But the exact same code works fine with the
pm_ok.db file from my testcase, and that file has the exact
same number of elements and the exact same structure for both
the data and the secondary-index computation. So one would
think it needs the exact same number of locks, and yet it
works while pm.db does not.

The only difference between the two data files is that in
each data item, data[0] is much larger in pm.db and smaller
in pm_ok.db.

Is it remotely possible that the actual error has nothing to
do with locks but rather with the data size? What can I do
to find out or fix this?

Thanks for your help!

--

Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 12:14

Message:
Logged In: YES 
user_id=413

fwiw your patch looks good.  it makes sense for a DBTxn to
hold a reference to its DBEnv.

(I suspect there may still be problems if someone calls
DBEnv.close while there are outstanding DBTxn's but doing
something about that would be a lot more work if it's an
actual issue)

--

[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data.
2. The environment is transactional.

The tarball at
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys, and each
data item is a pickled tuple with two elements. The
secondary index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. It seems like a problem in the
associate call in a TXN environment that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me; I would go out of
my way to help resolve this or find a workaround.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 21:23

Message:
Logged In: YES 
user_id=33168

I'm sorry I'm not a Berkeley DB developer, I just play one
on TV.  :-)  Seriously, I don't know anything about BDB.  I
was just trying to get it stable.  Maybe Greg can answer
your question.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 18:21

Message:
Logged In: YES 
user_id=498357

While you guys are here, can I ask you if there's a way to
return to a checkpoint made in a Txn-aware database?
Specifically, is there a way to return to the latest
checkpoint from within Python?

My problem is that if my data import fails in the middle, I
want to undo some transactions that were already committed,
to get a clean import undo. A checkpoint seems like a nice
way to do that, if only I could get back to it :-)

--

Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 16:35

Message:
Logged In: YES 
user_id=413

BerkeleyDB uses page locking so it makes sense that a
database with larger data objects in it would require more
locks assuming it is internally locking each page.  That
kind of tuning gets into BerkeleyDB internals where I
suspect people on the comp.databases.berkeleydb newsgroup
could answer things better.

Glad it's working for you now.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 13:12

Message:
Logged In: YES 
user_id=498357

Tried increasing locks, lockers, and locked objects to 1
each, and it seems to help. So I guess the number of locks is
data-size specific. I guess this is indeed a lock issue, so
it's my problem now and not yours :-)

Thanks for your help!

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 12:50

Message:
Logged In: YES 
user_id=498357

With the SVN version of _bsddb.c I no longer have segfault
with my test. Instead I have the following exception:

Traceback (most recent call last):
  File "test2.py", line 37, in ?
    person_map.associate(surnames,find_surname,db.DB_CREATE,txn=the_txn)
MemoryError: (12, 'Cannot allocate memory -- Lock table is
out of available locks')

Now, please bear with me here if you can. It's easy to shrug
it off by saying that I simply don't have enough locks for this
huge txn. But the exact same code works fine with the
pm_ok.db file from my testcase, and that file has the exact
same number of elements and the exact same structure for both
the data and the secondary-index computation. So one would
think it needs the exact same number of locks, and yet it
works while pm.db does not.

The only difference between the two data files is that in
each data item, data[0] is much larger in pm.db and smaller
in pm_ok.db.

Is it remotely possible that the actual error has nothing to
do with locks but rather with the data size? What can I do
to find out or fix this?

Thanks for your help!

--

Comment By: Gregory P. Smith (greg)
Date:

[ python-Bugs-1413192 ] bsddb: segfault on db.associate call with Txn and large data

2006-01-24 Thread SourceForge.net
Bugs item #1413192, was opened at 2006-01-23 12:35
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1413192&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: Python 2.4
Status: Closed
Resolution: Fixed
Priority: 5
Submitted By: Alex Roitman (rshura)
Assigned to: Neal Norwitz (nnorwitz)
Summary: bsddb: segfault on db.associate call with Txn and large data

Initial Comment:
Problem confirmed on Python2.3.5/bsddb4.2.0.5 and
Python2.4.2/bsddb4.3.0 on Debian sid and Ubuntu Breezy.

It appears that the associate call, necessary to
create a secondary index, segfaults when:
1. There is a large amount of data
2. Environment is transactional.

The
http://www.gramps-project.org/files/bsddb/testcase.tar.gz
 contains the example code and two databases, pm.db and
pm_ok.db -- both have the same number of keys and each
data item is a pickled tuple with two elements. The
second index is created over the unpickled data[1]. The
pm.db segfaults and the pm_ok.db does not. The second
db has much smaller data items in data[0].

If the environment is set up and opened without TXN
then pm.db is also fine. Seems like a problem in
associate call in a TXN environment, that is only seen
with large enough data.

Please let me know if I can be of further assistance.
This is a show-stopper issue for me, I would go out of
my way to help resolving this or finding a work-around.

Thanks!
Alex

P.S. I could not attach the large file, probably due to
the size limit on the upload, hence a link to the testcase.

--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 21:31

Message:
Logged In: YES 
user_id=33168

Oh, I forgot to say thanks for the good bug report and
responding back.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 21:29

Message:
Logged In: YES 
user_id=33168

Committed revision 42177.
Committed revision 42178. (2.4)


--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 21:23

Message:
Logged In: YES 
user_id=33168

I'm sorry I'm not a Berkeley DB developer, I just play one
on TV.  :-)  Seriously, I don't know anything about BDB.  I
was just trying to get it stable.  Maybe Greg can answer
your question.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 18:21

Message:
Logged In: YES 
user_id=498357

While you guys are here, can I ask you if there's a way to
return to the checkpoint made in a Txn-aware database?
Specifically, is there a way to return to the latest
checkpoint from within Python?

My problem is that if my data import fails in the middle, I
want to undo some transactions that were committed, to have
a clean import undo. Checkpoint seems like a nice way to do
that, if only I could get back to it :-)

--
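The import-undo question above gets no direct answer in the thread. One common approach, sketched here as a hypothetical (assuming the whole import can run under one enclosing transaction, using bsddb's DBEnv/DBTxn `txn_begin`/`commit`/`abort` methods; `do_import` is an assumed callable), is to abort that enclosing transaction on failure rather than trying to rewind to a checkpoint:

```python
# Hypothetical sketch, not from the thread: run the whole import under a
# single BerkeleyDB transaction and abort it on failure. txn_begin(),
# commit() and abort() are the bsddb DBEnv/DBTxn methods; do_import is
# an assumed callable that takes the transaction handle.
def run_import(env, do_import):
    txn = env.txn_begin()
    try:
        do_import(txn)
    except Exception:
        txn.abort()    # undoes every write made under this txn
        raise
    txn.commit()
```

Whether this is practical depends on the lock-table sizing discussed elsewhere in this thread, since one huge transaction holds many locks at once.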

Comment By: Gregory P. Smith (greg)
Date: 2006-01-24 16:35

Message:
Logged In: YES 
user_id=413

BerkeleyDB uses page locking so it makes sense that a
database with larger data objects in it would require more
locks assuming it is internally locking each page.  That
kind of tuning gets into BerkeleyDB internals where I
suspect people on the comp.databases.berkeleydb newsgroup
could answer things better.

Glad it's working for you now.

--

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 13:12

Message:
Logged In: YES 
user_id=498357

Tried increasing locks, lockers, and locked objects to 1
each and seems to help. So I guess the number of locks is
data-size specific. I guess this is indeed a lock issue, so
it's my problem now and not yours :-)

Thanks for your help!

--
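The lock-limit increase described above can be sketched as follows. This is a hedged illustration, not code from the thread: `set_lk_max_locks`, `set_lk_max_lockers` and `set_lk_max_objects` are the bsddb DBEnv tuning methods and must be called before `open()`; the `db` module is passed in as a parameter so the sketch stays import-agnostic, and the limit of 10000 is an illustrative guess, not a recommendation.

```python
# Hedged sketch of the lock-table tuning discussed above: raise the
# DBEnv lock limits *before* the environment is opened. `db` is the
# bsddb (or bsddb3) db module, passed in explicitly; 10000 is an
# illustrative value only.
def open_env_with_bigger_lock_table(db, home, limit=10000):
    env = db.DBEnv()
    # All three limits must be set before env.open().
    env.set_lk_max_locks(limit)     # max concurrent locks
    env.set_lk_max_lockers(limit)   # max concurrent lockers
    env.set_lk_max_objects(limit)   # max concurrently locked objects
    env.open(home,
             db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
             db.DB_INIT_LOG | db.DB_INIT_TXN)
    return env
```

With the real module this would be called as `open_env_with_bigger_lock_table(db, 'envdir')` after `from bsddb import db`.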

Comment By: Alex Roitman (rshura)
Date: 2006-01-24 12:50

Message:
Logged In: YES 
user_id=498357

With the SVN version of _bsddb.c I no longer have segfault
with my test. Instead I have the following exception:

Traceback (most recent call last):
  File "test2.py", line 37, in ?
    person_map.associate(surnames,find_surname,db.DB_CREATE,txn=the_txn)
MemoryError: (12, 'Cannot allocate memory -- Lock table is out of available locks')

Now, please bear with me here if you can. It's easy to shrug
it off saying that I simply don't have enough locks for this
huge txn. But the exact same code works fine with the
pm_ok.db file from my testcase, and that file has exact same
number of elements and exact same structure of both the data
and the secondary index computation. So one would think that
it needs exact same number of locks, and yet it works while
pm.db does not.

The only difference between the two data files is that in
each data item, data[0] is much larger in pm.db and smaller
in pm_ok.db.

Is it remotely possible that the actual error has nothing to
do with locks but rather with the data size? What can I do
to find out or fix this?

Thanks for your help!

[ python-Bugs-1332873 ] BSD DB test failures for BSD DB 4.1

2006-01-24 Thread SourceForge.net
Bugs item #1332873, was opened at 2005-10-19 22:30
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1332873&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: None
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Neal Norwitz (nnorwitz)
Assigned to: Gregory P. Smith (greg)
Summary: BSD DB test failures for BSD DB 4.1

Initial Comment:
==
FAIL: test01_associateWithDB
(bsddb.test.test_associate.AssociateHashTestCase)
--
Traceback (most recent call last):
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 220, in test01_associateWithDB
self.finish_test(self.secDB)
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 278, in finish_test
assert vals[1] == 99 or vals[1] == '99', vals
AssertionError: ('99', 'unknown artist|Unnamed
song|Unknown')

==
FAIL: test02_associateAfterDB
(bsddb.test.test_associate.AssociateHashTestCase)
--
Traceback (most recent call last):
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 240, in test02_associateAfterDB
self.finish_test(self.secDB)
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 278, in finish_test
assert vals[1] == 99 or vals[1] == '99', vals
AssertionError: ('99', 'unknown artist|Unnamed
song|Unknown')

==
FAIL: test01_associateWithDB
(bsddb.test.test_associate.AssociateBTreeTestCase)
--
Traceback (most recent call last):
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 220, in test01_associateWithDB
self.finish_test(self.secDB)
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 278, in finish_test
assert vals[1] == 99 or vals[1] == '99', vals
AssertionError: ('99', 'unknown artist|Unnamed
song|Unknown')

==
FAIL: test02_associateAfterDB
(bsddb.test.test_associate.AssociateBTreeTestCase)
--
Traceback (most recent call last):
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 240, in test02_associateAfterDB
self.finish_test(self.secDB)
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 278, in finish_test
assert vals[1] == 99 or vals[1] == '99', vals
AssertionError: ('99', 'unknown artist|Unnamed
song|Unknown')

==
FAIL: test01_associateWithDB
(bsddb.test.test_associate.AssociateRecnoTestCase)
--
Traceback (most recent call last):
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 220, in test01_associateWithDB
self.finish_test(self.secDB)
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 278, in finish_test
assert vals[1] == 99 or vals[1] == '99', vals
AssertionError: (99, 'unknown artist|Unnamed song|Unknown')

==
FAIL: test02_associateAfterDB
(bsddb.test.test_associate.AssociateRecnoTestCase)
--
Traceback (most recent call last):
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 240, in test02_associateAfterDB
self.finish_test(self.secDB)
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 278, in finish_test
assert vals[1] == 99 or vals[1] == '99', vals
AssertionError: (99, 'unknown artist|Unnamed song|Unknown')

==
FAIL: test01_associateWithDB
(bsddb.test.test_associate.AssociateBTreeTxnTestCase)
--
Traceback (most recent call last):
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 220, in test01_associateWithDB
self.finish_test(self.secDB)
  File
"/home/neal/build/python/dist/clean/Lib/bsddb/test/test_associate.py",
line 278, in finish_test
assert vals[1] == 99 or vals[1] == '99', vals
AssertionError: ('99', 'unknown artist|Unnamed
song|Unknown')

==

[ python-Bugs-788526 ] Closing dbenv first bsddb doesn't release locks & segfau

2006-01-24 Thread SourceForge.net
Bugs item #788526, was opened at 2003-08-13 22:13
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
>Group: Python 2.4
>Status: Closed
>Resolution: Works For Me
Priority: 5
Submitted By: Jane Austine (janeaustine50)
Assigned to: Gregory P. Smith (greg)
Summary: Closing dbenv first bsddb doesn't release locks & segfau

Initial Comment:
There is a test code named test_env_close in 
bsddb/test, but it
doesn't test the case thoroughly. There seems to be a 
bug in closing
the db environment first -- the lock is not released, and 
sometimes it
seg-faults.

Following is the code that shows this bug.


import os
from bsddb import db

dir, dbname = 'test_dbenv', 'test_db'

def getDbEnv(dir):
    try:
        os.mkdir(dir)
    except:
        pass
    dbenv = db.DBEnv()
    dbenv.open(dir, db.DB_INIT_CDB | db.DB_CREATE | db.DB_INIT_MPOOL)
    return dbenv

def getDbHandler(db_env, db_name):
    d = db.DB(db_env)
    d.open(db_name, db.DB_BTREE, db.DB_CREATE)
    return d

dbenv = getDbEnv(dir)
assert dbenv.lock_stat()['nlocks'] == 0
d = getDbHandler(dbenv, dbname)
assert dbenv.lock_stat()['nlocks'] == 1
try:
    dbenv.close()
except db.DBError:
    pass
else:
    assert 0

del d
import gc
gc.collect()
dbenv = getDbEnv(dir)
assert dbenv.lock_stat()['nlocks'] == 0, 'number of current locks should be 0'  # this fails


If you close dbenv before db handler, the lock is not 
released.
Moreover, try this with dbshelve and it segfaults.


>>> from bsddb import dbshelve
>>> dbenv2 = getDbEnv('test_dbenv2')
>>> d2 = dbshelve.open(dbname, dbenv=dbenv2)
>>> try:
...     dbenv2.close()
... except db.DBError:
...     pass
... else:
...     assert 0
...
>>>
Exception bsddb._db.DBError: (0, 'DBEnv object has been closed') in
Segmentation fault


Tested on:
 1. linux with Python 2.3 final, Berkeley DB 4.1.25
 2. windows xp with Python 2.3 final (with _bsddb that 
comes along)


--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 22:05

Message:
Logged In: YES 
user_id=33168

Assuming this was fixed by the patch.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 00:04

Message:
Logged In: YES 
user_id=33168

Jane could try the patch in bug 1413192 to see if it fixes
your problem?

--

Comment By: Gregory P. Smith (greg)
Date: 2004-06-16 15:18

Message:
Logged In: YES 
user_id=413

Yes this bug is still there.  A "workaround" is just a
"don't do that" when it comes to closing sleepycat DBEnv
objects while there are things using them still open.  I
believe we can prevent this...

One proposal: internally in _bsddb.c DBEnv could be made to
keep a weak reference to all objects created using it (DB
and DBLock objects) and refuse to call the sleepycat close()
method if any still exist (overridable using a force=1 flag).


--
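The weak-reference guard Gregory proposes can be sketched in pure Python. This is only an illustration of the idea (the real change would live in _bsddb.c, and `GuardedEnv`/`Handle` are invented names): the environment keeps weak references to every handle created through it and refuses to close while any are still alive, unless a force flag is given.

```python
# Hedged sketch of the proposal above: weakly track handles opened
# through the environment and refuse close() while any survive, unless
# force=True. Pure-Python illustration; names are invented.
import weakref

class Handle:
    """Stand-in for a DB or DBLock object."""

class GuardedEnv:
    def __init__(self):
        # WeakSet does not keep the handles alive; entries vanish
        # automatically when the handles are garbage-collected.
        self._handles = weakref.WeakSet()
        self.closed = False

    def register(self, handle):
        self._handles.add(handle)
        return handle

    def close(self, force=False):
        if len(self._handles) and not force:
            raise RuntimeError("open handles remain; use force=True")
        self.closed = True
```

Closing with live handles raises instead of leaving stale locks behind; once the handles are collected (or force=True is passed), close() succeeds.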

Comment By: Neal Norwitz (nnorwitz)
Date: 2004-06-15 20:14

Message:
Logged In: YES 
user_id=33168

Greg do you know anything about this?  Is it still a problem?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=788526&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1033390 ] build doesn't pick up bsddb w/Mandrake 9.2

2006-01-24 Thread SourceForge.net
Bugs item #1033390, was opened at 2004-09-23 06:58
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1033390&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Build
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Alex Martelli (aleax)
>Assigned to: Neal Norwitz (nnorwitz)
Summary: build doesn't pick up bsddb w/Mandrake 9.2

Initial Comment:
Mandrake 9.2 installs bsddb 4.1 under /usr/lib, and apparently 
Python 2.4a3's setup.py doesn't pick it up -- adding /usr/lib to the 
list of directories where bsddb 4 is being searched for, and 
rerunning make, seems to fix the problem.  (Problem does not 
appear on Mandrake 9.1, where I had installed sleepycat's stuff 
under /usr/local/BerkeleyDB.4.1 "by hand"; nor on MacOSX, where 
I had a fink-installed one in /sw/lib; no similar problem with any 
other module on any of these platforms, except bsddb).


--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 22:10

Message:
Logged In: YES 
user_id=33168

Alex?  I suspect this isn't a problem any longer.  If we
don't hear back within a month, we'll close this report.

--

Comment By: Gregory P. Smith (greg)
Date: 2004-12-13 04:15

Message:
Logged In: YES 
user_id=413

Could you try this again using python CVS HEAD.  I just committed a rework of 
setup.py's bsddb library+include file finding code that hopefully does the 
right thing for you.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1033390&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1227955 ] shelve/bsddb crash on db close

2006-01-24 Thread SourceForge.net
Bugs item #1227955, was opened at 2005-06-26 16:38
Message generated for change (Comment added) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1227955&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Scott (ses4j)
>Assigned to: Neal Norwitz (nnorwitz)
Summary: shelve/bsddb crash on db close

Initial Comment:
I have a 300 meg bsddb/hash db created and accessed by
shelve.  No problems when running python only.  But I
started accessing the code that opens it via a windows
DLL, opening and closing the DB on PROCESS_ATTACH and
DETACH.  All of a sudden, it would crash in the bsddb
module on closing/del'ing the db.  

Found a workaround by opening the db with
shelve.BsddbShelf(..) instead of shelve.open(..) - then
it closed fine when the DLL unloaded, no crash.

--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 22:12

Message:
Logged In: YES 
user_id=33168

If we don't hear back within a month, we should close this
as probably fixed by the patch that was checked in.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 00:03

Message:
Logged In: YES 
user_id=33168

Perhaps this is related to bug 1413192?  Could you try the
patch there and see if it fixes this problem?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1227955&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-874534 ] 2.[345]: --with-wctype-functions 4 test failures

2006-01-24 Thread SourceForge.net
Bugs item #874534, was opened at 2004-01-10 10:32
Message generated for change (Settings changed) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=874534&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: Python 2.5
>Status: Closed
>Resolution: Wont Fix
Priority: 5
Submitted By: Pierre (pierre42)
Assigned to: Neal Norwitz (nnorwitz)
Summary: 2.[345]: --with-wctype-functions 4 test failures

Initial Comment:
# gmake test
case $MAKEFLAGS in \
*-s*)
LD_LIBRARY_PATH=/tmp/Python-2.3.3:/usr/local/lib:/usr/local/qt/lib:/usr/local/kde/lib:/usr/local/pwlib/lib:/usr/local/openh323/lib
CC='gcc' LDSHARED='gcc -shared' OPT='-DNDEBUG -g -O3
-Wall -Wstrict-prototypes' ./python -E ./setup.py -q
build;; \
*)
LD_LIBRARY_PATH=/tmp/Python-2.3.3:/usr/local/lib:/usr/local/qt/lib:/usr/local/kde/lib:/usr/local/pwlib/lib:/usr/local/openh323/lib
CC='gcc' LDSHARED='gcc -shared' OPT='-DNDEBUG -g -O3
-Wall -Wstrict-prototypes' ./python -E ./setup.py build;; \
esac
running build
running build_ext
building 'dbm' extension
gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC
-fno-strict-aliasing -DHAVE_NDBM_H -I.
-I/tmp/Python-2.3.3/./Include -I/usr/local/include
-I/tmp/Python-2.3.3/Include -I/tmp/Python-2.3.3 -c
/tmp/Python-2.3.3/Modules/dbmmodule.c -o
build/temp.linux-i686-2.3/dbmmodule.o
gcc -shared build/temp.linux-i686-2.3/dbmmodule.o
-L/usr/local/lib -o build/lib.linux-i686-2.3/dbm.so
*** WARNING: renaming "dbm" since importing it failed:
build/lib.linux-i686-2.3/dbm.so: undefined symbol:
dbm_firstkey
running build_scripts
find ./Lib -name '*.py[co]' -print | xargs rm -f
LD_LIBRARY_PATH=/tmp/Python-2.3.3:/usr/local/lib:/usr/local/qt/lib:/usr/local/kde/lib:/usr/local/pwlib/lib:/usr/local/openh323/lib
./python -E -tt ./Lib/test/regrtest.py -l
test_grammar
[...]
test_codecs
test test_codecs failed -- Traceback (most recent call
last):
  File "/tmp/Python-2.3.3/Lib/test/test_codecs.py",
line 333, in test_nameprep
raise test_support.TestFailed("Test 3.%d: %s" %
(pos+1, str(e)))
TestFailed: Test 3.5: u'\u0143 \u03b9' != u'\u0144 \u03b9'
 
test_codeop
[...]
test_format
/tmp/Python-2.3.3/Lib/test/test_format.py:19:
FutureWarning: %u/%o/%x/%X of negative int will return
a signed string in Python 2.4 and up
  result = formatstr % args
test_fpformat
[...]
test_re
test test_re produced unexpected output:
**
*** lines 2-3 of actual output doesn't appear in
expected output after line 1:
+ === Failed incorrectly ('(?u)\\b.\\b', u'\xc4', 0,
'found', u'\xc4')
+ === Failed incorrectly ('(?u)\\w', u'\xc4', 0,
'found', u'\xc4')
**
test_regex
[...]
test_unicode
test test_unicode failed -- errors occurred in
test.test_unicode.UnicodeTest
test_unicode_file
test_unicode_file skipped -- No Unicode filesystem
semantics on this platform.
test_unicodedata
test test_unicodedata failed -- Traceback (most recent
call last):
  File
"/tmp/Python-2.3.3/Lib/test/test_unicodedata.py", line
62, in test_method_checksum
self.assertEqual(result, self.expectedchecksum)
  File "/tmp/Python-2.3.3/Lib/unittest.py", line 302,
in failUnlessEqual
raise self.failureException, \
AssertionError:
'c269de8355871e3210ae8710b45c2ddb0675b9d5' !=
'a37276dc2c158bef6dfd908ad34525c97180fad9'
 
test_univnewlines
[...]
test_zlib
222 tests OK.
4 tests failed:
test_codecs test_re test_unicode test_unicodedata
29 tests skipped:
test_aepack test_al test_bsddb185 test_bsddb3
test_cd test_cl
test_curses test_dbm test_email_codecs test_gl
test_imgfile
test_linuxaudiodev test_locale test_macfs
test_macostools test_nis
test_normalization test_ossaudiodev test_pep277
test_plistlib
test_scriptpackages test_socket_ssl test_socketserver
test_sunaudiodev test_timeout test_unicode_file
test_urllibnet
test_winreg test_winsound
2 skips unexpected on linux2:
test_dbm test_locale
gmake: *** [test] Error 1


--

Comment By: M.-A. Lemburg (lemburg)
Date: 2006-01-09 04:11

Message:
Logged In: YES 
user_id=38388

This option should/will go away in Python 2.5, so I don't
think there's a need to bother with trying to fix problems
related to it.

The reason for the removal is that the option causes
semantical problems and makes Unicode work in non-standard
ways on platforms that use locale-aware extensions to the
wc-type functions.


--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-08 21:47

Message:
Logged In: YES 
user_id=33168

Confirmed these are still a current problem in SVN.  The
problem is: --with-wctype-functions.  With this option all 4
tests fail on amd64 gentoo linux.

-

[ python-Bugs-786194 ] posixmodule uses utimes, which is broken in glibc-2.3.2

2006-01-24 Thread SourceForge.net
Bugs item #786194, was opened at 2003-08-10 02:34
Message generated for change (Settings changed) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=786194&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Extension Modules
Group: Python 2.3
>Status: Closed
Resolution: None
Priority: 7
Submitted By: Matthias Klose (doko)
Assigned to: Neal Norwitz (nnorwitz)
Summary: posixmodule uses utimes, which is broken in glibc-2.3.2

Initial Comment:
Maybe this is category 'Build' ... at least it results
in '1970/01/01-01:00:01' timestamps on all files copied
by distutils on glibc-2.3.2 based systems (2.3.1 seems
to be ok).

Disabling the detection of the utimes function in
configure works around this. As this function is a
wrapper around utime, why not use this one directly? Or
check, if utimes correctly works.


--

>Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-24 22:26

Message:
Logged In: YES 
user_id=33168

Closing due to inactivity.

--

Comment By: Neal Norwitz (nnorwitz)
Date: 2005-10-01 22:54

Message:
Logged In: YES 
user_id=33168

Matthias, can this be closed?  Is this still relevant?

--

Comment By: Matthias Klose (doko)
Date: 2003-08-10 03:54

Message:
Logged In: YES 
user_id=60903

for an utimes check see
http://lists.debian.org/debian-glibc/2003/debian-glibc-200308/msg00115.html

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=786194&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[ python-Bugs-1196154 ] Error: ... ossaudiodev.c, line 48: Missing type specifier

2006-01-24 Thread SourceForge.net
Bugs item #1196154, was opened at 2005-05-05 12:53
Message generated for change (Settings changed) made by nnorwitz
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1196154&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Build
Group: Python 2.4
>Status: Closed
>Resolution: Works For Me
Priority: 5
Submitted By: Will L G (diskman)
Assigned to: Neal Norwitz (nnorwitz)
Summary: Error: ... ossaudiodev.c, line 48: Missing type specifier

Initial Comment:
RedHat Linux 7.2 [alpha]
Kernel 2.6.11.6
Compaq C 6.5.2
Binutils 2.15
Make 2.8

Was receiving the error below and then a few little 
changes to make the configure script think I was using 
gcc while still using ccc...

[EMAIL PROTECTED] Python-2.4.1]# make
case $MAKEFLAGS in \
*-s*) LD_LIBRARY_PATH=/usr2/www/linux-
related/programming/python/Python-
2.4.1:/usr/X11R6/lib:/usr/lib:/usr/local/lib CC='ccache 
ccc -
pthread' LDSHARED='ccache ccc -pthread -shared' 
OPT='-
DNDEBUG -O' ./python -E ./setup.py -q build;; \
*) LD_LIBRARY_PATH=/usr2/www/linux-
related/programming/python/Python-
2.4.1:/usr/X11R6/lib:/usr/lib:/usr/local/lib CC='ccache 
ccc -
pthread' LDSHARED='ccache ccc -pthread -shared' 
OPT='-
DNDEBUG -O' ./python -E ./setup.py build;; \
esac
Could not find platform independent libraries 
Could not find platform dependent libraries 
Consider setting $PYTHONHOME to 
[:]
'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "./setup.py", line 6, in ?
import sys, os, getopt, imp, re
ImportError: No module named os
make: *** [sharedmods] Error 1
[EMAIL PROTECTED] Python-2.4.1]# 


Here is a copy of the little script I used to set the env to 
use ccc while configuring python to compile using gcc 
and thus build the appropriate extensions:

./configure \
  --prefix=/usr \
  --sysconfdir=/etc \
  --build=alphapca56-alpha-linux-gnu \
  --without-gcc \
  --enable-shared \
  --with-dec-threads \
  --with-cxx="ccache cxx" \
  --with-cc="ccache ccc" \
  --without-threads

make CC="ccache ccc" CXX="ccache cxx" \
    CFLAGS="-O5 -fast -mtune=ev56 -w -pipe -lpthread -threads" \
    CXXFLAGS="-O5 -fast -mtune=ev56 -w -pipe -lpthread -threads"

EVERYTHING compiled fine but for one little thing, two 
extensions didn't compile:
building 'ossaudiodev' extension
ccache ccc -DNDEBUG -O -fPIC -OPT:Olimit=0 -I. -I/usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/./Include -I/usr/local/include -I/usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Include -I/usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1 -c /usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Modules/ossaudiodev.c -o build/temp.linux-alpha-2.4/ossaudiodev.o
cc: Error: /usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Modules/ossaudiodev.c, line 48: Missing type specifier or type qualifier. (missingtype)
PyObject_HEAD;
-^
cc: Error: /usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Modules/ossaudiodev.c, line 57: Missing type specifier or type qualifier. (missingtype)
PyObject_HEAD;
-^

ccache ccc -DNDEBUG -O -fPIC -OPT:Olimit=0 -I. -I/usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/./Include -I/usr/local/include -I/usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Include -I/usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1 -c /usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Modules/ossaudiodev.c -o build/temp.linux-alpha-2.4/ossaudiodev.o
cc: Error: /usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Modules/ossaudiodev.c, line 48: Missing type specifier or type qualifier. (missingtype)
PyObject_HEAD;
-^
cc: Error: /usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/Modules/ossaudiodev.c, line 57: Missing type specifier or type qualifier. (missingtype)
PyObject_HEAD;
-^
cc: Info: /usr2/www/pub/alpha-RH7/programming/python/Python-2.4.1/./Include/objimpl.h, line 255: In this declaration, type long double has the same representation as type double on this platform. (longdoublenyi)
  long double dummy;  /* force worst-case alignment */


--

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-01-09 23:20

Message:
Logged In: YES 
user_id=33168

The problem could be an extra ; on the PyObject_HEAD line.
(There shouldn't be any.)  The semi-colon has been removed
in SVN.  Can you verify that is the problem?  The fix wasn't
ported to 2.4, but that's easy enough if removing the
semi-colon fixes the problem.

If you want faster resolution, perhaps you can volunteer to
help out.

--

Comment By: Will L G (diskman)
Date: 2005-08-04 09:13

Message:
Logged In

[ python-Bugs-1199282 ] subprocess _active.remove(self) self not in list _active

2006-01-24 Thread SourceForge.net
Bugs item #1199282, was opened at 2005-05-10 18:24
Message generated for change (Comment added) made by atila-cheops
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1199282&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: cheops (atila-cheops)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess _active.remove(self) self not in list _active

Initial Comment:
I start a subprocess in a separate thread (25 concurrent
threads). In some of the threads the following error occurs:
 
Exception in thread Thread-4:
Traceback (most recent call last):
  File "C:\Python24\lib\threading.py", line 442, in __bootstrap
    self.run()
  File "upgrade.py", line 45, in run
    returncode = p.wait()
  File "C:\Python24\lib\subprocess.py", line 765, in wait
    _active.remove(self)
ValueError: list.remove(x): x not in list
 
This is the code that starts the subprocess and waits for
the result:

p = subprocess.Popen('command',
                     stdin=None,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     universal_newlines=True)
returncode = p.wait()
errormessage = p.stdout.readlines()

--

>Comment By: cheops (atila-cheops)
Date: 2006-01-25 07:08

Message:
Logged In: YES 
user_id=1276121

As suggested by astrand, adding a try ... except clause in
subprocess.py did the job. I had to add it in two places:
if you look in the file, there are two spots where
_active.remove(self) occurs unprotected. Wrap each one as:

try:
    _active.remove(self)
except ValueError:
    pass

I have worked with 2.4.0, 2.4.1 and 2.4.2, and all three
needed the patch.
Hope this helps.

--

Comment By: HVB bei TUP (hvb_tup)
Date: 2006-01-23 16:34

Message:
Logged In: YES 
user_id=1434251

BTW: In my case, I call python.exe from a Windows service.

--

Comment By: HVB bei TUP (hvb_tup)
Date: 2006-01-23 16:30

Message:
Logged In: YES 
user_id=1434251

I have a similar problem:
Python 2.4.1 under MS Windows 2003,
multi-threaded application (about 10 concurrent threads).

In my case the same error occurs during _cleanup called
from __init__:

  File "E:\lisa_ins\ewu\coop\reporting\python\tup_lisa\util\threadutil.py", line 582, in callSubProcess
    creationflags = creationflags
  File "C:\Python24\lib\subprocess.py", line 506, in __init__
    _cleanup()
  File "C:\Python24\lib\subprocess.py", line 414, in _cleanup
    inst.poll()
  File "C:\Python24\lib\subprocess.py", line 755, in poll
    _active.remove(self)
ValueError: list.remove(x): x not in list

Is there a work-around?


--

Comment By: cheops (atila-cheops)
Date: 2005-09-19 09:29

Message:
Logged In: YES 
user_id=1276121

I noticed this bug under Windows.
To reproduce it, I attached the script I use, but it needs
input. I tried a simpler example (pinging a number of hosts
concurrently), but that did not trigger the bug.

--

Comment By: Peter Åstrand (astrand)
Date: 2005-06-23 16:03

Message:
Logged In: YES 
user_id=344921

I believe it should be sufficient to add a try...except
clause around _active.remove(). Can you upload a complete
example that triggers the bug? Have you noticed this bug on
Windows, UNIX or both platforms?


--

Comment By: cheops (atila-cheops)
Date: 2005-05-12 10:17

Message:
Logged In: YES 
user_id=1276121

this might be related to bug 1183780

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1199282&group_id=5470
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com