[ python-Bugs-973507 ] sys.stdout problems with pythonw.exe
Bugs item #973507, was opened at 2004-06-15 22:34 Message generated for change (Comment added) made by pfremy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=973507&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Manlio Perillo (manlioperillo) Assigned to: Nobody/Anonymous (nobody) Summary: sys.stdout problems with pythonw.exe Initial Comment:

    >>> sys.version
    '2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)]'
    >>> sys.platform
    'win32'
    >>> sys.getwindowsversion()
    (5, 1, 2600, 2, '')

Hi. I have written this script to reproduce the bug:

    import sys

    class teeIO:
        def __init__(self, *files):
            self.__files = files

        def write(self, str):
            for i in self.__files:
                print >> trace, 'writing on %s: %s' % (i, str)
                i.write(str)
            print >> trace, '-' * 70

    def tee(*files):
        return teeIO(*files)

    log = file('log.txt', 'w')
    err = file('err.txt', 'w')
    trace = file('trace.txt', 'w')

    sys.stdout = tee(log, sys.__stdout__)
    sys.stderr = tee(err, sys.__stderr__)

    def write(n, width):
        sys.stdout.write('x' * width)
        if n == 1:
            return
        write(n - 1, width)

    try:
        1/0
    except:
        write(1, 4096)

[output from err.log]

    Traceback (most recent call last):
      File "sys.py", line 36, in ?
        write(1, 4096)
      File "sys.py", line 28, in write
        sys.stdout.write('x' * width)
      File "sys.py", line 10, in write
        i.write(str)
    IOError: [Errno 9] Bad file descriptor

teeIO is needed to actually read the program output, but I don't know if the problem is due to teeIO. The same problem is present for stderr, as can be seen by swapping sys.__stdout__ and sys.__stderr__. As far as I can see, 4096 is the buffer size for sys.stdout/err. The problem is the same if the data is written in chunks, for example: write(2, 4096/2). The bug isn't present if I use python.exe or if I write less than 4096 bytes.
Thanks and regards Manlio Perillo -- Comment By: Philippe Fremy (pfremy) Date: 2004-12-23 16:19 Message: Logged In: YES user_id=233844 Manlio, thanks a lot for the tip. I ran into the same problem (a program that can be used both with python.exe and pythonw.exe). I will apply your fix. I think that the fix should be applied somehow to pythonw.exe, so that it does something more understandable to the user. -- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-20 15:39 Message: Logged In: YES user_id=1054957 Thanks for the sys.executable and 'nul' hints! I only want to add two notes:

1) isrealfile(file('nul')) -> True, so 'nul' has a 'real' implementation.

2) sys.executable isn't very useful for me, since I can do:

    pythonw ascript.py > afile

In this case sys.stdout is a 'real' file, so I don't want to redirect it to a null device. In all cases, isrealfile works as I want. -- Comment By: Tim Peters (tim_one) Date: 2004-06-20 05:13 Message: Logged In: YES user_id=31435 Just noting that "the usual" way to determine whether you're running under pythonw is to see whether sys.executable.endswith("pythonw.exe"). The usual way to get a do-nothing file object on Windows is to open the special (to Windows) file named "nul" (that's akin to opening the special file /dev/null on Unixish boxes). Note that file('nul').fileno() does return a handle on Windows, despite that it's not a file in the filesystem. -- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-18 18:42 Message: Logged In: YES user_id=1054957 I have found a very simple patch. First I implemented this function:

    import os

    def isrealfile(file):
        """Test if file is on the os filesystem."""
        if not hasattr(file, 'fileno'):
            return False
        try:
            tmp = os.dup(file.fileno())
        except:
            return False
        else:
            os.close(tmp)
            return True

Microsoft's implementations of stdout/err/in when no console is created (and when no pipes are used) are not 'real' files.
Then I added the following code in sitecustomize.py:

    import sys

    class NullStream:
        """A file-like class that writes nothing."""

        def close(self):
            pass

        def flush(self):
            pass

        def write(self, str):
            pass

        def writelines(self, sequence):
            pass

    if not isrealfile(sys.__stdout__):
        sys.stdout = NullStream()
    if not isrealfile(sys.__stderr__):
        sys.stderr = NullStream()

I have tested the code only on Windows XP Pro. P.S. isrealfile could be added to the os module. Regards Manlio Perillo -- Comment By: Manlio Perillo (manlioperillo) Date: 2004-06-16 19:05 Message: Logged In: YES user_id=1054957
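Manlio's isrealfile() can be sanity-checked on a modern Python as well. The sketch below ports it to Python 3, where file() is gone, StringIO lives in io, and a file object without a real descriptor raises io.UnsupportedOperation from fileno() instead of lacking the method; the broad except of the original is narrowed accordingly.

```python
import io
import os
import tempfile

def isrealfile(f):
    """True if f is backed by an OS-level file descriptor."""
    if not hasattr(f, 'fileno'):
        return False
    try:
        tmp = os.dup(f.fileno())
    except (OSError, io.UnsupportedOperation):
        # io.StringIO *has* a fileno() method, but calling it raises
        # UnsupportedOperation, so it is correctly reported as not real.
        return False
    os.close(tmp)
    return True

real = tempfile.TemporaryFile()      # a genuine file on the OS filesystem
real_result = isrealfile(real)
fake_result = isrealfile(io.StringIO())
real.close()
print(real_result, fake_result)
```

On a normal console this prints `True False`, matching the behaviour the patch relies on: only streams that os.dup() can duplicate are left untouched.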
[ python-Bugs-1090139 ] presentation typo in lib: 6.21.4.2 How callbacks are called
Bugs item #1090139, was opened at 2004-12-22 20:00 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1090139&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jesse Weinstein (weinsteinj) >Assigned to: Raymond Hettinger (rhettinger) Summary: presentation typo in lib: 6.21.4.2 How callbacks are called Initial Comment: On the page: http://docs.python.org/lib/optparse-how-callbacks-called.html the text: args should be changed to: args to match the rest of the items on the page. This may require changing the LaTeX. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1090139&group_id=5470 ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1089974 ] mmap missing offset parameter
Bugs item #1089974, was opened at 2004-12-22 11:22 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089974&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: James Y Knight (foom) Assigned to: Nobody/Anonymous (nobody) Summary: mmap missing offset parameter Initial Comment: For some reason, the author of the MMap module didn't see fit to expose the "offset" parameter of the mmap syscall to python. It would be really nice if it had that. Currently, it's always set to 0. m_obj->data = mmap(NULL, map_size, prot, flags, fd, 0); -- Comment By: Josiah Carlson (josiahcarlson) Date: 2004-12-23 08:57 Message: Logged In: YES user_id=341410 I agree. Having access to the offset parameter would be quite convenient, at least to some who use mmap in a nontrivial fashion. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089974&group_id=5470
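For reference, later Python versions did grow exactly this parameter: mmap.mmap() accepts an offset keyword, which must be a multiple of mmap.ALLOCATIONGRANULARITY. A small sketch of how the requested feature looks once exposed:

```python
import mmap
import os
import tempfile

gran = mmap.ALLOCATIONGRANULARITY   # offset must be a multiple of this

# Build a scratch file two granularities long: first chunk 'A's,
# second chunk 'B's.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * gran + b"B" * gran)

# Map only the second half of the file: offset= skips the first chunk,
# which is what a fixed offset of 0 made impossible.
m = mmap.mmap(fd, gran, offset=gran, access=mmap.ACCESS_READ)
second_half = bytes(m[:4])
m.close()

os.close(fd)
os.remove(path)
print(second_half)
```

This prints `b'BBBB'`: the mapping starts one granularity into the file rather than at byte 0.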
[ python-Bugs-1048495 ] Memory leaks?
Bugs item #1048495, was opened at 2004-10-16 17:49 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1048495&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Roman Mamedov (romanrm) Assigned to: Nobody/Anonymous (nobody) Summary: Memory leaks? Initial Comment: Open the Python command-line interpreter. Enter:

    >>> a = range(10000000)

Observe Python memory usage. 20 Mb real, 159 Mb virtual memory here (I'm on Windows). Enter:

    >>> a = 0

Observe memory usage again. 120 Mb real/120 Mb virtual. OK, this is a garbage-collected language, let's try to garbage-collect:

    >>> import gc
    >>> gc.collect()
    0

That didn't help. The memory usage is still at 120/120. So, the question is: when will that "range" object get deleted, or how can I delete it manually? Why doesn't garbage collection get rid of "orphaned" objects? Any comments? -- >Comment By: Raymond Hettinger (rhettinger) Date: 2004-12-23 14:39 Message: Logged In: YES user_id=80475 Closing because there is no bug here. You're welcome to submit a patch attempting to improve memory utilization while keeping int/float performance constant. -- Comment By: Roman Mamedov (romanrm) Date: 2004-10-17 13:50 Message: Logged In: YES user_id=943452 Thank you very much for a detailed explanation. In my opinion, this issue deserves more attention and consideration. There's a trend to create not just simple fire-off/throw-away scripts, but complex, long-running GUI software in Python (as well as in other scripting/VM languages), and this tradeoff could make memory usage unnecessarily high in not-so-rare usage patterns. That way, a split-second gain caused by having immortal integers could easily be eaten by VM thrashing due to overconsumption of memory. I believe that comparable integer/float performance can be attained even without having these types as infinitely immortal.
-- Comment By: Tim Peters (tim_one) Date: 2004-10-16 19:04 Message: Logged In: YES user_id=31435 range() constructs a list. The list takes 4 bytes/entry, so you get about 40MB reclaimed when the list goes away. The space for integer objects happens to be immortal, though, and the approximately 12 bytes per integer doesn't go away. Space for floats is also immortal, BTW. There aren't easy resolutions. For example, the caching of space for integer objects in a dedicated internal int freelist speeds many programs. And if Python didn't do special memory allocation for ints, malloc overhead would probably boost the memory burden in your example to 16 bytes/int. So there are tradeoffs. Note that xrange() can usually be used instead to create one integer at a time (instead of creating 10 million simultaneously). Then the memory burden is trivial. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1048495&group_id=5470
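Tim's xrange() advice later became the default: in Python 3, range() itself is lazy and stores only start/stop/step. A quick sketch of the difference in footprint (sizes measured with sys.getsizeof, which reports only the container, not the int objects it references):

```python
import sys

# A lazy range is constant-size no matter how long it is.
lazy = range(10 ** 6)

# Materializing it builds a list holding a million references.
eager = list(lazy)

lazy_size = sys.getsizeof(lazy)     # a few dozen bytes
eager_size = sys.getsizeof(eager)   # roughly 8 bytes per entry here
print(lazy_size, eager_size)
```

Iterating over the lazy form creates one integer at a time, which is exactly the "trivial memory burden" behaviour Tim describes for xrange().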
[ python-Bugs-991754 ] _bsddb segfault
Bugs item #991754, was opened at 2004-07-15 17:27 Message generated for change (Comment added) made by dcjim You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=991754&group_id=5470 Category: Extension Modules Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 7 Submitted By: Jim Fulton (dcjim) Assigned to: Gregory P. Smith (greg) Summary: _bsddb segfault Initial Comment: I have to remove the _bsddb extension to run the Python tests. Otherwise I get a segfault when test_anydbm is run. I also get a segfault running test_bsddb.

    uname -r
    2.4.22-1.2188.nptlsmp

    rpm -q db4
    db4-4.1.25-14

    gdb ./python
    GNU gdb Red Hat Linux (5.3.90-0.20030710.41rh)
    Copyright 2003 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are
    welcome to change it and/or distribute copies of it under certain conditions.
    Type "show copying" to see the conditions.
    There is absolutely no warranty for GDB. Type "show warranty" for details.
    This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db library "/lib/tls/libthread_db.so.1".
    (gdb) r -E -tt ./Lib/test/regrtest.py -vv test_bsddb
    Starting program: /home/jim/src/python/cvs2/dist/src/python -E -tt ./Lib/test/regrtest.py -vv test_bsddb
    [Thread debugging using libthread_db enabled]
    [New Thread -1084317568 (LWP 19122)]
    test_bsddb
    test__no_deadlock_first (test.test_bsddb.TestBTree) ... ERROR

    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread -1084317568 (LWP 19122)]
    0x in ?? ()
    (gdb) where
    #0 0x in ?? ()
    (gdb) r -E -tt ./Lib/test/regrtest.py -vv test_anydbm
    The program being debugged has been started already.
    Start it from the beginning? (y or n) y
    Starting program: /home/jim/src/python/cvs2/dist/src/python -E -tt ./Lib/test/regrtest.py -vv test_anydbm
    [Thread debugging using libthread_db enabled]
    [New Thread -1084645248 (LWP 19132)]
    test_anydbm
    test_anydbm_creation (test.test_anydbm.AnyDBMTestCase) ...
    ERROR

    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread -1084645248 (LWP 19132)]
    0x in ?? ()
    (gdb) where
    #0 0x in ?? ()

-- >Comment By: Jim Fulton (dcjim) Date: 2004-12-23 22:08 Message: Logged In: YES user_id=73023 That seems to have fixed it, at least for me. The tests run without seg-faulting and I can even import bsddb. :) Thanks. P.S. I only tested the head. Lemme know if you want me to test the 2.4 branch. -- Comment By: Gregory P. Smith (greg) Date: 2004-12-13 12:07 Message: Logged In: YES user_id=413 I just rewrote the setup.py section that finds the header and library file for use when building the bsddb module. Previously it could pick different versions of the header + lib which would compile and link fine but fail at runtime. It's checked in to HEAD. Could you try that out and let me know if that fixes anything for you? -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=991754&group_id=5470
[ python-Bugs-980352 ] coercion results used dangerously
Bugs item #980352, was opened at 2004-06-26 17:26 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: coercion results used dangerously Initial Comment: The C core uses the result of PyNumber_CoerceEx() dangerously: it gets passed to tp_compare, and most tp_compare slots assume they get two objects of the same type. This assumption is never checked, even when a user-defined __coerce__() is called:

    >>> class X(object):
    ...     def __coerce__(self, other):
    ...         return 4, other
    ...
    >>> slice(1,2,3) == X()
    Segmentation fault

-- >Comment By: Armin Rigo (arigo) Date: 2004-12-23 22:14 Message: Logged In: YES user_id=4771 Patch applied. -- Comment By: Dima Dorfman (ddorfman) Date: 2004-07-22 14:15 Message: Logged In: YES user_id=908995 I just filed patch #995939 which should address this. A review would be appreciated. -- Comment By: Raymond Hettinger (rhettinger) Date: 2004-06-26 22:42 Message: Logged In: YES user_id=80475 I looked back at one of my ASPN recipes, http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/265894 , and saw that the use of __coerce__ dramatically simplified the code. Also, the API for rich comparisons is not only complex, but it is not entirely self-consistent. See Tim's "mini-bug" comment in sets.py for an example. IOW, I think it is premature to pull the plug. -- Comment By: Neil Schemenauer (nascheme) Date: 2004-06-26 19:21 Message: Logged In: YES user_id=35752 This bug should obviously get fixed, but in the long term I think __coerce__ should go away. Do you think deprecating it for 2.4 and then removing support for it in 2.5 or 2.6 is feasible?
-- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=980352&group_id=5470
[ python-Bugs-1089978 ] exec scoping problem
Bugs item #1089978, was opened at 2004-12-22 19:27 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089978&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Kevin Quick (kquick) Assigned to: Nobody/Anonymous (nobody) Summary: exec scoping problem Initial Comment: Python 2.3.3 (#1, Oct 18 2004, 16:10:24) [GCC 3.3.4 20040623 (Gentoo Linux 3.3.4-r1, ssp-3.3.2-2, pie-8.7.6)] on linux2 Using exec on a code object with an "in ..." statement to specify locals and globals does not appear to set the globals for any code objects called by the exec'd code object. As a workaround, I can exec a file object containing the relevant code objects and the scope appears to work, although the following issues are noted (these are possibly separate bugs, but all demonstrated by the attached... let me know if you'd like separate bugreport submissions, but I figured it was easiest to start with one in case I'm way off base in some fundamental way). 1. exec won't process an opened .pyc file, only a .py file. However, the module's __file__ attribute will specify the .pyc or the .py, depending on which one is more recent. This forces me to reset the extension to .py at all times. It also means that if I use this technique I must ensure that the .py is always available relative to the .pyc. 2. The exec'd file needs the addition of a "if __name__ == '__main__'" to invoke the functionality I want. This makes sense for exec'ing a file, but because I'm forced to exec the file to get globals scoped as I wanted, rather than using the code object, I am then limited to that single function invocation for any __name__ == "__main__" invocation of the file. 3. 
Specifying "in locals()" for the code object invocation has no adverse (or positive) effect, but specifying it for the file object seems to cause the interpreter to recurse the *current* file, not the exec'd file (this is Test #5 in the attachment). -- >Comment By: Armin Rigo (arigo) Date: 2004-12-23 22:35 Message: Logged In: YES user_id=4771 This is actually all expected behavior, although test 5 surprised me at first, because there should be no difference at all between test 4 and test 5: the "in locals()" has no effect. In fact, there is no difference. You can add or remove "in locals()" in both tests 4 and 5 and it's always test 5 (i.e. the second time the same test) that fails. The reason is a bit subtle. Specifying a globals in exec is "not recursive", so to say, because every function call executes the callee in the globals where it was originally defined. These globals are attached to the function object (but not to the code object). So tests 2 and 3 (which are exactly equivalent) strip the code of greet naked and run it in a globals where it was not expected to be; it's as if you took the source code of the function and pasted it in place of the exec. It finds globalvar in the current module, and it also finds show_globalvar() because you imported it in the line "from submod import *", but this calls the unmodified show_globalvar() in submod.py, hence the NameError. If you wanted so-called recursive custom globals, all function calls would have to be replaced by exec's. I assume you know that using classes and instances looks like a much cleaner solution... Now test 4 passes because it's as if you had pasted the whole source code of submod.py there. In particular, you are creating a new version of all the functions, which live in the execprob module. Now when test 5 runs, the expression 'greet.__module__' has a new meaning: 'greet' is now the name of the function defined in the current module by test 4...
so now 'greet.__module__' actually names the current module, and you're executing the current module recursively. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089978&group_id=5470
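Armin's central point (a function called from exec'd code resolves its globals in the module where it was defined, not in the dictionary passed to exec) can be reproduced in a few lines. This sketch builds a throwaway module with types.ModuleType instead of using the submitter's attachment; the names submod, globalvar, and show are stand-ins for the ones in the report.

```python
import types

# Build a module the way "submod" would look after import: it defines a
# global and a function that reads that global.
mod = types.ModuleType("submod")
exec(
    "globalvar = 'from submod'\n"
    "def show():\n"
    "    return globalvar\n",
    mod.__dict__,
)

# Now call the function from code evaluated with *different* globals.
# The custom dictionary even shadows globalvar, but show() ignores it:
# its __globals__ attribute still points at mod.__dict__.
custom = {"show": mod.show, "globalvar": "from exec dict"}
result = eval("show()", custom)
print(result)
```

This prints `from submod`, not `from exec dict`: the globals passed to exec/eval are "not recursive", exactly as the comment explains.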
[ python-Feature Requests-985094 ] getattr(object, name) accepts only strings
Feature Requests item #985094, was opened at 2004-07-04 23:21 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=985094&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Viktor Ferenczi (complex) Assigned to: Nobody/Anonymous (nobody) Summary: getattr(object,name) accepts only strings Initial Comment: The getattr(object,name) function accepts only strings. This behavior prohibits some interesting uses of callables as names in database column and field references. For example, someone references a database field:

    value = record.field_name

Programmatically, when field_name is the name of the field:

    value = getattr(record, field_name)

Calculated fields could be implemented by passing a callable as a field name:

    def fn(record):
        return '%s (%s)' % (record.name, record.number)

    value = getattr(record, fn)

The database backend checks if the name is callable and then calls the name with the record. But this cannot be implemented in this simple way if getattr checks whether the name is a string. This is an unnecessary check in getattr, setattr and delattr, since it prevents interesting solutions. Temporary workaround:

    value = record.__getattr__(fn)

There can be many unnecessary type checks and limitations in core and library functions. They should be removed to allow free usage. -- >Comment By: Armin Rigo (arigo) Date: 2004-12-23 22:55 Message: Logged In: YES user_id=4771 This is in part due to historical reasons. I guess you know about "property"? This is exactly what database people usually call calculated fields. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=985094&group_id=5470
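As Armin's comment says, "calculated fields" are what property already provides while keeping getattr's string-only contract. A sketch with a made-up Record class (hypothetical, not from any real database layer; name and number mirror the fields in the request):

```python
class Record(object):
    def __init__(self, name, number):
        self.name = name
        self.number = number

    @property
    def label(self):
        # The submitter's fn(record) computation, expressed as a
        # calculated field on the class itself.
        return '%s (%s)' % (self.name, self.number)

r = Record('widget', 7)

# getattr still takes a plain string; the computation runs on access.
value = getattr(r, 'label')
print(value)
```

This prints `widget (7)`: the callable lives on the class, so callers never need to pass a function where an attribute name is expected.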
[ python-Bugs-1089632 ] _DummyThread() objects not freed from threading._active map
Bugs item #1089632, was opened at 2004-12-22 11:07 Message generated for change (Settings changed) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089632&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 Status: Open Resolution: None >Priority: 5 Submitted By: saravanand (saravanand) Assigned to: Nobody/Anonymous (nobody) Summary: _DummyThread() objects not freed from threading._active map Initial Comment: Problem Background: === I have a Python Server module (long running) which accepts calls from several Python Clients over a socket interface and forwards the calls to a C++ component. This C++ component gives the responses back to the Python Server in a separate thread (created by the C++ module) via callback. In the Python callback implementation, the responses are sent to the client in a synchronised manner using the Python primitive threading.Semaphore. This synchronisation is required as the C++ component can deliver parallel responses in different C++ threads. Here, the Python Server creates the semaphore object per client when the client request arrives (in a Python thread). This same object is acquired & released in the C++ callback thread(s). Here we observed that Windows Events are getting created whenever the acquire method is executed in the Python callback implementation in the context of a C++ thread. But the same event is not freed by the Python interpreter even after the termination of the C++ thread. Because of this, Windows Event handles are getting leaked in the Python Server. Problem Description: == When we checked the Python module threading.py, we found that every time a non-Python thread (in our case a C++-created thread) enters Python and accesses a primitive in the threading module (e.g. Semaphore, RLock), Python looks for an entry for this thread in the _active map using the thread ID as the key.
Since no entry exists for such C++-created threads, a _DummyThread object is created and added to the _active map for this C++ thread. For every _DummyThread object that is created, there is a corresponding Windows Event also getting created. Since this entry is never removed from the _active map even after the termination of the C++ thread (as we could make out from the code in threading.py), for every "unique" C++ thread that enters Python, a Windows Event is allocated, and this manifests as a continuous increase in the handle count in my Python server (as seen in Windows PerfMon/Task Manager). Is there a way to avoid this caching in the Python interpreter? Why can't Python remove this entry from the map when the C++ thread terminates? Or, if Python can't get to know about the thread termination, should it not implement some kind of garbage collection for the entries in this map (especially entries for the _DummyThread objects)? Does this require a correction in the Python module threading.py? Or is this caching behaviour by design? -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089632&group_id=5470
[ python-Bugs-1071597 ] configure problem on HP-UX 11.11
Bugs item #1071597, was opened at 2004-11-23 11:30 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1071597&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Harri Pasanen (harripasanen) Assigned to: Nobody/Anonymous (nobody) Summary: configure problem on HP-UX 11.11 Initial Comment: Python 2.4c1 has this problem (but if I recall, so did 2.3.3). Using gcc 3.3.3 to build on HP-UX 11.11, the configure out of the box is a bit off, resulting in a failed build due to missing thread symbols:

    /usr/ccs/bin/ld: Unsatisfied symbols:
    PyThread_acquire_lock (first referenced in libpython2.4.a(import.o)) (code)
    PyThread_exit_thread (first referenced in libpython2.4.a(threadmodule.o)) (code)
    PyThread_allocate_lock (first referenced in libpython2.4.a(import.o)) (code)
    PyThread_free_lock (first referenced in libpython2.4.a(threadmodule.o)) (code)
    PyThread_start_new_thread (first referenced in libpython2.4.a(threadmodule.o)) (code)
    PyThread_release_lock (first referenced in libpython2.4.a(import.o)) (code)
    PyThread_get_thread_ident (first referenced in libpython2.4.a(import.o)) (code)
    PyThread__init_thread (first referenced in libpython2.4.a(thread.o)) (code)
    collect2: ld returned 1 exit status

A workaround is to manually edit pyconfig.h, adding:

    #define _POSIX_THREADS

(The reason it is not picked up is that unistd.h on HP-UX has this comment:

    /*
     * The following defines are specified in the standard, but are not yet
     * implemented:
     *
     *   _POSIX_THREADS can't be defined until all
     *   features are implemented
     */

) The implementation seems however to be sufficiently complete to permit compiling and running Python with _POSIX_THREADS.
While I'm editing pyconfig.h, I also comment out the _POSIX_C_SOURCE definition, as it will result in lots of compilation warnings of this style:

    gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I../Python-2.4c1/Include -DPy_BUILD_CORE -o Objects/frameobject.o ../Python-2.4c1/Objects/frameobject.c
    In file included from ../Python-2.4c1/Include/Python.h:8,
                     from ../Python-2.4c1/Objects/frameobject.c:4:
    pyconfig.h:835:1: warning: "_POSIX_C_SOURCE" redefined
    :6:1: warning: this is the location of the previous definition

So, to recapitulate: after configure, add

    #define _POSIX_THREADS

and comment out

    #define _POSIX_C_SOURCE 200112L

That will give you a Python working rather well, with "make test" producing:

    251 tests OK.
    1 test failed: test_pty
    38 tests skipped: test_aepack test_al test_applesingle test_bsddb test_bsddb185 test_bsddb3 test_cd test_cl test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses test_dl test_gdbm test_gl test_imgfile test_largefile test_linuxaudiodev test_locale test_macfs test_macostools test_nis test_normalization test_ossaudiodev test_pep277 test_plistlib test_scriptpackages test_socket_ssl test_socketserver test_sunaudiodev test_tcl test_timeout test_urllib2net test_urllibnet test_winreg test_winsound
    1 skip unexpected on hp-ux11: test_tcl

-- >Comment By: Martin v. Löwis (loewis) Date: 2004-12-24 00:50 Message: Logged In: YES user_id=21627 Can you find out why gcc says that "_POSIX_C_SOURCE" is defined on the command line? On the command line you provide, it isn't. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1071597&group_id=5470
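The two manual pyconfig.h edits described above can be scripted. This is only a sketch: it recreates a one-line stand-in for the generated pyconfig.h (the real file comes from configure and the line content is assumed to match Python 2.4's output), and it assumes GNU sed for the in-place edit.

```shell
# Stand-in for the configure-generated pyconfig.h (assumed content).
cat > pyconfig.h <<'EOF'
#define _POSIX_C_SOURCE 200112L
EOF

# Workaround from the report: enable POSIX threads by hand...
printf '#define _POSIX_THREADS\n' >> pyconfig.h

# ...and comment out _POSIX_C_SOURCE to silence the redefinition warnings.
sed -i 's|^#define _POSIX_C_SOURCE 200112L|/* #define _POSIX_C_SOURCE 200112L */|' pyconfig.h

cat pyconfig.h
```

After running this, pyconfig.h defines _POSIX_THREADS and carries the _POSIX_C_SOURCE line only inside a C comment, matching the hand edits the submitter describes.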
[ python-Bugs-1089632 ] _DummyThread() objects not freed from threading._active map
Bugs item #1089632, was opened at 2004-12-22 02:07 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089632&group_id=5470 Category: Python Interpreter Core Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: saravanand (saravanand) Assigned to: Nobody/Anonymous (nobody) Summary: _DummyThread() objects not freed from threading._active map Initial Comment: Problem Background: === I have a Python Server module (long running) which accepts calls from several Python Clients over a socket interface and forwards the calls to a C++ component. This C++ component gives the responses back to the Python Server in a separate thread (created by the C++ module) via callback. In the Python callback implementation, the responses are sent to the client in a synchronised manner using the Python primitive threading.Semaphore. This synchronisation is required as the C++ component can deliver parallel responses in different C++ threads. Here, the Python Server creates the semaphore object per client when the client request arrives (in a Python thread). This same object is acquired & released in the C++ callback thread(s). Here we observed that Windows Events are getting created whenever the acquire method is executed in the Python callback implementation in the context of a C++ thread. But the same event is not freed by the Python interpreter even after the termination of the C++ thread. Because of this, Windows Event handles are getting leaked in the Python Server. Problem Description: == When we checked the Python module threading.py, we found that every time a non-Python thread (in our case a C++-created thread) enters Python and accesses a primitive in the threading module (e.g. Semaphore, RLock), Python looks for an entry for this thread in the _active map using the thread ID as the key.
Since no entry exists for such C++-created threads, a _DummyThread object is created and added to the _active map for this C++ thread. For every _DummyThread object that is created, there is a corresponding Windows Event also getting created. Since this entry is never removed from the _active map even after the termination of the C++ thread (as we could make out from the code in threading.py), for every "unique" C++ thread that enters Python, a Windows Event is allocated, and this manifests as a continuous increase in the handle count in my Python server (as seen in Windows PerfMon/Task Manager). Is there a way to avoid this caching in the Python interpreter? Why can't Python remove this entry from the map when the C++ thread terminates? Or, if Python can't get to know about the thread termination, should it not implement some kind of garbage collection for the entries in this map (especially entries for the _DummyThread objects)? Does this require a correction in the Python module threading.py? Or is this caching behaviour by design? -- >Comment By: Brett Cannon (bcannon) Date: 2004-12-23 18:35 Message: Logged In: YES user_id=357491 Yes, it is by design. If you read the source you will notice that the comment mentions that the _DummyThread object is flagged as a daemon thread and thus should not be expected to be killed. The comment also mentions how they are not garbage collected. As stated in the docs, dummy threads are of limited functionality. You could cheat and remove the entries yourself from threading._active, but that might not be future-safe. I would just make sure that all threads are created through the threading or thread module, even if it means creating a minimal wrapper in Python for your C++ code to call through to execute your C++ threads. If you want the docs to be more specific please feel free to submit a patch for the docs. Or if you can come up with a good way for the dummy threads to clean up after themselves then you can also submit that.
But since the source code specifies that this is expected and the docs say that dummy threads are of limited functionality, I am closing as "won't fix". -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1089632&group_id=5470
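Brett's suggested workaround (route all threading-module interactions through threads that were created by the threading module itself, so no _DummyThread entry is ever made for a foreign thread) can be sketched like this. run_in_python_thread is a hypothetical helper name, and a real server would keep a long-lived worker rather than spawning a thread per call; the point is only that the thread touching the Semaphore is a registered threading.Thread.

```python
import threading

def run_in_python_thread(func, *args):
    """Run func in a thread created via the threading module, so the
    _active map gets a real Thread entry that is cleaned up on exit,
    never a _DummyThread."""
    result = {}

    def runner():
        result['value'] = func(*args)

    t = threading.Thread(target=runner)
    t.start()
    t.join()
    return result['value']

sem = threading.Semaphore(1)

def guarded_work():
    # The primitive the bug report exercises, acquired and released
    # inside a threading-created thread.
    with sem:
        return threading.current_thread().name

worker_name = run_in_python_thread(guarded_work)
print(worker_name)
```

The printed name is a regular Thread name, not a dummy-thread name, because the semaphore was only ever touched from a thread the threading module knows about.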
[ python-Bugs-1085300 ] Mac Library Modules 1.1.1 Bad Info
Bugs item #1085300, was opened at 2004-12-14 10:43 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1085300&group_id=5470 Category: Documentation Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Walrus (unclewalrus) Assigned to: Brett Cannon (bcannon) Summary: Mac Library Modules 1.1.1 Bad Info Initial Comment: Document states that OS X's TextEdit only saves RTF files. This is incorrect; you can make a plaintext file by choosing 'Make Plain Text' from the Format menu. -- >Comment By: Brett Cannon (bcannon) Date: 2004-12-23 18:43 Message: Logged In: YES user_id=357491 Fixed in rev. 1.13 in 2.5, rev. 1.12.2.1 in 2.4, and rev. 1.9.6.2 in 2.3. Thanks, unclewalrus. -- Comment By: Walrus (unclewalrus) Date: 2004-12-22 07:28 Message: Logged In: YES user_id=1178211 Mac Library Modules, section 1.1.1, about halfway down. http://www.python.org/doc/2.4/mac/node5.html -- Comment By: Brett Cannon (bcannon) Date: 2004-12-21 21:12 Message: Logged In: YES user_id=357491 Yes, it's true; RTF and plaintext are the two possible outputs. Walrus, where exactly in the docs does it claim this? -- Comment By: Raymond Hettinger (rhettinger) Date: 2004-12-19 13:35 Message: Logged In: YES user_id=80475 Brett, can you verify this and, if true, add an appropriate note to the docs. -- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1085300&group_id=5470