[issue1098] decode_unicode doesn't nul-terminate
New submission from Adam Olsen:

In the large else branch in decode_unicode (taken when encoding is neither NULL nor "iso-8859-1"), the new string it produces is not NUL-terminated. This then hits PyUnicode_DecodeUnicodeEscape's octal escape case, which reads past the end of the string (but would stop if a NUL were there). I found this via valgrind.

--
messages: 55630
nosy: rhamphoryncus
severity: normal
status: open
title: decode_unicode doesn't nul-terminate

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1098>
__

___
Python-bugs-list mailing list
Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1237] type_new doesn't allocate space for sentinel slot
New submission from Adam Olsen:

type_new() allocates exactly the number of slots it's going to use, but various other functions assume there is one more slot, with a NULL name field, serving as a sentinel. I'm unsure why it doesn't normally crash.

--
components: Interpreter Core
messages: 56231
nosy: rhamphoryncus
severity: normal
status: open
title: type_new doesn't allocate space for sentinel slot

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1237>
__
[issue1237] type_new doesn't allocate space for sentinel slot
Adam Olsen added the comment:

typeobject.c:1842, in type_new:

    type = (PyTypeObject *)metatype->tp_alloc(metatype, nslots);

nslots may be 0. typeobject.c:1966, in type_new, assigns this just-past-the-end address to tp_members:

    type->tp_members = PyHeapType_GET_MEMBERS(et);

type_new later calls PyType_Ready, which calls add_members. typeobject.c:3062, in add_members:

    for (; memb->name != NULL; memb++) {

Interestingly, traverse_slots and clear_slots both use Py_Size rather than name != NULL (so I was wrong about the extent of the problem). Both seem only to be used for heap types. add_members is used by both heap types and static C types, so it needs to handle both behaviours. One possible (if ugly) solution would be to switch iteration methods depending on whether Py_Size() is 0, making sure type_new sets tp_members to NULL if Py_Size() is 0.

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1237>
__
[issue1237] type_new doesn't allocate space for sentinel slot
Adam Olsen added the comment:

Ugh, you're right. I refactored PyType_GenericAlloc out of my fork, which is why I got a crash. Sorry for wasting your time.

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1237>
__
[issue1328] feature request: force BOM option
Adam Olsen added the comment:

The problem with "being tolerant" as you suggest is that you lose the ability to round-trip. Read in a file using the UTF-8 signature, write it back out, and suddenly nothing else can open it.

Conceptually, these signatures shouldn't even be part of the encoding; they're a prefix in the file indicating which encoding to use.

Note that the BOM signature (ZWNBSP) is a valid code point. Although it seems unlikely for a file to start with ZWNBSP, if you were to chop a file up into smaller chunks and decode them individually you'd be more likely to run into it. (However, it seems general use of ZWNBSP is being discouraged precisely due to this potential for confusion [1].)

In summary, guessing the encoding should never be the default. Although it may be appropriate in some contexts, we must ensure we emit the right encoding for those contexts as well. [2]

[1] http://unicode.org/faq/utf_bom.html#38
[2] http://unicode.org/faq/utf_bom.html#28

--
nosy: +rhamphoryncus

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1328>
__
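The round-trip concern is easy to demonstrate with the codecs that already exist; a minimal sketch using only the built-in utf-8 and utf-8-sig codecs:

```python
# Round-tripping works only when reader and writer agree on the signature.
data = "hello"

with_sig = data.encode("utf-8-sig")          # writer prepends the 3-byte signature
assert with_sig.startswith(b"\xef\xbb\xbf")

# Decoding with plain utf-8 keeps the signature as a real ZWNBSP code point...
assert with_sig.decode("utf-8") == "\ufeff" + data

# ...while utf-8-sig strips it, so the pairing must match to round-trip.
assert with_sig.decode("utf-8-sig") == data
```

This is why silently accepting either form on input, while writing a fixed form on output, changes what other consumers of the file see.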
[issue1328] feature request: force BOM option
Adam Olsen added the comment:

On 11/1/07, James G. sack (jim) <[EMAIL PROTECTED]> wrote:
>
> James G. sack (jim) added the comment:
>
> Adam Olsen wrote:
> > Adam Olsen added the comment:
> >
> > The problem with "being tolerate" as you suggest is you lose the ability
> > to round-trip. Read in a file using the UTF-8 signature, write it back
> > out, and suddenly nothing else can open it.
>
> I'm sorry, I don't see the round-trip problem you describe.
>
> If codec utf_8 or utf_8_sig were to accept input with or without the
> 3-byte BOM, and write it as currently specified without/with the BOM
> respectively, then _I_ can reread again with either utf_8 or utf_8_sig.
>
> No round trip problem _for me_.
>
> Now If I need to exchange with some else, that's a different matter. One
> way or another I need to know what format they need and create the
> output they require for their input.
>
> Am I missing something in your statement of a problem?

You don't seem to think it's important to interact with other programs. If you're importing with no intent to write out to a common format, then yes, autodetecting the BOM is just fine. Python needs a more general default though, and not guessing is part of that.

> > Conceptually, these signatures shouldn't even be part of the encoding;
> > they're a prefix in the file indicating which encoding to use.
>
> Yes, I'm aware of that, but you can't predict what you may find in dusty
> archives, or what someone may give to you. IMO, that's the basis of
> being tolerant in what you accept, is it not?

Garbage in, garbage out. There's a lot of protocols with whitespace, capitalization, etc. that you can fudge around while retaining the same contents; character set encodings aren't one of them.

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1328>
__
[issue1441] Cycles through ob_type aren't freed
New submission from Adam Olsen:

If I create a subclass of 'type' that's also an instance of 'type', then I change __class__ to point to itself, at which point it cannot be freed (as the type object is needed to delete the instance).

I believe this can be solved by resetting __class__ to a known-safe value. Presumably this should be a hidden subclass of type, stored in a C global, and used specifically for this purpose. type_clear can do the reset (checking that the passed-in type is a heap type, perhaps with a heap type metaclass); I'm hoping __del__ and weakref callbacks are not an issue at this point, but I'll defer to the experts for verification.

This log using gdb shows that type_dealloc is called for a normal type (BoringType), but not for the self-cyclic one (ImmortalType). ImmortalType shows up in every collection, never actually getting collected. (I'm assuming Python doesn't bother to delete heap types during shutdown, which is why type_dealloc isn't called more.)

**

[EMAIL PROTECTED]:~/src/python-p3yk/build-debug$ gdb ./python
GNU gdb 6.6.90.20070912-debian
Copyright (C) 2007 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "i486-linux-gnu"...
Using host libthread_db library "/lib/libthread_db.so.1".
(gdb) set height 10
(gdb) break type_dealloc
Breakpoint 1 at 0x80af057: file ../Objects/typeobject.c, line 2146.
(gdb) commands
Type commands for when breakpoint 1 is hit, one per line.
End with a line saying just "end".
>silent
>printf "*** type_dealloc %p: %s\n", type, type->tp_name
>cont
>end
(gdb) break typeobject.c:2010
Breakpoint 2 at 0x80aec35: file ../Objects/typeobject.c, line 2010.
(gdb) commands
Type commands for when breakpoint 2 is hit, one per line.
End with a line saying just "end".
>silent
>printf "*** type_new %p: %s\n", type, type->tp_name
>cont
>end
(gdb) run
Starting program: /home/rhamph/src/python-p3yk/build-debug/python
Failed to read a valid object file image from memory.
[Thread debugging using libthread_db enabled]
[New Thread 0xb7e156b0 (LWP 25496)]
[Switching to Thread 0xb7e156b0 (LWP 25496)]
*** type_new 0x81c80ac: ZipImportError
*** type_new 0x81e9934: abstractproperty
*** type_new 0x81ea484: _Abstract
*** type_new 0x81eab04: ABCMeta
*** type_new 0x81eb6b4: Hashable
*** type_new 0x81ecb7c: Iterable
*** type_new 0x81ed9a4: Iterator
*** type_new 0x81ede84: Sized
*** type_new 0x81ee364: Container
*** type_new 0x822f2fc: Callable
*** type_new 0x822f974: Set
*** type_new 0x823094c: MutableSet
*** type_new 0x8230fec: Mapping
*** type_new 0x823135c: MappingView
*** type_new 0x823183c: KeysView
*** type_new 0x8231eb4: ItemsView
*** type_new 0x823252c: ValuesView
*** type_new 0x8232ba4: MutableMapping
*** type_new 0x82330ac: Sequence
*** type_new 0x8233fa4: MutableSequence
*** type_new 0x81e61ac: _Environ
*** type_new 0x823657c: _wrap_close
*** type_new 0x81d41a4: _Printer
*** type_new 0x81dab84: _Helper
*** type_new 0x81d12a4: error
*** type_new 0x82ad5b4: Pattern
*** type_new 0x82adc2c: SubPattern
*** type_new 0x82ae134: Tokenizer
*** type_new 0x82afb04: Scanner
*** type_new 0x8249f34: _multimap
*** type_new 0x824892c: _TemplateMetaclass
*** type_new 0x82b0634: Template
*** type_new 0x82b34ac: Formatter
*** type_new 0x82b000c: DistutilsError
*** type_new 0x82b40c4: DistutilsModuleError
*** type_new 0x82b440c: DistutilsClassError
*** type_new 0x82b4754: DistutilsGetoptError
*** type_new 0x82b4a9c: DistutilsArgError
*** type_new 0x82b4de4: DistutilsFileError
*** type_new 0x82b512c: DistutilsOptionError
*** type_new 0x82b57d4: DistutilsSetupError
*** type_new 0x82b5b1c: DistutilsPlatformError
*** type_new 0x82b5e64: DistutilsExecError
*** type_new 0x82b61ac: DistutilsInternalError
*** type_new 0x82b64f4: DistutilsTemplateError
*** type_new 0x82b683c: CCompilerError
*** type_new 0x82b6b84: PreprocessError
*** type_new 0x82b6ecc: CompileError
*** type_new 0x82b7214: LibError
*** type_new 0x82b755c: LinkError
*** type_new 0x82b7d4c: UnknownFileError
*** type_new 0x82b9b6c: Log
*** type_new 0x82ba994: Quitter
*** type_new 0x82bcdbc: CodecInfo
*** type_new 0x82bd104: Codec
*** type_new 0x82bdd94: IncrementalEncoder
*** type_new 0x82be224: BufferedIncrementalEncoder
*** type_new 0x82be72c: IncrementalDecoder
*** type_new 0x82bebbc: BufferedIncrementalDecoder
*** type_new 0x82bf0c4: StreamWriter
*** type_new 0x82bf5cc: StreamReader
*** type_new 0x82bfad4: StreamReaderWriter
*** type_new 0x82c022c: StreamRecoder
*** type_new 0x82c221c: CodecRegistryError
*** type_new 0x82c5414: _OptionError
*** type_new 0x82c23f4: BlockingIOError
*** type_new
[issue1225584] crash in gcmodule.c on python reinitialization
Changes by Adam Olsen:

--
nosy: +rhamphoryncus

_
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1225584>
_
[issue1517] lookdict should INCREF/DECREF startkey around PyObject_RichCompareBool
New submission from Adam Olsen:

(Thanks go to my partner in crime, jorendorff, for helping flesh this out.)

lookdict calls PyObject_RichCompareBool without using INCREF/DECREF on the key passed. It's possible for the comparison to delete the key from the dict, causing its own argument to be deallocated. This can lead to bogus results or a crash.

A custom type with its own __eq__ method will implicitly INCREF the key when passing it as an argument, which prevents incorrect behaviour from manifesting. There are a couple of ways around this though, as shown in my attachment.

--
components: Interpreter Core
files: dictbug.py
messages: 57925
nosy: rhamphoryncus
severity: normal
status: open
title: lookdict should INCREF/DECREF startkey around PyObject_RichCompareBool
Added file: http://bugs.python.org/file8820/dictbug.py

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1517>
__

#!/usr/bin/env python

# This first approach causes dict to return the wrong value
# for the key.  The dict doesn't create its own reference
# to startkey, meaning that address may get reused.  We
# take advantage of that to swap in a new value without it
# noticing the key has changed.

class Foo(object):
    def __init__(self, value):
        self.value = value

    def __hash__(self):
        return hash(self.value)

    def __eq__(self, other):
        # self gets packed into a tuple to get passed to us,
        # which INCREFs it and prevents us from reusing the
        # address.  To get around that we move the work into
        # our return value's __nonzero__ method instead.
        return BadBool(self.value, other.value)

class BadBool(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __nonzero__(self):
        global counter
        if not counter:
            counter = 1
            # This is the bad part.  Conceivably, another thread
            # might do this without any malicious behaviour
            # involved.
            del d[d.keys()[0]]
            d[Foo(2**32+1)] = 'this is never the right answer'
        return self.a == self.b

print "Test 1, using __eq__(a, b).__nonzero__()"
d = {}
counter = 0
d[Foo(2)] = 'this is an acceptable answer'
print d.get(Foo(2), 'so is this')
print '*'

# This second version uses tuple's tp_richcompare.  tuple
# assumes the caller has a valid reference, but Bar.__eq__
# purges that reference, causing the tuple to be
# deallocated.  Watch only exists to make sure tuple
# continues to use the memory, provoking a crash.
#
# Interestingly, Watch.__eq__ never gets called.

class Bar(object):
    def __hash__(self):
        return 0

    def __eq__(self, other):
        d.clear()
        return True

class Watch(object):
    def __init__(self):
        print "New Watch 0x%x" % id(self)

    def __del__(self):
        print "Deleting Watch 0x%x" % id(self)

    def __hash__(self):
        return 0

    def __eq__(self, other):
        print "eq Watch", self, other
        return True

print "Test 2, using tuple's tp_richcompare"
d = {}
d.clear()
d[(Bar(), Watch())] = 'hello'
print d[(Bar(), Watch())]
[issue1736792] dict reentrant/threading request
Adam Olsen added the comment:

I don't believe there's anything to debate on this, so all it really needs is a patch, followed by getting someone to review and commit it.

--

___
Python tracker <http://bugs.python.org/issue1736792>
___
[issue1441] Cycles through ob_type aren't freed
Adam Olsen added the comment:

As far as I know.

--

___
Python tracker <http://bugs.python.org/issue1441>
___
[issue10046] Correction to atexit documentation
Adam Olsen added the comment:

Signals can directly kill a process. Try SIGTERM to see this. SIGINT is caught and handled by Python, which just happens to default to a graceful exit (unless stuck in a lib that prevents that). Try pasting your script into an interactive interpreter session and you'll see that it doesn't exit at all.

--

___
Python tracker <http://bugs.python.org/issue10046>
___
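The SIGINT half of this can be illustrated without sending a real signal: the handler Python installs at startup is signal.default_int_handler, which raises KeyboardInterrupt rather than letting the signal kill the process outright. A small sketch:

```python
import signal

# Calling the default handler directly shows what delivery of SIGINT
# normally does inside Python: it raises KeyboardInterrupt, which then
# unwinds like any other exception (running atexit handlers on the way
# out of a script, or just printing a traceback in the interactive loop).
caught = False
try:
    signal.default_int_handler(signal.SIGINT, None)
except KeyboardInterrupt:
    caught = True
assert caught
```

SIGTERM has no such Python-level handler by default, which is why it terminates the process directly.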
[issue2778] set_swap_bodies is unsafe
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Here's another approach to avoiding set_swap_bodies. The existing semantics are retained.

Rather than creating a temporary frozenset and swapping the contents, I check for a set and call the internal hash function directly (bypassing PyObject_Hash). I even retain the current semantics of PySet_Discard and PySet_Contains, which do NOT do the implicit conversion (and have unit tests to verify that!)

I do have some concern that calling PySet_Check on every call may be too slow. It may be better to only call it on failure (which is more or less what the old code did).

set_swap_bodies has only one remaining caller, and their use case could probably be significantly simplified.

Added file: http://bugs.python.org/file10315/python-setswap-2.diff

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2778>
__
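For context, this is the Python-level implicit-conversion behaviour the patch has to preserve (a small sketch; nothing here depends on the patch itself):

```python
# A plain set is unhashable, but when used as a *lookup key* against
# another set it is implicitly treated as a frozenset.
outer = {frozenset({1, 2})}

assert {1, 2} in outer      # __contains__ converts the key for the lookup
outer.discard({1, 2})       # discard() performs the same conversion
assert outer == set()
```

The question in this issue is purely how the C implementation achieves that conversion without the unsafe body-swapping trick.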
[issue2778] set_swap_bodies is unsafe
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Revised again. sets are only hashed after PyObject_Hash raises a TypeError. This also fixes a regression in test_subclass_with_custom_hash. Oddly, it doesn't show up in trunk, but does when my previous patch is applied to py3k.

Added file: http://bugs.python.org/file10321/python-setswap-3.diff

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2778>
__
[issue2855] lookkey should INCREF/DECREF startkey around PyObject_RichCompareBool
New submission from Adam Olsen <[EMAIL PROTECTED]>:

sets are based on dicts' code, so they have the same problem as bug 1517. Patch attached.

--
files: python-lookkeycompare.diff
keywords: patch
messages: 66829
nosy: Rhamphoryncus
severity: normal
status: open
title: lookkey should INCREF/DECREF startkey around PyObject_RichCompareBool
Added file: http://bugs.python.org/file10322/python-lookkeycompare.diff

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2855>
__
[issue2778] set_swap_bodies is unsafe
Adam Olsen <[EMAIL PROTECTED]> added the comment:

There is no temporary hashability. The hash value is calculated, but never stored in the set's hash field, so it will never become out of sync. Modification while __hash__ or __eq__ is running is possible, but for __eq__ that applies to any mutable type.

set_contains_key only has two callers, one for each value of the treat_set_key_as_frozen argument, so I could inline it if you'd prefer that?

set_swap_bodies has only one remaining caller, which uses a normal set, not a frozenset. Using set_swap_bodies on a frozenset would be visible except in a few special circumstances (i.e. it only contains builtin types), so a sanity check against that seems appropriate. The old code reset ->hash to -1 in case one of the arguments was a frozenset - impossible now, so I sanity check that it's always -1.

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2778>
__
[issue689895] Imports can deadlock
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue689895>
[issue2928] Allow set/frozenset for __all__
New submission from Adam Olsen <[EMAIL PROTECTED]>:

Patch allows any iterable (such as set and frozenset) to be used for __all__. I also add some blank lines, making it more readable.

--
files: python-importall.diff
keywords: patch
messages: 67104
nosy: Rhamphoryncus
severity: normal
status: open
title: Allow set/frozenset for __all__
Added file: http://bugs.python.org/file10384/python-importall.diff

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2928>
__
[issue2928] Allow set/frozenset for __all__
Adam Olsen <[EMAIL PROTECTED]> added the comment:

tuples are already allowed for __all__, which breaks attempts to monkey-patch it. I did forget to check the return from PyObject_GetIter.

__
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2928>
__
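The intended semantics can be sketched in pure Python. star_import below is a hypothetical helper (my name, not part of the patch) that treats __all__ as an arbitrary iterable, the way the patched import code would:

```python
import types

def star_import(module, namespace):
    # Mirror of what 'from module import *' does once __all__ may be any
    # iterable: iterate it and copy each listed attribute into the caller's
    # namespace.
    for name in iter(module.__all__):
        namespace[name] = getattr(module, name)

mod = types.ModuleType("example")
mod.a, mod.b, mod._hidden = 1, 2, 3
mod.__all__ = frozenset({"a", "b"})   # not a list or tuple

ns = {}
star_import(mod, ns)
assert sorted(ns) == ["a", "b"]
```

The `iter()` call is the one place a bad __all__ would be rejected, which is why the return of PyObject_GetIter needs checking in the C version.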
[issue643841] New class special method lookup change
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Is there any reason not to name it ProxyMixin, ala DictMixin?

--
nosy: +Rhamphoryncus

Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue643841>
[issue1720705] thread + import => crashes?
Adam Olsen <[EMAIL PROTECTED]> added the comment:

The patch for issue 1856 should fix the potential crash, so we could eliminate that scary blurb from the docs.

--
nosy: +Rhamphoryncus

_
Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1720705>
_
[issue643841] New class special method lookup change
Adam Olsen <[EMAIL PROTECTED]> added the comment:

_deref won't work for remote objects, will it? Nor _unwrap, although that starts to get "fun".

Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue643841>
[issue643841] New class special method lookup change
Adam Olsen <[EMAIL PROTECTED]> added the comment:

If it's so specialized then I'm not sure it should be in the stdlib - maybe as a private API, if there was a user. Having a reference implementation is noble, but this isn't the right way to do it. Maybe as an example in Doc or in the cookbook. Better yet, add the unit test and define the ProxyMixin directly in that file.

Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue643841>
[issue643841] New class special method lookup change
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Surely remote proxies fall under what would be expected for a "proxy mixin"? If it's in the stdlib it should be a canonical implementation, NOT a reference implementation.

At the moment I can think up 3 use cases:

* weakref proxies
* lazy load proxy
* distributed object

The first two could be done if _deref were made overridable. The third needs to turn everything into a message, which we could either do directly, or we could do by turning everything into normal method lookups which then get handled through __getattribute__.

Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue643841>
[issue3001] RLock's are SLOW
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3001>
___
[issue2507] Exception state lives too long in 3.0
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2507>
___
[issue2833] __exit__ silences the active exception
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2833>
___
[issue3021] Lexical exception handlers
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3021>
___
[issue1758146] Crash in PyObject_Malloc
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Does the PythonInterpreter option create multiple interpreters within a single process, rather than spawning separate processes?

IMO, that API should be ripped out. They aren't truly isolated interpreters, and nobody I've asked has yet provided a use case for it.

--
nosy: +Rhamphoryncus

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1758146>
___
[issue1758146] Crash in PyObject_Malloc
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Right, so it's only the python modules loaded as part of the app that need to be isolated. You don't need the stdlib or any other part of the interpreter to be isolated.

This could be done either by not using the normal import mechanism (build your own on top of exec()) or by some magic to generate a different root package for each "interpreter" (so you show up in sys.modules as '_mypkg183.somemodule').

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1758146>
___
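A rough sketch of the second idea. load_isolated and the _mypkgN naming scheme are hypothetical, invented here just to show app modules registered under unique names staying independent of each other:

```python
import sys
import types

def load_isolated(name, source, instance_id):
    # Register the module under a per-"interpreter" unique name so two
    # instances never share state through sys.modules.
    unique = "_mypkg%d.%s" % (instance_id, name)
    mod = types.ModuleType(unique)
    sys.modules[unique] = mod
    exec(source, mod.__dict__)
    return mod

a = load_isolated("counter", "n = 0", 1)
b = load_isolated("counter", "n = 0", 2)
a.n = 5
assert b.n == 0   # the two copies are fully independent
```

The stdlib stays shared; only the app's own modules get this treatment.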
[issue643841] New class special method lookup change
Adam Olsen <[EMAIL PROTECTED]> added the comment:

The inplace operators aren't right for weakref proxies. If a new object is returned there likely won't be another reference to it, and the weakref will promptly be cleared.

This could be fixed with another property like _target, which by default returns type(self)(result). Weakref proxies could override it to raise an exception instead.

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue643841>
___
[issue3042] Add PEP 8 compliant aliases to threading module
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3042>
___
[issue3021] Lexical exception handlers
Adam Olsen <[EMAIL PROTECTED]> added the comment:

PEP 3134's implicit exception chaining (if accepted) would require your semantic, and your semantic is simpler anyway (even if the implementation is non-trivial), so consider my objections to be dropped.

PEP 3134 also proposes implicit chaining during a finally block, which raises questions for this case:

try:
    ...
finally:
    print(sys.exc_info())
    raise

If sys.exc_info() were removed (with no direct replacement) we'd have that behaviour answered. raise could be answered by making it a syntax error, but keep in mind this may be nested in another except block:

try:
    ...
except:
    try:
        ...
    finally:
        raise

I'd prefer a syntax error in this case as well, to avoid any ambiguity and to keep the implementation simple.

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3021>
___
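For reference, the implicit chaining PEP 3134 describes works like this as implemented in Python 3: an exception raised while another is in flight (including inside a finally block) records it as __context__:

```python
# Raising inside finally while ValueError is propagating attaches the
# in-flight ValueError to the new KeyError as its __context__.
chained = None
try:
    try:
        raise ValueError("inner")
    finally:
        raise KeyError("outer")
except KeyError as exc:
    chained = exc.__context__
assert isinstance(chained, ValueError)
```

The question in the message above is which exception should be considered "active" for that attachment in the nested cases.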
[issue3021] Lexical exception handlers
Adam Olsen <[EMAIL PROTECTED]> added the comment:

PEP 3134 gives reason to change it. __context__ should be set from whatever exception is "active" from the try/finally, thus it should be the inner block, not the outer except block. This flipping of behaviour, and the general ambiguity, is why I suggest a syntax error. "In the face of ambiguity, refuse the temptation to guess."

PEP 3134 has not been officially accepted, but many parts have been added anyway. Your cleanups pave the way for the last of it. I suggest asking on python-3000 for a pronouncement on the PEP.

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3021>
___
[issue3021] Lexical exception handlers
Adam Olsen <[EMAIL PROTECTED]> added the comment:

I agree, the argument for a syntax error is weak. It's more instinct than anything else. I don't think I'd be able to convince you unless Guido had the same instinct I do. ;)

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3021>
___
[issue3070] Wrong size calculation in posix_execve
New submission from Adam Olsen <[EMAIL PROTECTED]>:

In 2.x, the size of the C string needed for an environment variable used by posix_execve was calculated using PyString_GetSize. In 3.0 this is translated to PyUnicode_GetSize. However, in 3.0 the C string is the UTF-8 encoded version of the unicode object, which doesn't necessarily have the same length as what PyUnicode_GetSize reports. The simplest solution I see is to use strlen() instead.

--
components: Extension Modules
messages: 67880
nosy: Rhamphoryncus
severity: normal
status: open
title: Wrong size calculation in posix_execve
versions: Python 3.0

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3070>
___
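The mismatch is visible from pure Python: PyUnicode_GetSize counts code points, while the C string posix_execve builds is UTF-8 bytes. A one-character accent is enough to show the difference:

```python
# "café": 4 code points, but the é takes two bytes in UTF-8.
s = "caf\u00e9"
assert len(s) == 4                    # what PyUnicode_GetSize reports
assert len(s.encode("utf-8")) == 5    # what the C string actually needs
```

Any non-ASCII environment value therefore gets a buffer one or more bytes too small if the code-point count is used.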
[issue2320] Race condition in subprocess using stdin
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2320>
___
[issue1683] Thread local storage and PyGILState_* mucked up by os.fork()
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Updated version of roudkerk's patch. Adds the new function to pythread.h and is based off of current trunk. Note that Parser/intrcheck.c isn't used on my box, so it's completely untested.

roudkerk's original analysis is correct. The TLS is never informed that the old thread is gone, so when it sees the same id again it assumes it is the old thread, which PyThreadState_Swap doesn't like.

Added file: http://bugs.python.org/file10595/fork-thread-patch-2

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1683>
___
[issue1683] Thread local storage and PyGILState_* mucked up by os.fork()
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Incidentally, it doesn't seem necessary to reinitialize the lock. Posix duplicates the lock, so if you hold it when you fork your child will be able to unlock it and use it as normal. Maybe there's some non-Posix behaviour or something even more obscure when #401226 was done?

(Reinitializing is essentially harmless though, so in no way should this hold up release.)

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1683>
___
[issue3093] Namespace polution from multiprocessing
New submission from Adam Olsen <[EMAIL PROTECTED]>:

All these in multiprocessing.h are lacking suitable py/_py/Py/_Py/PY/_PY prefixes:

PyObject *mp_SetError(PyObject *Type, int num);
extern PyObject *pickle_dumps;
extern PyObject *pickle_loads;
extern PyObject *pickle_protocol;
extern PyObject *BufferTooShort;
extern PyTypeObject SemLockType;
extern PyTypeObject ConnectionType;
extern PyTypeObject PipeConnectionType;
extern HANDLE sigint_event;

Additionally, win32_functions.c exposes Win32Type and create_win32_namespace, semaphore.c has sem_timedwait_save, and multiprocessing.c has ProcessError.

--
messages: 68078
nosy: Rhamphoryncus
severity: normal
status: open
title: Namespace pollution from multiprocessing

___
Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3093>
___
[issue3095] multiprocessing initializes flags dict unsafely
New submission from Adam Olsen <[EMAIL PROTECTED]>: multiprocessing.c currently has code like this:

    temp = PyDict_New();
    if (!temp)
        return;
    if (PyModule_AddObject(module, "flags", temp) < 0)
        return;

PyModule_AddObject consumes the reference to temp, so it could conceivably be deleted before the rest of this function finishes. -- messages: 68081 nosy: Rhamphoryncus severity: normal status: open title: multiprocessing initializes flags dict unsafely versions: Python 2.6, Python 3.0 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3095> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
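A toy model of why the stolen reference is dangerous (plain Python standing in for CPython's C-level refcounting; ToyObject and add_object are illustrative names, not real APIs): once PyModule_AddObject consumes the caller's only reference, any later use of that pointer is a use-after-free if the module's entry is dropped in between.

```python
class ToyObject:
    """Miniature model of a refcounted C object."""
    def __init__(self):
        self.refcnt = 1        # the caller starts with one owned reference
        self.alive = True
    def incref(self):
        assert self.alive, "use after free"
        self.refcnt += 1
    def decref(self):
        assert self.alive, "use after free"
        self.refcnt -= 1
        if self.refcnt == 0:
            self.alive = False  # deallocated

def add_object(module, name, obj):
    # Like PyModule_AddObject: *steals* the caller's reference --
    # ownership moves to the module without an incref.
    module[name] = obj

module = {}
temp = ToyObject()
add_object(module, 'flags', temp)
# If anything clears the module entry now, the object dies...
module.pop('flags').decref()
# ...and the caller's pointer is dangling:
try:
    temp.incref()
    dangling = False
except AssertionError:
    dangling = True
assert dangling
# The safe pattern suggested below: incref before handing the object
# over, and decref it on every exit path once the function is done.
```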
[issue3093] Namespace pollution from multiprocessing
Adam Olsen <[EMAIL PROTECTED]> added the comment: The directory is irrelevant. C typically uses a flat namespace for symbols. If python loads this library it will conflict with any other libraries using the same name. This has happened numerous times in the past, so there's no questioning the correct practices. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3093> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3095] multiprocessing initializes flags dict unsafely
Adam Olsen <[EMAIL PROTECTED]> added the comment: This doesn't look right. PyDict_SetItemString doesn't steal the references passed to it, so your reference to flags will be leaked each time. Besides, I think it's a little cleaner to INCREF it before calling PyModule_AddObject, then DECREF it at any point you return. Additionally, I've just noticed that the result of Py_BuildValue is getting leaked. It should be stored in a temporary, added to flags, then the temporary should be released. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3095> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] segfault after loading multiprocessing.reduction
New submission from Adam Olsen <[EMAIL PROTECTED]>: $ ./python Python 2.6a3+ (unknown, Jun 12 2008, 20:10:55) [GCC 4.2.3 (Debian 4.2.3-1)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import multiprocessing.reduction [55604 refs] >>> [55604 refs] Segmentation fault (core dumped) -- components: Extension Modules messages: 68120 nosy: Rhamphoryncus severity: normal status: open title: segfault after loading multiprocessing.reduction type: crash versions: Python 2.6, Python 3.0 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] segfault after loading multiprocessing.reduction
Adam Olsen <[EMAIL PROTECTED]> added the comment: op is a KeyedRef instance. The instance being cleared from the module is the multiprocessing.util._afterfork_registry.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xb7d626b0 (LWP 2287)]
0x0809a131 in _Py_ForgetReference (op=0xb7a9a814) at Objects/object.c:2022
2022        if (op == &refchain ||
(gdb) bt
#0  0x0809a131 in _Py_ForgetReference (op=0xb7a9a814) at Objects/object.c:2022
#1  0x0809a1a0 in _Py_Dealloc (op=0xb7a9a814) at Objects/object.c:2043
#2  0x080b436e in tupledealloc (op=0xb79ad1f4) at Objects/tupleobject.c:169
#3  0x0809a1ab in _Py_Dealloc (op=0xb79ad1f4) at Objects/object.c:2044
#4  0x08065bdf in PyObject_CallFunctionObjArgs (callable=0xb79baa84) at Objects/abstract.c:2716
#5  0x080cabc4 in handle_callback (ref=0xb7a9a814, callback=0xb79baa84) at Objects/weakrefobject.c:864
#6  0x080cad6e in PyObject_ClearWeakRefs (object=0xb79bd624) at Objects/weakrefobject.c:910
#7  0x08168971 in func_dealloc (op=0xb79bd624) at Objects/funcobject.c:453
#8  0x0809a1ab in _Py_Dealloc (op=0xb79bd624) at Objects/object.c:2044
#9  0x080b436e in tupledealloc (op=0xb79a65f4) at Objects/tupleobject.c:169
#10 0x0809a1ab in _Py_Dealloc (op=0xb79a65f4) at Objects/object.c:2044
#11 0x080b7c26 in clear_slots (type=0x82af4e4, self=0xb7a9a814) at Objects/typeobject.c:821
#12 0x080b806e in subtype_dealloc (self=0xb7a9a814) at Objects/typeobject.c:950
#13 0x0809a1ab in _Py_Dealloc (op=0xb7a9a814) at Objects/object.c:2044
#14 0x080915b2 in dict_dealloc (mp=0xb79b9674) at Objects/dictobject.c:907
#15 0x0809a1ab in _Py_Dealloc (op=0xb79b9674) at Objects/object.c:2044
#16 0x080915b2 in dict_dealloc (mp=0xb79b9494) at Objects/dictobject.c:907
#17 0x0809a1ab in _Py_Dealloc (op=0xb79b9494) at Objects/object.c:2044
#18 0x08068720 in instance_dealloc (inst=0xb79b6edc) at Objects/classobject.c:668
#19 0x0809a1ab in _Py_Dealloc (op=0xb79b6edc) at Objects/object.c:2044
#20 0x08090517 in insertdict (mp=0xb79a5b74, key=0xb7a9ae38, hash=-1896994012, value=0x81bdd6c) at Objects/dictobject.c:455
#21 0x08090da6 in PyDict_SetItem (op=0xb79a5b74, key=0xb7a9ae38, value=0x81bdd6c) at Objects/dictobject.c:697
#22 0x08095ad3 in _PyModule_Clear (m=0xb7a88334) at Objects/moduleobject.c:125
#23 0x08111443 in PyImport_Cleanup () at Python/import.c:479
#24 0x08120cb3 in Py_Finalize () at Python/pythonrun.c:430
#25 0x0805b618 in Py_Main (argc=1, argv=0xbfbaf434) at Modules/main.c:623
#26 0x0805a2e6 in main (argc=0, argv=0x0) at ./Modules/python.c:23
___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] segfault from multiprocessing.util.register_after_fork
Adam Olsen <[EMAIL PROTECTED]> added the comment: More specific test case. -- title: segfault after loading multiprocessing.reduction -> segfault from multiprocessing.util.register_after_fork Added file: http://bugs.python.org/file10610/register_after_fork-crash.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] segfault from multiprocessing.util.register_after_fork
Adam Olsen <[EMAIL PROTECTED]> added the comment: Very specific test case, eliminating multiprocessing entirely. It may be an interaction between having the watched obj as its own key in the WeakValueDictionary and the order in which the two modules are cleared. Added file: http://bugs.python.org/file10611/outer.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] segfault from multiprocessing.util.register_after_fork
Changes by Adam Olsen <[EMAIL PROTECTED]>: Added file: http://bugs.python.org/file10612/inner.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] segfault from multiprocessing.util.register_after_fork
Changes by Adam Olsen <[EMAIL PROTECTED]>: Removed file: http://bugs.python.org/file10610/register_after_fork-crash.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] segfault with WeakValueDictionary and module clearing
Changes by Adam Olsen <[EMAIL PROTECTED]>: -- title: segfault from multiprocessing.util.register_after_fork -> segfault with WeakValueDictionary and module clearing ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment: Specific enough yet? Seems the WeakValueDictionary and the module clearing aren't necessary. A subclass of weakref is created. The target of this weakref is added as an attribute of the weakref. So long as a callback is present there will be a crash on shutdown. However, if the callback prints the attribute, you get a crash right then. The weakref claims to be dead, which shouldn't be possible when the weakref's attributes have a strong reference to the target. -- title: segfault with WeakValueDictionary and module clearing -> weakref subclass segfault Added file: http://bugs.python.org/file10613/weakref_subclass_cycle.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Changes by Adam Olsen <[EMAIL PROTECTED]>: Removed file: http://bugs.python.org/file10612/inner.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Changes by Adam Olsen <[EMAIL PROTECTED]>: Removed file: http://bugs.python.org/file10611/outer.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment:

1. MyRef is released from the module as part of shutdown
2. MyRef's subtype_dealloc DECREFs its dictptr (not clearing it, as MyRef is dead and should be unreachable)
3. the dict DECREFs the Dummy (MyRef's target)
4. Dummy's subtype_dealloc calls PyObject_ClearWeakRefs to notify of its demise
5. the callback is called, with the dead MyRef as an argument
6. If MyRef's dict is accessed a segfault occurs.

Presumably just calling the callback does enough damage to explain the segfault without accessing MyRef's dict. As I understand, a deleted weakref doesn't call its callback. However, subtype_dealloc doesn't call the base type's tp_dealloc until *after* it does everything else. Does it need a special case for weakrefs, or maybe a tp_predealloc slot? ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
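The object graph in the steps above can be approximated in pure Python (safe to run on interpreters where this bug has been fixed; Dummy and MyRef mirror the names in the attached test case): a weakref subclass whose attribute holds a strong reference to its own target keeps that target alive, and the callback only fires once the attribute is dropped.

```python
import gc
import weakref

class Dummy:
    pass

class MyRef(weakref.ref):
    pass  # subclassing weakref.ref allows setting attributes on the ref

called = []
d = Dummy()
r = MyRef(d, lambda ref: called.append(ref))
r.target = d            # strong reference from the weakref to its own target
del d
gc.collect()
assert r() is not None  # the attribute keeps the target alive
assert called == []     # so the callback has not fired yet

del r.target            # drop the last strong reference to the target
assert r() is None      # the target is gone...
assert called == [r]    # ...and the callback ran with the (live) weakref
```

In the crash, the difference is that the weakref itself is already dead when the chain fires, so the callback receives a refcount-zero object.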
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment: Ahh, I missed a detail: when the callback is called the weakref has a refcount of 0, which is INCREFed to 1 when getting put in the args, then drops down to 0 again when the args are DECREFed (causing _Py_ForgetReference to be called a second time, which segfaults.) ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment: Patch to add extra sanity checks to Py_INCREF (only if Py_DEBUG is set). If the refcount is 0 or negative it calls Py_FatalError. This should catch revival bugs such as this one a little more clearly. The patch also adds a little more checking to _Py_ForgetReference, so it's more likely to print an error rather than segfaulting. -- keywords: +patch Added file: http://bugs.python.org/file10614/python-incref-from-zero.diff ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3095] multiprocessing initializes flags dict unsafely
Adam Olsen <[EMAIL PROTECTED]> added the comment: Aww, that's cheating. (Why didn't I think of that?) ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3095> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Changes by Adam Olsen <[EMAIL PROTECTED]>: -- nosy: +jnoller ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment: Well, my attempt at a patch didn't work, and yours does, so I guess I have to support yours. ;) Can you review my python-incref-from-zero patch? It verifies the invariant that you need, that once an object hits a refcount of 0 it won't get revived again. (The possibility of __del__ makes me worry, but it *looks* okay.) gcmodule.c has an inline copy of handle_callbacks. Is it possible a collection could have the same problem we're fixing here? Minor nit: you're asserting cbcalled, but you're not using the generic callback, so it's meaningless. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment: Ahh, it seems gcmodule already considers the weakref to be reachable when it calls the callbacks, so it shouldn't be a problem. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment: Another minor nit: "if(current->ob_refcnt > 0)" should have a space after the "if". Otherwise it's looking good. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2320] Race condition in subprocess using stdin
Adam Olsen <[EMAIL PROTECTED]> added the comment: This is messy. File descriptors from other threads are leaking into child processes, and if the write end of a pipe never gets closed in all of them the read end won't get EOF. I suspect "cat"'s stdin is getting duplicated like that, but I haven't been able to verify - /proc//fd claims fd 0 is /dev/pts/2. Maybe libc does some remapping. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2320> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
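The failure mode described here can be sketched with subprocess (Linux/macOS, assuming cat is on the PATH): cat only sees EOF once every copy of the pipe's write end is closed, so a write end leaked into an unrelated child would hang the read side. close_fds keeps stray descriptors out of children.

```python
import os
import subprocess

r, w = os.pipe()
# close_fds=True (the default in Python 3) keeps the write end 'w' from
# leaking into the child; only 'r' is passed in, as the child's stdin.
proc = subprocess.Popen(['cat'], stdin=r, stdout=subprocess.PIPE,
                        close_fds=True)
os.close(r)           # the parent no longer needs the read end
os.write(w, b'hello')
os.close(w)           # without this close, cat never sees EOF and hangs
out, _ = proc.communicate()
assert out == b'hello'
```

If another thread forked a child between pipe() and Popen() without close-on-exec semantics, that child would inherit w and the close above would no longer be the last copy, which is exactly the race being described.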
[issue3088] test_multiprocessing hangs on OS X 10.5.3
Changes by Adam Olsen <[EMAIL PROTECTED]>: -- nosy: +Rhamphoryncus ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3088> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3114] bus error on lib2to3
Adam Olsen <[EMAIL PROTECTED]> added the comment: I'm not sure that fix is 100% right - it fixes safety, but not correctness. Wouldn't it be more correct to move all 3 into temporaries, assign from tstate, then XDECREF the temporaries? Otherwise you're going to expose just the value or traceback, without a type set. -- nosy: +Rhamphoryncus ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3114> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3114] bus error on lib2to3
Adam Olsen <[EMAIL PROTECTED]> added the comment: Looking good. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3114> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3125] test_multiprocessing causes test_ctypes to fail
Changes by Adam Olsen <[EMAIL PROTECTED]>: -- nosy: +Rhamphoryncus, jnoller ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3125> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3125] test_multiprocessing causes test_ctypes to fail
Adam Olsen <[EMAIL PROTECTED]> added the comment: Jesse, can you be more specific? Thomas, do you have a specific command to reproduce this? It runs fine if I do "./python -m test.regrtest -v test_multiprocessing test_ctypes". That's with amaury's patch from 3100 applied. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3125> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3125] test_multiprocessing causes test_ctypes to fail
Adam Olsen <[EMAIL PROTECTED]> added the comment: I see no common symbols between #3102 and #3092, so unless I missed something, they shouldn't be involved. I second the notion that multiprocessing's use of pickle is the triggering factor. Registering so many types is ugly, and IMO it shouldn't register anything it doesn't control. We should either register them globally or not at all, and *never* as a side-effect of loading a separate module. I do see some win32-specific behaviour, which may be broken. Thomas, wanna try commenting out these two lines in sharedctypes.py:rebuild_ctype?

    if sys.platform == 'win32' and type_ not in copy_reg.dispatch_table:
        copy_reg.pickle(type_, reduce_ctype)

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3125> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
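The registration mechanism in question can be demonstrated directly (a sketch using the Python 3 module name copyreg, spelled copy_reg in the 2.x code quoted above; Point and reduce_point are made-up names): copyreg.pickle mutates the process-wide dispatch table, so every user of pickle sees the registration, which is the side effect being criticized.

```python
import copyreg   # 'copy_reg' in Python 2.x
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # Tell pickle how to rebuild a Point: call Point(x, y) on unpickling.
    return (Point, (p.x, p.y))

# This mutates the *global* dispatch table -- every Pickler in the
# process is affected from now on, not just multiprocessing's.
copyreg.pickle(Point, reduce_point)
assert Point in copyreg.dispatch_table

q = pickle.loads(pickle.dumps(Point(1, 2)))
assert (q.x, q.y) == (1, 2)
```

Since Python 3, a Pickler can instead be given its own per-instance dispatch_table, which avoids this kind of global side effect.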
[issue3100] weakref subclass segfault
Adam Olsen <[EMAIL PROTECTED]> added the comment: Unfortunately, Py_INCREF is sometimes used in an expression (followed by a comma). I wouldn't expect an assert to be valid there (and I'd want to check ISO C to make sure it's portable, not just accepted by GCC). I'd like if Py_INCREF and friends were made into static inline functions, which *should* have identical performance (at least on GCC), but that's a more significant change. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3100> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Changes by Adam Olsen <[EMAIL PROTECTED]>: -- nosy: +Rhamphoryncus ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3111] multiprocessing ppc Debian/ ia64 Ubuntu compilation error
Adam Olsen <[EMAIL PROTECTED]> added the comment: I don't see a problem with skipping it, but if chroot is the problem, maybe the chroot environment should be fixed to include /dev/shm? ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3111> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3111] multiprocessing ppc Debian/ ia64 Ubuntu compilation error
Adam Olsen <[EMAIL PROTECTED]> added the comment: I agree with your agreement. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3111> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3153] sqlite leaks on error
New submission from Adam Olsen <[EMAIL PROTECTED]>: Found in Modules/_sqlite/cursor.c:

    self->statement = PyObject_New(pysqlite_Statement, &pysqlite_StatementType);
    if (!self->statement) {
        goto error;
    }
    rc = pysqlite_statement_create(self->statement, self->connection, operation);
    if (rc != SQLITE_OK) {
        self->statement = 0;
        goto error;
    }

Besides the ugliness of allocating the object before passing it to the "create" function, if pysqlite_statement_create fails, the object is leaked. -- components: Extension Modules messages: 68478 nosy: Rhamphoryncus severity: normal status: open title: sqlite leaks on error type: resource usage ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3153> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3154] "Quick search" box renders too long on FireFox 3
Adam Olsen <[EMAIL PROTECTED]> added the comment: Works for me. -- nosy: +Rhamphoryncus ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3154> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3154] "Quick search" box renders too long on FireFox 3
Adam Olsen <[EMAIL PROTECTED]> added the comment: That's the same version I'm using. Maybe there's some font size differences? I'm also on a 64-bit AMD. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3154> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3155] Python should expose a pthread_cond_timedwait API for threading
Changes by Adam Olsen <[EMAIL PROTECTED]>: -- nosy: +Rhamphoryncus ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3155> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3112] implement PEP 3134 exception reporting
Adam Olsen <[EMAIL PROTECTED]> added the comment:

* cause/context cycles should be avoided. Naive traceback printing could become confused, and I can't think of any accidental way to provoke it (besides the problem mentioned here.)
* I suspect PyErr_Display handled string exceptions in 2.x, and this is an artifact of that
* No opinion on PyErr_DisplaySingle
* PyErr_Display is used by PyErr_Print, and it must end up with no active exception. Additionally, third party code may depend on this semantic. Maybe PyErr_DisplayEx?
* +1 on standardizing tracebacks

-- nosy: +Rhamphoryncus ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3112> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3112] implement PEP 3134 exception reporting
Adam Olsen <[EMAIL PROTECTED]> added the comment: On Sun, Jun 22, 2008 at 8:07 AM, Antoine Pitrou <[EMAIL PROTECTED]> wrote: > You mean they should be detected when the exception is set? I was afraid > that it may make exception raising slower. Reporting is not performance > sensitive in comparison to exception raising. > > (the "problem mentioned here" is already avoided in the patch, but the > detection of other cycles is deferred to exception reporting for the > reason given above) I meant only that trivial cycles should be detected. However, I hadn't read your patch, so I didn't realize you already knew of a way to create a non-trivial cycle. This has placed a niggling doubt in my mind about chaining the exceptions, rather than the tracebacks. Hrm. >> * PyErr_Display is used by PyErr_Print, and it must end up with no >> active exception. Additionally, third party code may depend on this >> semantic. Maybe PyErr_DisplayEx? > > I was not proposing to change the exception swallowing semantics, just > to add a return value indicating if any errors had occurred while > displaying the exception. Ahh, harmless then, but to what benefit? Wouldn't the traceback module be better suited to any possible error reporting? ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3112> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3112] implement PEP 3134 exception reporting
Adam Olsen <[EMAIL PROTECTED]> added the comment: On Sun, Jun 22, 2008 at 1:04 PM, Antoine Pitrou <[EMAIL PROTECTED]> wrote: > > Antoine Pitrou <[EMAIL PROTECTED]> added the comment: > > Le dimanche 22 juin 2008 à 17:17 +0000, Adam Olsen a écrit : >> I meant only that trivial cycles should be detected. However, I >> hadn't read your patch, so I didn't realize you already knew of a way >> to create a non-trivial cycle. >> >> This has placed a niggling doubt in my mind about chaining the >> exceptions, rather than the tracebacks. Hrm. > > Chaining the tracebacks rather than the exceptions loses important > information: what is the nature of the exception which is the cause or > context of the current exception? I assumed each leg of the traceback would reference the relevant exception. Although.. this is effectively the same as creating a new exception instance when reraised, rather than modifying the old one. Reusing the old is done for performance I believe. > It is improbable to create such a cycle involuntarily, it means you > raise an old exception in replacement of a newer one caused by the > older, which I think is quite contorted. It is also quite easy to avoid > creating the cycle, simply by re-raising outside of any except handler. I'm not convinced.

    try:
        ...  # Lookup
    except A as a:
        # Lookup failed
        try:
            ...  # Fallback
        except B as b:
            # Fallback failed
            raise a  # The original exception is of the type we want

For this behaviour, this is the most natural way to write it. Conceptually, there shouldn't be a cycle - the traceback should be the lookup, then the fallback, then whatever code is above this - exactly the order the code executed in. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3112> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
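On an interpreter that implements PEP 3134, the lookup/fallback pattern from this comment can be run directly (a sketch; how the resulting context cycle is handled varies by version): the re-raised original picks up the fallback failure as its implicit __context__.

```python
class A(Exception): pass
class B(Exception): pass

try:
    try:
        raise A('lookup failed')
    except A as a:
        try:
            raise B('fallback failed')
        except B:
            # Re-raising the original inside the fallback handler: PEP 3134
            # implicit chaining attaches the active B as A's __context__,
            # even though B's own __context__ was already A -- the cycle
            # being discussed. (Modern CPython breaks it by dropping B's
            # back-link when the cycle is detected.)
            raise a
except A as exc:
    final = exc

assert isinstance(final, A)
assert isinstance(final.__context__, B)
```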
[issue3112] implement PEP 3134 exception reporting
Adam Olsen <[EMAIL PROTECTED]> added the comment: On Sun, Jun 22, 2008 at 1:48 PM, Antoine Pitrou <[EMAIL PROTECTED]> wrote: > > Antoine Pitrou <[EMAIL PROTECTED]> added the comment: > > Le dimanche 22 juin 2008 à 19:23 +, Adam Olsen a écrit : >> For this behaviour, this is the most natural way to write it. >> Conceptually, there shouldn't be a cycle > > I agree your example is not far-fetched. How about avoiding cycles for > implicit chaining, and letting users shoot themselves in the foot with > explicit recursive chaining if they want? Detection would be cheap > enough, just a simple loop without any memory allocation. That's still O(n). I'm not so easily convinced it's cheap enough. And for that matter, I'm not convinced it's correct. The inner exception's context becomes clobbered when we modify the outer exception's traceback. The inner's context should reference the traceback as it was at that point. This would all be a lot easier if reraising always created a new exception. Can you think of a way to skip that only when we can be sure its safe? Maybe as simple as counting the references to it? ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3112> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3112] implement PEP 3134 exception reporting
Adam Olsen <[EMAIL PROTECTED]> added the comment: On Sun, Jun 22, 2008 at 2:20 PM, Antoine Pitrou <[EMAIL PROTECTED]> wrote: > > Antoine Pitrou <[EMAIL PROTECTED]> added the comment: > > Le dimanche 22 juin 2008 à 19:57 +, Adam Olsen a écrit : >> That's still O(n). I'm not so easily convinced it's cheap enough. > > O(n) when n will almost never be greater than 5 (and very often equal to > 1 or 2), and when the unit is the cost of a pointer dereference plus the > cost of a pointer comparison, still sounds cheap. We could bench it > anyway. Indeed. >> And for that matter, I'm not convinced it's correct. The inner >> exception's context becomes clobbered when we modify the outer >> exception's traceback. The inner's context should reference the >> traceback as it was at that point. > > Yes, I've just thought about that, it's a bit annoying... We have to > decide what is more annoying: that, or a reference cycle that can delay > deallocation of stuff attached to an exception (including local > variables attached to the tracebacks)? The cycle is only created by broken behaviour. The more I think about it, the more I want to fix it (by not reusing the exception). >> This would all be a lot easier if reraising always created a new >> exception. > > How do you duplicate an instance of an user-defined exception? Using an > equivalent of copy.deepcopy()? It will probably end up much more > expensive than the above-mentioned O(n) search. Passing in e.args is probably sufficient. All this would need to be discussed on python-dev (or python-3000?) though. >> Can you think of a way to skip that only when we can be >> sure its safe? Maybe as simple as counting the references to it? > > I don't think so, the exception can be referenced in an unknown number > of local variables (themselves potentially referenced by tracebacks). Can be, or will be? Only the most common behaviour needs to be optimized. 
___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3112> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3112] implement PEP 3134 exception reporting
Adam Olsen <[EMAIL PROTECTED]> added the comment: On Sun, Jun 22, 2008 at 2:56 PM, Antoine Pitrou <[EMAIL PROTECTED]> wrote: > Le dimanche 22 juin 2008 à 20:40 +0000, Adam Olsen a écrit : >> Passing in e.args is probably sufficient. > > I think it's very optimistic :-) Some exception objects can hold dynamic > state which is simply not stored in the "args" tuple. See Twisted's > Failure objects for an extreme example: > http://twistedmatrix.com/trac/browser/trunk/twisted/python/failure.py > > (yes, it is used an an exception: see "raise self" in the trap() method) Failure doesn't have an args tuple and doesn't subclass Exception (or BaseException) - it already needs modification in 3.0. It's heaped full of complexity and implementation details. I wouldn't be surprised if your changes break it in subtle ways too. In short, if forcing Failure to be rewritten is the only consequence of using .args, it's an acceptable tradeoff of not corrupting exception contexts. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3112> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3154] "Quick search" box renders too long on FireFox 3
Adam Olsen <[EMAIL PROTECTED]> added the comment: I've checked it again, using the font preferences rather than the zoom setting, and I can reproduce the problem. Part of the problem stems from using pixels to set the margin, rather than ems (or whatever the text box is based on). However, although the margin (at least visually) scales up evenly, the fonts themselves do not. Arguably this is a defect in Firefox, or maybe even the HTML specs themselves. Additionally, that only seems to control the visual margin. I've yet to figure out what controls the layout (such as wrapping the Go button). ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3154> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3088] test_multiprocessing hangs on OS X 10.5.3
Adam Olsen <[EMAIL PROTECTED]> added the comment: On Wed, Jul 2, 2008 at 3:44 PM, Mark Dickinson <[EMAIL PROTECTED]> wrote: > > Mark Dickinson <[EMAIL PROTECTED]> added the comment: > >> Mark, can you try commenting out _TestCondition and seeing if you can >> still get it to hang?; > > I removed the _TestCondition class entirely from test_multiprocessing, > and did make test again. It didn't hang! :-) It crashed instead. :-( Try running "ulimit -c unlimited" in the shell before running the test (from the same shell). After it aborts it should dump a core file, which you can then inspect using "gdb ./python core", to which "bt" will give you a stack trace ("backtrace"). On a minor note, I'd suggest running "./python -m test.regrtest" explicitly, rather than "make test". The latter runs the test suite twice, deleting all .pyc files before the first run, to detect problems in their creation. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3088> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3088] test_multiprocessing hangs on OS X 10.5.3
Adam Olsen <[EMAIL PROTECTED]> added the comment:

On Wed, Jul 2, 2008 at 5:08 PM, Mark Dickinson <[EMAIL PROTECTED]> wrote:
> Mark Dickinson <[EMAIL PROTECTED]> added the comment:
>
> Okay. I just got about 5 perfect runs of the test suite, followed by:
>
> Macintosh-3:trunk dickinsm$ ./python.exe -m test.regrtest
> [...]
> test_multiprocessing
> Assertion failed: (bp != NULL), function PyObject_Malloc, file
> Objects/obmalloc.c, line 746.
> Abort trap (core dumped)
>
> I then did:
>
> gdb -c /cores/core.16235
>
> I've attached the traceback as traceback.txt

Are you sure that's right? That traceback has no mention of PyObject_Malloc or obmalloc.c. Try checking the date. Also, if you use "gdb ./python.exe " to start gdb, it should print a warning if the program doesn't match the core.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3088> ___
[issue3088] test_multiprocessing hangs on OS X 10.5.3
Adam Olsen <[EMAIL PROTECTED]> added the comment:

That looks better. It crashed while deleting an exception, whose args tuple has a bogus refcount. Could be a refcount issue of the exception or the args, or of something that references them, or a dangling pointer, or a buffer overrun, etc.

Things to try:

1) Run "pystack" in gdb, from Misc/gdbinit

2) Print the exception type. Use "up" until you reach BaseException_clear, then do "print self->ob_type->tp_name". Also do "print *self" and make sure the ob_refcnt is at 0 and the other fields look sane.

3) Compile using --without-pymalloc and throw it at a real memory debugger. I'd suggest starting with your libc's own debugging options, as they tend to be less invasive: http://developer.apple.com/documentation/Performance/Conceptual/ManagingMemory/Articles/MallocDebug.html . If that doesn't work, look at Electric Fence, Valgrind, or your tool of choice.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3088> ___
[issue3088] test_multiprocessing hangs on OS X 10.5.3
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Also, make sure you do a "make clean" since you last updated the tree or touched any file or ran configure. The automatic dependency checking isn't 100% reliable.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3088> ___
[issue3268] Cleanup of tp_basicsize inheritance
New submission from Adam Olsen <[EMAIL PROTECTED]>:

inherit_special contains logic to inherit the base type's tp_basicsize if the new type doesn't have it set. The logic was spread over several lines, but actually does almost nothing (presumably an artifact of previous versions), so here's a patch to clean it up.

There was also an incorrect comment which I've removed. A new one should perhaps be added explaining what the other code there does, but it's not affected by what I'm changing, and I'm not sure why it's doing what it's doing anyway, so I'll leave that to someone else.

--
files: python-inheritsize.diff
keywords: patch
messages: 69169
nosy: Rhamphoryncus, nnorwitz
severity: normal
status: open
title: Cleanup of tp_basicsize inheritance
Added file: http://bugs.python.org/file10798/python-inheritsize.diff

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3268> ___
[issue874900] threading module can deadlock after fork
Changes by Adam Olsen <[EMAIL PROTECTED]>:

--
nosy: +Rhamphoryncus

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue874900> ___
[issue1758146] Crash in PyObject_Malloc
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Apparently modwsgi uses subinterpreters because some third-party packages aren't sufficiently thread-safe - modwsgi can't fix those packages, so subinterpreters are the next best thing. http://groups.google.com/group/modwsgi/browse_frm/thread/988bf560a1ae8147/2f97271930870989

This is a weak argument for language design. Subinterpreters should be deprecated, the problems with third-party packages found and fixed, and ultimately subinterpreters ripped out.

If you wish to improve the situation, I suggest you help fix the problems in the third-party packages. For example, http://code.google.com/p/modwsgi/wiki/IntegrationWithTrac implies trac is configured with environment variables - clearly not thread-safe.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1758146> ___
[issue1758146] Crash in PyObject_Malloc
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Ahh, I did miss that bit, but it doesn't really matter. Tell modwsgi to only use the main interpreter ("PythonInterpreter main_interpreter"), and if you want multiple modules of the same name, put them in different packages. Any other problems (trac using env vars for configuration) should be fixed directly.

(My previous comment about building your own import mechanism was overkill. Writing a package that uses relative imports is enough - in fact, that's what relative imports are for.)

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1758146> ___
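A minimal sketch of the package layout being suggested, using hypothetical names (mypkg, util.py; none of these appear in the thread): a relative import resolves only inside its own package, so two packages can each ship a module called util without colliding.

```python
import os
import sys
import tempfile

# Build a throwaway package on disk (all names here are illustrative).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(pkg, "util.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg, "main.py"), "w") as f:
    # The relative import binds to *this* package's util, so another
    # package's module of the same name cannot shadow it.
    f.write("from .util import VALUE\n")

sys.path.insert(0, root)
import mypkg.main
print(mypkg.main.VALUE)  # -> 42
```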
[issue1758146] Crash in PyObject_Malloc
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Franco, you need to look at the line above that check:

    PyThreadState *check = PyGILState_GetThisThreadState();
    if (check && check->interp == newts->interp && check != newts)
        Py_FatalError("Invalid thread state for this thread");

PyGILState_GetThisThreadState returns the original tstate *for that thread*. What it's asserting is that, if there's a second tstate *in that thread*, it must be in a different subinterpreter. It doesn't prevent your second and third tstate from sharing the same subinterpreter, but it probably should, as this check implies it's an invariant.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1758146> ___
[issue1758146] Crash in PyObject_Malloc
Adam Olsen <[EMAIL PROTECTED]> added the comment:

It's only checking that the original tstate *for the current thread* and the new tstate have a different subinterpreter. A subinterpreter can have multiple tstates, so long as they're all in different threads.

The documentation is referring specifically to the PyGILState_Ensure and PyGILState_Release functions. Calling these says "I want a tstate, and I don't know if I had one already". The problem is that, with subinterpreters, you may not get a tstate with the subinterpreter you want. Subinterpreter references saved in globals may lead to obscure crashes or other errors - some of these have been fixed over the years, but I doubt they all have.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1758146> ___
[issue874900] threading module can deadlock after fork
Adam Olsen <[EMAIL PROTECTED]> added the comment:

In general I suggest replacing the lock with a new lock, rather than trying to release the existing one. Releasing *might* work in this case, only because it's really a semaphore underneath, but it's still easier to think about by just replacing. I also suggest deleting _active and recreating it with only the current thread.

I don't understand how test_join_on_shutdown could succeed. The main thread shouldn't be marked as done.. well, ever. The test should hang.

I suspect test_join_in_forked_process should call os.waitpid(childpid) so it doesn't exit early, which would cause the original Popen.wait() call to exit before the output is produced. The same problem of test_join_on_shutdown also applies. Ditto for test_join_in_forked_from_thread.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue874900> ___
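The replace-instead-of-release idea can be sketched in pure Python. This is illustrative only: the class and attribute names are invented, not the real threading-module internals.

```python
import threading

class ForkSafeResource:
    """Toy holder whose lock is replaced after fork (names hypothetical)."""

    def __init__(self):
        self._lock = threading.Lock()

    def _after_fork(self):
        # Don't try to release the inherited lock: it may have been held at
        # fork time by a thread that no longer exists in the child, and its
        # state is unknowable.  Replace it outright; a freshly created lock
        # is guaranteed to start in the unlocked state.
        self._lock = threading.Lock()

res = ForkSafeResource()
res._lock.acquire()              # simulate a thread holding the lock at fork time
res._after_fork()                # what the child would do after os.fork()
print(res._lock.acquire(False))  # new lock acquires immediately: True
```

The same reasoning is why recreating _active with only the surviving thread is safer than mutating it: the child starts from a known-good state instead of trying to undo the parent's.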
[issue874900] threading module can deadlock after fork
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Looking over some of the other platforms' thread_*.h, I'm sure replacing the lock is the right thing.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue874900> ___
[issue3329] API for setting the memory allocator used by Python
Adam Olsen <[EMAIL PROTECTED]> added the comment:

How would this allow you to free all memory? The interpreter will still reference it, so you'd have to have called Py_Finalize already, and promise not to call Py_Initialize afterwards. This further supposes the process will live a long time after killing off the interpreter, but in that case I recommend putting python in a child process instead.

--
nosy: +Rhamphoryncus

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3329> ___
[issue3329] API for setting the memory allocator used by Python
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Basically you just want to kick the malloc implementation into doing some housekeeping, freeing its caches? I'm kinda surprised you don't add the hook directly to your libc's malloc.

IMO, there's no use-case for this until Py_Finalize can completely tear down the interpreter, which requires a lot of special work (killing(!) daemon threads, unloading C modules, etc), and nobody intends to do that at this point. The practical alternative, as I said, is to run python in a subprocess. Let the OS clean up after us.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3329> ___
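The subprocess alternative is simple from the host side; a minimal sketch using only the stdlib (the embedded workload here is just a stand-in expression):

```python
import subprocess
import sys

# Run the Python workload in a child process.  When the child exits, the
# OS reclaims every byte it allocated - no allocator hooks or interpreter
# teardown needed in the parent.
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # -> 42
```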
[issue3297] Python interpreter uses Unicode surrogate pairs only before the pyc is created
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Simpler way to reproduce this (on linux):

$ rm unicodetest.pyc
$ python -c 'import unicodetest'
Result: False
Len: 2 1
Repr: u'\ud800\udd23' u'\U00010123'
$ python -c 'import unicodetest'
Result: True
Len: 1 1
Repr: u'\U00010123' u'\U00010123'

Storing surrogates in UTF-32 is ill-formed[1], so the first part definitely shouldn't be failing on linux (with a UTF-32 build). The repr could go either way, as unicode doesn't cover escape sequences. We could allow u'\ud800\udd23' literals to magically become u'\U00010123' on UTF-32 builds. We already allow repr(u'\ud800\udd23') to magically become "u'\U00010123'" on UTF-16 builds (which is why the repr test always passes there, rather than always failing).

The bigger problem is how much we prohibit ill-formed character sequences. We already prevent values above U+10FFFF, but not inappropriate surrogates.

[1] Search for D90 in http://www.unicode.org/versions/Unicode5.0.0/ch03.pdf

--
nosy: +Rhamphoryncus
Added file: http://bugs.python.org/file10880/unicodetest.py

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3297> ___
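The ill-formedness in question is easy to check for programmatically. A small sketch, written for modern Python 3, where every build behaves like the UTF-32 case described above:

```python
def has_surrogates(s):
    # Code points U+D800..U+DFFF are reserved for UTF-16 surrogate pairs
    # and are ill-formed as standalone characters in any Unicode encoding form.
    return any(0xD800 <= ord(ch) <= 0xDFFF for ch in s)

print(has_surrogates('\ud800\udd23'))  # two-code-unit spelling: True
print(has_surrogates('\U00010123'))    # single code point: False
```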
[issue3297] Python interpreter uses Unicode surrogate pairs only before the pyc is created
Adam Olsen <[EMAIL PROTECTED]> added the comment:

No, the configure options are wrong - we do use UTF-16 and UTF-32. Although modern UCS-4 has been restricted down to the range of UTF-32 (it used to be larger!), UCS-2 still doesn't support the supplementary planes (ie no surrogates.)

If it really was UCS-2, the repr wouldn't be u'\U00010123' on windows. It'd be a pair of ill-formed code units instead.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3297> ___
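The UTF-16 behaviour being described, one supplementary character stored as two code units, can be made visible by encoding explicitly (Python 3 shown):

```python
c = '\U00010123'
data = c.encode('utf-16-be')
# U+10123 - 0x10000 = 0x0123, giving the surrogate pair
# high = 0xD800 + (0x0123 >> 10) = 0xD800, low = 0xDC00 + (0x0123 & 0x3FF) = 0xDD23
print(len(c), data.hex())  # -> 1 d800dd23
```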
[issue3297] Python interpreter uses Unicode surrogate pairs only before the pyc is created
Adam Olsen <[EMAIL PROTECTED]> added the comment:

Marc, perhaps Unicode has refined their definitions since you last looked? Valid UTF-8 *cannot* contain surrogates[1]. If it does, you have CESU-8[2][3], not UTF-8.

So there are two bugs: first, the UTF-8 codec should refuse to load surrogates. Second, since the original bug showed up before the .pyc is created, something in the parse/compilation/whatever stage is producing CESU-8.

[1] 4th bullet point of D92 in http://www.unicode.org/versions/Unicode5.0.0/ch03.pdf
[2] http://unicode.org/reports/tr26/
[3] http://en.wikipedia.org/wiki/CESU-8

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3297> ___
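For what it's worth, modern CPython's UTF-8 codec does enforce this: a lone surrogate is rejected in both directions unless an error handler such as surrogatepass is requested explicitly. A quick sketch:

```python
# Encoding a lone surrogate with strict UTF-8 is refused...
try:
    '\ud800'.encode('utf-8')
    print('encoded')
except UnicodeEncodeError:
    print('refused')  # -> refused

# ...and so is decoding the CESU-8-style byte sequence for one.
try:
    b'\xed\xa0\x80'.decode('utf-8')
    print('decoded')
except UnicodeDecodeError:
    print('refused')  # -> refused
```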