[issue8426] multiprocessing.Queue fails to get() very large objects
Brian Cain added the comment:

Please don't close the issue.

Joining aside, the basic point ("But when size = 7279, the data submitted reaches 64k, so the writing thread blocks on the write syscall.") is not clear from the docs, right?

IMO, it would be nice if I could ask my queue, "Just what is your capacity (in bytes, not entries) anyways? I want to know how much I can put in here without worrying about whether the remote side is dequeueing." I guess I'd settle for explicit documentation that the bound exists. But how should I expect my code to be portable? Are there platforms which provide less than 64k? Less than 1k? Less than 256 bytes?

--
Added file: http://bugs.python.org/file21709/unnamed

___ Python tracker <http://bugs.python.org/issue8426> ___
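[Editor's note: for readers without the attached reproducers, here is a minimal sketch (mine, not one of the attached files) of the behaviour under discussion: a single put() whose pickled size exceeds the OS pipe buffer (often 64 KiB on Linux, but platform-dependent) leaves the queue's feeder thread blocked in write(), so joining the child before draining the queue hangs.]

import multiprocessing

def child(q):
    # One put() whose pickled size exceeds the OS pipe buffer
    # (often 64 KiB on Linux; the exact bound is platform-dependent).
    q.put(b"x" * (1 << 20))

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=child, args=(q,))
    p.start()
    # Calling p.join() here would hang: the child's feeder thread is
    # still blocked writing the pickled bytes into the pipe, so the
    # child cannot exit until the parent starts reading.
    data = q.get()   # drain the queue first ...
    p.join()         # ... then joining succeeds
    print(len(data))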
[issue8426] multiprocessing.Queue fails to get() very large objects
Brian Cain added the comment:

I don't think the problem is limited to transfers of hundreds of megabytes. I believe I am experiencing a problem with the same root cause, although the symptoms are slightly different: there seems to be a threshold which causes not merely poor performance, but likely an unrecoverable fault.

Here's the output when I run my example on SLES11.1:

$ ./multiproc.py $((8*1024)) 2
on 2.6 (r26:66714, May 5 2010, 14:02:39) [GCC 4.3.4 [gcc-4_3-branch revision 152973]] - Linux linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64
0 entries in flight, join() took 5949.97 usec, get() did 0.00 items/sec
2 entries in flight, join() took 1577.85 usec, get() did 42581.766497 items/sec
4 entries in flight, join() took 1966.00 usec, get() did 65536.00 items/sec
6 entries in flight, join() took 1894.00 usec, get() did 105296.334728 items/sec
8 entries in flight, join() took 1420.02 usec, get() did 199728.761905 items/sec
10 entries in flight, join() took 1950.03 usec, get() did 163840.00 items/sec
12 entries in flight, join() took 1241.92 usec, get() did 324720.309677 items/sec
...
7272 entries in flight, join() took 2516.03 usec, get() did 10983427.687432 items/sec
7274 entries in flight, join() took 1813.17 usec, get() did 10480717.037444 items/sec
7276 entries in flight, join() took 1979.11 usec, get() did 11421315.832335 items/sec
7278 entries in flight, join() took 2043.01 usec, get() did 11549808.744608 items/sec
^C7280 entries: join() ABORTED by user after 83.08 sec
...

I see similar results when I run this test with a larger step; I just wanted to get finer resolution on the failure point.

--
nosy: +Brian.Cain
versions: +Python 2.6
Added file: http://bugs.python.org/file19979/multiproc.py

___ Python tracker <http://bugs.python.org/issue8426> ___
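[Editor's note: the attached multiproc.py is not reproduced here. Purely to illustrate the shape of the measurement (the function names, argument handling and output format below are guesses, not the attached script), a loop like this steps the number of queued entries, times join(), and then drains the queue.]

import sys
import time
import multiprocessing

def producer(q, count):
    for i in range(count):
        q.put(i)

def measure(count):
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=producer, args=(q, count))
    p.start()
    t0 = time.time()
    try:
        # Hangs once the pickled backlog exceeds the pipe buffer,
        # because the child's feeder thread cannot finish writing.
        p.join()
    except KeyboardInterrupt:
        print("%d entries: join() ABORTED by user after %.2f sec"
              % (count, time.time() - t0))
        raise
    join_usec = (time.time() - t0) * 1e6
    t0 = time.time()
    for _ in range(count):
        q.get()
    elapsed = time.time() - t0
    rate = count / elapsed if elapsed else 0.0
    print("%d entries in flight, join() took %.2f usec, get() did %f items/sec"
          % (count, join_usec, rate))

if __name__ == "__main__":
    limit = int(sys.argv[1]) if len(sys.argv) > 1 else 8 * 1024
    step = int(sys.argv[2]) if len(sys.argv) > 2 else 2
    for count in range(0, limit, step):
        measure(count)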
[issue8426] multiprocessing.Queue fails to get() very large objects
Brian Cain added the comment:

I was able to reproduce the problem on a more recent release. With 7279 entries it fails; with 7278 entries it succeeds.

$ ./multiproc3.py
on 3.1.2 (r312:79147, Apr 15 2010, 12:35:07) [GCC 4.4.3] - Linux mini 2.6.32-26-generic #47-Ubuntu SMP Wed Nov 17 15:59:05 UTC 2010 i686
7278 entries in flight, join() took 12517.93 usec, get() did 413756.736588 items/sec
7278 entries in flight, join() took 19458.06 usec, get() did 345568.562217 items/sec
7278 entries in flight, join() took 21326.07 usec, get() did 382006.563784 items/sec
7278 entries in flight, join() took 14937.16 usec, get() did 404244.835554 items/sec
7278 entries in flight, join() took 18877.98 usec, get() did 354435.878968 items/sec
7278 entries in flight, join() took 20811.08 usec, get() did 408343.738456 items/sec
7278 entries in flight, join() took 14394.04 usec, get() did 423727.055218 items/sec
7278 entries in flight, join() took 18940.21 usec, get() did 361012.624762 items/sec
7278 entries in flight, join() took 19073.96 usec, get() did 367559.024118 items/sec
7278 entries in flight, join() took 16229.87 usec, get() did 424764.763755 items/sec
7278 entries in flight, join() took 18527.03 usec, get() did 355546.367937 items/sec
7278 entries in flight, join() took 21500.11 usec, get() did 390429.802164 items/sec
7278 entries in flight, join() took 13646.84 usec, get() did 410468.669903 items/sec
7278 entries in flight, join() took 18921.14 usec, get() did 355873.819767 items/sec
7278 entries in flight, join() took 13582.94 usec, get() did 287553.877353 items/sec
7278 entries in flight, join() took 21958.11 usec, get() did 405549.873285 items/sec
^C7279 entries: join() ABORTED by user after 5.54 sec
^CError in atexit._run_exitfuncs: Segmentation fault

--
Added file: http://bugs.python.org/file19982/multiproc3.py

___ Python tracker <http://bugs.python.org/issue8426> ___
[issue8426] multiprocessing.Queue fails to get() very large objects
Brian Cain added the comment: Detailed stack trace when the failure occurs (gdb_stack_trace.txt) -- Added file: http://bugs.python.org/file19983/gdb_stack_trace.txt ___ Python tracker <http://bugs.python.org/issue8426> ___
[issue10673] multiprocess.Process join method - timeout indistinguishable from success
New submission from Brian Cain:

When calling Process' join([timeout]) method, the timeout expiration case is indistinguishable from the successful join. I suppose the 'exitcode' attribute can deliver the necessary information, but perhaps join could stand on its own. If join() shouldn't be changed, could we make explicit reference to the exitcode attribute in the documentation?

--
components: Library (Lib)
files: Process_join.patch
keywords: patch
messages: 123733
nosy: Brian.Cain
priority: normal
severity: normal
status: open
title: multiprocess.Process join method - timeout indistinguishable from success
type: feature request
versions: Python 2.7, Python 3.1
Added file: http://bugs.python.org/file19998/Process_join.patch

___ Python tracker <http://bugs.python.org/issue10673> ___
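[Editor's note: join() returns None whether or not the timeout expired; a minimal sketch of the existing workaround via the is_alive()/exitcode attributes (ordinary multiprocessing usage, not the attached Process_join.patch).]

import multiprocessing
import time

def worker():
    time.sleep(5)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join(timeout=1)     # returns None whether it timed out or not
    if p.is_alive():      # still running, so the timeout must have expired
        print("join() timed out; exitcode is %r" % p.exitcode)  # None
        p.terminate()
        p.join()
    else:
        print("child finished; exitcode is %r" % p.exitcode)    # 0 on success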
[issue24655] _ssl.c: Missing "do" for do {} while(0) idiom
New submission from Brian Cain:

_ssl.c has a "convert()" macro which misuses the "do { ... } while(0)" pattern by accidentally omitting the "do". This was discovered when building with clang, which reports "while loop has empty body". Effectively, convert() puts the body into gratuitous scope braces and happens to be followed by a "while(0);". If convert() were used in some context where it weren't followed by a semicolon, it might do something terribly interesting. Or, more likely, just fail to build.

--
components: Extension Modules
files: ssl_convert.patch
keywords: patch
messages: 246868
nosy: Brian.Cain
priority: normal
severity: normal
status: open
title: _ssl.c: Missing "do" for do {} while(0) idiom
versions: Python 3.6
Added file: http://bugs.python.org/file39938/ssl_convert.patch

___ Python tracker <http://bugs.python.org/issue24655> ___
[issue24655] _ssl.c: Missing "do" for do {} while(0) idiom
Brian Cain added the comment: New patch. -- Added file: http://bugs.python.org/file39941/ssl_convert_2nd.patch ___ Python tracker <http://bugs.python.org/issue24655> ___
[issue24655] _ssl.c: Missing "do" for do {} while(0) idiom
Brian Cain added the comment: Whoops, that's not right. Corrected. -- Added file: http://bugs.python.org/file39942/ssl_convert_3rd.patch ___ Python tracker <http://bugs.python.org/issue24655> ___
[issue25388] tokenizer crash/misbehavior
New submission from Brian Cain:

This issue is similar to (but I believe distinct from) the one reported earlier as http://bugs.python.org/issue24022. Tokenizer failures strike me as difficult to exploit, but risky nonetheless. Attached is a test case that illustrates the problem, along with the output from ASan when it encounters the failure.

All of the versions below that I tested failed in one way or another (segfault, assertion failure, or printing enormous blank output to the console). Some fail frequently and some exhibit this failure only occasionally.

Python 3.4.3 (default, Mar 26 2015, 22:03:40)
Python 2.7.9 (default, Apr 2 2015, 15:33:21) [GCC 4.9.2] on linux2
Python 3.6.0a0 (default:2a8a39640aa2+, Jul 9 2015, 12:28:50) [GCC 4.9.2] on linux

--
components: Interpreter Core
files: vuln.patch
keywords: patch
messages: 252905
nosy: Brian.Cain
priority: normal
severity: normal
status: open
title: tokenizer crash/misbehavior
versions: Python 2.7, Python 3.4, Python 3.6
Added file: http://bugs.python.org/file40764/vuln.patch

___ Python tracker <http://bugs.python.org/issue25388> ___
[issue25388] tokenizer crash/misbehavior
Brian Cain added the comment: ASan output -- Added file: http://bugs.python.org/file40765/asan.txt ___ Python tracker <http://bugs.python.org/issue25388> ___
[issue25388] tokenizer crash/misbehavior -- heap use-after-free
Changes by Brian Cain: -- title: tokenizer crash/misbehavior -> tokenizer crash/misbehavior -- heap use-after-free ___ Python tracker <http://bugs.python.org/issue25388> ___
[issue25388] tokenizer crash/misbehavior -- heap use-after-free
Changes by Brian Cain: -- type: -> crash ___ Python tracker <http://bugs.python.org/issue25388> ___
[issue25388] tokenizer crash/misbehavior -- heap use-after-free
Brian Cain added the comment:

Sorry, the report would have been clearer if I'd included a build with symbols and a stack trace. The test was inspired by the test from issue24022 (https://hg.python.org/cpython/rev/03b2259c6cd3); it sounds like it should not have been. But indeed it seems like you've reproduced this issue, and you agree it's a bug?

--

___ Python tracker <http://bugs.python.org/issue25388> ___
[issue25388] tokenizer crash/misbehavior -- heap use-after-free
Brian Cain added the comment:

Here is a more useful ASan report:

=
==12168==ERROR: AddressSanitizer: heap-use-after-free on address 0x6251e110 at pc 0x00697238 bp 0x7fff412b9240 sp 0x7fff412b9238
READ of size 1 at 0x6251e110 thread T0
    #0 0x697237 in tok_nextc /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:911:20
    #1 0x68c63b in tok_get /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:1460:13
    #2 0x689d93 in PyTokenizer_Get /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:1809:18
    #3 0x67fec3 in parsetok /home/brian/src/fuzzpy/cpython/Parser/parsetok.c:208:16
    #4 0x6837d4 in PyParser_ParseFileObject /home/brian/src/fuzzpy/cpython/Parser/parsetok.c:134:12
    #5 0x52f50c in PyParser_ASTFromFileObject /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:1150:15
    #6 0x532e16 in PyRun_FileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:916:11
    #7 0x52c3f8 in PyRun_SimpleFileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:396:13
    #8 0x52a460 in PyRun_AnyFileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:80:16
    #9 0x5cb04a in run_file /home/brian/src/fuzzpy/cpython/Modules/main.c:318:11
    #10 0x5c5a42 in Py_Main /home/brian/src/fuzzpy/cpython/Modules/main.c:768:19
    #11 0x4fbace in main /home/brian/src/fuzzpy/cpython/./Programs/python.c:69:11
    #12 0x7fe8a9a4aa3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20a3f)
    #13 0x431548 in _start (/home/brian/src/fuzzpy/cpython/python+0x431548)

0x6251e110 is located 16 bytes inside of 8224-byte region [0x6251e100,0x62520120)
freed by thread T0 here:
    #0 0x4cdef0 in realloc /home/brian/src/fuzzpy/llvm_src/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:61
    #1 0x501280 in _PyMem_RawRealloc /home/brian/src/fuzzpy/cpython/Objects/obmalloc.c:84:12
    #2 0x4fc68d in _PyMem_DebugRealloc /home/brian/src/fuzzpy/cpython/Objects/obmalloc.c:1921:18
    #3 0x4fdf42 in PyMem_Realloc /home/brian/src/fuzzpy/cpython/Objects/obmalloc.c:343:12
    #4 0x69a338 in tok_nextc /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:1050:34
    #5 0x68a2c9 in tok_get /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:1357:17
    #6 0x689d93 in PyTokenizer_Get /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:1809:18
    #7 0x67fec3 in parsetok /home/brian/src/fuzzpy/cpython/Parser/parsetok.c:208:16
    #8 0x6837d4 in PyParser_ParseFileObject /home/brian/src/fuzzpy/cpython/Parser/parsetok.c:134:12
    #9 0x52f50c in PyParser_ASTFromFileObject /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:1150:15
    #10 0x532e16 in PyRun_FileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:916:11
    #11 0x52c3f8 in PyRun_SimpleFileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:396:13
    #12 0x52a460 in PyRun_AnyFileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:80:16
    #13 0x5cb04a in run_file /home/brian/src/fuzzpy/cpython/Modules/main.c:318:11
    #14 0x5c5a42 in Py_Main /home/brian/src/fuzzpy/cpython/Modules/main.c:768:19
    #15 0x4fbace in main /home/brian/src/fuzzpy/cpython/./Programs/python.c:69:11
    #16 0x7fe8a9a4aa3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20a3f)

previously allocated by thread T0 here:
    #0 0x4cdb88 in malloc /home/brian/src/fuzzpy/llvm_src/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:40
    #1 0x501030 in _PyMem_RawMalloc /home/brian/src/fuzzpy/cpython/Objects/obmalloc.c:62:12
    #2 0x5074db in _PyMem_DebugAlloc /home/brian/src/fuzzpy/cpython/Objects/obmalloc.c:1838:22
    #3 0x4fc213 in _PyMem_DebugMalloc /home/brian/src/fuzzpy/cpython/Objects/obmalloc.c:1861:12
    #4 0x4fdbfa in PyMem_Malloc /home/brian/src/fuzzpy/cpython/Objects/obmalloc.c:325:12
    #5 0x68791d in PyTokenizer_FromFile /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:861:29
    #6 0x68359e in PyParser_ParseFileObject /home/brian/src/fuzzpy/cpython/Parser/parsetok.c:126:16
    #7 0x52f50c in PyParser_ASTFromFileObject /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:1150:15
    #8 0x532e16 in PyRun_FileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:916:11
    #9 0x52c3f8 in PyRun_SimpleFileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:396:13
    #10 0x52a460 in PyRun_AnyFileExFlags /home/brian/src/fuzzpy/cpython/Python/pythonrun.c:80:16
    #11 0x5cb04a in run_file /home/brian/src/fuzzpy/cpython/Modules/main.c:318:11
    #12 0x5c5a42 in Py_Main /home/brian/src/fuzzpy/cpython/Modules/main.c:768:19
    #13 0x4fbace in main /home/brian/src/fuzzpy/cpython/./Programs/python.c:69:11
    #14 0x7fe8a9a4aa3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x20a3f)

SUMMARY: AddressSanitizer: heap-use-after-free /home/brian/src/fuzzpy/cpython/Parser/tokenizer.c:911:20 in tok_nextc
Shadow bytes around the buggy address:
  0x0c4a7fffbbd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c4a7fffbbe0: fa