[issue46090] C extensions can't swap out live frames anymore
Change by Jason Madden:

nosy: +jmadden

Python tracker: <https://bugs.python.org/issue46090>
[issue39674] Keep deprecated features in Python 3.9 to ease migration from Python 2.7, but remove in Python 3.10
Change by Jason Madden:

nosy: +jmadden

Python tracker: <https://bugs.python.org/issue39674>
[issue40018] test_ssl fails with OpenSSL 1.1.1e
Change by Jason Madden:

nosy: +jmadden

Python tracker: <https://bugs.python.org/issue40018>
[issue43196] logging.config.dictConfig shuts down socket for existing SysLogHandlers
Change by Jason Madden:

nosy: +jmadden

Python tracker: <https://bugs.python.org/issue43196>
[issue35291] duplicate of memoryview from io.BufferedWriter leaks
New submission from Jason Madden:

Using Python 2.7.15, if a BufferedWriter wraps an IO object that duplicates the memoryview passed to the IO object's `write` method, that memoryview leaks. This script demonstrates the problem by leaking a memoryview for each iteration of the loop (if the `flush` is skipped, the leaks are less frequent but still occur):

```
from __future__ import print_function
import io
import gc

def count_memoryview():
    result = 0
    for x in gc.get_objects():
        if type(x) is memoryview:
            result += 1
    return result

class FileLike(object):
    closed = False

    def writable(self):
        return True

    def write(self, data):
        memoryview(data)  # XXX: This causes the problem
        return len(data)

bf = io.BufferedWriter(FileLike())

i = 0
memoryview_count = 0
while True:
    if i == 0 or i % 100 == 0:
        # This reports 100 new memoryview objects each time
        old = memoryview_count
        new = count_memoryview()
        print(i, "memoryview", new, "+%s" % (new - old))
        memoryview_count = new
    bf.write(b"test")
    bf.flush()
    i += 1
```

The leak can also be observed using the operating system's memory monitoring tools for the process (seen on both Fedora and macOS). Commenting out the line in `FileLike.write` that makes a duplicate memoryview of the given buffer solves the leak. Deleting the BufferedWriter doesn't appear to reclaim the leaked memoryviews.

I can't duplicate this in Python 3.4 or above.

Originally reported to gevent in https://github.com/gevent/gevent/issues/1318

Possibly related to Issue 26720 and Issue 15994

components: IO
messages: 330206
nosy: jmadden
priority: normal
severity: normal
status: open
title: duplicate of memoryview from io.BufferedWriter leaks
versions: Python 2.7

Python tracker: <https://bugs.python.org/issue35291>
[issue36843] AIX build fails with failure to get random numbers
Change by Jason Madden:

nosy: +jmadden

Python tracker: <https://bugs.python.org/issue36843>
[issue33005] 3.7.0b2 Interpreter crash in dev mode (or with PYTHONMALLOC=debug) with 'python -X dev -c 'import os; os.fork()'
New submission from Jason Madden:

At the request of Victor Stinner on twitter, I ran the gevent test suite with Python 3.7.0b2 with the new '-X dev' argument and discovered an interpreter crash. With a bit of work, it boiled down to a very simple command:

```
$ env -i .runtimes/snakepit/python3.7.0b2 -X dev -c 'import os; os.fork()'
*** Error in `.runtimes/snakepit/python3.7.0b2': munmap_chunk(): invalid pointer: 0x01c43a80 ***
=== Backtrace: =
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f5a971607e5]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x1a8)[0x7f5a9716d698]
.runtimes/snakepit/python3.7.0b2(_PyRuntimeState_Fini+0x30)[0x515d90]
.runtimes/snakepit/python3.7.0b2[0x51445f]
.runtimes/snakepit/python3.7.0b2[0x42ce40]
.runtimes/snakepit/python3.7.0b2(_Py_UnixMain+0x7b)[0x42eaab]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f5a97109830]
.runtimes/snakepit/python3.7.0b2(_start+0x29)[0x42a0d9]
=== Memory map: =
[memory map truncated in the archive]
```
[issue33005] 3.7.0b2 Interpreter crash in dev mode (or with PYTHONMALLOC=debug) with 'python -X dev -c 'import os; os.fork()'
Jason Madden added the comment:

I built a local version of master (6821e73) and was able to get some line numbers (they're off by one for some reason, it appears):

```
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   libsystem_kernel.dylib    0x7fff78972e3e __pthread_kill + 10
1   libsystem_pthread.dylib   0x7fff78ab1150 pthread_kill + 333
2   libsystem_c.dylib         0x7fff788cf312 abort + 127
3   libsystem_malloc.dylib    0x7fff789cc866 free + 521
4   python.exe                0x000100fba715 _PyRuntimeState_Fini + 37 (pystate.c:90)
5   python.exe                0x000100fb9d73 Py_FinalizeEx + 547 (pylifecycle.c:1231)
6   python.exe                0x000100fddd80 pymain_main + 5808 (main.c:2664)
7   python.exe                0x000100fdec82 _Py_UnixMain + 178 (main.c:2697)
8   libdyld.dylib             0x7fff78823115 start + 1
```

Python tracker: <https://bugs.python.org/issue33005>
[issue33005] 3.7.0b2 Interpreter crash in dev mode (or with PYTHONMALLOC=debug) with 'python -X dev -c 'import os; os.fork()'
Jason Madden added the comment:

Thank you! I can confirm that git commit 31e2b76f7bbcb8278748565252767a8b7790ff27 on the 3.7 branch fixes the issue for me.

Python tracker: <https://bugs.python.org/issue33005>
[issue28147] Unbounded memory growth resizing split-table dicts
Changes by Jason Madden:

nosy: +jmadden

Python tracker: <http://bugs.python.org/issue28147>
[issue25940] SSL tests failed due to expired svn.python.org SSL certificate
Changes by Jason Madden:

nosy: +jmadden

Python tracker: <http://bugs.python.org/issue25940>
[issue24291] wsgiref.handlers.SimpleHandler truncates large output blobs
Jason Madden added the comment:

gevent has another simple reproducer for this. I do believe it's not gevent's fault; the fault is in the standard library: SimpleHandler._write needs to loop until `sent += self.stdout.write(data)` reaches `len(data)`. I have written up more on this at https://github.com/gevent/gevent/issues/778#issuecomment-205046001

For convenience I'll reproduce the bulk of that comment here:

The user supplied a Django application that produced a very large response that was getting truncated when using gevent under Python 3.4. (I believe gevent's non-blocking sockets are simply running into a different buffering behaviour, making it more likely to be hit under those conditions simply because they are faster.)

This looks like a bug in the standard library's `wsgiref` implementation. I tracked this down to a call to `socket.send(data)`. This method only sends whatever portion of the data it is possible to send at the time, and it returns the count of the data that was sent. The caller of `socket.send()` is responsible for looping to make sure the full `len` of the data is sent. This [is clearly documented](https://docs.python.org/3/library/socket.html#socket.socket.send).

In this case, there is a call to `send` trying to send the full response, but only a portion of it is able to be immediately written. Here's a transcript of the first request (I modified gevent's `socket.send` method to print how much data is actually sent each time):

```
Django version 1.9.5, using settings 'gdc.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
SENDING 17 SENT 17 OF 17
SENDING 37 SENT 37 OF 37
SENDING 38 SENT 38 OF 38
SENDING 71 SENT 71 OF 71
SENDING 1757905 SENT 555444 OF 1757905
[03/Apr/2016 19:48:31] "GET / HTTP/1.1" 200 1757905
```

Note that there's no retry on the short send. Here's the stack trace for that short send; we can clearly see that there is no retry loop in place:

```
//3.4/lib/python3.4/wsgiref/handlers.py(138)run()
    136             self.setup_environ()
    137             self.result = application(self.environ, self.start_response)
--> 138             self.finish_response()
    139         except:
    140             try:

//3.4/lib/python3.4/wsgiref/handlers.py(180)finish_response()
    178             if not self.result_is_file() or not self.sendfile():
    179                 for data in self.result:
--> 180                     self.write(data)
    181             self.finish_content()
    182         finally:

//3.4/lib/python3.4/wsgiref/handlers.py(279)write()
    277
    278         # XXX check Content-Length and truncate if too many bytes written?
--> 279         self._write(data)
    280         self._flush()
    281

//3.4/lib/python3.4/wsgiref/handlers.py(453)_write()
    451
    452     def _write(self,data):
--> 453         self.stdout.write(data)
    454
    455     def _flush(self):

//3.4/lib/python3.4/socket.py(398)write()
    396             self._checkWritable()
    397             try:
--> 398                 return self._sock.send(b)
    399             except error as e:
    400                 # XXX what about EINTR?

> //gevent/_socket3.py(384)send()
    382             from IPython.core.debugger import Tracer; Tracer()()  ## DEBUG ##
    383
--> 384         return count
```

`self.stdout` is an instance of `socket.SocketIO` (which is returned from `socket.makefile`). This is not documented on the web, but the [docstring also clearly documents](https://github.com/python/cpython/blob/3.4/Lib/socket.py#L389) that callers of `write` should loop to make sure all data gets sent.

nosy: +jmadden
versions: +Python 3.4

Python tracker: <http://bugs.python.org/issue24291>
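To make the `send` contract from the comment above concrete, here is a minimal sketch of the loop a caller of `socket.send` must run; the helper name `send_all` is invented here and this is not the wsgiref or gevent code, it simply spells out what `socket.sendall` already does internally:

```
import socket

def send_all(sock, data):
    # socket.send() may accept only part of the buffer; it returns the
    # number of bytes actually queued, so the caller must keep calling
    # it with the remainder until everything has been handed off.
    sent = 0
    while sent < len(data):
        sent += sock.send(data[sent:])
```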
[issue24291] wsgiref.handlers.SimpleHandler truncates large output blobs
Jason Madden added the comment:

Django uses a `wsgiref.simple_server` to serve requests, which in turn uses `socketserver.StreamRequestHandler` to implement its `WSGIRequestHandler`. That base class explicitly turns off buffering for writes (`wbufsize = 0` is the class default which gets passed to `socket.makefile`). So that explains how there's no `BufferedWriter` wrapped around the `SocketIO` instance.

Python tracker: <http://bugs.python.org/issue24291>
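As a standalone illustration of that point (not the socketserver code itself; the `socketpair` used as a stand-in for an accepted connection is invented here), passing a buffer size of 0 to `socket.makefile` in binary mode yields a raw `socket.SocketIO` whose `write` maps onto a single `send`:

```
import socket

# Stand-in for an accepted connection; StreamRequestHandler.setup() does
# roughly self.connection.makefile('wb', self.wbufsize) with wbufsize = 0.
a, b = socket.socketpair()

wfile = a.makefile('wb', buffering=0)
print(type(wfile))  # <class 'socket.SocketIO'> -- no BufferedWriter in sight

# A single write() is a single send(); it may report a short count,
# which is why SimpleHandler._write needs to loop over the return value.
print(wfile.write(b"hello"))

wfile.close()
a.close()
b.close()
```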
[issue24291] wsgiref.handlers.SimpleHandler truncates large output blobs
Jason Madden added the comment:

Is there an expected `self.stdout` implementation that doesn't return the number of bytes it writes? `sys.stdout` does, as does `io.RawIOBase`. It doesn't seem clear to me that practically speaking there's a compatibility problem with requiring that for `self.stdout`.

Python tracker: <http://bugs.python.org/issue24291>
[issue24291] wsgiref.handlers.SimpleHandler truncates large output blobs
Jason Madden added the comment:

`self.stdin` and `self.stderr` are documented to be `wsgi.input` and `wsgi.errors`, which are both described as "file-like" objects, meaning that the `write` method should return bytes written. It seems like the same could reasonably be said to be true for `self.stdout`, though it isn't strictly documented as such.

The WSGI spec says that each chunk the application yields should be written immediately, with no buffering (https://www.python.org/dev/peps/pep-/#buffering-and-streaming), so I don't think having the default output stream be buffered would be in compliance.

If there is a compatibility problem, writing the loop this way could bypass it (untested):

```
def _write(self, data):
    written = self.stdout.write(data)
    while written is not None and written < len(data):
        written += self.stdout.write(data[written:])
```

Python tracker: <http://bugs.python.org/issue24291>
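For a quick sanity check of that loop outside wsgiref, it can be exercised against a writer that only accepts a few bytes per call. Everything below (`ShortWriter`, the standalone `_write` that takes the output stream as a parameter) is a throwaway sketch invented here, not part of any proposed patch:

```
import io

class ShortWriter(io.RawIOBase):
    # Test double: accepts at most 4 bytes per write(), like a busy socket.
    def __init__(self):
        self.received = bytearray()

    def writable(self):
        return True

    def write(self, data):
        chunk = bytes(data)[:4]
        self.received += chunk
        return len(chunk)

def _write(stdout, data):
    # Same loop as proposed above, with stdout passed in so it can be
    # tested without a SimpleHandler instance.
    written = stdout.write(data)
    while written is not None and written < len(data):
        written += stdout.write(data[written:])

out = ShortWriter()
_write(out, b"x" * 1000)
assert bytes(out.received) == b"x" * 1000
```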
[issue24291] wsgiref.handlers.SimpleHandler truncates large output blobs
Jason Madden added the comment:

I'm sorry, I'm still not following the argument that `write` is likely to return nothing. `RawIOBase` and `BufferedIOBase` both document that `write` must return the number of written bytes; if you don't return that, you break anything that assumes you do, as documented (because both patterns for checking whether you need to keep looping raise TypeErrors: `written = 0; written += None` and `None < len(data)`); and if you ignore the return value, you fail when using any `IOBase` object that *isn't* buffered (exactly this case).

But you are definitely right, explicitly checking for None can be done. It adds a trivial amount of overhead, but this isn't a production server. The only cost is code readability.

Good point about the explicit calls to `flush()`; I thought flush was a no-op in some streams, but that's only the case for streams where it doesn't already matter.

Python tracker: <http://bugs.python.org/issue24291>
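The two failure modes named in that comment are easy to demonstrate in isolation (a throwaway sketch, unrelated to any wsgiref code, standing in for a `write` that returned None):

```
data = b"example"

# Pattern 1: accumulate the return value -- breaks if write() returned None.
written = 0
try:
    written += None
except TypeError as e:
    print("accumulating None:", e)

# Pattern 2: compare the return value against len(data) -- also breaks.
try:
    None < len(data)
except TypeError as e:
    print("comparing None:", e)
```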