[issue7923] StreamHandler and FileHandler located in logging, not in logging.handlers
New submission from Kirill:

Index: library/logging.rst
===
--- library/logging.rst (revision 78171)
+++ library/logging.rst (working copy)
@@ -1659,7 +1659,7 @@
 StreamHandler
 ^
-.. module:: logging.handlers
+.. currentmodule:: logging

 The :class:`StreamHandler` class, located in the core :mod:`logging` package,
 sends logging output to streams such as *sys.stdout*, *sys.stderr* or any
@@ -1731,6 +1731,8 @@
 .. versionadded:: 2.6

+.. currentmodule:: logging.handlers
+
 The :class:`WatchedFileHandler` class, located in the :mod:`logging.handlers`
 module, is a :class:`FileHandler` which watches the file it is logging to. If
 the file changes, it is closed and reopened using the file name.

--
assignee: georg.brandl
components: Documentation
messages: 99319
nosy: georg.brandl, x746e
severity: normal
status: open
title: StreamHandler and FileHandler located in logging, not in logging.handlers
versions: Python 2.6

___ Python tracker <http://bugs.python.org/issue7923> ___
___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16972] Useless function call in site.py
New submission from Kirill:

In Lib/site.py:149 [1] the _init_pathinfo call has no effect. It looks like it is there because in the past _init_pathinfo changed a global variable [2]. I believe it should be changed to `known_paths = _init_pathinfo()`, the same way it is done in the addsitedir function [3].

[1] http://hg.python.org/cpython/file/fb17969ace93/Lib/site.py#l149
[2] http://hg.python.org/cpython/annotate/ac13a6ce13e2/Lib/site.py#l102
[3] http://hg.python.org/cpython/file/fb17969ace93/Lib/site.py#l189

--
components: Library (Lib)
files: patch.diff
keywords: patch
messages: 180022
nosy: x746e
priority: normal
severity: normal
status: open
title: Useless function call in site.py
type: enhancement
Added file: http://bugs.python.org/file28740/patch.diff

___ Python tracker <http://bugs.python.org/issue16972> ___
[issue13114] check -r fails with non-ASCII unicode long_description
Kirill Kuzminykh added the comment: Latin transliteration of my name is Kirill Kuzminykh. -- ___ Python tracker <http://bugs.python.org/issue13114> ___
[issue10671] urllib2 redirect to another host doesn't work
New submission from Kirill Subbotin:

When you open a url which redirects to another host (either with 301 or 302), HTTPRedirectHandler keeps the "Host" header from the previous request, which leads to an error. Instead, the host should be taken from the new location url. The attached patch is tested with Python 2.6.5.

--
components: Library (Lib)
files: urllib2.diff
keywords: patch
messages: 123729
nosy: Kirax
priority: normal
severity: normal
status: open
title: urllib2 redirect to another host doesn't work
type: behavior
versions: Python 2.6
Added file: http://bugs.python.org/file19997/urllib2.diff

___ Python tracker <http://bugs.python.org/issue10671> ___
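The attached patch is for the Python 2 urllib2 module; the same idea can be sketched with the Python 3 `urllib.request` names. This is an illustrative workaround, not the patch itself: a redirect handler that drops the stale Host header so the new request derives Host from the redirect target.

```python
import urllib.request

class HostFixingRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Illustrative sketch (py3 names): remove a carried-over Host header
    from the redirected request, so it is recomputed from the new url."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        new_req = super().redirect_request(req, fp, code, msg, headers, newurl)
        if new_req is not None:
            new_req.remove_header("Host")   # let the connection layer set it
        return new_req

# An opener built with this handler applies the fix to all redirects:
opener = urllib.request.build_opener(HostFixingRedirectHandler)
```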
[issue10928] Strange input processing
New submission from Kirill Bystrov:

I have written a simple script which evaluates some numeric expressions and faced a strange problem at some point. Some of these expressions cannot evaluate correctly. Here is an example:

Python 2.7.1+ (r271:86832, Dec 24 2010, 10:03:35)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 3158 + 04
3162
>>> 3158 + 05
3163
>>> 3158 + 06
3164
>>> 3158 + 07
3165
>>> 3158 + 08
  File "", line 1
    3158 + 08
            ^
SyntaxError: invalid token
>>> 3158 + 09
  File "", line 1
    3158 + 09
            ^
SyntaxError: invalid token
>>>

Both 2.6 and 2.7 raise this exception. My distro is Ubuntu Natty if this matters.

P.S.: sorry for my bad English :)

--
components: Interpreter Core
messages: 126411
nosy: byss
priority: normal
severity: normal
status: open
title: Strange input processing
type: behavior
versions: Python 2.6, Python 2.7

___ Python tracker <http://bugs.python.org/issue10928> ___
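As the follow-up below confirms, this is expected behavior: in Python 2 a leading zero makes an integer literal octal, so `04` is four but `08` is a syntax error because 8 is not an octal digit. The same rule can be demonstrated in Python 3, which dropped the bare leading-zero form in favour of the explicit `0o` prefix:

```python
# Python 3 requires the 0o prefix for octal literals; a bare leading
# zero (e.g. 08) is a SyntaxError for *all* digits, which removes the
# Python 2 surprise that 07 works but 08 does not.
print(0o7)            # 7
print(int("07", 8))   # 7 - parsing a string as base 8

try:
    int("08", 8)      # 8 is not a valid octal digit
except ValueError as exc:
    print("invalid:", exc)
```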
[issue10928] Strange input processing
Kirill Bystrov added the comment: Sorry, I had really forgotten about those octals. -- ___ Python tracker <http://bugs.python.org/issue10928> ___
[issue3884] turtle in the tkinter package?
New submission from Kirill Simonov <[EMAIL PROTECTED]>:

I wonder why the module 'turtle' was moved to the 'tkinter' package. It is not a part of Tk, and it does not provide new or extend existing tkinter API. While it uses tkinter, so do pydoc and idle; this is just an implementation detail. If some day a new GUI library replaces tkinter in the standard Python library, turtle's interface will not have to be changed, only the implementation. Moreover, this change unnecessarily breaks all existing demos and tutorials that use turtle. Why do this if it does not give any substantial benefits? Finally, 'import turtle' is easier than 'from tkinter import turtle' for complete newbies in programming, who are the primary users of this module. So I propose to keep turtle a top-level module, as it was in Python 1 and 2.

--
components: Library (Lib)
messages: 73311
nosy: kirill_simonov
severity: normal
status: open
title: turtle in the tkinter package?
type: feature request
versions: Python 3.0

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3884> ___
[issue3884] turtle in the tkinter package?
Kirill Simonov <[EMAIL PROTECTED]> added the comment: Thank you for the fix, I really appreciate it. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3884> ___
[issue38106] Race in PyThread_release_lock - can lead to MEMORY CORRUPTION and DEADLOCK
New submission from Kirill Smelkov:

Hello up there. I believe I've discovered a race in PyThread_release_lock on Python 2.7 that, on systems where POSIX semaphores are not available and Python locks are implemented with mutexes and condition variables, can lead to MEMORY CORRUPTION and DEADLOCK. The particular system I've discovered the bug on is macOS Mojave 10.14.6. The bug is already fixed on Python 3, and the fix for Python 2 is easy: git cherry-pick 187aa545165d

Thanks beforehand,
Kirill

Bug description

(Please see the attached pylock_bug.pyx for the program that triggers the bug for real.)

On Darwin, even though it is considered POSIX, Python uses a mutex plus condition variable to implement its lock, and, as of 20190828, the Py2.7 implementation, even though a similar issue was fixed for Py3 in 2012, contains a synchronization bug: the condition is signalled after the mutex unlock, while the correct protocol is to signal the condition from under the mutex:

https://github.com/python/cpython/blob/v2.7.16-127-g0229b56d8c0/Python/thread_pthread.h#L486-L506
https://github.com/python/cpython/commit/187aa545165d (py3 fix)

PyPy has the same bug for both pypy2 and pypy3:

https://bitbucket.org/pypy/pypy/src/578667b3fef9/rpython/translator/c/src/thread_pthread.c#lines-443:465
https://bitbucket.org/pypy/pypy/src/5b42890d48c3/rpython/translator/c/src/thread_pthread.c#lines-443:465

Signalling a condition outside of the corresponding mutex is considered OK by POSIX, but in the Python context it can lead to at least memory corruption if we consider the whole lifetime of a Python-level lock. For example, the following logical scenario:

      T1                  T2
      sema = Lock()
      sema.acquire()
                          sema.release()
      sema.acquire()
      free(sema)
      ...

can translate to the next C-level calls:

      # T1: sema = Lock()
      sema = malloc(...)
      sema.locked = 0
      pthread_mutex_init(&sema.mut)
      pthread_cond_init (&sema.lock_released)

      # T1: sema.acquire()
      pthread_mutex_lock(&sema.mut)
      # sees sema.locked == 0
      sema.locked = 1
      pthread_mutex_unlock(&sema.mut)

      # T2: sema.release()
      pthread_mutex_lock(&sema.mut)
      sema.locked = 0
      pthread_mutex_unlock(&sema.mut)

      # OS scheduler gets in and relinquishes control from T2
      # to another process
      ...

      # T1: second sema.acquire()
      pthread_mutex_lock(&sema.mut)
      # sees sema.locked == 0
      sema.locked = 1
      pthread_mutex_unlock(&sema.mut)

      # T1: free(sema)
      pthread_mutex_destroy(&sema.mut)
      pthread_cond_destroy (&sema.lock_released)
      free(sema)

      # ... e.g. malloc() which returns memory where sema was
      ...

      # OS scheduler returns control to T2
      # T2: sema.release() continues
      #
      # BUT sema was already freed and writing to anywhere
      # inside the sema block CORRUPTS MEMORY. In particular if
      # _another_ python-level lock was allocated where the sema
      # block was, writing into the memory can have effect on
      # further synchronization correctness and in particular
      # lead to deadlock on the lock that was allocated next.
      pthread_cond_signal(&sema.lock_released)

Note that T2's pthread_cond_signal(&sema.lock_released) CORRUPTS MEMORY, as it is called when the sema memory was already freed and is potentially reallocated for another object.

The fix is to move pthread_cond_signal to be done under the corresponding mutex:

      # sema.release()
      pthread_mutex_lock(&sema.mut)
      sema.locked = 0
      pthread_cond_signal(&sema.lock_released)
      pthread_mutex_unlock(&sema.mut)

by cherry-picking commit 187aa545165d ("Signal condition variables with the mutex held. Destroy condition variables before their mutexes").

Bug history

The bug has been there since 1994, since at least [1]. It was discussed in 2001 with the original code author [2], but the code was still considered to be race-free. In 2010 the place where pthread_cond_signal should be, before or after pthread_mutex_unlock, was discussed with the rationale of avoiding threads bouncing [3,4,5], and in 2012 pthread_cond_signal was moved to be called from under the mutex, but only for CPython 3 [6,7]. In 2019 the bug was (re-)discovered while testing Pygolang [8] on macOS with CPython2 and PyPy2 and PyPy3.

[1] https://github.com/python/cpython/commit/2c8cb9f3d240
[2] https://bugs.python.org/issue433625
[3] https://bugs.python.org/issue8299#msg103224
[4] h
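The fixed release protocol (signal while still holding the mutex) can be mirrored at the Python level with `threading` primitives. This is a sketch for illustration only, not CPython's actual C implementation: `EmulatedLock` is our name, and `threading.Condition` already requires its lock to be held when notifying, which is exactly the discipline commit 187aa545165d imposes.

```python
import threading

class EmulatedLock:
    """Sketch of the emulated lock with the fix applied: the condition
    is notified while the mutex is still held (hypothetical class)."""
    def __init__(self):
        self.mut = threading.Lock()
        self.lock_released = threading.Condition(self.mut)
        self.locked = False

    def acquire(self):
        with self.mut:
            while self.locked:
                self.lock_released.wait()
            self.locked = True

    def release(self):
        with self.mut:
            self.locked = False
            self.lock_released.notify()   # signalled under the mutex - the fix
```

With the pre-fix ordering (notify after releasing `self.mut`), the waiter could observe the state change, proceed, and free the lock before the signaller touches it again, which is the corruption window described above.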
[issue433625] bug in PyThread_release_lock()
Kirill Smelkov added the comment: I still believe there is a race here and it can lead to MEMORY CORRUPTION and DEADLOCK: https://bugs.python.org/issue38106. -- nosy: +navytux ___ Python tracker <https://bugs.python.org/issue433625> ___
[issue8410] Fix emulated lock to be 'fair'
Kirill Smelkov added the comment: At least condition variable signalling has to be moved to be done under mutex for correctness: https://bugs.python.org/issue38106. -- nosy: +navytux ___ Python tracker <https://bugs.python.org/issue8410> ___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Kirill Smelkov added the comment:

Thanks for feedback. Yes, since for a Python-level lock, PyThread_release_lock() is called with the GIL held:

https://github.com/python/cpython/blob/v2.7.16-129-g58d61efd4cd/Modules/threadmodule.c#L69-L82

the GIL effectively serves as the synchronization device in between T2 releasing the lock, and T1 proceeding after the second sema.acquire() when it gets to execute python-level code with `del sema`. However a) there is no sign that this aspect - that release must be called under the GIL - is being explicitly relied upon by the PyThread_release_lock() code, and b) e.g. _testcapimodule.c already has a test which calls PyThread_release_lock() with the GIL released:

https://github.com/python/cpython/blob/v2.7.16-129-g58d61efd4cd/Modules/_testcapimodule.c#L1972-L2053
https://github.com/python/cpython/blob/v2.7.16-129-g58d61efd4cd/Modules/_testcapimodule.c#L1998-L2002

Thus, I believe, there is a bug in PyThread_release_lock() and we were just lucky not to hit it due to the GIL and Python-level usage. For the reference, I indeed started to observe the problem when I moved locks and other code that implement channels in Pygolang from Python to C level:

https://lab.nexedi.com/kirr/pygolang/commit/69db91bf
https://lab.nexedi.com/kirr/pygolang/commit/3b241983?expand_all_diffs=1

--
___ Python tracker <https://bugs.python.org/issue38106> ___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Kirill Smelkov added the comment: And it is indeed better to always do pthread_cond_signal() from under the mutex. Many pthread libraries delay the signalling until the associated mutex unlock, so there should be no performance penalty here, and the correctness is much easier to reason about. -- ___ Python tracker <https://bugs.python.org/issue38106> ___
[issue8410] Fix emulated lock to be 'fair'
Kirill Smelkov added the comment: Thanks for feedback. -- ___ Python tracker <https://bugs.python.org/issue8410> ___
[issue26360] Deadlock in thread.join on Python 2.7/Mac OS X 10.9, 10.10
Kirill Smelkov added the comment: Maybe issue38106 is related. -- nosy: +navytux ___ Python tracker <https://bugs.python.org/issue26360> ___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Kirill Smelkov added the comment: I agree, it seems like a design mistake. Not only does it lead to suboptimal implementations but, more importantly, it throws misuse risks onto the user. -- ___ Python tracker <https://bugs.python.org/issue38106> ___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Change by Kirill Smelkov : -- keywords: +patch pull_requests: +15669 stage: -> patch review pull_request: https://github.com/python/cpython/pull/16047 ___ Python tracker <https://bugs.python.org/issue38106> ___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Kirill Smelkov added the comment: Ok, I did https://github.com/python/cpython/pull/16047. -- ___ Python tracker <https://bugs.python.org/issue38106> ___
[issue26360] Deadlock in thread.join on Python 2.7/macOS
Kirill Smelkov added the comment:

> > Maybe issue38106 related.
>
> That looks plausible, but unfortunately I'm still able to reproduce the hang with your PR (commit 9b135c02aa1edab4c99c915c43cd62d988f1f9c1, macOS 10.14.6).

Thanks for feedback. Then this bug is probably a deadlock of another kind.

--
___ Python tracker <https://bugs.python.org/issue26360> ___
[issue38106] [2.7] Race in PyThread_release_lock on macOS - can lead to memory corruption and deadlock
Kirill Smelkov added the comment:

Victor, thanks for merging.

> I'm surprised that we still find new bugs in this code which is supposed to be battle tested! Maybe recent Darwin changes made the bug more likely.

As discussed above (https://bugs.python.org/issue38106#msg351917, https://bugs.python.org/issue38106#msg351970), due to the GIL, the bug is not possible to trigger from pure-python code, and it can be hit only via using the C API directly. I indeed started to observe the problem after moving locks and other code that implement channels in Pygolang from Python to C level (see "0.0.3" in https://pypi.org/project/pygolang/#pygolang-change-history).

The bug was there since 1994 and, from my point of view, it was not discovered because the locking functionality was not hammer-tested enough. The bug was also not possible to explain without taking lock lifetime into account, as, without create/destroy, the lock/unlock sequence alone was race-free. https://bugs.python.org/issue433625 confirms that.

I cannot say whether recent macOS changes are relevant. My primary platform is Linux and I only recently started to use macOS under QEMU for testing. However, from my brief study of https://github.com/apple/darwin-libpthread I believe the difference in scheduling related to pthread condition variable signalling under macOS and Linux has been there for a long time already.

> PyPy: it's now your turn to fix it ;-)

PyPy people fixed the bug the same day it was reported: https://bitbucket.org/pypy/pypy/issues/3072 :)

Kirill

P.S. Mariusz, thanks also for your feedback.

--
___ Python tracker <https://bugs.python.org/issue38106> ___
[issue38106] [2.7] Race in PyThread_release_lock on macOS - can lead to memory corruption and deadlock
Kirill Smelkov added the comment: :) Yes and no. PyPy did not make a new release with the fix yet. -- ___ Python tracker <https://bugs.python.org/issue38106> ___
[issue44291] Unify logging.handlers.SysLogHandler behavior with SocketHandlers
New submission from Kirill Pinchuk:

Probably we should make the behavior of SysLogHandler consistent with the other socket handlers. Right now SocketHandler and DatagramHandler implement this behavior:

1) on `close`, set `self.socket = None`
2) when trying to send, recreate the socket if it is None

SysLogHandler doesn't implement this behavior, and when the socket gets closed for some reason (e.g. restart of a uWSGI server on code change) it is left in the closed state, and any attempt to send a message then raises an error because the socket is closed:

```
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/handlers.py", line 959, in emit
    self.socket.sendto(msg, self.address)
OSError: [Errno 9] Bad file descriptor
```

--
components: Library (Lib)
messages: 394932
nosy: Kirill Pinchuk
priority: normal
severity: normal
status: open
title: Unify logging.handlers.SysLogHandler behavior with SocketHandlers
type: enhancement
versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___ Python tracker <https://bugs.python.org/issue44291> ___
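The recreate-on-demand idea the report describes can be sketched as a small helper. This is an illustration of the pattern, not the actual patch: `ensure_udp_socket` is a hypothetical name, and an `emit()` implementation would call it before `sendto()` instead of assuming the socket outlives the handler.

```python
import socket

def ensure_udp_socket(sock):
    """Return a usable UDP socket: reuse `sock` if it is still open,
    otherwise make a fresh one (mirrors SocketHandler's lazy socket
    creation; hypothetical helper, not a logging API)."""
    if sock is None or sock.fileno() == -1:   # fileno() is -1 once closed
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return sock
```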
[issue44291] Unify logging.handlers.SysLogHandler behavior with SocketHandlers
Kirill Pinchuk added the comment: UPD: right now it has reconnection logic for UNIX sockets but not for TCP/UDP. -- ___ Python tracker <https://bugs.python.org/issue44291> ___
[issue44291] Unify logging.handlers.SysLogHandler behavior with SocketHandlers
Kirill Pinchuk added the comment:

Oh, sorry, bad wording. The current implementation has reconnection logic only for UNIX sockets. The patch adds reconnection logic for UDP/TCP sockets as well. I've done it with minimal changes to the existing code, and it can probably be merged. But in general, it looks like we could refactor SysLogHandler to inherit from SocketHandler. Not sure if that should be done in this PR or if it is better to create a separate one?

--
___ Python tracker <https://bugs.python.org/issue44291> ___
[issue44979] pathlib: support relative path construction
New submission from Kirill Pinchuk:

Hi. I've been using this snippet for years and believe that it would be a nice addition to pathlib's functionality. Basically, it allows constructing a path relative to the current file (instead of the cwd). It comes in quite handy when you're working with deeply nested resources like file fixtures in tests, and in many other cases.

```
@classmethod
def relative(cls, path, depth=1):
    """
    Return path that is constructed relatively to caller file.
    """
    base = Path(sys._getframe(depth).f_code.co_filename).parent
    return (base / path).resolve()
```

--
components: Library (Lib)
messages: 400075
nosy: cybergrind
priority: normal
severity: normal
status: open
title: pathlib: support relative path construction
type: enhancement
versions: Python 3.11

___ Python tracker <https://bugs.python.org/issue44979> ___
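The snippet above can be exercised standalone today, as a module-level function rather than a classmethod (same logic; note it relies on the CPython-specific `sys._getframe`):

```python
import sys
from pathlib import Path

def relative(path, depth=1):
    """Resolve *path* against the directory of the caller's source file,
    not the current working directory (standalone version of the snippet)."""
    base = Path(sys._getframe(depth).f_code.co_filename).parent
    return (base / path).resolve()

# e.g. in a test module: FIXTURE = relative("fixtures/data.json")
```

The `depth` parameter lets wrappers skip their own frame, which is why the proposed classmethod defaults it to 1.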
[issue44979] pathlib: support relative path construction
Change by Kirill Pinchuk : -- keywords: +patch pull_requests: +26344 stage: -> patch review pull_request: https://github.com/python/cpython/pull/27890 ___ Python tracker <https://bugs.python.org/issue44979> ___
[issue37688] The results from os.path.isdir(...) an Path(...).is_dir() are not equivalent for empty path strings.
New submission from Kirill Balunov:

In the documentation it is said that os.path.isdir(...) and Path(...).is_dir() are equivalent substitutes:

https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module

But they give different results for empty path strings:

>>> import os
>>> from pathlib import Path
>>> dummy = ""
>>> os.path.isdir(dummy)
False

Obviously this is not an equivalence, so either it should be noted in the documentation or corrected in the code.

--
assignee: docs@python
components: Documentation, Library (Lib)
messages: 348475
nosy: docs@python, godaygo
priority: normal
severity: normal
status: open
title: The results from os.path.isdir(...) and Path(...).is_dir() are not equivalent for empty path strings.
type: behavior

___ Python tracker <https://bugs.python.org/issue37688> ___
[issue37688] The results from os.path.isdir(...) an Path(...).is_dir() are not equivalent for empty path strings.
Kirill Balunov added the comment:

Forgot to write the result for the Path variant:

>>> Path(dummy).is_dir()
True

--
___ Python tracker <https://bugs.python.org/issue37688> ___
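The divergence comes from `PurePath('')` historically normalizing to `'.'`, so `Path('').is_dir()` checks the current directory, while `os.path.isdir('')` stats the empty string, which always fails. A small demonstration (the `Path` behavior is as observed up to at least Python 3.12; later versions may change empty-path handling):

```python
import os
from pathlib import Path

dummy = ""

# os.path.isdir passes the empty string straight to stat(), which fails:
print(os.path.isdir(dummy))   # False
print(os.path.isdir("."))     # True

# Path('') has historically normalized to '.', so is_dir() checks the cwd:
print(Path(dummy))            # '.' on versions where '' normalizes to '.'
```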
[issue37688] The results from os.path.isdir(...) an Path(...).is_dir() are not equivalent for empty path strings.
Kirill Balunov added the comment:

I understand the reasons; I only say that it does not correspond to my perception of their equivalence, because:

os.path.isdir('') != os.path.isdir('.')

while:

Path('').is_dir() == Path('.').is_dir()

and I can confirm that some libraries rely on the os.path.isdir('') -> False behavior.

--
___ Python tracker <https://bugs.python.org/issue37688> ___
[issue37688] The results from os.path.isdir(...) an Path(...).is_dir() are not equivalent for empty path strings.
Kirill Balunov added the comment:

I am reading "equivalence" too strictly (as "a substitute"), because this is part of the documentation :) and I agree that in ordinary speech I would use it rather in the sense of “similar”. Just to make sure: does everyone agree that this requires only a documentation change? Because as for me, I think it would be better for `os.path.isdir` to raise `ValueError` or emit a `DeprecationWarning`; returning `False` on an empty string is not well-defined behavior. But I'm fine with being alone in that opinion.

--
___ Python tracker <https://bugs.python.org/issue37688> ___
[issue35757] slow subprocess.Popen(..., close_fds=True)
New submission from Kirill Kolyshkin:

In case close_fds=True is passed to subprocess.Popen() or its users (subprocess.call() etc.), it might spend considerable time closing non-opened file descriptors, as demonstrated by the following snippet from strace:

close(3)    = -1 EBADF (Bad file descriptor)
close(5)    = -1 EBADF (Bad file descriptor)
close(6)    = -1 EBADF (Bad file descriptor)
close(7)    = -1 EBADF (Bad file descriptor)
...
close(1021) = -1 EBADF (Bad file descriptor)
close(1022) = -1 EBADF (Bad file descriptor)
close(1023) = -1 EBADF (Bad file descriptor)

This happens because the code in _close_fds() iterates from 3 up to MAX_FDS = os.sysconf("SC_OPEN_MAX"). Now, syscalls are cheap, but SC_OPEN_MAX (also known as RLIMIT_NOFILE or ulimit -n) can be quite high, for example:

$ docker run --rm python python3 -c \
  $'import os\nprint(os.sysconf("SC_OPEN_MAX"))'
1048576

This means a million syscalls before spawning a child process, which can result in a major delay, like 0.1s as measured on my fast and mostly idling laptop. Here is the comparison with python3 (which does not have this problem):

$ docker run --rm python python3 -c $'import subprocess\nimport time\ns = time.time()\nsubprocess.check_call([\'/bin/true\'], close_fds=True)\nprint(time.time() - s)\n'
0.0009245872497558594
$ docker run --rm python python2 -c $'import subprocess\nimport time\ns = time.time()\nsubprocess.check_call([\'/bin/true\'], close_fds=True)\nprint(time.time() - s)\n'
0.0964419841766

--
components: Library (Lib)
messages: 333819
nosy: Kirill Kolyshkin
priority: normal
pull_requests: 11269
severity: normal
status: open
title: slow subprocess.Popen(..., close_fds=True)
type: performance
versions: Python 2.7

___ Python tracker <https://bugs.python.org/issue35757> ___
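The reason Python 3 avoids the million-syscall loop is that it asks the OS which descriptors are actually open (on Linux, by listing /proc/self/fd) instead of blindly probing 3..SC_OPEN_MAX. A sketch of that idea (the function name is ours, not a subprocess API):

```python
import os

def open_fds():
    """Return the set of file descriptors currently open in this process.
    Uses /proc/self/fd on Linux; otherwise falls back to probing every fd
    up to SC_OPEN_MAX, which is exactly the slow path described above."""
    fd_dir = "/proc/self/fd"
    if os.path.isdir(fd_dir):
        # One readdir instead of SC_OPEN_MAX syscalls. The listing also
        # includes the fd opened by listdir itself, which is harmless here.
        return {int(name) for name in os.listdir(fd_dir)}
    result = set()
    for fd in range(os.sysconf("SC_OPEN_MAX")):
        try:
            os.fstat(fd)          # cheap liveness probe
            result.add(fd)
        except OSError:
            pass
    return result
```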
[issue28685] Optimizing list.sort() by performing safety checks in advance
Kirill Balunov added the comment: What is the current status of this issue and will it go into Python 3.7? -- nosy: +godaygo ___ Python tracker <https://bugs.python.org/issue28685> ___
[issue32910] venv: Deactivate.ps1 is not created when Activate.ps1 was used
New submission from Kirill Balunov:

There was a related issue, which was closed: https://bugs.python.org/issue26715. If a virtual environment was activated using the Powershell script Activate.ps1, Deactivate.ps1 is not created, while the documentation says that it should be:

"You can deactivate a virtual environment by typing “deactivate” in your shell. The exact mechanism is platform-specific: for example, the Bash activation script defines a “deactivate” function, whereas on Windows there are separate scripts called deactivate.bat and Deactivate.ps1 which are installed when the virtual environment is created."

Way to reproduce under Windows 10, Python 3.6.4:

1. Open an elevated Powershell (Administrator access).
2. Activate a virtual environment using Activate.ps1.
3. There is no Deactivate.ps1.

Also, when the environment was activated with Activate.ps1, `deactivate` will not work. On the other hand, if the environment was activated simply with `activate` (it works) in Powershell, `deactivate` will also work.

--
components: Windows
messages: 312551
nosy: godaygo, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: venv: Deactivate.ps1 is not created when Activate.ps1 was used
type: behavior
versions: Python 3.6

___ Python tracker <https://bugs.python.org/issue32910> ___
[issue32910] venv: Deactivate.ps1 is not created when Activate.ps1 was used
Kirill Balunov added the comment: Sorry, `deactivate` works in both cases, `Scripts/Activate.ps1` and `Scripts/activate`. Only `Deactivate.ps1` is not created for the former, but the docs say that it should be. -- ___ Python tracker <https://bugs.python.org/issue32910> ___
[issue32910] venv: Deactivate.ps1 is not created when Activate.ps1 was used
Kirill Balunov added the comment:

Yes, I agree, I did not understand the documentation correctly. It seems to me that the problem in perception arose because "deactivate" is not formatted as a shell command, while `Deactivate.ps1` and the others are. So I think simple formatting will be enough. Also, it is not mentioned in the documentation that it is possible to activate the environment in Powershell with "Drive:\> \Scripts\activate", but maybe that's not always true and I have nowhere to check.

--
___ Python tracker <https://bugs.python.org/issue32910> ___
[issue32696] Fix pickling exceptions with multiple arguments
Kirill Matsaberydze added the comment:

Hi, I encounter similar behavior in Python 3.6.5 with the following code:

import pickle

class CustomException(Exception):
    def __init__(self, arg1, arg2):
        msg = "Custom message {} {}".format(arg1, arg2)
        super().__init__(msg)

obj_dump = pickle.dumps(CustomException("arg1", "arg2"))
obj = pickle.loads(obj_dump)

Traceback (most recent call last):
  File "", line 1, in
TypeError: __init__() missing 1 required positional argument: 'arg2'

So it looks like it is not only a Python 2.7 problem.

--
nosy: +Kirill Matsaberydze

___ Python tracker <https://bugs.python.org/issue32696> ___
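The failure happens because `Exception.__reduce__` replays `self.args` (here, the single formatted message) into the two-argument `__init__` on unpickling. One known workaround, sketched below on the same example, is to override `__reduce__` so the original constructor arguments are replayed instead (`_ctor_args` is a name chosen for this sketch):

```python
import pickle

class CustomException(Exception):
    def __init__(self, arg1, arg2):
        super().__init__("Custom message {} {}".format(arg1, arg2))
        self._ctor_args = (arg1, arg2)   # remember what __init__ actually needs

    def __reduce__(self):
        # Replay the original constructor arguments instead of self.args.
        return (self.__class__, self._ctor_args)

obj = pickle.loads(pickle.dumps(CustomException("arg1", "arg2")))
print(obj)   # Custom message arg1 arg2
```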
[issue27129] Wordcode, part 2
Kirill Balunov added the comment: Hello, what is the future of this patch? It feels like the transition to wordcode is still in some half-way state. -- nosy: +godaygo ___ Python tracker <https://bugs.python.org/issue27129> ___
[issue33326] Convert collections (cmp_op, hasconst, hasname and others) in opcode module to more optimal type
New submission from Kirill Balunov:

The opcode module contains several collections: `cmp_op`, `hasconst`, `hasname`, `hasjrel`, ... which are only used for `in` checks. At the same time, they are stored as `list`s, and `cmp_op` as a tuple. Both of these types are not optimal for `__contains__` checks. Maybe it is worth at least converting them to the `frozenset` type after they are filled?

--
components: Library (Lib)
messages: 315576
nosy: godaygo
priority: normal
severity: normal
status: open
title: Convert collections (cmp_op, hasconst, hasname and others) in opcode module to more optimal type
type: performance
versions: Python 3.7, Python 3.8

___ Python tracker <https://bugs.python.org/issue33326> ___
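The proposed conversion is essentially a one-liner per collection, and downstream code can already do it today without waiting for the stdlib change (a sketch; the frozenset names shadow the list names only locally):

```python
import opcode

# Build O(1)-membership versions of the opcode collections.
hasconst = frozenset(opcode.hasconst)
hasname = frozenset(opcode.hasname)
hasjrel = frozenset(opcode.hasjrel)

LOAD_CONST = opcode.opmap["LOAD_CONST"]
print(LOAD_CONST in hasconst)   # True
```

A frozenset `in` check hashes once instead of scanning the list, which is where the speed-up reported later in this thread comes from.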
[issue32455] PyCompile_OpcodeStackEffect() and dis.stack_effect() are not particularly useful
Kirill Balunov added the comment:

Sorry if this doesn't fit this issue and needs a separate one. Since Python switched to 2-byte wordcode, all opcodes which do not imply an argument technically have one, padded with 0. So it is convenient to iterate over bytecode like `op, arg = instruction`. But there is a check in stack_effect that the second argument for these opcodes must be None.

file: _opcode.c

    else if (oparg != Py_None) {
        PyErr_SetString(PyExc_ValueError,
                "stack_effect: opcode does not permit oparg but oparg was specified");
        return -1;
    }

So you need to perform a somewhat _redundant_ check before calling:

    arg = arg if op >= opcode.HAVE_ARGUMENT else None
    st = stack_effect(op, arg)

Maybe it is reasonable to relax this condition - allow None or 0 for opcode < opcode.HAVE_ARGUMENT?

-- nosy: +godaygo

___ Python tracker <https://bugs.python.org/issue32455> ___
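The redundant normalization described above can be wrapped in a small helper (a sketch, not a proposed API; `safe_stack_effect` is a hypothetical name):

```python
import dis
import opcode

def safe_stack_effect(op, arg):
    # dis.stack_effect() rejects a non-None oparg for opcodes below
    # HAVE_ARGUMENT, so callers iterating over uniform (op, arg) pairs
    # from raw wordcode must normalize the argument first.
    arg = arg if op >= opcode.HAVE_ARGUMENT else None
    return dis.stack_effect(op, arg)

# POP_TOP takes no argument; passing the wordcode padding value 0
# directly to dis.stack_effect() would raise ValueError.
print(safe_stack_effect(opcode.opmap["POP_TOP"], 0))  # -1
```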
[issue33326] Convert collections (cmp_op, hasconst, hasname and others) in opcode module to more optimal type
Change by Kirill Balunov:

-- nosy: +larry, serhiy.storchaka

___ Python tracker <https://bugs.python.org/issue33326> ___
[issue33326] Convert collections (cmp_op, hasconst, hasname and others) in opcode module to more optimal type
Kirill Balunov added the comment:

Small risk of breaking is a fair point (maybe some FutureWarning with the new __getattr__ PEP 562?). I've checked several packages:

vstinner/bytecode uses:

    @staticmethod
    def _has_jump(opcode):
        return (opcode in _opcode.hasjrel or
                opcode in _opcode.hasjabs)

maynard defines them as sets and does not rely on the opcode module:

    all_jumps = absolute_jumps | relative_jumps

numba converts them to frozensets:

    JREL_OPS = frozenset(dis.hasjrel)
    JABS_OPS = frozenset(dis.hasjabs)
    JUMP_OPS = JREL_OPS | JABS_OPS

codetransformer uses:

    absjmp = opcode in hasjabs
    reljmp = opcode in hasjrel

anotherassembler.py uses:

    elif opcode in hasjrel or opcode in hasjabs:

byteplay converts them to sets:

    hasjrel = set(Opcode(x) for x in opcode.hasjrel)
    hasjabs = set(Opcode(x) for x in opcode.hasjabs)
    hasjump = hasjrel.union(hasjabs)

byterun uses:

    elif byteCode in dis.hasjrel:
        arg = f.f_lasti + intArg
    elif byteCode in dis.hasjabs:
        arg = intArg

In fact, none of the above proves anything, but I have not found cases of code concatenating hasjrel+hasjabs. Despite the fact that these collections are small, on average I gain a 5-6x speed-up with sets.

--

___ Python tracker <https://bugs.python.org/issue33326> ___
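The claimed speed-up can be checked with a rough micro-benchmark along these lines (a sketch only; the 5-6x figure is from the original message, and actual numbers depend on machine and Python version):

```python
import opcode
import timeit

# Compare membership tests against the stock lists vs. a frozenset copy.
as_list = opcode.hasjrel + opcode.hasjabs
as_set = frozenset(as_list)
probe = 0  # opcode 0 is not a jump, forcing a full scan of the list

t_list = timeit.timeit(lambda: probe in as_list, number=100_000)
t_set = timeit.timeit(lambda: probe in as_set, number=100_000)
print(f"list: {t_list:.4f}s  frozenset: {t_set:.4f}s")
```

Because the lists hold only a dozen or so small ints, the absolute difference per check is tiny; the argument in the message is about the asymptotics and idiom, not a hot path.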
[issue33326] Convert collections (cmp_op, hasconst, hasname and others) in opcode module to more optimal type
Kirill Balunov added the comment:

I apologize for the FutureWarning and __getattr__ remark. I myself do not understand what I meant and how it would help in this situation :)

--

___ Python tracker <https://bugs.python.org/issue33326> ___
[issue24066] send_message should take all the addresses in the To: header into account
New submission from Kirill Elagin:

If I have a message with multiple `To` headers and I send it using `send_message` without specifying `to_addrs`, the message gets sent to only one of the recipients. I'm attaching a patch that makes it send to _all_ the addresses listed in `To`, `Cc` and `Bcc`. I didn't add any new tests, as the existing ones already cover those cases and I have no idea how on Earth they pass.

-- components: Library (Lib) messages: 242158 nosy: kirelagin priority: normal severity: normal status: open title: send_message should take all the addresses in the To: header into account

___ Python tracker <http://bugs.python.org/issue24066> ___
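The behavior the patch aims for can be sketched with the stdlib email API (an illustration under my own assumptions, not the actual attached patch):

```python
from email.message import Message
from email.utils import getaddresses

msg = Message()
msg["To"] = "a@example.com"
msg["To"] = "b@example.com"   # the legacy Message class allows duplicate To headers
msg["Cc"] = "c@example.com"

# Gather recipients from every To/Cc/Bcc field, not just the first one.
field_values = (msg.get_all("To", []) +
                msg.get_all("Cc", []) +
                msg.get_all("Bcc", []))
recipients = [addr for _name, addr in getaddresses(field_values)]
print(recipients)  # ['a@example.com', 'b@example.com', 'c@example.com']
```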
[issue24066] send_message should take all the addresses in the To: header into account
Kirill Elagin added the comment:

x_x

-- keywords: +patch Added file: http://bugs.python.org/file39219/multiple_to.patch

___ Python tracker <http://bugs.python.org/issue24066> ___
[issue24066] send_message should take all the addresses in the To: header into account
Kirill Elagin added the comment:

Ah, I'm so dumb. Of course the tests pass, as there are multiple addresses but still just one field. Here is a test for multiple fields.

-- Added file: http://bugs.python.org/file39263/multiple_fields_test.patch

___ Python tracker <http://bugs.python.org/issue24066> ___
[issue24066] send_message should take all the addresses in the To: header into account
Kirill Elagin added the comment:

Oh, I see now. It would be a good idea to raise an error either in `send_message` or at the moment when a second `To`/`Cc`/`Bcc` header is added to the message.

-- resolution: -> not a bug status: open -> closed

___ Python tracker <http://bugs.python.org/issue24066> ___
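As a side note (a sketch, not part of the original exchange): the modern `EmailMessage` class with the default policy already takes the second approach, rejecting a duplicate `To` header at the moment it is set:

```python
from email.message import EmailMessage

msg = EmailMessage()  # uses email.policy.default
msg["To"] = "a@example.com"
try:
    # The default policy caps To/Cc/Bcc at one header each, so a second
    # assignment raises ValueError instead of silently adding a duplicate.
    msg["To"] = "b@example.com"
except ValueError as exc:
    print("rejected:", exc)
```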