[issue8258] Multiple Python Interpreter Memory Leak
New submission from William : Context: I am embedding Python into a Windows based C++ application, where a new Python interpreter (using Py_NewInterpreter) is created for each user who connects to the system. When the user logs off, the function "Py_EndInterpreter" is used to free all the associated resources. Problem: After starting the application on a server, the memory usage increases rapidly as some users login and log-off from the system. Some Tests: I have conducted some tests along with the Python interpreter. I have written a simple C++ program which simply creates 100 Python interpreters and then ends them one by one. If we check the Windows Task Manager, the following are the observations:- Memory usage before starting to create the Python Interpreters: 4316K Memory usage after creating 100 Python Interpreters: 61248K Memory usage after ending the 100 Python Interpreters: 47664K This shows that there has been a memory leak of approximately 43348K Please do consider this problem for fixing at the earliest or let me know if I am doing something wrong. -- components: Interpreter Core, Windows files: PythonCall.cpp messages: 101885 nosy: ewillie007 severity: normal status: open title: Multiple Python Interpreter Memory Leak type: resource usage versions: Python 2.6 Added file: http://bugs.python.org/file16687/PythonCall.cpp ___ Python tracker <http://bugs.python.org/issue8258> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26571] turtle regression in 3.5
Change by William Navaraj : -- nosy: +williamnavaraj nosy_count: 6.0 -> 7.0 pull_requests: +28569 pull_request: https://github.com/python/cpython/pull/30355 ___ Python tracker <https://bugs.python.org/issue26571> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46260] Misleading SyntaxError on f-string
William Navaraj added the comment: @Eric Smith,Thanks for explaining the intuition behind this statement. I agree. Just to avoid ambiguity we could add "f-string: unmatched '%c' - no matching open parenthesis or missing '}'" The only possibility at nested_depth==0 line 665 is either of those because of the check in line 559 i.e. assert(**str == '{') https://github.com/python/cpython/blob/cae55542d23e606dde9819d5dadd7430085fcc77/Parser/string_parser.c#L559 Will help folks to look backwards (close to open as well) than forwards (open to close) as in Matt's case which is also a valid interpretation where there is an opening paren outside the f-string. or simply "f-string: unmatched '%c' or missing '}'" will also do. -- nosy: +williamnavaraj ___ Python tracker <https://bugs.python.org/issue46260> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46275] caret location for syntax error pointing with f-strings
New submission from William Navaraj : Currently for non-f-string syntax errors, the caret points to the correct location of the syntax errors Example 1: ``` a=foo)+foo()+foo() ``` a=foo)+foo()+foo() ^ SyntaxError: unmatched ')' For f-string syntax errors, the caret points two locations after the f-string (almost correct location of the f-strings as a whole but will be more helpful as much as possible to point where exactly the syntax error is) Example 2: ``` temp=f"blank ({foo(}" ``` temp=f"blank ({foo(}" ^ SyntaxError: f-string: closing parenthesis '}' does not match opening parenthesis '(' Example 3: ``` temp=f"blank ({foo)blank ({foo()}) blank foo()})" ``` temp=f"blank ({foo)blank ({foo()}) blank foo()})" ^ SyntaxError: f-string: unmatched ')' -- components: Parser messages: 409813 nosy: lys.nikolaou, pablogsal, williamnavaraj priority: normal severity: normal status: open title: caret location for syntax error pointing with f-strings versions: Python 3.11, Python 3.8 ___ Python tracker <https://bugs.python.org/issue46275> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46275] caret location for syntax error pointing with f-strings
William Navaraj added the comment: A potential solution or in that direction https://github.com/williamnavaraj/cpython/tree/fix-issue-46275 Example 1: ``` temp=f"blank {foo)" ``` temp=f"blank {foo)" ^ SyntaxError: f-string: unmatched ')' Example 2: ``` temp=f"blank ({foo)foo2" ``` temp=f"blank ({foo)foo2" ^ SyntaxError: f-string: unmatched ')' Example 3: ``` temp=f"blank ({foo)blank ({foo()}) blank foo()})" ``` temp=f"blank ({foo)blank ({foo()}) blank foo()})" ^ SyntaxError: f-string: unmatched ')' -- ___ Python tracker <https://bugs.python.org/issue46275> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26571] turtle regression in 3.5
William Navaraj added the comment: Hi all, Sorry. I seem to have stepped on someone's toes, or no one likes turtle any more (as this has been active since 2016). As you can see, I am new here and still getting a feel for these procedures. I was preparing a Jupyter notebook for my students and planned some exercises with turtle at the start, before we jump into real robots. I noticed this annoying Terminator error and dug deeper into the code to find out about the _RUNNING class variable. The PR from Furkan suggests removing the raise of Terminator. Maybe we could amend that instead, and I will close my PR if you prefer. Thanks, -- ___ Python tracker <https://bugs.python.org/issue26571> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45723] Improve and simplify configure.ac checks
William Fisher added the comment: In the conversion to PY_CHECK_FUNC, there's a mistake in HAVE_EPOLL. Python 3.10.1 defines HAVE_EPOLL by checking for the `epoll_create` function. Python 3.11.0a3 checks for the `epoll` function instead. There is no epoll() function so this always fails. The effect is that `epoll` doesn't exist in the `select` module on Python 3.11.0a3. Most code that uses epoll falls back when it is not available, so this may not be failing any tests. -- nosy: +byllyfish ___ Python tracker <https://bugs.python.org/issue45723> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
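For illustration, this is roughly the fallback pattern the comment refers to; code written this way silently degrades when select.epoll is missing, which is why the regression can slip past test suites. A minimal sketch, not the configure fix itself:
```
import select
import selectors

# Prefer epoll when the select module exposes it; otherwise fall back.
if hasattr(select, "epoll"):
    print("using select.epoll")
else:
    print("select.epoll unavailable; falling back to poll/select")

# selectors.DefaultSelector performs a similar check internally and picks
# the best mechanism the platform (and this build) makes available.
sel = selectors.DefaultSelector()
print("DefaultSelector chose:", type(sel).__name__)
```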
[issue46275] caret location for syntax error pointing with f-strings
William Navaraj added the comment: The variation in the caret position was also due to the trailing spaces. This is now sorted in this solution with a factored out function to find out the correct offset. https://github.com/python/cpython/compare/main...williamnavaraj:fix-issue-46275?expand=1 Tested against all of the following cases and it works great: temp=f"blank ({foo()}"+f"blank ({foo(}"+f"blank ({foo()}" temp=f"blank*{foo(*blank*foo()*blank*foo()}"+f"({foo(}"+f"blank ({foo(}" a=foo)+foo()+foo() f"blank ({foo(blank ({foo()}) blank foo()})" temp=f"blank ({foo)foo2" temp=f"blank {foo)" temp=f"blank {foo)" temp=f"blank ({foo)blank ({foo()}) blank foo()})" yetAnotherBlah temp=f"blank ({foo)blank ({foo()}) blank foo()})" yetAnotherBlahWithFurtherSpacesAfter -- ___ Python tracker <https://bugs.python.org/issue46275> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46900] marshal.dumps represents the same list object differently
New submission from William Dreese : Hello, I've been working with the marshal package and came across this issue (I think) - Python 3.9.10 Interpreter (Same output in 3.11.0a5+ (heads/master:b6b711a1aa) on darwin) >>> import marshal >>> var_example = [(1,2,3),(4,5,6)] >>> var_marshaled = marshal.dumps(var_example) >>> raw_marshaled = marshal.dumps([(1,2,3),(4,5,6)]) >>> def pp(to_print): >>> [print(byt) for byt in to_print] >>> pp(var_marshaled) 219 2 0 0 0 41 3 233 1 0 0 0 233 2 0 0 0 233 3 0 0 0 169 3 233 4 0 0 0 233 5 0 0 0 233 6 0 0 0 91 2 0 0 0 41 3 233 1 0 0 0 233 2 0 0 0 233 3 0 0 0 169 3 233 4 0 0 0 233 5 0 0 0 233 6 0 0 0 >>> pp(raw_marshaled) 219 2 0 0 0 169 3 233 1 0 0 0 233 2 0 0 0 233 3 0 0 0 169 3 233 4 0 0 0 233 5 0 0 0 233 6 0 0 0 91 2 0 0 0 169 3 233 1 0 0 0 233 2 0 0 0 233 3 0 0 0 169 3 233 4 0 0 0 233 5 0 0 0 233 6 0 0 0 The difference above lies in the byte representation of the tuple type (41 in the variable version and 169 in the raw version). Is this intended behavior? -- components: C API messages: 414362 nosy: Dreeseaw priority: normal severity: normal status: open title: marshal.dumps represents the same list object differently type: behavior versions: Python 3.11 ___ Python tracker <https://bugs.python.org/issue46900> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46900] marshal.dumps represents the same list object differently
William Dreese added the comment: I've made a very bad copy & paste error with the terminal output below, I apologize. The corrected output is >>> pp(var_marshaled) 91 2 0 0 0 41 3 233 1 0 0 0 233 2 0 0 0 233 3 0 0 0 41 3 233 4 0 0 0 233 5 0 0 0 233 6 0 0 0 >>> pp(raw_marshaled) 91 2 0 0 0 169 3 233 1 0 0 0 233 2 0 0 0 233 3 0 0 0 169 3 233 4 0 0 0 233 5 0 0 0 233 6 0 0 0 -- ___ Python tracker <https://bugs.python.org/issue46900> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46900] marshal.dumps represents the same list object differently
William Dreese added the comment: You two are both correct, this is not a bug and is the intended functionality. > The difference between 41 and 169 is 128: This realization helps a ton. Thanks. -- ___ Python tracker <https://bugs.python.org/issue46900> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
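A small illustration of that observation: in recent marshal format versions the high bit of the type byte marks an object that was added to the reference table, so masking it off recovers the underlying type code (41 is ord(')'), the small-tuple code). A sketch using only the observed byte values:
```
import marshal

FLAG_REF = 0x80   # high bit of the type byte: "object registered in the ref table"

var_example = [(1, 2, 3), (4, 5, 6)]
a = marshal.dumps(var_example)              # dumped via a variable
b = marshal.dumps([(1, 2, 3), (4, 5, 6)])   # dumped from a fresh literal

# 169 is just the small-tuple code with the ref flag set.
print(169 & ~FLAG_REF == 41, chr(41))       # True ')'

# With the flag bit masked off, the two dumps describe the same structure
# (this holds for this example; exact flag placement can vary by interpreter).
print(bytes(x & ~FLAG_REF for x in a) == bytes(x & ~FLAG_REF for x in b))
```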
[issue46900] marshal.dumps represents the same list object differently
Change by William Dreese : -- resolution: -> not a bug stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue46900> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14156] argparse.FileType for '-' doesn't work for a mode of 'rb'
William Woodruff added the comment: Nosying myself; this affects 3.9 and 3.10 as well. -- nosy: +yossarian versions: +Python 3.10, Python 3.9 ___ Python tracker <https://bugs.python.org/issue14156> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
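For anyone hitting this in the affected versions, a common workaround is a small type callable that special-cases '-' and hands back the binary standard streams instead of the text ones; a sketch, not the eventual fix (the helper name is made up):
```
import argparse
import sys

def binary_file(mode="rb"):
    """Like argparse.FileType(mode), but returns the binary std streams for '-'."""
    def opener(path):
        if path == "-":
            return sys.stdin.buffer if "r" in mode else sys.stdout.buffer
        return open(path, mode)
    return opener

parser = argparse.ArgumentParser()
parser.add_argument("infile", type=binary_file("rb"))
args = parser.parse_args(["-"])
print(args.infile)   # a BufferedReader wrapping stdin, suitable for bytes I/O
```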
[issue9763] Crashes upon run after syntax error encountered in OSX 10.5.8
New submission from William Barr : Steps for reproduction: 1. Open a new code window 2. Enter python code which contains a syntax error 3. F5 and attempt to run the file (This was done without saving first) 4. Close the syntax error dialog. 5. Fix the syntax error and try to F5 again without saving again. 6. IDLE will encounter an error and unexpectedly close. I'm reporting this after having tested this on 4 different OSX 10.5.8 machines. I'm not sure if other versions of Python are also susceptible to this as well. -- components: IDLE messages: 115491 nosy: Webs961 priority: normal severity: normal status: open title: Crashes upon run after syntax error encountered in OSX 10.5.8 type: crash versions: Python 3.1 ___ Python tracker <http://bugs.python.org/issue9763> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10365] IDLE Crashes on File Open Dialog when code window closed before other file opened
New submission from William Barr : Steps for reproduction: 1. Open IDLE (Python 3.1.2) 2. Open a .py file 3. With the code window (not the shell window) in focus, Ctrl + O to bring up the open file dialog. Do not select a file or press open. 4. Close the code window. 5. Select a file and try to open it. 6. The IDLE process will terminate. This test was performed on Windows 7 Professional 32-bit as well as Windows XP Professional 32-bit. Python 3.1.2 (r312:79149, Mar 21 2010, 00:41:52) [MSC v.1500 32 bit (Intel)] on win32 -- components: IDLE, Windows messages: 120793 nosy: william.barr priority: normal severity: normal status: open title: IDLE Crashes on File Open Dialog when code window closed before other file opened type: crash versions: Python 3.1 ___ Python tracker <http://bugs.python.org/issue10365> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10365] IDLE Crashes on File Open Dialog when code window closed before other file opened
William Barr added the comment: Ok. I'll see if I can get some protection around that then. I did test the issue with 2.7, and I didn't find it. The window didn't open, but it didn't generate an exception that would kill the IDLE process. -- ___ Python tracker <http://bugs.python.org/issue10365> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11369] Add caching for the isEnabledFor() computation
New submission from William Hart : I recently started using logging extensively in the Coopr software, and I ran into some performance issues when logging was used within frequently used kernels in the code. After profiling, it became clear that the performance of the logging package could be improved by simply caching the value of the Logger.isEnabledFor() method. I've created a draft version of this caching mechanism based on a snapshot of logging that I took from Python 2.7.1. This is currently hosted in pyutilib.logging, though I'd love to see this migrate into the Python library (see https://software.sandia.gov/trac/pyutilib/browser/pyutilib.logging/trunk/pyutilib/logging). Basically, I did the following: 1. Added a counter to the Manager class (status) that is incremented whenever the manager object has its level set 2. Added a counter to the Logger class (level_status) that represents the value of the manager status when the Logger's cache was last updated 3. Reworked the isEnabledFor() method to update the cache if the logger status is older than the manager status. I moved the present isEnabledFor logic into the _isEnabledFor() method for simplicity. The attached file shows the diffs. Note that there were a few other diffs due to an effort to make pyutilib.logging work on Python 2.5-2.7. --Bill -- components: Library (Lib) files: logging__init__diffs.txt messages: 129851 nosy: William.Hart priority: normal severity: normal status: open title: Add caching for the isEnabledFor() computation versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3 Added file: http://bugs.python.org/file20967/logging__init__diffs.txt ___ Python tracker <http://bugs.python.org/issue11369> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
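A rough sketch of the same idea against the public logging API (attribute names here are illustrative and not the ones in the attached diff): a global generation counter is bumped whenever a level changes, and each logger caches its effective level together with the generation it was computed under.
```
import logging

_level_generation = 0   # bumped whenever a CachingLogger's level changes

class CachingLogger(logging.Logger):
    """Cache getEffectiveLevel() until some logger's level changes."""

    def setLevel(self, level):
        global _level_generation
        _level_generation += 1              # invalidate all cached effective levels
        super().setLevel(level)

    def getEffectiveLevel(self):
        if getattr(self, "_cached_gen", None) != _level_generation:
            self._cached_level = super().getEffectiveLevel()   # walk ancestors once
            self._cached_gen = _level_generation
        return self._cached_level

    def isEnabledFor(self, level):
        if self.manager.disable >= level:
            return False
        return level >= self.getEffectiveLevel()

logging.setLoggerClass(CachingLogger)
log = logging.getLogger("app.worker.kernel")
print(log.isEnabledFor(logging.DEBUG))
```
As written, the cache is only invalidated by setLevel() calls on CachingLogger instances, and it has the same thread-safety caveat discussed in the follow-up messages.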
[issue11369] Add caching for the isEnabledFor() computation
William Hart added the comment: Vinay: Yes, the bulk of the time was spent in getEffectiveLevel(). Since that method is only called by isEnabledFor(), it made sense to cache isEnabledFor(). But, that probably reflects the characteristics of my use of logging. I never call getEffectiveLevel() otherwise, so caching isEnabledFor() minimized the total cost of logging. I think you could sensibly argue for caching getEffectiveLevel(). I had somewhat shallow logger hierarchies (3-4 deep). The real problem was that this was being called in a kernel of my code ... so even that additional cost was noteworthy. I think it's smart to focus on Python 3.3. As it happens, my colleagues did some explicit caching of logging information that made pyutilib.logging redundant. When you don't think you need this as a reference implementation, I'll delete it. --Bill On Wed, Mar 2, 2011 at 2:25 AM, Vinay Sajip wrote: > > Vinay Sajip added the comment: > > Bill, > > Thanks for the suggestion and the patch. It's a good idea, though I wonder > whether you found that the bulk of the time spent in isEnabledFor() was > actually spent in getEffectiveLevel()? That's the one which loops through a > logger's ancestors looking for a level which is actually set, so intuitively > it would take a while - especially for deeper levels in the logging > hierarchy. If so (which I suspect to be the case, but it would be good to > have you confirm it), a better solution may be to cache the effective level. > > Roughly how deep are/were your logger hierarchies in the situation where > you experienced performance problems? > > I'm happy to look at caching effective level for Python 3.3: The 2.X > branches are now closed for additions other than bugs and security issues. > > -- > assignee: -> vinay.sajip > > ___ > Python tracker > <http://bugs.python.org/issue11369> > ___ > -- Added file: http://bugs.python.org/file20997/unnamed ___ Python tracker <http://bugs.python.org/issue11369> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11369] Add caching for the isEnabledFor() computation
William Hart added the comment: Vinay: No, I haven't tried this in multi-threaded applications. You're correct that this would require locks around the global data. --Bill On Thu, Mar 10, 2011 at 3:16 AM, Vinay Sajip wrote: > > Vinay Sajip added the comment: > > Bill, > > I was looking at this patch again, and I'm not sure about thread safety. > The correctness of the caching depends on manager.status, which is state > which is potentially shared across threads. There are no interlocks around > it, so with the patch as it stands, ISTM it's possible in a multi-threaded > application to get stale information. Has your patch been used in > multi-threaded applications? > > -- > > ___ > Python tracker > <http://bugs.python.org/issue11369> > ___ > -- Added file: http://bugs.python.org/file21107/unnamed ___ Python tracker <http://bugs.python.org/issue11369> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11636] fh is not defined in npyio.py fromregex
New submission from William Dawson : NameError: global name 'fh' is not defined File "/Users/williamdawson/Programs/fat_wip.py", line 263, in header_gal = readheader(gal_cat) File "/Users/williamdawson/Programs/tools.py", line 96, in readheader [('column',numpy.int16),('name','S10')]) File "/Library/Frameworks/EPD64.framework/Versions/7.0/lib/python2.7/site-packages/numpy/lib/npyio.py", line 972, in fromregex Note that this code works perfectly fine with the python 2.6 EPD release. -- components: None files: npyio.py messages: 131758 nosy: William.Dawson priority: normal severity: normal status: open title: fh is not defined in npyio.py fromregex type: crash versions: Python 2.7 Added file: http://bugs.python.org/file21338/npyio.py ___ Python tracker <http://bugs.python.org/issue11636> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11369] Add caching for the isEnabledFor() computation
William Hart added the comment: Understood! FYI, we worked around this caching issue explicitly in our code. This wound up being simpler than supporting a hacked version of the logger. Thanks for looking into this! On Mon, Apr 11, 2011 at 1:54 AM, Vinay Sajip wrote: > > Vinay Sajip added the comment: > > I'll regretfully have to mark this as wontfix, since adding threading > interlocks for correct operation in multi-threaded environments will negate > the performance benefit. > > -- > resolution: -> wont fix > status: open -> closed > > ___ > Python tracker > <http://bugs.python.org/issue11369> > ___ > -- Added file: http://bugs.python.org/file21618/unnamed ___ Python tracker <http://bugs.python.org/issue11369> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10365] IDLE Crashes on File Open Dialog when code window closed before other file opened
William Barr added the comment: Ok, attached is a patch that should make IDLE silently ignore this happening; upon looking into this, it was a rather trivial fix. The open function was waiting on the return value of the askopenfile call, during which the calling window was closed, which sets the editwin parameter to None (as its close does). The patch just adds another try/except to catch the AttributeError raised when the no-longer-existent editwin's flist is referenced. I did come up with a method to actually make it continue with the opening process (just save a copy of the editwin's flist before the askopenfile call, during which the editwin gets closed), but that seemed a bit kludgey and possibly dangerous; however, it *seems* to work without issue. I can upload that patch as well if anyone would care to review it in addition to the attached patch. -- keywords: +patch Added file: http://bugs.python.org/file19922/issue10365.patch ___ Python tracker <http://bugs.python.org/issue10365> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11082] ValueError: Content-Length should be specified
New submission from William Wu : I found this bug when I started trying Python 3.2 release candidate 1. When using urllib.request.urlopen to handle an HTTP POST, I got the error message: ValueError: Content-Length should be specified for iterable data of type 'foo=bar' I'll attach the patch and test case. -- components: Library (Lib) messages: 127646 nosy: William.Wu priority: normal severity: normal status: open title: ValueError: Content-Length should be specified type: behavior versions: Python 3.2 ___ Python tracker <http://bugs.python.org/issue11082> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11082] ValueError: Content-Length should be specified
Changes by William Wu : -- keywords: +patch Added file: http://bugs.python.org/file20633/test_urllib_request.patch ___ Python tracker <http://bugs.python.org/issue11082> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11082] ValueError: Content-Length should be specified
Changes by William Wu : Added file: http://bugs.python.org/file20634/urllib_request.patch ___ Python tracker <http://bugs.python.org/issue11082> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11082] ValueError: Content-Length should be specified
William Wu added the comment: If the POST data should be bytes which I also think reasonable, should urllib.parse.urlencode return bytes instead of str? >>> urllib.parse.urlencode({'foo': 'bar'}) 'foo=bar' >>> urllib.parse.urlencode({b'foo': b'bar'}) 'foo=bar' -- ___ Python tracker <http://bugs.python.org/issue11082> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
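For what it's worth, the pattern that works today is to encode the urlencode() result explicitly before passing it as POST data (the URL below is a placeholder):
```
import urllib.parse
import urllib.request

data = urllib.parse.urlencode({'foo': 'bar'}).encode('ascii')   # bytes, so the length is known
req = urllib.request.Request('http://www.example.com/post', data=data)
print(req.data)          # b'foo=bar'
# urllib.request.urlopen(req) would now send the body with a Content-Length header.
```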
[issue11082] ValueError: Content-Length should be specified
William Wu added the comment: So, what's the decision to be taken? I'm willing to provide patches (if I need to), but I need to know *the reasonable behaviors*. :) -- ___ Python tracker <http://bugs.python.org/issue11082> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4982] Running python 3 as Non-admin User requests the Runtime to terminate in an unusual way.
New submission from William Stevenson : AMD64 msi install seemed to work fine when running python3.0 as admin but fails immediately as any other user. Command Line output: C:\Python30>python.exe Fatal Python error: Py_Initialize: can't initialize sys standard streams File "C:\Python25\Lib\encodings\__init__.py", line 120 raise CodecRegistryError,\ ^ SyntaxError: invalid syntax This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. C:\Python30> GUI output: --- Visual Studio Just-In-Time Debugger --- An unhandled win32 exception occurred in python.exe [2028]. Just-In-Time debugging this exception failed with the following error: No installed debugger has Just-In-Time debugging enabled. In Visual Studio, Just-In- Time debugging can be enabled from Tools/Options/Debugging/Just-In-Time. Check the documentation index for 'Just-in-time debugging, errors' for more information. --- OK --- -- components: Installation, Windows messages: 80070 nosy: yhvh severity: normal status: open title: Running python 3 as Non-admin User requests the Runtime to terminate in an unusual way. type: crash versions: Python 3.0 ___ Python tracker <http://bugs.python.org/issue4982> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4672] Distutils SWIG support blocks use of SWIG -outdir option
William Fulton added the comment: This error can be replicated on the command line with suitable quoting of the -outdir option: swig -c++ "-outdir ." You need to pass the options correctly by separating them out, so use: swig_opts=['-c++', '-...@hepmcincpath@', '-outdir', '.'] I suggest distutils is fixed to show the quotes it is effectively adding when displaying the command, so for the example Andy gave it should display: swigging ./hepmc.i to ./hepmc_wrap.cpp swig -python -c++ -I/home/andy/heplocal/include "-outdir ." -o ./hepmc_wrap.cpp ./hepmc.i The quotes would need adding for display when a user has a space in any of the options passed to SWIG (including the include_dirs etc). -- nosy: +postofficered ___ Python tracker <http://bugs.python.org/issue4672> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38552] Colored Prompt broken in REPL in Windows in 3.8
New submission from William Minchin : I have a Python startup file that colorizes my prompt. This worked on Python 3.7, but breaks on Python 3.8. I'm on Windows using Powershell 6.2.3, but `cmd` does the same. Maybe related, but the colorization also failed on Python 3.7 if a virtual environment was active. I am using `colorama` to provide the ANSI color codes. The startup files: ``` import sys import colorama colorama.init() _GREEN = colorama.Fore.GREEN _YELLOW = colorama.Fore.YELLOW _RESET = colorama.Style.RESET_ALL print("setting prompt, {}, {}, {}.".format(_GREEN, _YELLOW, _RESET)) sys.ps1 = "{}>>> {}".format(_GREEN, _RESET) sys.ps2 = "{}... {}".format(_YELLOW, _RESET) print() ``` -- components: Windows files: python_colored_prompt.png messages: 355097 nosy: MinchinWeb, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Colored Prompt broken in REPL in Windows in 3.8 type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48673/python_colored_prompt.png ___ Python tracker <https://bugs.python.org/issue38552> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38794] Setup: support linking openssl statically
William Woodruff added the comment: Not to butt in too much, but I have a related use case that would benefit from being able to statically link to OpenSSL: I have an environment in which dynamic modules are acceptable, but where the entire Python install benefits from being relocatable and used on different hosts with the same *basic* state (same glibc, etc). OpenSSL isn't one of those pieces of basic state (for bad reasons, but ones that I can't control), so having a Python distribution that links it statically would save me some amount of complexity. I realize this introduces a significant support burden, and that dealing with myriad OpenSSL configurations is already a pain. So I'm content to shoulder this as my own local build patch -- I'm only following up to note another use case that might benefit from a full-fledged `configure` option, should this ever get revisited. -- nosy: +yossarian ___ Python tracker <https://bugs.python.org/issue38794> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38794] Setup: support linking openssl statically
William Woodruff added the comment: Cheers! No promises about not using the hack, but I *will* promise not to complain if it doesn't work for me :-) -- ___ Python tracker <https://bugs.python.org/issue38794> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43661] api-ms-win-core-path-l1-1.0.dll, redux of 40740 (which has since been closed)
William Pickard added the comment: Python 3.9 does not support Windows 7, it's explicitly stated in the release notes of 3.9.0 -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue43661> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43685] __call__ not being called on metaclass
William Pickard added the comment: This line is the cause of your issue: "new_cls: SingletonMeta = cast(SingletonMeta, type(name, bases, namespace))" More specifically, your call to type() actually erases all information about your metaclass. If you did "type(S)", you would've seen "type" returned. Replace it with: "new_cls: SingletonMeta = super().__new__(mcs, name, bases, namespace)" (where mcs is the first parameter of your metaclass's __new__). -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue43685> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
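For reference, a minimal working version of the pattern under discussion might look like the following; the names are illustrative, and the point is that __new__ must go through super().__new__ with the metaclass as its first argument so that the metaclass (and therefore its __call__) is preserved:
```
class SingletonMeta(type):
    def __new__(mcs, name, bases, namespace):
        # super().__new__ keeps SingletonMeta as the type of the new class,
        # unlike a bare type(name, bases, namespace) call.
        cls = super().__new__(mcs, name, bases, namespace)
        cls._instance = None
        return cls

    def __call__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__call__(*args, **kwargs)
        return cls._instance

class S(metaclass=SingletonMeta):
    pass

print(type(S))        # <class '__main__.SingletonMeta'>
print(S() is S())     # True
```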
[issue38794] Setup: support linking openssl statically
William Woodruff added the comment: I don't think this is a productive or polite response. If you read the issue linked, you'll note that the other flag added (--with-openssl-rpath) is in furtherance of any *already* supported linking scenario (dynamic, with a non-system-default OpenSSL). The proposed change here is for an *new* linking scenario, i.e. statically linking OpenSSL. I don't blame the maintainers for not wanting to add another build configuration to their plate, especially for something like OpenSSL. It's up to us as end users to make those accommodations on our own, knowing that what we're doing isn't tested or directly supported. The resolution above is, in my book, the ideal one. Best, William On Sat, Apr 03, 2021 at 04:45:34PM +, Lukas Vacek wrote: > > Lukas Vacek added the comment: > > For the record, this would have been solved more than a year ago already. > > When this change was proposed more than a year ago it was rejected with > "There is no need to add more configure flags to build Python with a custom > OpenSSL installation. " yet now it's ok to add a new option > --with-openssl-rpath https://bugs.python.org/issue43466 ? > > And the first comment there, from python core dev nonetheless, is suggesting > static linking as well. Emm... this would have been solved year and half ago. > I would be happy to completely drop my proposed (and approved on gihub) > changes and implement it in a different way. > > The maintainer's attitude as demonstrated here can be really harmful in > open-source projects (many of us still remember eglibc fork back in the day) > but fortunately this is the first time I noticed such attitude among python > developers. > > Importantly the issue is resolved now (did it take a request from IBM's > customer to get this implemented ;-) ?) and hopefully a lesson learnt and > Christian will be more welcoming and less judgemental of outsiders' > contributions. > > -- > > ___ > Python tracker > <https://bugs.python.org/issue38794> > ___ -- ___ Python tracker <https://bugs.python.org/issue38794> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43776] Popen with shell=True yield mangled repr output
William Pickard added the comment: Actually, the problem is independent of the value of "shell", the __repr__ function from the initial PR that introduced it expects "args" to be a sequence and converts it to a list. -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue43776> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue15795] Zipfile.extractall does not preserve file permissions
Change by William Woodruff : -- nosy: +yossarian ___ Python tracker <https://bugs.python.org/issue15795> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43866] Installation files of the Python
William Pickard added the comment: Python, when installed for all users, installs to %ProgramFiles% (or %ProgramFiles(x86)% for 32-bit version on 64-bit Windows). The %LocalAppData% install is just for you... you didn't install it for everyone or you didn't provide Python with the Administrator privileges it needs for %ProgramFiles% (IE: No process elevation) -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue43866> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue44046] When writing to the Registry using winreg, it currently allows you to write ONLY to HKEY_CURRENT_USERS.
William Pickard added the comment: Do you mind ticking the box, "Run as Administrator" in the Compatibility tab for python.exe and try winreg again? -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue44046> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
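A quick way to see the distinction is that something like the following only succeeds from an elevated interpreter and raises PermissionError otherwise (the key path is just an example):
```
import winreg

try:
    # Writing under HKEY_LOCAL_MACHINE requires an elevated (administrator) process.
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\ExampleVendor\ExampleApp",
                             0, winreg.KEY_WRITE)
    winreg.SetValueEx(key, "Setting", 0, winreg.REG_SZ, "value")
    winreg.CloseKey(key)
    print("write to HKLM succeeded (elevated)")
except PermissionError as exc:
    print("write to HKLM denied:", exc)
```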
[issue44046] When writing to the Registry using winreg, it currently allows you to write ONLY to HKEY_CURRENT_USERS.
William Pickard added the comment: Here's something you should know about Windows, even if a local account is in the Administrators group, it still has restrictions on what it can do, it just has the power to elevate itself without requiring login credentials (VIA UAC prompts). This group functions very similar to the sudoers group in Linux. I expect that disabling UAC only causes Windows to automatically approve them on Administrator accounts and deny on non-Administrator accounts for applications that explicitly require the prompt (Run as Administrator special flag). There exists a hidden deactivated account called Administrator in Windows that functions very similar to root in Linux. UAC prompts are to allow an application to run under a temporary Windows Logon session as this hidden account while using your logon session, aka elevation. -- ___ Python tracker <https://bugs.python.org/issue44046> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43804] "Building C and C++ Extensions on Windows" docs are very out-of-date
William Pickard added the comment: I'm quite familiar with MSVC's command line and I'm quite confused on what you mean "the above commands are specific to 32-bit Python" "/LD" is available for both 32-bit and 64-bit compilations, it implies "/MT" to the compiler and "/DLL" to the linker. "/I" is available for both 32-bit and 64-bit compilations. Python's lib files are named exactly the same between 32-bit and 64-bit versions. The only thing platform specific in MSVC is the compiler. Visual Studio's C/C++ build tools ships with 4 variants of MSVC: 2 32-bit versions, 1 for targeting 32-bit and the other for targeting 64-bit (32-bit native, 32-bit cross compile to 64-bit) The same is true for the 64-bit versions (64-bit native and 64-bit cross compile to 32-bit) Internally, these are known as: x86, x86_x64, x64, x64_x86 -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue43804> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43804] "Building C and C++ Extensions on Windows" docs are very out-of-date
William Pickard added the comment: Then it appears you're using a version of the compiler that is built for building 32-bit exes/dlls, you need to use the x64 version. For this you need to start either the x86 Cross Tools console or the x64 native console (VS 2019) or use vcvarsall.cmd/Enter-VsDevShell (PowerShell) -- ___ Python tracker <https://bugs.python.org/issue43804> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43804] "Building C and C++ Extensions on Windows" docs are very out-of-date
William Pickard added the comment: Correction: You can use either VsDevCmd.bat/Enter-VsDevShell on VS versions that provide them (2017 and 2019 are known to include it), but to get the x64 tools you need to pass command line arguments (They default to x86 native tools). Otherwise you must use either vcvarsall.cmd and pass the appropriate command line arguments or vcvarsx86_amd64.bat/vcvars64.bat -- ___ Python tracker <https://bugs.python.org/issue43804> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38552] Colored Prompt broken in REPL in Windows in 3.8
William Minchin added the comment: I can't reproduce this today: Python 3.8.6 (or 3.9.5) with PowerShell 7.1.3 on Windows 10 with Windows Terminal. Maybe it got fixed by a bugfix release of Python 3.8? I'll close it for now. c.f. https://github.com/tartley/colorama/issues/233 -- stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue38552> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue44185] mock_open file handle __exit__ does not call close
New submission from William Sjoblom : A common testing scenario is assuring that opened files are closed. Since unittest.mock.mock_open() can be used as a context manager, it would be reasonable to expect its __exit__ to invoke close so that one can easily assert that the file was closed, regardless of if the file was opened with a plain call to open or with a context manager. -- components: Library (Lib) messages: 394005 nosy: williamsjoblom priority: normal severity: normal status: open title: mock_open file handle __exit__ does not call close type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue44185> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
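Until something along these lines lands in the library, one workaround is to wire the mocked handle's __exit__ to call close() yourself; a sketch, relying on the fact that mock_open's handle is a MagicMock whose __exit__ is freely configurable:
```
from unittest import mock

m = mock.mock_open(read_data="data")
handle = m.return_value

def _exit(*exc_info):
    handle.close()        # behave like a real file object on context exit
    return False          # do not swallow exceptions

handle.__exit__.side_effect = _exit

with mock.patch("builtins.open", m):
    with open("somefile") as f:
        f.read()

handle.close.assert_called_once()    # passes with the workaround, fails without it
print("close() was called on context exit")
```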
[issue44208] argparse: Accept option without needing to distinguish "-" from "_" in arg_string
New submission from William Barnhart : An issue I encountered recently with argparse was when I tried running a script with its argument changed from something like: ``` parser.add_argument('--please_work') ``` and was replaced with: ``` parser.add_argument('--please-work') ``` I have not ever seen anyone add an argument where a function has two arguments existing in a function, such as please_work and please-work. I think this is a pretty safe feature to implement, plus it enforces good practices. So if I wrote something such as: ``` parser = argparse.ArgumentParser(description="check this out") parser.add_argument('--please-work') ``` Then I could call the program using: ``` python3 test.py --please_work True ``` or: ``` python3 test.py --please-work True ``` -- components: Library (Lib) messages: 394135 nosy: wbarnha priority: normal severity: normal status: open title: argparse: Accept option without needing to distinguish "-" from "_" in arg_string type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue44208> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
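For what it's worth, the narrow case above can already be approximated by registering both spellings as aliases of a single option, since add_argument accepts several option strings; a sketch using the names from the example:
```
import argparse

parser = argparse.ArgumentParser(description="check this out")
# Both spellings are accepted on the command line; the value lands in args.please_work.
parser.add_argument("--please-work", "--please_work", dest="please_work")

print(parser.parse_args(["--please_work", "True"]).please_work)   # True
print(parser.parse_args(["--please-work", "True"]).please_work)   # True
```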
[issue44208] argparse: Accept option without needing to distinguish "-" from "_" in arg_string
Change by William Barnhart : -- keywords: +patch pull_requests: +24899 stage: -> patch review pull_request: https://github.com/python/cpython/pull/26295 ___ Python tracker <https://bugs.python.org/issue44208> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue44208] argparse: Accept option without needing to distinguish "-" from "_" in arg_string
William Barnhart added the comment: I'm glad someone else thinks it's a good idea. If you have some ideas for tests, I'd be happy to help write them in order to spare the inconvenience (unless the tests are that easy to write then by all means please do). I can appreciate why the core developers could be polarized. One side could say it's safer to just take better care of our Python scripts and have them in wrapper functions. But people don't always do this. In favor of the idea: This method can be used to spare developers from agony when a "_" is changed to a "-" when they aren't informed of the change. As a result, this can allow for code to remain usable since most developers are more interested in the letters of arguments and not so much special characters. But then again, a --help option could be all that's needed to fix this issue. I think a solution that satisfies both parties would be creating some safe guard for adding arguments that are named identically, containing "_" or "-" in the middle, and raising an exception when the regexes of the arguments are conflicting. Such as if I wrote: ``` parser.add_argument('--please_work') parser.add_argument('--please-work') ``` then an exception should be raised with an error for conflicting argument names. -- ___ Python tracker <https://bugs.python.org/issue44208> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue44216] Bug in class method with optional parameter
William Pickard added the comment: This is not a bug but a side effect of how default parameter values are stored. The rule of thumb is to never use mutable values as default values for parameters. When a function is created, the Python runtime checks whether the signature has defaulted arguments. If it does, it evaluates the default expressions once and stores the results internally. When you then call the function with those arguments missing, Python retrieves a reference to the stored value and provides your method with that. This is your issue, as you're modifying the same object with every call to the method. The proper way to do this is:
def do_something(self, a, b=None):
    b = b if b is not None else []
    b.append(a)
    print('b contains', b)
-- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue44216> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
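A short, self-contained demonstration of the behaviour described above (the function name is made up):
```
def append_to(item, bucket=[]):     # the default list is created once, at definition time
    bucket.append(item)
    return bucket

print(append_to(1))                 # [1]
print(append_to(2))                 # [1, 2]  (same list object as the first call)
print(append_to.__defaults__)       # ([1, 2],)  the shared default lives on the function
```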
[issue39573] [C API] Make PyObject an opaque structure in the limited C API
William Pickard added the comment: MSVC by default disables method inlining (/Ob0) when '/Od' is specified on the command line while the optimization options specify '/Ob2'. -- ___ Python tracker <https://bugs.python.org/issue39573> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13788] os.closerange optimization
William Manley added the comment: Linux has a close_range syscall since v5.9 (Oct 2020): https://man7.org/linux/man-pages/man2/close_range.2.html -- nosy: +wmanley ___ Python tracker <https://bugs.python.org/issue13788> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
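For comparison, Python 3.10 added os.close_range(), which can take advantage of that syscall where available, while os.closerange() remains the portable spelling; a small usage sketch (note the differing range conventions):
```
import os

# Open a few descriptors purely for demonstration.
fds = [os.open(os.devnull, os.O_RDONLY) for _ in range(3)]
low, high = min(fds), max(fds)

if hasattr(os, "close_range"):       # Python 3.10+; may use close_range(2) on Linux >= 5.9
    os.close_range(low, high)        # inclusive range [low, high]
else:
    os.closerange(low, high + 1)     # half-open range [low, high)
print("closed descriptors", fds)
```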
[issue45008] asyncio.gather should not "dedup" awaitables
New submission from William Fisher : asyncio.gather uses a dictionary to de-duplicate futures and coros. However, this can lead to problems when you pass an awaitable object (implements __await__ but isn't a future or coro). 1. Two or more awaitables may compare for equality/hash, but still expect to produce different results (See the RandBits class in gather_test.py) 2. If an awaitable doesn't support hashing, asyncio.gather doesn't work. Would it be possible for non-future, non-coro awaitables to opt out of the dedup logic? The attached file shows an awaitable RandBits class. Each time you await it, you should get a different result. Using gather, you will always get the same result. -- components: asyncio files: gather_test.py messages: 400309 nosy: asvetlov, byllyfish, yselivanov priority: normal severity: normal status: open title: asyncio.gather should not "dedup" awaitables type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50236/gather_test.py ___ Python tracker <https://bugs.python.org/issue45008> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
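A workaround that sidesteps the dedup logic is to wrap each plain awaitable in its own Future before handing it to gather, since distinct futures never compare equal. The RandBits class below is a stand-in written to mimic the attached gather_test.py (the attachment itself isn't inlined here), with __eq__/__hash__ added so the dedup actually triggers:
```
import asyncio
import random

class RandBits:
    """Plain awaitable; instances deliberately compare equal to trigger the dedup."""
    def __await__(self):
        yield from asyncio.sleep(0).__await__()
        return random.getrandbits(32)
    def __hash__(self):
        return 0
    def __eq__(self, other):
        return isinstance(other, RandBits)

async def main():
    deduped = await asyncio.gather(RandBits(), RandBits(), RandBits())
    print("bare awaitables: ", deduped)    # one shared result on affected versions
    wrapped = await asyncio.gather(*(asyncio.ensure_future(RandBits()) for _ in range(3)))
    print("ensure_future(): ", wrapped)    # three independent results

asyncio.run(main())
```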
[issue45074] asyncio hang in subprocess wait_closed() on Windows, BrokenPipeError
New submission from William Fisher : I have a reproducible case where stdin.wait_closed() is hanging on Windows. This happens in response to a BrokenPipeError. The same code works fine on Linux and MacOS. Please see the attached code for the demo. I believe the hang is related to this debug message from the logs: DEBUG <_ProactorWritePipeTransport closing fd=632>: Fatal write error on pipe transport Traceback (most recent call last): File "C:\hostedtoolcache\windows\Python\3.9.6\x64\lib\asyncio\proactor_events.py", line 379, in _loop_writing f.result() File "C:\hostedtoolcache\windows\Python\3.9.6\x64\lib\asyncio\windows_events.py", line 812, in _poll value = callback(transferred, key, ov) File "C:\hostedtoolcache\windows\Python\3.9.6\x64\lib\asyncio\windows_events.py", line 538, in finish_send return ov.getresult() BrokenPipeError: [WinError 109] The pipe has been ended It appears that the function that logs "Fatal write error on pipe transport" also calls _abort on the stream. If _abort is called before stdin.close(), everything is okay. If _abort is called after stdin.close(), stdin.wait_closed() will hang. Please see issue #44428 for another instance of a similar hang in wait_closed(). -- components: asyncio files: wait_closed.py messages: 400810 nosy: asvetlov, byllyfish, yselivanov priority: normal severity: normal status: open title: asyncio hang in subprocess wait_closed() on Windows, BrokenPipeError type: behavior versions: Python 3.10, Python 3.9 Added file: https://bugs.python.org/file50250/wait_closed.py ___ Python tracker <https://bugs.python.org/issue45074> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
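Until the underlying hang is fixed, one defensive pattern is to bound the wait and tolerate the broken pipe explicitly; a workaround sketch rather than a fix, and the timeout value is arbitrary:
```
import asyncio
import contextlib
import sys

async def close_stdin(writer, timeout=5.0):
    """Close a subprocess stdin writer, tolerating broken pipes and a hung wait_closed()."""
    with contextlib.suppress(BrokenPipeError, ConnectionResetError, asyncio.TimeoutError):
        writer.close()
        # On Windows/Proactor, wait_closed() can hang after a BrokenPipeError,
        # so bound it with a timeout instead of awaiting it unconditionally.
        await asyncio.wait_for(writer.wait_closed(), timeout)

async def main():
    # The child exits immediately, so a large write is likely to hit a broken pipe.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "pass", stdin=asyncio.subprocess.PIPE)
    proc.stdin.write(b"x" * 1_000_000)
    await close_stdin(proc.stdin)
    print("exit code:", await proc.wait())

asyncio.run(main())
```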
[issue45257] Compiling 3.8 branch on Windows attempts to use incompatible libffi version
New submission from William Proffitt : Wasn't sure where to file this. I built Python 3.8.12 for Windows recently from the latest bugfix source release in the cpython repository. One tricky thing came up I wanted to write-up in case it matters to someone else. The version of libffi in the cpython-bin-deps repository seems to be too new for Python 3.8.12 now. The script for fetching the external dependencies (PCBuild\get_externals.bat) on the 3.8 branch fetches whatever is newest on the libffi branch, and this led to it downloading files starting with "libffi-8" and the build complaining about being unable to locate "libffi-7". I managed to resolve this by manually replacing the fetched libffi in the externals directory with the one from this commit, the latest commit I could find where the filenames started with "libffi-7": https://github.com/python/cpython-bin-deps/commit/1cf06233e3ceb49dc0a73c55e04b1174b436b632 After that, I was able to successfully run "build.bat -e -p x64" in PCBuild and "build.bat -x64" in "Tools\msi\" and end up with a working build and a working installer. (Side note that isn't that important for me but maybe worth mentioning while I'm here: the uninstaller on my newly minted installer didn't seem to work at all and I had to manually resort to deleting registry keys to overwrite my previous attempted install.) -- components: Build, Installation, Windows, ctypes messages: 402318 nosy: paul.moore, proffitt, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Compiling 3.8 branch on Windows attempts to use incompatible libffi version type: compile error versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue45257> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45257] Compiling 3.8 branch on Windows attempts to use incompatible libffi version
William Proffitt added the comment: Ah yes, thank you Steve. I see the commit you're referencing is the cherry pick from upstream onto the 3.8 branch, but it's newer than the tag 3.8.12 I was using. Looks like I won't have to do anything if I wait until 3.8.13 before doing this again. This is good. -- resolution: -> fixed stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue45257> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45718] asyncio: MultiLoopWatcher has a race condition (Proposed work-around)
New submission from William Fisher :

Summary: asyncio.MultiLoopChildWatcher has two problems that create a race condition.

1. The SIGCHLD signal handler does not guard against interruption/re-entry.
2. The SIGCHLD signal handler can interrupt add_child_handler's `self._do_waitpid(pid)`.

This is a continuation of bpo-38323. That issue discussed two bugs. This issue proposes a work-around for one of them that may be useful in making build tests more reliable. I'm restricting discussion to the case of a single asyncio event loop on the main thread. (MultiLoopChildWatcher has a separate "signal-delivery-blocked" problem when used in an event loop that is not in the main thread, as mentioned in bpo-38323.)

Symptoms: Log messages that look like this:

1634935451.761 WARNING Unknown child process pid 8747, will report returncode 255
...
1634935451.762 WARNING Child watcher got an unexpected pid: 8747
Traceback (most recent call last):
  File "/Users/runner/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/asyncio/unix_events.py", line 1306, in _do_waitpid
    loop, callback, args = self._callbacks.pop(pid)
KeyError: 8747

Background: I've been working on a library to make calling asyncio subprocesses more convenient. As part of my testing, I've been stress testing asyncio using different child watcher policies. My CI build runs more than 200 tests each on macOS, Linux and FreeBSD. I've noticed a small percentage of sporadic failures using MultiLoopChildWatcher.

My understanding of Python signal functions is that:

1. Upon receipt of a signal, the native "C" signal handler sets a flag that indicates the signal arrived.
2. The main thread checks the signal flags after each bytecode instruction. If a signal flag is set, Python saves the call stack, runs the signal handler on the main thread immediately, then pops the stack when it returns (assuming no exception is raised by the signal handler).
3. If you are in the middle of a signal handler running on the main thread and Python detects another signal flag, your signal handler can be interrupted.
4. Stacked signal handlers run in LIFO order. The last one that enters will run to completion without interruption.

Explanation: I wrapped MultiLoopChildWatcher's sig_chld function in a decorator that logs when it is entered and exited. This clearly shows that _sig_chld is being re-entered. In the following log snippet, I'm running two commands in a pipeline "tr | cat".

1634935451.743 DEBUG process '/usr/bin/tr' created: pid 8747
...
1634935451.746 DEBUG process '/bin/cat' created: pid 8748
...
1634935451.761 DEBUG enter '_sig_chld' 20
1634935451.761 DEBUG enter '_sig_chld' 20
1634935451.761 WARNING Unknown child process pid 8747, will report returncode 255
1634935451.762 DEBUG process 8748 exited with returncode 0
1634935451.762 DEBUG exit '_sig_chld' 20
1634935451.762 WARNING Child watcher got an unexpected pid: 8747
Traceback (most recent call last):
  File "/Users/runner/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/asyncio/unix_events.py", line 1306, in _do_waitpid
    loop, callback, args = self._callbacks.pop(pid)
KeyError: 8747
1634935451.763 WARNING Unknown child process pid 8748, will report returncode 255
1634935451.763 WARNING Child watcher got an unexpected pid: 8748
Traceback (most recent call last):
  File "/Users/runner/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/asyncio/unix_events.py", line 1306, in _do_waitpid
    loop, callback, args = self._callbacks.pop(pid)
KeyError: 8748
1634935451.763 DEBUG exit '_sig_chld' 20

Here is the breakdown of what happens:

1. Pid 8747 exits and we enter _sig_chld #1.
2. sig_chld #1 calls os.waitpid which gives (pid, status) = (8747, 0).
3. Before sig_chld #1 has a chance to call `self._callbacks.pop(pid)`, it is interrupted.
4. sig_chld #2 calls os.waitpid for pid 8747. We get a ChildProcessError and then "Unknown child process pid 8747, will report returncode 255".
5. sig_chld #2 invokes the callback for pid 8747 saying the returncode=255.
6. sig_chld #2 continues to completion. It reaps pid 8748 normally.
7. sig_chld #1 picks up where it left off. We get an error when we try to pop the callback for 8747.
8. sig_chld #1 calls os.waitpid for pid 8748. This gives us failure messages because it was already done by sig_chld #2.

The issue of interruption can also happen in the case of running a single process. If the _sig_chld interrupts the call to `self._do_waitpid(pid)` in add_child_handler, a similar interleaving can occur.

Work-Around: In my tests, I patched MultiLoopChildWatcher and so far, it appears to be more reliable. In add_child_handler, I call raise_signal(SIGCHLD) so that all the work is done in the signal handler.

    class PatchedMultiLoopChildWatcher(asyncio.MultiLoopChildWatcher):
        "Test race condition fixes in MultiLoopChildWatcher."

        def add_child_handler(self, pid, callback, *args):
            loop = asyncio.get_running_loop()
            self._callbacks[pid] = (loop, callback, args)
            # Prevent a race condition in case signal was delivered before
            # callback added.
            signal.raise_signal(signal.SIGCHLD)

        @_serialize
        def _sig_chld(self, signum, frame):
            super()._sig_chld(signum, frame)
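A minimal sketch of one way a `_serialize` decorator could work (an assumption for illustration only; the author's actual implementation is not shown above). The idea is to defer re-entrant invocations so the wrapped handler effectively runs one call at a time:

```
import functools

def _serialize(func):
    """Run calls to `func` one at a time, deferring re-entrant invocations.

    A sketch only: a nested call that lands in the short window around the
    bookkeeping below could still be deferred until the next signal arrives.
    """
    pending = []
    running = False

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal running
        pending.append((args, kwargs))
        if running:
            # Re-entered while an outer call is active: let it drain the queue.
            return
        running = True
        try:
            while pending:
                queued_args, queued_kwargs = pending.pop(0)
                func(*queued_args, **queued_kwargs)
        finally:
            running = False

    return wrapper
```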
[issue45718] asyncio: MultiLoopWatcher has a race condition (Proposed work-around)
William Fisher added the comment: Thanks, I will comment on bpo-38323 directly. -- resolution: -> duplicate stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue45718> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38323] asyncio: MultiLoopWatcher has a race condition (test_asyncio: test_close_kill_running() hangs on AMD64 RHEL7 Refleaks 3.x)
William Fisher added the comment:

asyncio.MultiLoopChildWatcher has two problems that create a race condition.

1. The SIGCHLD signal handler does not guard against interruption/re-entry.
2. The SIGCHLD signal handler can interrupt add_child_handler's `self._do_waitpid(pid)`.

Symptoms: Log messages that look like this:

1634935451.761 WARNING Unknown child process pid 8747, will report returncode 255
...
1634935451.762 WARNING Child watcher got an unexpected pid: 8747
Traceback (most recent call last):
  File "/Users/runner/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/asyncio/unix_events.py", line 1306, in _do_waitpid
    loop, callback, args = self._callbacks.pop(pid)
KeyError: 8747

Background: I've been working on a library to make calling asyncio subprocesses more convenient. As part of my testing, I've been stress testing asyncio using different child watcher policies. My CI build runs more than 200 tests each on macOS, Linux and FreeBSD. I've noticed a small percentage of sporadic failures using MultiLoopChildWatcher.

My understanding of Python signal functions is that:

1. Upon receipt of a signal, the native "C" signal handler sets a flag that indicates the signal arrived.
2. The main thread checks the signal flags after each bytecode instruction. If a signal flag is set, Python saves the call stack, runs the signal handler on the main thread immediately, then pops the stack when it returns (assuming no exception is raised by the signal handler).
3. If you are in the middle of a signal handler running on the main thread and Python detects another signal flag, your signal handler can be interrupted.
4. Stacked signal handlers run in LIFO order. The last one that enters will run to completion without interruption.

Explanation: I wrapped MultiLoopChildWatcher's sig_chld function in a decorator that logs when it is entered and exited. This clearly shows that _sig_chld is being re-entered. In the following log snippet, I'm running two commands in a pipeline "tr | cat".

1634935451.743 DEBUG process '/usr/bin/tr' created: pid 8747
...
1634935451.746 DEBUG process '/bin/cat' created: pid 8748
...
1634935451.761 DEBUG enter '_sig_chld' 20
1634935451.761 DEBUG enter '_sig_chld' 20
1634935451.761 WARNING Unknown child process pid 8747, will report returncode 255
1634935451.762 DEBUG process 8748 exited with returncode 0
1634935451.762 DEBUG exit '_sig_chld' 20
1634935451.762 WARNING Child watcher got an unexpected pid: 8747
Traceback (most recent call last):
  File "/Users/runner/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/asyncio/unix_events.py", line 1306, in _do_waitpid
    loop, callback, args = self._callbacks.pop(pid)
KeyError: 8747
1634935451.763 WARNING Unknown child process pid 8748, will report returncode 255
1634935451.763 WARNING Child watcher got an unexpected pid: 8748
Traceback (most recent call last):
  File "/Users/runner/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/asyncio/unix_events.py", line 1306, in _do_waitpid
    loop, callback, args = self._callbacks.pop(pid)
KeyError: 8748
1634935451.763 DEBUG exit '_sig_chld' 20

Here is the breakdown of what happens:

1. Pid 8747 exits and we enter _sig_chld #1.
2. sig_chld #1 calls os.waitpid which gives (pid, status) = (8747, 0).
3. Before sig_chld #1 has a chance to call `self._callbacks.pop(pid)`, it is interrupted.
4. sig_chld #2 calls os.waitpid for pid 8747. We get a ChildProcessError and then "Unknown child process pid 8747, will report returncode 255".
5. sig_chld #2 invokes the callback for pid 8747 saying the returncode=255.
6. sig_chld #2 continues to completion. It reaps pid 8748 normally.
7. sig_chld #1 picks up where it left off. We get an error when we try to pop the callback for 8747.
8. sig_chld #1 calls os.waitpid for pid 8748. This gives us failure messages because it was already done by sig_chld #2.

The issue of interruption can also happen in the case of running a single process. If the _sig_chld interrupts the call to `self._do_waitpid(pid)` in add_child_handler, a similar interleaving can occur.

Work-Around: In my tests, I patched MultiLoopChildWatcher and so far, it appears to be more reliable. In add_child_handler, I call raise_signal(SIGCHLD) so that all the work is done in the signal handler.

    class PatchedMultiLoopChildWatcher(asyncio.MultiLoopChildWatcher):
        "Test race condition fixes in MultiLoopChildWatcher."

        def add_child_handler(self, pid, callback, *args):
            loop = asyncio.get_running_loop()
            self._callbacks[pid] = (loop, callback, args)
            # Prevent a race condition in case signal was delivered before
            # callback added.
            signal.raise_signal(signal.SIGCHLD)

        @_serialize
        def _sig_chld(self, signum, frame):
            super()._sig_chld(signum, frame)

_serialize is a decorator ...
[issue45900] Type annotations needed for convenience functions in ipaddress module
New submission from William George : The convenience factory functions in the ipaddress module each return one of two types (IPv4Network vs IPv6Network, etc). Modern code wants to be friendly to either stack, and these functions are great at enabling that, but the current implementation blocks type inference for most (all?) IDEs. The proposal is easy enough: specify a return type of e.g. `Union[IPv4Network, IPv6Network]` for these factory functions. I believe the rest of the public interface for this module is unambiguous enough that annotations aren't needed, but if others see value they could be added easily enough. For some of these there exists a version-independent base class that could be referenced instead of a union, but it's not clear to me how well IDEs will actually honor such an annotation referencing an internal class (single-underscore). My limited testing of that didn't work well, and there's no such base class for the Interface classes anyway. PR for this incoming. -- components: Library (Lib) messages: 407005 nosy: pmoody, wrgeorge1983 priority: normal severity: normal status: open title: Type annotations needed for convenience functions in ipaddress module type: enhancement versions: Python 3.10, Python 3.11, Python 3.9 ___ Python tracker <https://bugs.python.org/issue45900> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
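A sketch of what the proposed annotations might look like (illustrative only; the exact signatures here are assumptions, and the concrete change is in the PR referenced in the follow-up message):

```
from typing import Union
from ipaddress import (
    IPv4Address, IPv6Address,
    IPv4Interface, IPv6Interface,
    IPv4Network, IPv6Network,
)

# Stub signatures showing the proposed return types for the factory functions.
def ip_address(address) -> Union[IPv4Address, IPv6Address]: ...
def ip_network(address, strict: bool = True) -> Union[IPv4Network, IPv6Network]: ...
def ip_interface(address) -> Union[IPv4Interface, IPv6Interface]: ...
```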
[issue45900] Type annotations needed for convenience functions in ipaddress module
Change by William George : -- keywords: +patch pull_requests: +28015 stage: -> patch review pull_request: https://github.com/python/cpython/pull/29778 ___ Python tracker <https://bugs.python.org/issue45900> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26227] Windows: socket.gethostbyaddr(name) fails for non-ASCII hostname
William Dias added the comment: Shouldn't this issue be solved for Python 3.7.5? Or do I have to manually apply the patch? I have a Windows 8.1 x64 PC whose hostname contains special characters. When creating a socket, the gethostbyaddr() method raises a UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe1 in position 1. Let me know if you need more information. Thanks -- nosy: +williamdias type: -> crash versions: +Python 3.7 -Python 3.5, Python 3.6 ___ Python tracker <https://bugs.python.org/issue26227> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
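A minimal way to trigger the reported error (a sketch; it assumes the machine's hostname contains non-ASCII characters, as in this report, and an affected Windows build):

```
import socket

# On an affected machine whose hostname contains non-ASCII characters,
# this lookup raises UnicodeDecodeError instead of returning the host info.
print(socket.gethostbyaddr(socket.gethostname()))
```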
[issue29435] Allow to pass fileobj to is_tarfile
William Woodruff added the comment: I'll take a stab at this. It looks like `TarFile.open` takes an optional keyword that should make this straightforward. -- nosy: +yossarian ___ Python tracker <https://bugs.python.org/issue29435> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
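A sketch of the idea (an assumption about the eventual shape, not the merged change): let is_tarfile() accept a binary file object as well as a path by routing it through TarFile.open()'s optional fileobj keyword.

```
import tarfile

def is_tarfile(name_or_fileobj):
    """Return True if the path or binary file object looks like a tar archive."""
    try:
        if hasattr(name_or_fileobj, "read"):
            t = tarfile.open(fileobj=name_or_fileobj)
        else:
            t = tarfile.open(name_or_fileobj)
        t.close()
        return True
    except tarfile.TarError:
        return False
```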
[issue39389] gzip metadata fails to reflect compresslevel
New submission from William Chargin : The `gzip` module properly uses the user-specified compression level to control the underlying zlib stream compression level, but always writes metadata that indicates that the maximum compression level was used. Repro:

```
import gzip

blob = b"The quick brown fox jumps over the lazy dog." * 32
with gzip.GzipFile("fast.gz", mode="wb", compresslevel=1) as outfile:
    outfile.write(blob)
with gzip.GzipFile("best.gz", mode="wb", compresslevel=9) as outfile:
    outfile.write(blob)
```

Run this script, then run `wc -c *.gz` and `file *.gz`:

```
$ wc -c *.gz
 82 best.gz
 84 fast.gz
166 total
$ file *.gz
best.gz: gzip compressed data, was "best", last modified: Sun Jan 19 20:15:23 2020, max compression
fast.gz: gzip compressed data, was "fast", last modified: Sun Jan 19 20:15:23 2020, max compression
```

The file sizes correctly reflect the difference, but `file` thinks that both archives are written at max compression. The error is that the ninth byte of the header in the output stream is hard-coded to `\002` at Lib/gzip.py:260 (as of 558f07891170), which indicates maximum compression. The correct value to indicate maximum speed is `\004`. See RFC 1952, section 2.3.1: <https://tools.ietf.org/html/rfc1952> Using GNU `gzip(1)` with `--fast` creates the same output file as the one emitted by the `gzip` module, except for two bytes: the metadata and the OS (the ninth and tenth bytes). -- components: Library (Lib) files: repro.py messages: 360268 nosy: wchargin priority: normal severity: normal status: open title: gzip metadata fails to reflect compresslevel versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48853/repro.py ___ Python tracker <https://bugs.python.org/issue39389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
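A sketch of the kind of fix this implies (not necessarily the patch as merged; `xfl_for_level` is a made-up helper name): derive the XFL ("extra flags") byte from the compression level instead of hard-coding it.

```
import zlib

def xfl_for_level(compresslevel: int) -> bytes:
    # RFC 1952, section 2.3.1: XFL=2 means slowest/best compression,
    # XFL=4 means fastest; anything else is left as 0 (unspecified).
    if compresslevel == zlib.Z_BEST_COMPRESSION:   # 9
        return b"\002"
    if compresslevel == zlib.Z_BEST_SPEED:         # 1
        return b"\004"
    return b"\000"
```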
[issue39389] gzip metadata fails to reflect compresslevel
Change by William Chargin : -- type: -> behavior ___ Python tracker <https://bugs.python.org/issue39389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39389] gzip metadata fails to reflect compresslevel
William Chargin added the comment: (The commit reference above was meant to be git558f07891170, not a Mercurial reference. Pardon the churn; I'm new here. :-) ) -- ___ Python tracker <https://bugs.python.org/issue39389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39389] gzip metadata fails to reflect compresslevel
Change by William Chargin : -- keywords: +patch pull_requests: +17470 stage: needs patch -> patch review pull_request: https://github.com/python/cpython/pull/18077 ___ Python tracker <https://bugs.python.org/issue39389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39389] gzip metadata fails to reflect compresslevel
William Chargin added the comment: Sure, PR sent (pull_request17470). -- ___ Python tracker <https://bugs.python.org/issue39389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39389] gzip metadata fails to reflect compresslevel
William Chargin added the comment: PR URL, for reference: <https://github.com/python/cpython/pull/18077> -- ___ Python tracker <https://bugs.python.org/issue39389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18819] tarfile fills devmajor and devminor fields even for non-devices
Change by William Chargin : -- pull_requests: +17472 stage: -> patch review pull_request: https://github.com/python/cpython/pull/18080 ___ Python tracker <https://bugs.python.org/issue18819> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18819] tarfile fills devmajor and devminor fields even for non-devices
Change by William Chargin : -- versions: +Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue18819> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18819] tarfile fills devmajor and devminor fields even for non-devices
William Chargin added the comment: I've just independently run into this and sent a patch as a pull request. Happily, once this is fixed, the output of `tarfile` is bit-for-bit compatible with the output of GNU `tar(1)`. PR: <https://github.com/python/cpython/pull/18080> -- nosy: +wchargin ___ Python tracker <https://bugs.python.org/issue18819> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
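For anyone who wants to look at the fields in question, here is a small sketch (the 329/337 offsets are taken from the ustar header layout in the tar spec, not from the issue itself): it writes a single regular-file member in memory and prints the raw devmajor/devminor bytes from its header.

```
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    tf.addfile(tarfile.TarInfo("hello.txt"))  # a regular file, not a device

header = buf.getvalue()[:512]      # the member's 512-byte ustar header
print(repr(header[329:337]))       # devmajor field (8 bytes)
print(repr(header[337:345]))       # devminor field (8 bytes)
```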
[issue29435] Allow to pass fileobj to is_tarfile
Change by William Woodruff : -- keywords: +patch pull_requests: +17482 stage: -> patch review pull_request: https://github.com/python/cpython/pull/18090 ___ Python tracker <https://bugs.python.org/issue29435> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39389] gzip metadata fails to reflect compresslevel
William Chargin added the comment: My pleasure; thanks for the triage and review! -- ___ Python tracker <https://bugs.python.org/issue39389> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29435] Allow to pass fileobj to is_tarfile
William Woodruff added the comment: Thanks to you too! -- ___ Python tracker <https://bugs.python.org/issue29435> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18819] tarfile fills devmajor and devminor fields even for non-devices
William Chargin added the comment: My pleasure. Is there anything else that you need from me to close this out? It looks like the PR is approved and in an “awaiting merge” state, but I don’t have access to merge it. -- ___ Python tracker <https://bugs.python.org/issue18819> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40243] Unicode 3.2 numeric uses decimal_changed instead of numeric_changed
Change by William Meehan : -- components: Unicode nosy: ezio.melotti, vstinner, wmeehan priority: normal severity: normal status: open title: Unicode 3.2 numeric uses decimal_changed instead of numeric_changed type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue40243> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40243] Unicode 3.2 numeric uses decimal_changed instead of numeric_changed
Change by William Meehan : -- keywords: +patch pull_requests: +18812 stage: -> patch review pull_request: https://github.com/python/cpython/pull/19457 ___ Python tracker <https://bugs.python.org/issue40243> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42531] importlib.resources.path() raises TypeError for packages without __file__
New submission from William Schwartz :

Suppose pkg is a package, it contains a resource r, pkg.__spec__.origin is None, and p = importlib.resources.path(pkg, r). Then p.__enter__() raises a TypeError in Python 3.7 and 3.8. (The problem has been fixed in 3.9.) The error can be demonstrated by running the attached path-test.py. The tracebacks in 3.7 and 3.8 are nearly identical, so I'll just show the 3.8 traceback.

3.8.6 (default, Nov 20 2020, 18:29:40) [Clang 12.0.0 (clang-1200.0.32.27)]
Traceback (most recent call last):
  File "path-test.py", line 19, in <module>
    p.__enter__()  # Kaboom
  File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/importlib/resources.py", line 196, in path
    package_directory = Path(package.__spec__.origin).parent
  File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pathlib.py", line 1041, in __new__
    self = cls._from_parts(args, init=False)
  File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pathlib.py", line 682, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/usr/local/Cellar/python@3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pathlib.py", line 666, in _parse_args
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType

The fix is super simple, as shown below. I'll submit this as a PR as well.

diff --git a/Lib/importlib/resources.py b/Lib/importlib/resources.py
index fc3a1c9cab..8d37d52cb8 100644
--- a/Lib/importlib/resources.py
+++ b/Lib/importlib/resources.py
@@ -193,9 +193,11 @@ def path(package: Package, resource: Resource) -> Iterator[Path]:
         _check_location(package)
     # Fall-through for both the lack of resource_path() *and* if
     # resource_path() raises FileNotFoundError.
-    package_directory = Path(package.__spec__.origin).parent
-    file_path = package_directory / resource
-    if file_path.exists():
+    file_path = None
+    if package.__spec__.origin is not None:
+        package_directory = Path(package.__spec__.origin).parent
+        file_path = package_directory / resource
+    if file_path is not None and file_path.exists():
         yield file_path
     else:
         with open_binary(package, resource) as fp:

-- components: Library (Lib) files: path-test.py messages: 382297 nosy: William.Schwartz, brett.cannon priority: normal severity: normal status: open title: importlib.resources.path() raises TypeError for packages without __file__ type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49643/path-test.py ___ Python tracker <https://bugs.python.org/issue42531> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42531] importlib.resources.path() raises TypeError for packages without __file__
Change by William Schwartz : -- keywords: +patch pull_requests: +22477 stage: -> patch review pull_request: https://github.com/python/cpython/pull/23611 ___ Python tracker <https://bugs.python.org/issue42531> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42531] importlib.resources.path() raises TypeError for packages without __file__
William Schwartz added the comment:

> If the issue has been fixed on Python 3.9 but not on 3.8, then it was likely
> a redesign that enabled the improved behavior

That appears to be the case: path() shares code with files().

> a redesign that won't be ported back to Python 3.8 and earlier.

Nor should it.

> In these situations, the best recommendation is often to just rely on
> importlib_resources (the backport) for those older Python versions.

I do not need the files() API or anything else added in 3.9 at this time. I just need the existing 3.7/3.8 functionality to work as documented.

> have you considered using importlib_resources for Python 3.8 and earlier?
> That would likely also address the issue and you could adopt it sooner.

My application is sensitive to the size of the installed site-packages, both in bytes and in just the number of packages. Yes, importlib_resources would very likely solve the problem reported in the OP. However, I don't need the files() API, so adding an extra requirement feels like a heavy solution.

> To some extent, the behavior you've described could be considered a bug or
> could be considered a feature request (add support for path on packages that
> have no __spec__.origin). I am not aware whether this limitation was by
> design or incidental.

I agree there should be a high bar for patching old versions, but I posit that the behavior is an unintentional bug. In particular, I believe the behavior contradicts the documentation. Below I link and quote the relevant documentation.

The function in question:

> importlib.resources.path(package, resource)
> ...package is either a name or a module object which conforms to the
> Package requirements.

https://docs.python.org/3.8/library/importlib.html#importlib.resources.path

So we jump to Package:

> importlib.resources.Package
> The Package type is defined as Union[str, ModuleType]. This means that
> where the function describes accepting a Package, you can pass in either a
> string or a module. Module objects must have a resolvable
> __spec__.submodule_search_locations that is not None.

https://docs.python.org/3.8/library/importlib.html#importlib.resources.Package

The Package type restricts the types of modules based on __spec__.submodule_search_locations. This suggests to me that the original author thought about which __spec__s to accept and which to reject, but chose not to say anything about __spec__.origin, which is documented as possibly being None:

> class importlib.machinery.ModuleSpec(...)
> ...module.__spec__.origin == module.__file__. Normally “origin” should
> be set, but it may be None (the default) which indicates it is unspecified
> (e.g. for namespace packages).

https://docs.python.org/3.8/library/importlib.html#importlib.machinery.ModuleSpec.origin

In particular, __spec__.origin *should* be None in certain situations:

> __file__
> __file__ is optional. The import system may opt to leave __file__ unset
> if it has no semantic meaning (e.g. a module loaded from a database).

https://docs.python.org/3.8/reference/import.html#__file__

Taken together, the foregoing passages describe an `import` API in which path() works for all modules that are packages (i.e., __spec__.submodule_search_locations is not None), and in which some packages' __spec__.origin is None. That path() fails in practice to support some packages is therefore a bug, not the absence of a feature.

Regardless of whether PR 23611 is accepted, the test that it adds should be added to Python master to guard against regressions.
I can submit that as a separate PR. Before I do that, do I need to create a new bpo ticket, or can I just reference this one? -- ___ Python tracker <https://bugs.python.org/issue42531> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42529] CPython DLL initialization routine failed from PYC cache file
William Pickard added the comment: You may need to inject a LoadLibraryExW detour into your Python runtime before _jpype is loaded and output all the library names it's requesting. You may need to detour all LoadLibrary functions for maximum coverage. -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue42529> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42529] CPython DLL initialization routine failed from PYC cache file
William Pickard added the comment: https://www.microsoft.com/en-us/research/project/detours/ -- ___ Python tracker <https://bugs.python.org/issue42529> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42529] CPython DLL initialization routine failed from PYC cache file
William Pickard added the comment: I was just expecting only detours for LoadLibraryExW (and variants) to find out which dll failed. -- ___ Python tracker <https://bugs.python.org/issue42529> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42529] CPython DLL initialization routine failed from PYC cache file
William Pickard added the comment: Msvcp140.dll, from what I can find, is part of the VS 2015 Redistributable package. -- ___ Python tracker <https://bugs.python.org/issue42529> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42529] CPython DLL initialization routine failed from PYC cache file
William Pickard added the comment: I recommend first doing a capture of these functions, in case Windows is routing some through them: LoadLibrary(Ex)(W|A). W is the Unicode variant, while A is the ANSI variant. -- ___ Python tracker <https://bugs.python.org/issue42529> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42531] importlib.resources.path() raises TypeError for packages without __file__
William Schwartz added the comment: @jaraco Did you have any other questions after my comments in msg382423? Do you think you or someone else could review PR 23611? Thanks! -- ___ Python tracker <https://bugs.python.org/issue42531> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42531] importlib.resources.path() raises TypeError for packages without __file__
William Schwartz added the comment:

> For that, please submit a PR to importlib_resources and it will get synced to
> CPython later.

Will do once PR 23611 gets in shape.

> Can you tell me more about the use-case that exhibited this undesirable
> behavior?

Using the [PyOxidizer] "freezer". It compiles Python together with frozen Python code or byte code.

> That is, what loader is it that supports `open_binary` but not `is_resource`
> and doesn't have a `__origin__`?

PyOxidizer's [OxidizedImporter] importer. It [does not set `__file__`] (i.e., `__spec__.origin is None`). Its maintainer has resolved some ambiguities in the resources API contract (#36128) [differently from CPython], but I don't think that's related to the issue I ran into. The resource-related functionality of the importer is implemented here (extension module written in Rust): https://github.com/indygreg/PyOxidizer/blob/e86b2f46ed6b449bdb912900b0ac83576ad5ebe9/pyembed/src/importer.rs#L1078-L1269

[PyOxidizer]: https://pyoxidizer.readthedocs.io
[OxidizedImporter]: https://pyoxidizer.readthedocs.io/en/v0.10.3/oxidized_importer.html
[does not set `__file__`]: https://pyoxidizer.readthedocs.io/en/v0.10.3/oxidized_importer_behavior_and_compliance.html#file-and-cached-module-attributes
[differently from CPython]: https://pyoxidizer.readthedocs.io/en/v0.10.3/oxidized_importer_resource_files.html#resource-reader-support

-- ___ Python tracker <https://bugs.python.org/issue42531> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42887] 100000 assignments of .__sizeof__ cause a segfault on del
William Pickard added the comment: Jumping in here to explain why '__class__' doesn't crash when '__sizeof__' does: When '__class__' is fetched, it returns a new reference to the object's type. When '__sizeof__' is fetched, on the other hand, a new object is allocated on the heap ('types.MethodType') and is returned to the caller. This object also has a '__sizeof__' that does the same (as it's implemented on 'object'). So yes, you are exhausting the C runtime stack by de-allocating over a THOUSAND objects. You can see this happen by watching the memory usage of Python steadily climb. -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue42887> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
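A quick way to see the difference described above (a sketch, not taken from the issue):

```
x = object()

# '__class__' hands back the same type object on every access ...
print(x.__class__ is x.__class__)    # True

# ... while each '__sizeof__' lookup builds a brand-new bound-method
# object on the heap, so two lookups are never the same object.
print(x.__sizeof__ is x.__sizeof__)  # False
```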
[issue42962] Windows: SystemError during os.kill(..., signal.CTRL_C_EVENT)
New submission from William Schwartz :

I don't have an automated test at this time, but here's how to get os.kill to raise SystemError. I did this on Windows 10 version 20H2 (build 19042.746) with Pythons 3.7.7, 3.8.5, and 3.9.1. os_kill_impl at Modules/posixmodule.c:7833 does not appear to have been modified recently (but I haven't checked its transitive callees), so I imagine you can get the same behavior from os.kill on Python 3.10.

1. Open two consoles, A and B. (I'm using tabs in Windows Terminal, if that matters.)
2. In console A, type but DO NOT EXECUTE the following command: python -c"import os, signal; os.kill(, signal.CTRL_C_EVENT)" Move your cursor back before the comma.
3. In console B, create a process that does nothing but print its process identifier and wait long enough for you to type it in console A: python -c"import os, time; print(os.getpid()); time.sleep(60); print('exiting cleanly')" Copy or remember the printed PID. Hurry to step 4 before your 60 seconds expires!
4. In console A, type the PID from console B and execute the command.
5. In console B, confirm that the Python exited without printing "exiting cleanly". Oddly, `echo %errorlevel%` will print `0` rather than `-1073741510` (which is `0xC000013A - 2**32`, the CTRL_C_EVENT exit code), which is what it prints after `python -c"raise KeyboardInterrupt"`.
6. In console A, you should see the following traceback.

    OSError: [WinError 87] The parameter is incorrect

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    SystemError: <built-in function kill> returned a result with an error set

-- components: Extension Modules, Windows messages: 385235 nosy: William.Schwartz, ncoghlan, paul.moore, petr.viktorin, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows: SystemError during os.kill(..., signal.CTRL_C_EVENT) type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue42962> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42962] Windows: SystemError during os.kill(..., signal.CTRL_C_EVENT)
William Schwartz added the comment:

> In Windows, os.kill() is a rather confused function.

I know how it feels. To be honest, I don't have an opinion about whether the steps I laid out ought to work. I just reported it because the SystemError indicates that a C-API function was returning non-NULL even after PyErr_Occurred() returns true. I think that's what you're referring to here...

> it doesn't clear the error that was set as a result of the
> GenerateConsoleCtrlEvent() call, which causes SystemError to be raised.

...but I lost you on where that's happening and why. Frankly, Windows IPC is not in my wheelhouse.

>> Oddly, `echo %errorlevel%` will print `0` rather than `-1073741510`
>
> There's nothing odd about that.

Here's why I thought it was odd. The following session is from the Windows Command shell inside *Command Prompt* (not Windows Terminal):

    C:\Users\wksch>python --version
    Python 3.9.1

    C:\Users\wksch>python -c"import os, signal, time; os.kill(os.getpid(), signal.CTRL_C_EVENT); time.sleep(1)"
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    KeyboardInterrupt
    ^C
    C:\Users\wksch>echo %errorlevel%
    -1073741510

In the Windows Command shell inside *Windows Terminal* (not Command Prompt):

    C:\Users\wksch>python -c"import os, signal, time; os.kill(os.getpid(), signal.CTRL_C_EVENT); time.sleep(1)"

    C:\Users\wksch>echo %errorlevel%
    0

    C:\Users\wksch>python -c"raise KeyboardInterrupt"
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    KeyboardInterrupt
    ^C
    C:\Users\wksch>echo %errorlevel%
    -1073741510

-- ___ Python tracker <https://bugs.python.org/issue42962> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42962] Windows: SystemError during os.kill(..., signal.CTRL_C_EVENT)
William Schwartz added the comment: > Fixing the SystemError should be simple. Just clear an existing error if > TerminateProcess() succeeds. Should there be a `return NULL;` between these two lines? https://github.com/python/cpython/blob/e485be5b6bd5fde97d78f09e2e4cca7f363763c3/Modules/posixmodule.c#L7854-L7855 I'm not the best person to work on a patch for this. -- ___ Python tracker <https://bugs.python.org/issue42962> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42962] Windows: SystemError during os.kill(..., signal.CTRL_C_EVENT)
William Schwartz added the comment:

> For a new process group, the cancel event is initially ignored, but the break
> event is always handled. To enable the cancel event, the process must call
> SetConsoleCtrlHandler(NULL, FALSE), such as via ctypes with
> kernel32.SetConsoleCtrlHandler(None, False). I think the signal module should
> provide a function to enable/disable Ctrl+C handling without ctypes, and
> implicitly enable it when setting a new SIGINT handler.

That's what Lib/test/win_console_handler.py:39 does. What I don't understand is why that's necessary. Isn't that what PyConfig.install_signal_handlers is supposed to do?

Which brings me to how I ended up here in the first place: I wanted to write a test that PyConfig.install_signal_handlers is set in an embedded instance of Python I'm working with. In outline, the following test works on both Windows and macOS *except on Windows running under Tox*.

    @unittest.removeHandler
    def test_signal_handlers_installed(self):
        SIG = signal.SIGINT
        if sys.platform == 'win32':
            SIG = signal.CTRL_C_EVENT
        with self.assertRaises(KeyboardInterrupt):
            os.kill(os.getpid(), SIG)
        if sys.platform == 'win32':
            time.sleep(.1)  # Give handler's thread time to join

Using SetConsoleCtrlHandler if I detect that I'm running on Windows under Tox would, if I understand correctly, hide whether PyConfig.install_signal_handlers was set before the Python I'm running in started, right? (I know this isn't the right venue for discussing my embedding/testing problem. But maybe the use case informs the pending discussion of what to do about os.kill's semantics.)

-- ___ Python tracker <https://bugs.python.org/issue42962> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
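For reference, the ctypes call quoted above might look like this in full (an assumption about usage, not code from the issue; Windows-only):

```
import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# Re-enable Ctrl+C delivery for a process whose group was created with the
# cancel event ignored (see the quoted explanation above).
if not kernel32.SetConsoleCtrlHandler(None, False):
    raise ctypes.WinError(ctypes.get_last_error())
```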
[issue43122] Python Launcher doesn't open a terminal window
William Pickard added the comment: That quick flash would be your terminal window, if I had to guess (based on no Mac experience, only Windows). -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue43122> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43181] Python macros don’t shield arguments
William Pickard added the comment: This feels like it's more of an issue with the C++ compiler you're using. (I can tell it's C++ because of the template syntax.) -- nosy: +WildCard65 ___ Python tracker <https://bugs.python.org/issue43181> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue25095] test_httpservers hangs since Python 3.5
William Pickard added the comment: I'll get to it Saturday. -- ___ Python tracker <https://bugs.python.org/issue25095> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue25095] test_httpservers hangs since Python 3.5
William Pickard added the comment: I've made the changes you've requested. -- ___ Python tracker <https://bugs.python.org/issue25095> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41085] Array regression test fails
New submission from William Pickard : Here's the verbose stack trace of the failing test:

    ======================================================================
    FAIL: test_index (test.test_array.LargeArrayTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "L:\GIT\cpython\lib\test\support\__init__.py", line 837, in wrapper
        return f(self, maxsize)
      File "L:\GIT\cpython\lib\test\test_array.py", line 1460, in test_index
        self.assertEqual(example.index(11), size+3)
    AssertionError: -2147483645 != 2147483651

-- components: Tests files: test_output.log messages: 372135 nosy: WildCard65 priority: normal severity: normal status: open title: Array regression test fails type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file49257/test_output.log ___ Python tracker <https://bugs.python.org/issue41085> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
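A quick check of the numbers in the assertion (an observation on the values, consistent with the later retitle of this issue as a Py_ssize_t-to-long downcast): the two results differ by exactly 2**32, which is what a large index looks like after being squeezed through a signed 32-bit C long.

```
expected = 2147483651        # size + 3, what the test wanted
reported = -2147483645       # what array.index() returned
print(expected - reported)   # 4294967296 == 2**32
print(expected - 2**32)      # -2147483645, i.e. the 32-bit wrap-around
```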
[issue41085] Array regression test fails
William Pickard added the comment: The only modification I made was to "rt.bat" to have the value of '-u' work properly. -- ___ Python tracker <https://bugs.python.org/issue41085> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41085] [easy C] array.array.index() method downcasts Py_ssize_t to long
Change by William Pickard : -- keywords: +patch pull_requests: +20240 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21071 ___ Python tracker <https://bugs.python.org/issue41085> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41085] [easy C] array.array.index() method downcasts Py_ssize_t to long
Change by William Pickard : -- resolution: -> fixed stage: patch review -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue41085> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com