[issue29298] argparse fails with required subparsers, un-named dest, and empty argv
Greg added the comment: While waiting for a fix, would it be possible to document in the argparse documentation that the 'dest' parameter is required (at least temporarily) for add_subparsers()? (Somewhere near file:///usr/share/doc/python/html/library/argparse.html#sub-commands.) Gratuitous diff: the pull request from 2017 would probably fix it. My diffs are below (from Python 3.8.0 (default, Oct 23 2019, 18:51:26)). (The pull request changes the utility '_get_action_name'; I wasn't sure of side effects with other callers, so I changed nearer the failure location.)

*** new/argparse.py 2019-12-05 11:16:37.618985247 +0530
--- old/argparse.py 2019-10-24 00:21:26.0 +0530
***************
*** 2017,2030 ****
          for action in self._actions:
              if action not in seen_actions:
                  if action.required:
!                     ra = _get_action_name(action)
!                     if ra is None:
!                         if not action.choices == {}:
!                             choice_strs = [str(choice) for choice in action.choices]
!                             ra = '{%s}' % ','.join(choice_strs)
!                         else:
!                             ra = ''
!                     required_actions.append(ra)
                  else:
                      # Convert action default now instead of doing it before
                      # parsing arguments to avoid calling convert functions
--- 2017,2023 ----
          for action in self._actions:
              if action not in seen_actions:
                  if action.required:
!                     required_actions.append(_get_action_name(action))
                  else:
                      # Convert action default now instead of doing it before
                      # parsing arguments to avoid calling convert functions

-- nosy: +Minshall ___ Python tracker <https://bugs.python.org/issue29298> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
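For reference, a minimal reproducer sketch of the failure under discussion (not taken from the issue itself; it assumes Python 3.7+, where add_subparsers() accepts required=True):

import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(required=True)   # note: no dest= given
subparsers.add_parser('hello')
parser.parse_args([])
# Expected: a clean "the following arguments are required" usage error.
# Observed before the fix: a TypeError while building that error message,
# because _get_action_name() returns None for a dest-less subparsers action.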
[issue43071] IDLE: Windows 7 - Trackpad two-finger vertical scrolling is not recognized
New submission from Greg : Up/down scrolling is not possible with a two-finger swipe on a trackpad. I'm using Lenovo's notably bad UltraNav drivers on Windows 7. Horizontal scrolling works just fine. PgUp and PgDn both behave as normal, as does ctrl + arrow keys. I'm having this issue with IDLE for 3.8.7, and 3.7.9, but not 2.7.3 (which just happens to be the last version I had installed). -- assignee: terry.reedy components: IDLE messages: 385968 nosy: Kritzy, terry.reedy priority: normal severity: normal status: open title: IDLE: Windows 7 - Trackpad two-finger vertical scrolling is not recognized type: behavior versions: Python 3.7, Python 3.8 ___ Python tracker <https://bugs.python.org/issue43071> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43071] IDLE: Windows 7 - Trackpad two-finger vertical scrolling is not recognized
Greg added the comment: That wasn't the case with https://bugs.python.org/issue34047. Was it not clear that I'm having this issue in (and only in) IDLE? Given that it's the interpreter bundled with Python, it seems like it has *something* to do with it. -- ___ Python tracker <https://bugs.python.org/issue43071> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43071] IDLE: Windows 7 - Trackpad two-finger vertical scrolling is not recognized
Greg added the comment: I tested out tk_scroll.py (and tk_scroll2.py, for kicks) and I couldn't get that to scroll either. I tried both with and without the ttk line commented. To my shame, it looks like that means you're spot on, and that it's an issue between my machine and tcl/tk. Thanks for the help, and sorry to waste your time! -- ___ Python tracker <https://bugs.python.org/issue43071> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12706] timeout sentinel in ftplib and poplib documentation
Greg added the comment: Completely forgot about this; please take my patch and submit a PR.

On Fri, Dec 29, 2017 at 10:55 AM, Marcel Widjaja wrote:
>
> Marcel Widjaja added the comment:
>
> Greg, I wonder if you are planning to submit a PR for your patch? If not,
> I plan to take your patch, make some minor adjustments and submit a PR.
>
> --
> nosy: +mawidjaj
>
> ___
> Python tracker
> <https://bugs.python.org/issue12706>
> ___
>

-- ___ Python tracker <https://bugs.python.org/issue12706> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10503] os.getuid() documentation should be clear on what kind of uid it is referring
Greg added the comment: Here's a wording change in the documentation to clarify this. -- keywords: +patch nosy: +εσχατοκυριος Added file: http://bugs.python.org/file35514/mywork.patch ___ Python tracker <http://bugs.python.org/issue10503> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12706] timeout sentinel in ftplib and poplib documentation
Greg added the comment: In the definition of FTP.connect(), I've changed the code to actually use None as a lack-of-explicit-timeout sentinel instead of -999. For FTP and FTP_TLS, I've changed the documentation to reflect what the code is doing. -- keywords: +patch nosy: +εσχατοκυριος Added file: http://bugs.python.org/file35524/patch.patch ___ Python tracker <http://bugs.python.org/issue12706> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
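For context, a minimal sketch of the general sentinel idea (illustrative names, not the attached patch): a dedicated module-level marker lets connect() distinguish "no timeout argument given" from an explicit timeout=None, which is the distinction the -999 magic value was standing in for.

import socket

_NOT_GIVEN = object()   # hypothetical module-level "no value passed" marker

class Client:
    def __init__(self, timeout=_NOT_GIVEN):
        self.timeout = timeout

    def connect(self, host, port, timeout=_NOT_GIVEN):
        # Only override the stored timeout when the caller passed one
        # explicitly; an explicit None then really means "block forever".
        if timeout is not _NOT_GIVEN:
            self.timeout = timeout
        if self.timeout is _NOT_GIVEN:
            return socket.create_connection((host, port))
        return socket.create_connection((host, port), self.timeout)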
[issue6484] No unit test for mailcap module
New submission from Greg : There is currently no test_mailcap or any other standalone unit test for the mailcap module. The only existing test is a self-test at the end of the module. I would like to be assigned to work on this patch. (Why am I assigning myself to write tests for a small, older module? I'm a complete noob to the Python-Dev community and I'm getting my feet wet with this. Let me know if you have any advice or if I'm doing something wrong.) -- components: Tests messages: 90516 nosy: gnofi severity: normal status: open title: No unit test for mailcap module type: feature request versions: Python 2.7, Python 3.2 ___ Python tracker <http://bugs.python.org/issue6484> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3972] Add Option to Bind to a Local IP Address in httplib.py
Greg added the comment: Did this ever happen? It seems like overkill, in a very un-Pythonic way, to keep pointing people to overriding classes and extending objects when such a small patch adds such powerful and useful functionality to the library. -- nosy: +greg.hellings ___ Python tracker <http://bugs.python.org/issue3972> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3972] Add Option to Bind to a Local IP Address in httplib.py
Greg added the comment: Just looking at the indicated file in the 2.6.4 release tarball, it does not seem that it would apply cleanly. The line numbers do not apply properly anymore, though the edited lines themselves still appear to be unaffected. Without a context diff in the original patch, it's difficult for me to assess exactly which lines should be the affected ones. Unfortunately, I'm not in a position right now with my job to spend the time necessary to produce this as a patch since I'm on a timescale that requires equivalent functionality by Wednesday on production systems. Modifying it to apply cleanly to the latest versions of Python appears like it would be easy, if the internals of that file have not changed drastically in structure since 2.6.4. Documentation should be relatively straightforward as well, since the functionality the patch introduces is rather transparent. Unit tests are beyond my expertise to comment on. Would have loved to have seen this in the 2.7/3.1 series, as it would make my task much easier! I'll keep it in mind for my "off time" this holiday weekend. -- ___ Python tracker <http://bugs.python.org/issue3972> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3972] Add Option to Bind to a Local IP Address in httplib.py
Greg added the comment: For my own case, I have a machine with 50 IP addresses set and I need to run a script to grab data that will randomly select one of those IP addresses to use for its outgoing connection. That's something which needs to be selected at the socket level, as best I understand the issue, not on the routing end. -- ___ Python tracker <http://bugs.python.org/issue3972> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
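A hedged sketch of that use case (the addresses below are made up for illustration): later httplib/http.client versions grew a source_address=(host, port) parameter that binds the outgoing socket to a chosen local IP, which covers exactly this scenario.

import http.client
import random

local_ips = ['192.0.2.%d' % i for i in range(1, 51)]   # hypothetical pool of 50 local addresses

conn = http.client.HTTPConnection('example.com',
                                  source_address=(random.choice(local_ips), 0))
conn.request('GET', '/')
print(conn.getresponse().status)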
[issue1690840] xmlrpclib methods submit call on __str__, __repr__
Greg Hazel added the comment: How about making ServerProxy a new-style class? _ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1690840> _ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1720250] PyGILState_Ensure does not acquires GIL
Greg Chapman added the comment: In my embedding, I use the following (adapting the example above):

// initialize the Python interpreter
Py_Initialize();
PyEval_InitThreads();

/* Swap out and return current thread state and release the GIL */
PyThreadState *tstate = PyEval_SaveThread();

PyGILState_STATE gstate;
gstate = PyGILState_Ensure();
PyRun_SimpleString("import random\n");
PyGILState_Release(gstate);

You don't have to free the tstate returned by PyEval_SaveThread because it is the thread state of the main thread (as established during Py_Initialize), and so it will be freed during Python's shut-down. I think in general you should rarely need to call PyEval_ReleaseLock directly; instead use PyEval_SaveThread, the Py_BEGIN_ALLOW_THREADS macro, or PyGILState_Release (as appropriate). The documentation should probably say as much. -- nosy: +glchapman21 _ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1720250> _ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1506] func alloca inside ctypes lib needs #include on solaris
Greg Couch added the comment: A better solution would be to use the HAVE_ALLOCA and HAVE_ALLOCA_H defines that fficonfig.h provides to decide whether or not to include alloca.h, and, in callproc.c, whether or not to provide a workaround using malloc (I'm assuming non-gcc sparc compilers also support alloca for sparc/ffi.c, but I don't know for sure). -- nosy: +gregcouch __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1506> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1506] func alloca inside ctypes lib needs #include on solaris
Greg Couch added the comment: Turns out callproc.c forgot to include a header which, in turn, conditionally includes alloca.h. So it's a one-line fix. __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1506> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1516] make _ctypes work with non-gcc compilers
New submission from Greg Couch: To get _ctypes to successfully compile with native UNIX compilers (i.e., not gcc), several modifications need to be made: (1) use the equivalent of the Py_GCC_ATTRIBUTE macro for __attribute__ (in ffi.h), (2) fix the includes in callproc.c so that alloca.h is conditionally included, and (3) modify distutils to know something about assembly language files. The attached patch is a starting point for the proper patch. It fixes bugs (1) and (2), but I was unable to figure out the last tweak to get distutils to work with gcc and native compilers. The problem with _ctypes comes from the use of gcc's libffi. And libffi uses assembly language source files for the various supported platforms and distutils blindly compiles the .S files. Native UNIX compilers want a .s suffix and if the files are renamed, distutils skips the file. I tried modifying distutils to compile .s files and give the '-x assembler-with-cpp' flag to gcc so gcc would still work, but the right tweak evaded me. So I'm hoping someone can take this and turn it into something better or make helpful suggestions (other than switching to gcc!). -- files: _ctypes.diffs messages: 57924 nosy: gregcouch severity: normal status: open title: make _ctypes work with non-gcc compilers versions: Python 2.6 Added file: http://bugs.python.org/file8819/_ctypes.diffs __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1516> __ _ctypes.diffs Description: Binary data ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
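As an illustration only, here is a hypothetical monkeypatch sketch (not the attached _ctypes.diffs) of the kind of distutils tweak being described: register lowercase '.s' sources and pass gcc the '-x assembler-with-cpp' flag so both gcc and native compilers keep working.

from distutils.unixccompiler import UnixCCompiler

# Let the Unix compiler class treat lowercase .s files as sources instead
# of silently skipping them.
UnixCCompiler.src_extensions.append('.s')
UnixCCompiler.language_map['.s'] = 'c'

_orig_compile = UnixCCompiler._compile

def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
    # gcc preprocesses .S automatically but has to be told for .s;
    # a native compiler is assumed to handle .s (and cpp) on its own.
    if ext == '.s' and 'gcc' in self.compiler_so[0]:
        extra_postargs = ['-x', 'assembler-with-cpp'] + list(extra_postargs)
    return _orig_compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts)

UnixCCompiler._compile = _compile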
[issue1506] func alloca inside ctypes lib needs #include on solaris
Greg Couch added the comment: That's a disappointment; the right logic is in a header already. Perhaps it should be copied to callproc.c. I'm less concerned about alloca not being there at all because it seems to be a pervasive extension in non-gcc compilers, but using malloc is fine too. Please go ahead and fix this as you see fit. I've started issue 1516 about using non-gcc compilers to compile _ctypes/libffi. __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1506> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1516] make _ctypes work with non-gcc compilers
Greg Couch added the comment: The modifications work on Tru64 and IRIX. cc has understood .s suffixes for a long time. You use cc instead of as because it knows how to run the C preprocessor (often /lib/cpp, but not on all systems). Looking at the Solaris cc manual page (via Google), I see that its cc has the same .S and .s distinction that gcc has, so my patch will not work with Solaris either. IRIX has a separate issue in that it has libffi support for n32 binaries, but doesn't have the ffi_closure support, so while libffi compiles, _ctypes still fails to compile (same would be true if gcc were used). __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1516> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1526] DeprecationWarning in zipfile.py while zipping 113000 files
Greg Steuck added the comment: There may be a related issue that I still hit with 2.6.5.

% cat /tmp/a.py
import zipfile
import os
z = zipfile.ZipFile('/tmp/a.zip', 'w')
open("/tmp/a", "w")
os.utime("/tmp/a", (0,0))
z.write("/tmp/a", "a")
% python -V
Python 2.6.5
% uname -mo
x86_64 GNU/Linux
% uname -mor
2.6.32-gg426-generic x86_64 GNU/Linux
% python /tmp/a.py
/usr/lib/python2.6/zipfile.py:1047: DeprecationWarning: struct integer overflow masking is deprecated
  self.fp.write(zinfo.FileHeader())
/usr/lib/python2.6/zipfile.py:1047: DeprecationWarning: 'H' format requires 0 <= number <= 65535
  self.fp.write(zinfo.FileHeader())
/usr/lib/python2.6/zipfile.py:1123: DeprecationWarning: struct integer overflow masking is deprecated
  self.close()
/usr/lib/python2.6/zipfile.py:1123: DeprecationWarning: 'H' format requires 0 <= number <= 65535
  self.close()

-- nosy: +gnezdo ___ Python tracker <http://bugs.python.org/issue1526> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12198] zipfile.py:1047: DeprecationWarning: 'H' format requires 0 <= number <= 65535
New submission from Greg Steuck : zipfile.py displays warnings when trying to write files timestamped before 1980.

% cat /tmp/a.py
import zipfile
import os
z = zipfile.ZipFile('/tmp/a.zip', 'w')
open("/tmp/a", "w")
os.utime("/tmp/a", (0,0))
z.write("/tmp/a", "a")
% python -V
Python 2.6.5
% uname -mo
x86_64 GNU/Linux
% uname -mor
2.6.32-gg426-generic x86_64 GNU/Linux
% python /tmp/a.py
/usr/lib/python2.6/zipfile.py:1047: DeprecationWarning: struct integer overflow masking is deprecated
  self.fp.write(zinfo.FileHeader())
/usr/lib/python2.6/zipfile.py:1047: DeprecationWarning: 'H' format requires 0 <= number <= 65535
  self.fp.write(zinfo.FileHeader())
/usr/lib/python2.6/zipfile.py:1123: DeprecationWarning: struct integer overflow masking is deprecated
  self.close()
/usr/lib/python2.6/zipfile.py:1123: DeprecationWarning: 'H' format requires 0 <= number <= 65535
  self.close()

Similar to, but different from http://bugs.python.org/issue1526. Amaury Forgeot d'Arc says: The ZIP file format is unable to store dates before 1980. With version 3.2, your script even raises an exception. Please file this in a different issue.

-- components: Library (Lib) messages: 137093 nosy: gnezdo priority: normal severity: normal status: open title: zipfile.py:1047: DeprecationWarning: 'H' format requires 0 <= number <= 65535 type: behavior versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue12198> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11553] Docs for: import, packages, site.py, .pth files
Changes by Greg Słodkowicz : -- nosy: +Greg.Slodkowicz ___ Python tracker <http://bugs.python.org/issue11553> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly
Greg Brockman added the comment: I'll take another stab at this. In the attachment (assign-tasks.patch), I've combined a lot of the ideas presented on this issue, so thank you both for your input. Anyway:

- The basic idea of the patch is to record the mapping of tasks to workers. I've added a protocol between the parent process and the workers that allows this to happen without adding a race condition between recording the task and the child dying.
- If a child unexpectedly dies, the worker_handler pretends that all of the jobs currently assigned to it raised a RuntimeError. (Multiple jobs can be assigned to a single worker if the result handler is being slow.)
- The guarantee I try to provide is that each job will be started at most once. There is enough information to instead ensure that each job is run exactly once, but in general whether that's acceptable or useful is really only known at the application level.

Some notes:

- I haven't implemented this approach for the ThreadPool yet.
- The test suite runs but occasionally hangs on shutting down the pool in Ask's tests in multiprocessing-tr...@82502-termination-trackjobs.patch. My experiments seem to indicate this is due to a worker dying while holding a queue lock. So I think a next step is to deal with workers dying while holding a queue lock, although this seems unlikely in practice. I have some ideas as to how you could fix this, if we decide it's worth trying.

Anyway, please let me know what you think of this approach/sample implementation. If we decide that this seems promising, I'd be happy to build it out further.

-- Added file: http://bugs.python.org/file18513/assign-tasks.patch ___ Python tracker <http://bugs.python.org/issue9205> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9607] Test file 'test_keyword.py' submission for use with keyword.py
New submission from Greg Malcolm : 'keyword.py' didn't have any tests, so I wrote some. Most of the tests are for the main() method, which self-populates the keywords section of keyword.py with keywords taken from a grammar file, 'Python/graminit.c'. The main() method allows you to choose the grammar file and the target file, so I've written the tests such that the actual keyword.py does not have to modify itself. Most of the tests generate dummy keyword.py and graminit.c files for parsing. They are all deleted in the tearDown() stages of each test. I've timed the tests. In total they take approximately 3 seconds so you may want to tag some of them as "slow". Also I've only tested on the Mac, so someone may want to check it runs ok on Windows. Most of the patch was written at the PyOhio 2010 Sprints. Thanks go to David Murray for advice given while working on it. -- components: Tests files: test_keyword.patch keywords: patch messages: 113942 nosy: gregmalcolm, r.david.murray priority: normal severity: normal status: open title: Test file 'test_keyword.py' submission for use with keyword.py type: behavior versions: Python 3.2 Added file: http://bugs.python.org/file18537/test_keyword.patch ___ Python tracker <http://bugs.python.org/issue9607> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly
Greg Brockman added the comment: Thanks for looking at it! Basically this patch requires the parent process to be able to send a message to a particular worker. As far as I can tell, the existing queues allow the children to send a message to the parent, or the parent to send a message to one child (whichever happens to win the race; not a particular one). I don't love introducing one queue per child either, although I don't have a sense of how much overhead that would add. Does the problem make sense/do you have any ideas for an alternate solution? -- ___ Python tracker <http://bugs.python.org/issue9205> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9023] distutils relative path errors
Greg Hazel added the comment: The Python setup script is for the Python module, which is in a subdirectory of the C library project. I am not going to move setup.py to the root directory just to work around a distutils bug. This distutils bug could cause it to overwrite files in other directories, since it blindly adds relative paths to the build directory. This is clearly broken. I've changed my code to use os.path.abspath() while I wait for a fix. -- ___ Python tracker <http://bugs.python.org/issue9023> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9023] distutils relative path errors
Greg Hazel added the comment:

> Éric Araujo added the comment:
>> I've changed my code to use os.path.abspath() while I wait for a
>> fix.
> Does this means that your code works with paths that go to the parent
> directory? I don’t know if it’s right to allow that (I mean this literally:
> I’m not the main maintainer and I don’t have much packaging experience).

Yes, my code works with the os.path.abspath() change, since it creates the entire absolute path directory structure inside the build directory. This is

-- ___ Python tracker <http://bugs.python.org/issue9023> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2236] Distutils' mkpath implementation ignoring the "mode" parameter
Greg Ward added the comment: I'm unassigning this since I no longer know how to commit changes to Python. Sorry, I just haven't kept track over the years, I don't follow python-dev anymore, and I could not find documentation explaining where I should commit what sort of changes. -- assignee: gward -> tarek ___ Python tracker <http://bugs.python.org/issue2236> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8296] multiprocessing.Pool hangs when issuing KeyboardInterrupt
Changes by Greg Brockman : -- nosy: +gdb ___ Python tracker <http://bugs.python.org/issue8296> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly
Greg Brockman added the comment: Hmm, a few notes. I have a bunch of nitpicks, but those can wait for a later iteration. (Just one style nit: I noticed a few unneeded whitespace changes... please try not to do that, as it makes the patch harder to read.)

- Am I correct that you handle a crashed worker by aborting all running jobs? If so:
  - Is this acceptable for your use case? I'm fine with it, but had been under the impression that we would rather this did not happen.
  - If you're going to the effort of ACKing, why not record the mapping of tasks to workers so you can be more selective in your termination? Otherwise, what does the ACKing do towards fixing this particular issue?
- I think in the final version you'd need to introduce some interthread locking, because otherwise you're going to have weird race conditions. I haven't thought too hard about whether you can get away with just catching unexpected exceptions, but it's probably better to do the locking.
- I'm getting hangs infrequently enough to make debugging annoying, and I don't have time to track down the bug right now. Why don't you strip out any changes that are not needed (e.g. AFAICT, the ACK logic), make sure there aren't weird race conditions, and if we start converging on a patch that looks right from a high level we can try to make it work on all the corner cases?

-- ___ Python tracker <http://bugs.python.org/issue9205> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9205] Parent process hanging in multiprocessing if children terminate unexpectedly
Greg Brockman added the comment: Ah, you're right--sorry, I had misread your code. I hadn't noticed the usage of the worker_pids. This explains what you're doing with the ACKs. Now, the problem is, I think doing it this way introduces some races (which is why I introduced the ACK from the task handler in my most recent patch). What happens if:

- A worker removes a job from the queue and is killed before sending an ACK.
- A worker removes a job from the queue, sends an ACK, and then is killed. Due to bad luck with the scheduler, the parent cleans the worker before the parent has recorded the worker pid. You're now reading from self._cache in one thread but writing it in another.

What happens if a worker sends a result and then is killed? Again, I haven't thought too hard about what will happen here, so if you have a correctness argument for why it's safe as-is I'd be happy to hear it.

Also, I just noted that your current way of dealing with child deaths doesn't play well with the maxtasksperchild variable. In particular, try running:

"""
import multiprocessing

def foo(x):
    return x

multiprocessing.Pool(1, maxtasksperchild=1).map(foo, [1, 2, 3, 4])
"""

(This should be an easy fix.)

-- ___ Python tracker <http://bugs.python.org/issue9205> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9607] Test file 'test_keyword.py' submission for use with keyword.py
Changes by Greg Malcolm : Removed file: http://bugs.python.org/file18537/test_keyword.patch ___ Python tracker <http://bugs.python.org/issue9607> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9607] Test file 'test_keyword.py' submission for use with keyword.py
Greg Malcolm added the comment: Thanks for the feedback David! I've replaced the old patch with a new version that uses Popen/sys.executable as suggested. - Greg -- Added file: http://bugs.python.org/file18769/test_keyword_v2.patch ___ Python tracker <http://bugs.python.org/issue9607> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9884] The 4th parameter of method always None or 0 on x64 Windows.
Changes by Greg Hazel : -- nosy: +ghazel ___ Python tracker <http://bugs.python.org/issue9884> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9325] Add an option to pdb/trace/profile to run library module as a script
Changes by Greg Słodkowicz : -- nosy: +Greg.Slodkowicz ___ Python tracker <http://bugs.python.org/issue9325> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9325] Add an option to pdb/trace/profile to run library module as a script
Greg Słodkowicz added the comment: Following Nick's advice, I extended runpy.run_module to accept an extra parameter to be used as replacement __main__ namespace. Having this, I can make this temporary __main__ accessible in main() in modules like trace/profile/pdb even if module execution fails with an exception. The problem is that it's visible only in the calling function but not in the global namespace. One way to make it accessible for post mortem debugging would be to create the replacement __main__ module in the global namespace and then pass as a parameter to main(), but this seems clumsy. So maybe the way to go is to have runpy store last used __main__, sys.exc_info() style. In this case, would this be the correct way to store it in runpy:

try:
    import threading
except ImportError:
    temp_main = None
else:
    local_storage = threading.local()
    local_storage.temp_main = None
    temp_main = local_storage.temp_main

?

-- ___ Python tracker <http://bugs.python.org/issue9325> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9325] Add an option to pdb/trace/profile to run library module as a script
Greg Słodkowicz added the comment: Thanks, Nick. Before your last comment, I haven't looked much into Pdb, instead focusing on profile.py and trace.py because they looked like simpler cases. I think the approach with CodeRunner objects would work just fine for profile and trace but Pdb uses run() inherited from Bdb. In order to make it work with a CodeRunner object, it seems run() would have to be reimplemented in Pdb (effectively becoming a 'runCodeRunner()'), and we could probably do without _runscript(). Is that what you had in mind? -- ___ Python tracker <http://bugs.python.org/issue9325> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3754] cross-compilation support for python build
Changes by Greg Hellings : -- nosy: +Greg.Hellings ___ Python tracker <http://bugs.python.org/issue3754> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3754] cross-compilation support for python build
Greg Hellings added the comment: Current patch errors with the following message:

gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/manual/lib -L/binary/lib -L/manual/lib -L/binary/lib Parser/acceler.o Parser/grammar1.o Parser/listnode.o Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o Parser/firstsets.o Parser/grammar.o Parser/pgen.o Objects/obmalloc.o Python/dynamic_annotations.o Python/mysnprintf.o Python/pyctype.o Parser/tokenizer_pgen.o Parser/printgrammar.o Parser/parsetok_pgen.o Parser/pgenmain.o -lintl -lpthread -o Parser/pgen.exe ./Grammar/Grammar ./Include/graminit.h ./Python/graminit.c
./Grammar/Grammar: line 18: single_input:: command not found
./Grammar/Grammar: line 18: simple_stmt: command not found
./Grammar/Grammar: line 18: compound_stmt: command not found
./Grammar/Grammar: line 19: syntax error near unexpected token `NEWLINE'
./Grammar/Grammar: line 19: `file_input: (NEWLINE | stmt)* ENDMARKER'
make: *** [Parser/pgen.stamp] Error 2

-- ___ Python tracker <http://bugs.python.org/issue3754> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue6454] Add "example" keyword argument to optparse constructor
Greg Ward added the comment: > I understood Greg’s reply to mean that there was no need for an examples > keyword if simple paragraph splitting was added. Right, but optparse has been superseded by argparse. So my opinion is even less important than it was before 2.7. -- ___ Python tracker <http://bugs.python.org/issue6454> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11248] Tails of generator get lost under zip()
New submission from Greg Kochanski : When you have a generator as an argument to zip(), code after the last yield statement may not get executed. The problem is that zip() stops after it gets _one_ exception, i.e. when just one of the generators has finished. As a result, if there were any important clean-up code at the end of a generator, it will not be executed. Caches may not get flushed, et cetera. At the least, this is a documentation bug that needs to be pointed out in both zip() and the definition of a generator(). More realistically, it is a severe wart on the language, because it violates the programmer's reasonable expectation that a generator executes until it falls off the end of the function. It means that a generator becomes conceptually nasty: you cannot predict what it will do based just on an inspection of the code and the code it calls. Likely, the same behavior happens in itertools, too. -- components: None files: bug312.py messages: 128842 nosy: gpk-kochanski priority: normal severity: normal status: open title: Tails of generator get lost under zip() type: behavior versions: Python 2.6 Added file: http://bugs.python.org/file20794/bug312.py ___ Python tracker <http://bugs.python.org/issue11248> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
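A minimal reproducer along the lines described (the attached bug312.py is not shown here; this sketch is illustrative):

def gen(n):
    for i in range(n):
        yield i
    print('cleanup for gen(%d)' % n)   # "important clean-up code" after the last yield

list(zip(gen(3), gen(4)))
# Only "cleanup for gen(3)" is printed: zip() stops on the first StopIteration,
# so gen(4) is abandoned while suspended and its cleanup line never runs.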
[issue11248] Tails of generator get lost under zip()
Greg Kochanski added the comment: (a) It is not documented for the symmetric (4, 4) case where the two generators are of equal length. (b) Even for the asymmetric case, it is not documented in such a way that people are likely to see the implications. (c) Documented or not, it's still a wart. -- ___ Python tracker <http://bugs.python.org/issue11248> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11248] Tails of generator get lost under zip()
Greg Kochanski added the comment: Yes, the current behaviour makes sense from a language designer's viewpoint, and maybe even from the user's viewpoint (if the user thinks about it carefully). But, that's not the point of a computer language. The whole reason we program in languages like Python instead of asm is to match the behaviour of the silicon to human capabilities and expectations. So, documentation needs to go beyond the minimum from which an expert could deduce the system behaviour. It needs to point out unexpected things that a competent programmer might miss, even if they could potentially have deduced that unexpected behaviour. The trouble here is that the syntax of a generator is so much like a function that it's easy to think of it as being as safe and simple as a function. It's not: the "yield" statement lets a lot of external complexity leak in that's not relevant to a function (unless you're writing multithreaded code). So, the documentation needs to help the user avoid such problems. -- ___ Python tracker <http://bugs.python.org/issue11248> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11248] Tails of generator get lost under zip()
Changes by Greg Kochanski : -- resolution: invalid -> status: closed -> open ___ Python tracker <http://bugs.python.org/issue11248> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11248] Tails of generator get lost under zip()
Greg Kochanski added the comment: The code (bug312.py) was not submitted as a "pattern", but rather as an example of a trap into which it is easy to fall, at least for the 99% of programmers who are users of the language rather than its implementers. The basic difference is that while one can write a function that is guaranteed to execute to the end of its body[*], one cannot do that with a generator function. This point ought to be made in the documentation. [* Neglecting SIGKILL and perhaps a few abnormal cases.] The current documentation emphasizes the analogy to functions (which can be misleading) and (in section 6.8) explicitly says that the normal behaviour of a generator function is to run all the way to completion. -- ___ Python tracker <http://bugs.python.org/issue11248> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3423] DeprecationWarning message applies to wrong context with exec()
New submission from Greg Hazel <[EMAIL PROTECTED]>: exec()ing a line which causes a DeprecationWarning causes the warning to quote the file exec() occurs in instead of the string. Demonstration of the issue: http://codepad.org/aMTYQgN5 -- components: None messages: 70129 nosy: ghazel severity: normal status: open title: DeprecationWarning message applies to wrong context with exec() versions: Python 2.5 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3423> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3889] Demo/parser/unparse.py
New submission from Greg Darke <[EMAIL PROTECTED]>: When the unparse demo is run on a file containing a 'from x import y' statement, it incorrectly outputs it as 'from x import , y'. The attached patch fixes this. -- components: Demos and Tools files: fix_import_from_bug.patch keywords: patch messages: 73331 nosy: gregdarke severity: normal status: open title: Demo/parser/unparse.py versions: Python 2.5 Added file: http://bugs.python.org/file11509/fix_import_from_bug.patch ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3889> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
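For context, a minimal, self-contained sketch (not the demo's actual code) of the separator pattern the fix restores: the ", " goes only between names, never before the first one.

def interleave(inter, f, seq):
    """Call f on each item of seq, calling inter() between the items."""
    seq = iter(seq)
    try:
        f(next(seq))
    except StopIteration:
        return
    for item in seq:
        inter()
        f(item)

out = []
interleave(lambda: out.append(", "), out.append, ["y", "z"])
print("from x import " + "".join(out))   # -> from x import y, z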
[issue1759845] subprocess.call fails with unicode strings in command line
Greg Couch <[EMAIL PROTECTED]> added the comment: We're having the same problem. My quick fix was to patch subprocess.py so the command line and executable are converted to the filesystem encoding (mbcs). -- nosy: +gregcouch Added file: http://bugs.python.org/file11674/Python-2.5.2-subprocess.patch ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1759845> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
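A hedged Python 2 workaround sketch along the same lines (not the attached patch; names and paths are illustrative): encode unicode arguments to the filesystem encoding before handing them to subprocess.

import subprocess
import sys

def call_encoded(args):
    # On Windows this is usually "mbcs"; elsewhere it falls back sensibly.
    enc = sys.getfilesystemencoding() or 'mbcs'
    encoded = [a.encode(enc) if isinstance(a, unicode) else a for a in args]
    return subprocess.call(encoded)

# e.g. call_encoded([u'notepad.exe', u'C:\\temp\\r\u00e9sum\u00e9.txt'])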
[issue4034] traceback attribute error
New submission from Greg Hazel <[EMAIL PROTECTED]>: Unrelated to this bug, I would like to have the ability to remove the reference to the frame from the traceback object. Specifically so that the traceback object could be stored for a while without keeping all the locals alive as well. So, periodically I test to see if python allows that. Python 2.6 gave some strange results compared to 2.5.2:

Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> try:
...     x = dskjfds
... except:
...     import sys
...     t, v, tb = sys.exc_info()
...
>>> tb
>>> dir(tb)
['tb_frame', 'tb_lasti', 'tb_lineno', 'tb_next']
>>> tb.tb_frame
>>> tb.tb_frame = None
Traceback (most recent call last):
  File "", line 1, in
TypeError: 'traceback' object has only read-only attributes (assign to .tb_frame)
>>>

Python 2.6 (r26:66721, Oct 2 2008, 11:35:03) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> try:
...     x = lfdskf
... except:
...     import sys
...     t, v, tb = sys.exc_info()
...
>>> tb
>>> dir(tb)
['tb_frame', 'tb_lasti', 'tb_lineno', 'tb_next']
>>> tb.tb_frame
>>> tb.tb_frame = None
Traceback (most recent call last):
  File "", line 1, in
AttributeError: 'traceback' object has no attribute 'tb_frame'
>>>

-- messages: 74282 nosy: ghazel severity: normal status: open title: traceback attribute error type: behavior versions: Python 2.6 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4034> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1565525] gc allowing tracebacks to eat up memory
Greg Hazel <[EMAIL PROTECTED]> added the comment: Or, being able to remove the references to the locals and globals from the traceback would be sufficient. Something like this:

try:
    raise Exception()
except:
    t, v, tb = sys.exc_info()

tbi = tb
while tbi:
    tbi.tb_frame.f_locals = None
    tbi.tb_frame.f_globals = None
    tbi = tbi.tb_next

# now "tb" is cleaned of references

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1565525> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4034] traceback attribute error
Greg Hazel <[EMAIL PROTECTED]> added the comment: There seem to be some other exception type and string inconsistencies, but they are not new to Python 2.6:

>>> tb.tb_frame = None
Traceback (most recent call last):
  File "", line 1, in
TypeError: 'traceback' object has only read-only attributes (assign to .tb_frame)
>>> tb.tb_frame.f_locals = None
Traceback (most recent call last):
  File "", line 1, in
AttributeError: attribute 'f_locals' of 'frame' objects is not writable
>>> tb.tb_frame.f_globals = None
Traceback (most recent call last):
  File "", line 1, in
TypeError: readonly attribute
>>> dict.clear = "foo"
Traceback (most recent call last):
  File "", line 1, in
TypeError: can't set attributes of built-in/extension type 'dict'

Should it be an AttributeError or TypeError? Should it be "read-only", "readonly", "not writable" or "can't set"? Btw, here's the other ticket for the feature request: http://bugs.python.org/issue1565525

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4034> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7242] Forking in a thread raises RuntimeError
Greg Jednaszewski added the comment: I spent some time working on and testing a unit test as well. It's the same basic idea as Zsolt Cserna's, but with a slightly different approach. See 7242_unittest.diff. My unittest fails pre-patch and succeeds post-patch. However, I still have reservations about the patch. The existing test test_threading.ThreadJoinOnShutdown.test_3_join_in_forked_from_thread hangs with the patch in place.

Vanilla 2.6.2 - test passes
Vanilla 2.6.4 - test fails
Patched 2.6.4 - test hangs

Note: the code of the test_threading test is identical in all 3 cases. I'd feel more confident about the patch if this test didn't hang with the patch in place.

-- Added file: http://bugs.python.org/file16381/7242_unittest.diff ___ Python tracker <http://bugs.python.org/issue7242> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7242] Forking in a thread raises RuntimeError
Greg Jednaszewski added the comment: The problem only seems to appear on Solaris 9 and earlier. I'll try to test the updated patch tonight or tomorrow and let you know what I find. -- ___ Python tracker <http://bugs.python.org/issue7242> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7242] Forking in a thread raises RuntimeError
Greg Jednaszewski added the comment: I tested the updated patch, and the new unit test passes on my Sol 8 sparc, but the test_threading test still hangs on my system. However, given that the test is skipped on several platforms and it does work on more relevant versions of Solaris, it's probably OK. It's possible that an OS bug is causing that particular hang. Plus, the original patch fixed the 'real world' scenario I was running into, so I'd like to see it get into the release candidate if you're OK with it. -- ___ Python tracker <http://bugs.python.org/issue7242> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8134] collections.defaultdict gives KeyError with format()
New submission from Greg Jednaszewski : Found on 2.6.2 and 2.6.4: I expect that printing an uninitialized variable from a defaultdict should work. In fact it does with old-style string formatting. However, when you try to do it with new-style string formatting, it raises a KeyError.

>>> import collections
>>> d = collections.defaultdict(int)
>>> "{foo}".format(d)
Traceback (most recent call last):
  File "", line 1, in
KeyError: 'foo'
>>> "%(foo)d" % d
'0'

-- components: Library (Lib) messages: 101025 nosy: jednaszewski severity: normal status: open title: collections.defaultdict gives KeyError with format() type: behavior versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue8134> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8134] collections.defaultdict gives KeyError with format()
Greg Jednaszewski added the comment: Oops, thanks. I should have known that. However, should this work? This is what initially led me to file this ticket. My initial example was a bad one.

>>> from collections import defaultdict
>>> d = defaultdict(int)
>>> d['bar'] += 1
>>> "{bar}".format(**d)
'1'
>>> "{foo}".format(**d)
Traceback (most recent call last):
  File "", line 1, in
KeyError: 'foo'

-- ___ Python tracker <http://bugs.python.org/issue8134> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
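A brief illustration of the underlying behaviour: the ** unpacking copies the defaultdict into a plain keyword dict, so its __missing__ hook never runs; passing the mapping itself through (via string.Formatter here) preserves the default.

from collections import defaultdict
from string import Formatter

d = defaultdict(int)

# "{foo}".format(**d) raises KeyError: ** unpacking builds a plain dict,
# so defaultdict.__missing__ is never consulted for the missing key.

# Going through the mapping itself keeps the default value:
print(Formatter().vformat("{foo}", (), d))   # -> 0

In Python 3.2 and later, "{foo}".format_map(d) does the same thing more directly.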
[issue2294] Bug in Pickle protocol involving __setstate__
New submission from Greg Kochanski <[EMAIL PROTECTED]>: If we have a hierarchy of classes, and we use __getstate__/__setstate__, the wrong class' __setstate__ gets called. Possibly, this is a documentation problem, but here goes: Take two classes, A and B, where B is the child of A. Construct a B. Pickle it. Unpickle it, and you find that the __setstate__ function for A is called with the result produced by B.__getstate__(). This is wrong. An example follows:

import pickle as P

class A(object):
    def __init__(self, a):
        print 'A.__init__'
        self.a = a
    def __getstate__(self):
        print 'A.__getstate'
        return self.a
    def __setstate__(self, upstate):
        print 'A.__setstate', upstate
        self.a = upstate

class B(A):
    def __init__(self, a, b):
        print 'B.__init__'
        A.__init__(self, a)
        self.b = b
    def __getstate__(self):
        print 'B.__getstate'
        return (A.__getstate__(self), self.b)
    def __setstate(self, upstate):
        # This never gets called!
        print 'B.__setstate', upstate
        A.__setstate__(self, upstate[0])
        self.b = upstate[1]
    def __repr__(self):
        return '' % (self.a, self.b)

q = B(1,2)
print '---'
r = P.loads(P.dumps(q, 0))
print 'q=', q
print 'r=', r

Now, run it:

$ python foo.py
B.__init__
A.__init__
---
B.__getstate
A.__getstate
A.__setstate (1, 2)
q=
r= Traceback (most recent call last):
  File "foo.py", line 44, in
    print 'r=', r
  File "foo.py", line 37, in __repr__
    return '' % (self.a, self.b)
AttributeError: 'B' object has no attribute 'b'
$

Note that this problem doesn't get noticed in the common case where you simply pass __dict__ around from __getstate__ to __setstate__. However, it exists in many other use cases.

-- components: Library (Lib) messages: 63559 nosy: gpk severity: normal status: open title: Bug in Pickle protocol involving __setstate__ type: behavior versions: Python 2.5 __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2294> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2295] cPickle corner case - docs or bug?
New submission from Greg Kochanski <[EMAIL PROTECTED]>: If you attempt to cPickle a class, cPickle checks that it can get the identical class by importing it. If that check fails, it reports:

Traceback (most recent call last):
  ...
  "/usr/local/lib/python2.5/site-packages/newstem2-0.12.3-py2.5-linux-i686.egg/newstem2/stepperclient.py", line 41, in send
    s = cPickle.dumps( args, cPickle.HIGHEST_PROTOCOL)
cPickle.PicklingError: Can't pickle : it's not the same object as test_simple2.aModel
$

Normally, this is probably a good thing. However, if you do an import using the "imp" module, via imp.load_module(name, fd, pn, desc), you get the "same" module containing the "same" classes, but everything is duplicated at different addresses. In other words, you get distinct class objects from what cPickle will find. Consequently, when cPickle makes the "is" comparison between what you gave it and what it can find, it will fail and cause an error. In this case, the error is wrong. I know that the aModel classes come from the same file and are member-for-member the same. This may well be a documentation error: it needs to mention this test and note that classes in modules imported via imp are not picklable. Or, imp needs to note that its results are not picklable. Or both. Or, maybe it's something that should be fixed, though I'm not sure if there is a general solution that will always behave well.

-- assignee: georg.brandl components: Documentation, Library (Lib) messages: 63560 nosy: georg.brandl, gpk severity: normal status: open title: cPickle corner case - docs or bug? type: behavior versions: Python 2.5 __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2295> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1160] Medium size regexp crashes python
Greg Detre <[EMAIL PROTECTED]> added the comment: Dear all, I've just switched from Linux to a Mac, and I'm suddenly starting to experience this issue with a machine-generated regexp that I depend on. Are there any plans to fix this in a future version of Python? Thank you, Greg -- nosy: [EMAIL PROTECTED] __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1160> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2693] IDLE doesn't work with Tk 8.5
New submission from Greg Couch <[EMAIL PROTECTED]>: IDLE and Tk 8.5 don't work well together for both Python 2.5 and 2.6a (SVN version). The reasons are related but different. In Python 2.5, you can't select any text in the IDLE window and whenever a calltip is to appear, you get a backtrace ending with "invalid literal for int() with base 10: '(72,'". That comes from an interaction between WidgetRedirector's dispatch function and _tkinter. The Text widget's bbox method returns a tuple of ints, the dispatch function isn't monitoring bbox, so it returns the tuple as is to _tkinter, where PythonCmd converts the tuple to a Python string, not a Tcl list, so when Tkinter sees the string, it can't convert to a tuple. The Python "2.6a2" SVN version of _tkinter fixes that bug but exposes others (Tkinter.py, tupleobject.c), so I've attached a simple patch for Python 2.5. The SVN version of idle appears to work, so this patch should only be on the 2.5 branch. -- components: IDLE, Tkinter files: Python-2.5.2-idlelib.patch keywords: patch messages: 65828 nosy: gregc severity: normal status: open title: IDLE doesn't work with Tk 8.5 versions: Python 2.5 Added file: http://bugs.python.org/file10112/Python-2.5.2-idlelib.patch __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2693> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2693] IDLE doesn't work with Tk 8.5
Greg Couch <[EMAIL PROTECTED]> added the comment: I wish I could be as cavalier about Tk 8.5. The last version of Tk 8.4 just came out and it really shows its age, especially on Mac OS X, and those are ~25% of our application's downloads. Since Python 2.6a2 is "not suitable for production use", that leaves us with patching 2.5. Backporting the _tkinter and Tkinter changes was not hard, but then we get "SystemError: Objects/tupleobject.c:89: bad argument to internal function" errors with both the 2.5 and the 2.6a2 idlelibs. Looking at the SVN log, it is not clear which patch to tupleobject.c fixed that problem (does anyone know?). So fixing WidgetRedirector.py to not screw up the string representation of tuples is the easiest solution to get idle to work with Tk 8.5 and Python 2.5 (you still would want the Tkinter.py changes for other reasons). A slightly more robust solution would be to use Tcl quoting:

r = '{%s}' % '} {'.join(map(str, r))

But that has not been important in practice. __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2693> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2693] IDLE doesn't work with Tk 8.5
Greg Couch <[EMAIL PROTECTED]> added the comment: Starting over: The goal of this patch is to get Tk 8.5 to work with Python 2.5's Idle. It currently fails with a ValueError, "invalid literal for int() with base 10: '(72,'" (the 72 changes depending on what was typed in).

The root cause of the bug is an interaction between Tk 8.5 returning more results as Tcl lists, instances of Idle's WidgetRedirector class that wrap widget Tcl commands with the WidgetRedirector's dispatch method, and _tkinter's internal PythonCmd function that stringifies anything it gets from Python. What happens is that when a Python method is called on a redirected widget, the corresponding Tcl method is called using the WidgetRedirector's imposter widget command, which calls the WidgetRedirector's dispatch method from Tcl, which then invokes the original widget Tcl command. If that command returns a Tcl list, _tkinter converts it to a Python tuple, the dispatch method returns the tuple into _tkinter, _tkinter stringifies it so it looks like a Python tuple representation instead of a Tcl list representation, and returns it to Tkinter, which tries to parse it like a Tcl list representation and raises the ValueError.

The correct fix is already in Python 2.6a2, which is changing the Text class' index method in Tkinter.py to return a string, and changing _tkinter's PythonCmd to convert Python objects to equivalent Tcl objects. Unfortunately, backporting those simple changes to Python 2.5 causes a "SystemError: Objects/tupleobject.c:89: bad argument to internal function". While that is worth further investigation, Python 2.6a2 doesn't have that problem and a simple alternative fix is available for Python 2.5, so that is for someone else to do.

The alternative fix that works in Python 2.5 is to make sure that the Tcl list string representation is used for Python tuples that are returned to _tkinter's PythonCmd. Those changes are confined to the WidgetRedirector's dispatch method. Line 126 of WidgetRedirector.py:

return self.tk.call((self.orig, operation) + args)

is replaced with:

result = self.tk.call((self.orig, operation) + args)
if isinstance(result, tuple):
    # convert to string ourselves so we get a Tcl list
    # that can be converted back into a tuple by Tkinter
    result = '{%s}' % '} {'.join(map(str, result))
return result

For Tk 8.4, the if clause is never invoked because Idle does not use any of the Tk 8.4 methods that return Tcl lists (luckily). In Tk 8.5, the additional quoting is only needed for the Tk text widget's tag names and tag ranges commands when spaces are used for tag names (explicitly not recommended); all other uses are lists of numbers. Since none of Idle's Text tags have spaces in them, that line can safely be replaced with:

result = ' '.join(map(str, result))

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2693> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1565525] gc allowing tracebacks to eat up memory
Greg Hazel added the comment: But a list of strings is not re-raisable. The co_filename, co_name, and such used to print a traceback are not dependent on the locals or globals. ___ Python tracker <http://bugs.python.org/issue1565525> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1565525] gc allowing tracebacks to eat up memory
Greg Hazel added the comment: STINNER Victor> Do you need the original traceback? Why not only raising the exception? If the exception was captured in one stack and is being re-raised in another, it would be more useful to see the two stacks appended instead of just the place where it was re-raised (or the place where it was raised initially, which is what a string would get you - not to mention the inability to catch it). ___ Python tracker <http://bugs.python.org/issue1565525> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
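A minimal Python 3 sketch of the pattern being described (the syntax postdates this issue, and the names are mine): keeping the traceback object, rather than a formatted string, is what lets the exception be re-raised in another stack with the original frames still attached.

    import sys

    saved = None    # (type, value, traceback) captured in one stack

    def worker():
        global saved
        try:
            1 / 0
        except ZeroDivisionError:
            saved = sys.exc_info()

    def reraise_elsewhere():
        exc_type, exc_value, exc_tb = saved
        # Re-raising with the original traceback keeps worker()'s frames;
        # a pre-formatted string could only be printed, never caught or re-raised.
        raise exc_value.with_traceback(exc_tb)

    worker()
    reraise_elsewhere()    # the printed traceback shows this call site *and* worker()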
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Change by Greg Price : -- pull_requests: +14969 pull_request: https://github.com/python/cpython/pull/15248 ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32771] merge the underlying data stores of unicodedata and the str type
Change by Greg Price : -- nosy: +Greg Price ___ Python tracker <https://bugs.python.org/issue32771> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Greg Price added the comment: > I like to run pyflakes time to time on the Python code base. Please avoid > "import *" since it prevents pyflakes (and other code analyzers) to find bugs. Ah fair enough, thanks! Pushed that change to the next/current PR, GH-15248. -- ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Greg Price added the comment: > BTW: Since when do we use type annotations in Python's stdlib ? Hmm, interesting question! At a quick grep, it's in a handful of places in the stdlib: asyncio, functools, importlib. The earliest it appeared was in 3.7.0a4. It's in more places in the test suite, which I think is a closer parallel to this maintainer script in Tools/. The typing module itself is in the stdlib, so I don't see any obstacle to using it more widely. I imagine the main reason it doesn't appear more widely already is simply that it's new, and most of the stdlib is quite stable. -- ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Greg Price added the comment: > What is the minimal Python version for developing CPython? The system Python > 3 on current Ubuntu LTS (18.04) is 3.6, so I think it should not be larger. Ah, I think my previous message had an ambiguous parse: the earliest that *uses* of the typing module appeared in the stdlib was 3.7. The typing module has been around longer than that. I just checked and `python3.6 Tools/unicode/makeunicodedata.py` works fine, both at master and with GH-15248. I think it would be OK for doing development on CPython to require the latest minor version (i.e. 3.7) -- after all, if you're doing development, you're already building it, so you can always get a newer version than your system provides if needed. But happily the question is moot here, so I guess the place to discuss that further would be a new thread. -- ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Greg Price added the comment: > This is good. But the title mentioned dataclasses, and they are 3.7+. Ahh, sorry, I think now I understand you. :-) Indeed, when I switch to the branch with that change (https://github.com/gnprice/cpython/commit/2b4aec4dd -- it comes after the patch that's GH-15248, so I haven't yet sent it as a PR), then `python3.6 Tools/unicode/makeunicodedata.py` no longer works. I think this is fine. Most of all that's because this always works: ./python Tools/unicode/makeunicodedata.py Anyone who's going to be running that script will want to build a `./python` right afterward, in order to at least run the tests. So it doesn't seem like much trouble to do the build first and then run the script (and then a quick rebuild for the handful of changed files), if indeed the person doesn't already have a `./python` lying around. In fact `./python` is exactly what I used most of the time to run this script when I was developing these changes, simply because it seemed like the natural thing to do. -- ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32771] merge the underlying data stores of unicodedata and the str type
Greg Price added the comment: > Loading it dynamically reduces the memory footprint. Ah, this is a good question to ask! First, FWIW on my Debian buster desktop I get a smaller figure for `import unicodedata`: only 64 kiB. $ python Python 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 9888 kB >>> import unicodedata >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 9952 kB But whether 64 kiB or 160 kiB, it's much smaller than the 1.1 MiB of the whole module. Which makes sense -- there's no need to bring the whole thing in memory when we only import it, or generally to bring into memory the parts we aren't using. I wouldn't expect that to change materially if the tables and algorithms were built in. Here's another experiment: suppose we load everything that ast.c needs in order to handle non-ASCII identifiers. $ python Python 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 9800 kB >>> là = 3 >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 9864 kB So that also comes to 64 kiB. We wouldn't want to add 64 kiB to our memory use for no reason; but I think 64 or 160 kiB is well within the range that's an acceptable cost if it gets us a significant simplification or improvement to core functionality, like Unicode. -- ___ Python tracker <https://bugs.python.org/issue32771> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
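The same measurement as a small Linux-only helper (my own sketch; it reads VmRSS directly instead of shelling out to grep):

    import os

    def rss_kib():
        # Parse VmRSS out of /proc/<pid>/status; the value is reported in kB.
        with open("/proc/%d/status" % os.getpid()) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])

    before = rss_kib()
    import unicodedata
    after = rss_kib()
    print("import unicodedata added ~%d kiB of resident memory" % (after - before))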
[issue32771] merge the underlying data stores of unicodedata and the str type
Greg Price added the comment: Speaking of improving functionality: > Having unicodedata readily accessible to the str type would also permit > higher a fidelity unicode implementation. For example, implementing > language-tailored str.lower() requires having canonical combining class of a > character available. This data lives only in unicodedata currently. Benjamin, can you say more about the behavior you have in mind here? I don't entirely follow. (Is or should there be an issue for it?) -- versions: +Python 3.9 -Python 3.8 ___ Python tracker <https://bugs.python.org/issue32771> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Greg Price added the comment: > From my perspective, the main problem with using type annotations is that > there's nothing checking them in CI. Yeah, fair concern. In fact I think I'm on video (from PyCon 2018) warning everyone not to do that in their codebases, because what you really don't want is a bunch of annotations that have gradually accumulated falsehoods as the code has changed around them. Still, I think from "some annotations + no checking" the good equilibrium to land in is "some annotations + checking", not "no annotations + no checking". (I do mean "some" -- I don't predict we'll ever go sweep all over adding them.) And I think the highest-probability way to get there is to let them continue to accumulate where people occasionally add them in new/revised code... because that holds a door open for someone to step up to start checking them, and then to do the work to make that part of CI. (That someone might even be me! But I can think of plenty of other likely folks to do it.) If we accumulated quite a lot of them and nobody had yet stepped up to make checking happen, I'd worry. But with the smattering we currently have, I think that point is far off. -- ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Change by Greg Price : -- pull_requests: +14985 pull_request: https://github.com/python/cpython/pull/15265 ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37848] More fully implement Unicode's case mappings
New submission from Greg Price : Splitting this out from #32771 for more specific discussion. Benjamin writes there that it would be good to: > implement the locale-specific case mappings of > https://www.unicode.org/Public/UCD/latest/ucd/SpecialCasing.txt and §3.13 of > the Unicode 12 standard in str.lower/upper/casefold. and adds that an implementation would require having available in the core the data on canonical combining classes, which is currently only in the unicodedata module. --- First, I'd like to better understand what functionality we have now and what else the standard describes. Reading https://www.unicode.org/Public/12.0.0/ucd/SpecialCasing.txt , I see * a bunch of rules that aren't language-specific * some other rules that are. I also see in makeunicodedata.py that we don't even parse the language-specific rules. Here's, IIUC, a demo of us correctly implementing the language-independent rules. One line in the data file reads: FB00; FB00; 0046 0066; 0046 0046; # LATIN SMALL LIGATURE FF And in fact the `lower`, `title`, and `upper` of `\uFB00` are those strings respectively: $ unicode --brief "$(./python -c 's="\ufb00"; print(" ".join((s.lower(), s.title(), s.upper())))')" ff U+FB00 LATIN SMALL LIGATURE FF U+0020 SPACE F U+0046 LATIN CAPITAL LETTER F f U+0066 LATIN SMALL LETTER F U+0020 SPACE F U+0046 LATIN CAPITAL LETTER F F U+0046 LATIN CAPITAL LETTER F OK, great. --- Then here's something we don't implement. Another line in the file reads: 00CD; 0069 0307 0301; 00CD; 00CD; lt; # LATIN CAPITAL LETTER I WITH ACUTE IOW `'\u00CD'` should lowercase to `'\u0069\u0307\u0301'`, i.e.: i U+0069 LATIN SMALL LETTER I ̇ U+0307 COMBINING DOT ABOVE ́ U+0301 COMBINING ACUTE ACCENT ... but only in a Lithuanian (`lt`) locale. One question is: what would the right API for this be? I'm not sure I'd want `str.lower`'s results to depend on the process's current Unix locale... and I definitely wouldn't want to get that without some way of instead telling it what locale to use. (Either to use a single constant locale, or to use a per-user locale in e.g. a web application.) Perhaps `str.lower` and friends would take a keyword argument `locale`? Oh, one more link for reference: the said section of the standard is in this PDF: https://www.unicode.org/versions/Unicode12.0.0/ch03.pdf , near the end. And a related previous issue: #12736. -- components: Unicode messages: 349646 nosy: Greg Price, benjamin.peterson, ezio.melotti, lemburg, vstinner priority: normal severity: normal status: open title: More fully implement Unicode's case mappings versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue37848> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
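To make the two cases concrete, here is a short Python session one can run against any recent build (my own sketch): the unconditional U+FB00 mappings are implemented, while the Lithuanian-only U+00CD mapping is not.

    s = "\ufb00"                    # LATIN SMALL LIGATURE FF
    print(s.lower() == "\ufb00")    # True  -- lowercase maps to itself
    print(s.title() == "Ff")        # True  -- 0046 0066
    print(s.upper() == "FF")        # True  -- 0046 0046

    # The language-specific rule is not applied: the default lowercase of
    # U+00CD is U+00ED, not 'i' + COMBINING DOT ABOVE + COMBINING ACUTE ACCENT.
    print("\u00cd".lower() == "\u00ed")                  # True
    print("\u00cd".lower() == "\u0069\u0307\u0301")      # False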
[issue37848] More fully implement Unicode's case mappings
Greg Price added the comment: Another previous discussion is #4610. -- ___ Python tracker <https://bugs.python.org/issue37848> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32771] merge the underlying data stores of unicodedata and the str type
Greg Price added the comment: OK, I forked off the discussion of case-mapping as #37848. I think it's probably good to first sort out what we want, before returning to how to implement it (if it's agreed that changes are desired.) Are there other areas of functionality that would be good to add in the core, and require data that's currently only in unicodedata? -- ___ Python tracker <https://bugs.python.org/issue32771> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37848] More fully implement Unicode's case mappings
Greg Price added the comment: > I believe that all locale specific things should be in the locale module, not > in the str class. The locale module is all about doing things with the current process-global Unix locale. I don't think that'd be an appropriate interface for this -- if it's worth doing, it's worth doing in such a way that the same web server process can handle requests for Turkish-, Lithuanian-, and Spanish-speaking users without having to reset a global variable for each one. > If a locale specific mapping is requested, this should be done > explicitly by e.g. providing a parameter to str.lower() / upper() / > title(). I like this design. I said "locale" above, but that wasn't quite right, I think -- the file says e.g. `tr`, not `tr_TR` and `tr_CY`, and it describes the identifiers as "language IDs". So perhaps str.lower(*, lang=None) ? And then "I".lower(lang="tr") == "ı" == "\N{Latin small letter dotless I}" -- ___ Python tracker <https://bugs.python.org/issue37848> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
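A hypothetical sketch of how that signature might behave, written as a plain function rather than a real method -- it hard-codes only the Turkish/Azerbaijani dotted/dotless I rules for illustration; an actual implementation would be driven by SpecialCasing.txt inside the str type.

    def lower(s, *, lang=None):
        if lang in ("tr", "az"):
            s = s.replace("I", "\u0131")      # I -> dotless i (ı)
            s = s.replace("\u0130", "i")      # İ (I WITH DOT ABOVE) -> i
        return s.lower()

    print(lower("I", lang="tr"))   # ı
    print(lower("I"))              # i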
[issue37848] More fully implement Unicode's case mappings
Greg Price added the comment: > Maintaining Python is already expensive [...] There are already enough bugs > waiting for you to be fixed ;-) BTW I basically agree with this. I think this is not a high-priority issue, and I have my eye on some of those bugs. :-) I think the fact that it's per-*language* (despite my inaccurate phrasing in the OP), not per-locale, simplifies it some -- for example the whole `.UTF-8` vs `.utf8` thing doesn't appear. And in particular I think if/when someone decides to sit down and make an implementation of this, then if they take the time to carefully read and absorb the relevant pages of the standard... this is a feature where it's pretty feasible for the implementation to be a self-contained and relatively stable and low-bugs piece of code. And in general I think even if nobody implements it soon, it's useful to have an issue that can be pointed to for this feature, and especially so if the discussion clearly lays out what the feature involves and what different people's views are on the API. For example #18236 has been open for 6 years, but the discussion there was extremely helpful for me to understand it and work up a fix, after just being pointed to it by someone who'd searched the tracker on seeing me send in the doc fix GH-15019. -- ___ Python tracker <https://bugs.python.org/issue37848> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37848] More fully implement Unicode's case mappings
Greg Price added the comment: (I should add that it was only after doing the reading that produced the OP that I had a clear idea what I thought the priority of the issue was -- before doing that work I didn't have a clear sense of the scope of what it affects. Based on that SpecialCasing.txt file as of Unicode 12.0.0, I believe the functionality we don't currently support is entirely about the handling of certain versions of the Latin letter I, as treated in Lithuanian, Turkish, and Azerbaijani. Though one function of this issue thread is that it would be a great place to point out if there's another component to it!) -- ___ Python tracker <https://bugs.python.org/issue37848> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36502] str.isspace() for U+00A0 and U+202F differs from document
Change by Greg Price : -- pull_requests: +15019 pull_request: https://github.com/python/cpython/pull/15296 ___ Python tracker <https://bugs.python.org/issue36502> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32771] merge the underlying data stores of unicodedata and the str type
Greg Price added the comment: > About the RSS memory, I'm not sure how Linux accounts the Unicode databases > before they are accessed. Is it like read-only memory loaded on demand when > accessed? It stands for "resident set size", as in "resident in memory"; and it only counts pages of real physical memory. The intention is to count up pages that the process is somehow using. Where the definition potentially gets fuzzy is if this process and another are sharing some memory. I don't know much about how that kind of edge case is handled. But one thing I think it's pretty consistently good at is not counting pages that you've nominally mapped from a file, but haven't actually forced to be loaded physically into memory by actually looking at them. That is: say you ask for a file (or some range of it) to be mapped into memory for you. This means it's now there in the address space, and if the process does a load instruction from any of those addresses, the kernel will ensure the load instruction works seamlessly. But: most of it won't be eagerly read from disk or loaded physically into RAM. Rather, the kernel's counting on that load instruction causing a page fault; and its page-fault handler will take care of reading from the disk and sticking the data physically into RAM. So until you actually execute some loads from those addresses, the data in that mapping doesn't contribute to the genuine demand for scarce physical RAM on the machine; and it also isn't counted in the RSS number. Here's a demo! This 262392 kiB (269 MB) Git packfile is the biggest file lying around in my CPython directory: $ du -k .git/objects/pack/pack-0e4acf3b2d8c21849bb11d875bc14b4d62dc7ab1.pack 262392 .git/objects/pack/pack-0e4acf3b2d8c21849bb11d875bc14b4d62dc7ab1.pack Open it for read -- adds 100 kiB, not sure why: $ python Python 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os, mmap >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 9968 kB >>> fd = os.open('.git/objects/pack/pack-0e4acf3b2d8c21849bb11d875bc14b4d62dc7ab1.pack', os.O_RDONLY) >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 10068 kB Map it into our address space -- RSS doesn't budge: >>> m = mmap.mmap(fd, 0, prot=mmap.PROT_READ) >>> m >>> len(m) 268684419 >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 10068 kB Cause the process to actually look at all the data (this takes ~10s, too)... >>> sum(len(l) for l in m) 268684419 >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS:271576 kB RSS goes way up, by 261508 kiB! Oddly slightly less (by ~1MB) than the file's size. But wait, there's more. Drop that mapping, and RSS goes right back down (OK, keeps 8 kiB extra): >>> del m >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 10076 kB ... and then map the exact same file again, and it's *still* down: >>> m = mmap.mmap(fd, 0, prot=mmap.PROT_READ) >>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status") VmRSS: 10076 kB This last step is interesting because it's a certainty that the data is still physically in memory -- this is my desktop, with plenty of free RAM. And it's even in our address space. But because we haven't actually loaded from those addresses, it's still in memory only at the kernel's caching whim, and so apparently our process doesn't get "charged" or "blamed" for its presence there.
In the case of running an executable with a bunch of data in it, I expect that the bulk of the data (and of the code for that matter) winds up treated very much like the file contents we mmap'd in. It's mapped but not eagerly physically loaded; so it doesn't contribute to the RSS number, nor to the genuine demand for scarce physical RAM on the machine. That's a bit long :-), but hopefully informative. In short, I think for us RSS should work well as a pretty faithful measure of the real memory consumption that we want to be frugal with. -- ___ Python tracker <https://bugs.python.org/issue32771> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
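For anyone who wants to repeat the experiment, here is the same demo as a standalone Linux-only sketch; the pack-file path above is specific to my checkout, so the path here is a placeholder for any large local file.

    import mmap, os

    def rss_kib():
        with open("/proc/%d/status" % os.getpid()) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])

    path = "some-large-file"        # placeholder -- e.g. a big Git packfile
    fd = os.open(path, os.O_RDONLY)
    print("before mmap :", rss_kib(), "kiB")

    m = mmap.mmap(fd, 0, prot=mmap.PROT_READ)
    print("after mmap  :", rss_kib(), "kiB")    # barely moves: nothing faulted in yet

    # Touch every page by reading the mapping in 1 MiB chunks.
    m.seek(0)
    while m.read(1 << 20):
        pass
    print("after access:", rss_kib(), "kiB")    # jumps by roughly the file size

    del m
    print("after del   :", rss_kib(), "kiB")    # drops back down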
[issue37758] unicodedata checksum-tests only test 1/17th of Unicode's codepoints
Change by Greg Price : -- nosy: +vstinner ___ Python tracker <https://bugs.python.org/issue37758> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37864] Correct and deduplicate docs on "printable" characters
New submission from Greg Price : While working on #36502 and then #18236 about the definition and docs of str.isspace(), I looked closely also at its neighbor str.isprintable(). It turned out that we have the definition of what makes a character "printable" documented in three places, giving two different definitions. The definition in the comment on `_PyUnicode_IsPrintable` is inverted, so that's an easy small fix. With that correction, the two definitions turn out to be equivalent -- but to confirm that, you have to go look up, or happen to know, that those are the only five "Other" categories and only three "Separator" categories in the Unicode character database. That makes it hard for the reader to tell whether they really are the same, or if there's some subtle difference in the intended semantics. I've taken a crack at writing some improved docs text for a single definition, borrowing ideas from the C comment as well as the existing docs text; and then pointing there from the other places we'd had definitions. PR coming shortly. -- components: Unicode messages: 349792 nosy: Greg Price, ezio.melotti, vstinner priority: normal severity: normal status: open title: Correct and deduplicate docs on "printable" characters versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue37864> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
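As a cross-check of that single definition, here is a quick sketch one could run; it assumes the interpreter and its unicodedata module were built from the same UCD version, which is true for a normal build.

    import sys
    import unicodedata

    # Nonprintable = the "Other" (Cc, Cf, Cs, Co, Cn) and "Separator"
    # (Zl, Zp, Zs) categories, except U+0020 SPACE, which is printable.
    NONPRINTABLE = {"Cc", "Cf", "Cs", "Co", "Cn", "Zl", "Zp", "Zs"}

    def expected_printable(ch):
        return ch == " " or unicodedata.category(ch) not in NONPRINTABLE

    mismatches = [cp for cp in range(sys.maxunicode + 1)
                  if chr(cp).isprintable() != expected_printable(chr(cp))]
    print(len(mismatches))    # 0 if the two definitions really do agree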
[issue37864] Correct and deduplicate docs on "printable" characters
Change by Greg Price : -- keywords: +patch pull_requests: +15025 stage: -> patch review pull_request: https://github.com/python/cpython/pull/15300 ___ Python tracker <https://bugs.python.org/issue37864> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36502] str.isspace() for U+00A0 and U+202F differs from document
Change by Greg Price : -- pull_requests: +15026 pull_request: https://github.com/python/cpython/pull/15301 ___ Python tracker <https://bugs.python.org/issue36502> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37758] unicodedata checksum-tests only test 1/17th of Unicode's codepoints
Change by Greg Price : -- pull_requests: +15027 pull_request: https://github.com/python/cpython/pull/15302 ___ Python tracker <https://bugs.python.org/issue37758> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37872] Move statics in Python/import.c to top of the file
Change by Greg Price : -- nosy: +Greg Price ___ Python tracker <https://bugs.python.org/issue37872> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37872] Move _Py_IDENTIFIER statics in Python/import.c to top of the file
Change by Greg Price : -- title: Move statics in Python/import.c to top of the file -> Move _Py_IDENTIFIER statics in Python/import.c to top of the file ___ Python tracker <https://bugs.python.org/issue37872> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37872] Move _Py_IDENTIFIER statics in Python/import.c to top of the file
Change by Greg Price : -- components: +Interpreter Core ___ Python tracker <https://bugs.python.org/issue37872> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36502] str.isspace() for U+00A0 and U+202F differs from document
Greg Price added the comment: Thanks Victor for the reviews and merges! (Unmarking 2.7, because https://docs.python.org/2/library/stdtypes.html seems to not have this issue.) -- versions: -Python 2.7 ___ Python tracker <https://bugs.python.org/issue36502> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35518] test_timeout uses blackhole.snakebite.net domain which doesn't exist anymore
Change by Greg Price : -- keywords: +patch pull_requests: +15063 stage: -> patch review pull_request: https://github.com/python/cpython/pull/15349 ___ Python tracker <https://bugs.python.org/issue35518> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35518] test_timeout uses blackhole.snakebite.net domain which doesn't exist anymore
Greg Price added the comment: I ran across this test when looking at especially slow files in the test suite: it turns out that not only is this service currently down, but the snakebite.net domain still exists, and as a result the test can end up waiting 20-30s before learning that the hosts can't be found and the test gets skipped. I agree with Benjamin's and Victor's comments -- the best solution would be to recreate the test, ideally as something that anyone (anyone with Docker installed, perhaps?) can just run locally. For now I've just sent GH-15349 as a one-line fix to skip the test, with a remark pointing at this issue. It's already getting skipped 100% of the time thanks to the handy `support.transient_internet` mechanism -- this just makes the skip (a) explicit in the source code, and (b) a lot faster. :-) -- nosy: +Greg Price ___ Python tracker <https://bugs.python.org/issue35518> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
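For reference, the general shape of such an explicit skip looks like the snippet below; the class name is a placeholder, and the actual change in GH-15349 may be shaped differently.

    import unittest

    @unittest.skip("the blackhole.snakebite.net service is gone; see bpo-35518")
    class BlackholeTimeoutTests(unittest.TestCase):    # placeholder name
        def test_connect_timeout(self):
            pass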
[issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass
Greg Price added the comment: (A bit easy to miss in the way this thread gets displayed, so to highlight in a comment: GH-15265 is up, following the 5 other patches which have now all been merged. That's the one that replaces the length-18 tuples with a dataclass.) -- ___ Python tracker <https://bugs.python.org/issue37760> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36375] PEP 499 implementation: "python -m foo" binds the main module as both __main__ and foo in sys.modules
Change by Greg Price : -- nosy: +Greg Price ___ Python tracker <https://bugs.python.org/issue36375> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37812] Make implicit returns explicit in longobject.c (in CHECK_SMALL_INT)
Change by Greg Price : -- pull_requests: +15140 pull_request: https://github.com/python/cpython/pull/15448 ___ Python tracker <https://bugs.python.org/issue37812> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37812] Make implicit returns explicit in longobject.c (in CHECK_SMALL_INT)
Greg Price added the comment: Thanks, Raymond, for the review on GH-15216! Shortly after posting this issue, I noticed a very similar story in CHECK_BINOP. I've just posted GH-15448 to similarly make returns explicit there. It basically consists of a number of repetitions of -CHECK_BINOP(self, other); +if (!PyLong_Check(self) || !PyLong_Check(other)) { +Py_RETURN_NOTIMPLEMENTED; +} with the names `self` and `other` varying from site to site. Though the expanded version isn't literally a `return` statement, I think it functions almost as well as one for the purposes that explicitness serves here, and much better so than a reference to `CHECK_BINOP`. -- ___ Python tracker <https://bugs.python.org/issue37812> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37936] gitignore file is too broad
New submission from Greg Price : There are a number of files that we track in the repo, but are nevertheless covered by `.gitignore`. This *mostly* doesn't change anything, because Git itself only cares what `.gitignore` has to say about files that aren't already tracked. But: * It affects any new files someone might add that are covered by the same unintentionally-broad patterns. In that case it'd be likely to cause some confused debugging into why Git wasn't seeing the file; or possibly loss of work, if the person didn't notice that the file had never been committed to Git. * More immediately, other tools that aren't Git but consult the Git ignore rules don't necessarily implement this wrinkle. In particular this is unfortunately a WONTFIX bug in ripgrep / `rg`: https://github.com/BurntSushi/ripgrep/issues/1127 . I learned of the `rg` bug (and, for that matter, refreshed myself on just how Git itself handles this case) after some confusion today where I was looking with `rg` for references to a given macro, thought I'd looked at all of them... and then later noticed through `git log -p -S` a reference in `PC/pyconfig.h` with no subsequent change deleting it. Turned out it was indeed there and I needed to take account of it. Here's the list of affected files: $ git ls-files -i --exclude-standard .gitignore Doc/Makefile Lib/test/data/README Modules/Setup PC/pyconfig.h Tools/freeze/test/Makefile Tools/msi/core/core.wixproj Tools/msi/core/core.wxs Tools/msi/core/core_d.wixproj Tools/msi/core/core_d.wxs Tools/msi/core/core_en-US.wxl Tools/msi/core/core_files.wxs Tools/msi/core/core_pdb.wixproj Tools/msi/core/core_pdb.wxs Tools/unicode/Makefile Fortunately this is not hard to fix. The semantics of `.gitignore` have a couple of gotchas, but once you know them it's not really any more complicated to get the behavior exactly right. And I've previously spent the hour or two to read up on it... and when I forget, I just consult my own short notes :), at the top of this file: https://github.com/zulip/zulip/blob/master/.gitignore I have a minimal fix which takes care of all the files above. I'll post that shortly, and I may also write up a more thorough fix that tries to make it easy not to fall into the same Git pitfall again. -- components: Build messages: 350355 nosy: Greg Price priority: normal severity: normal status: open title: gitignore file is too broad versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue37936> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
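To illustrate the kind of over-broad pattern involved (these lines are illustrative, not the actual contents of CPython's .gitignore): an unanchored pattern applies at every depth of the tree, while a leading slash anchors it where it was intended.

    # Unanchored: matches a Makefile in *any* directory, so it would also cover
    # tracked files like Doc/Makefile and Tools/unicode/Makefile.
    Makefile

    # Anchored to the repository root: only the top-level generated Makefile.
    /Makefile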
[issue37936] gitignore file is too broad
Change by Greg Price : -- keywords: +patch pull_requests: +15143 stage: -> patch review pull_request: https://github.com/python/cpython/pull/15451 ___ Python tracker <https://bugs.python.org/issue37936> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37812] Make implicit returns explicit in longobject.c (in CHECK_SMALL_INT)
Greg Price added the comment: > May I suggest directing your efforts towards fixing known bugs or > implementing requested features. Well, I would certainly be grateful for a review on my fix to #18236. ;-) There's also a small docs bug at GH-15301. I do think there's significant value in making code easier to read and less tricky. If the project continues to be successful for a long time to come, then that means the code will be read many, many more times than it's written. But one particular spot where it seems our experiences interestingly differ is: > They are a bit tedious to review and are eating up our time in the back and > forth. As a reviewer I generally find it much less work to review a change when it's intended to have no effect on the code's behavior. First, because it's easier to confirm no effect than to pin down what the effects are; then because the whole set of questions about whether the effects are desirable doesn't arise. As a result I often ask contributors (to Zulip, say) to split a change into a series of small pure refactors, followed by a very focused diff for the behavior change. So that's certainly background to my sending as many PRs that don't change any behavior as PRs that do. I actually have quite a number of draft changes built up over the last few weeks. I've held back on sending them all at once, partly because I've felt I have enough open PRs and I wanted to get a better sense of how reviews go. Perhaps I'll go pick out a couple more of them that are bugfixes, features, and docs to send next. (You didn't mention docs just now, but given the care I see you take in adding to them and in revising What's New, I think we agree that work there is valuable.) -- ___ Python tracker <https://bugs.python.org/issue37812> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37837] add internal _PyLong_FromUnsignedChar() function
Greg Price added the comment: Hmm, I'm a bit confused because: * Your patch at GH-15251 replaces a number of calls to PyLong_FromLong with calls to the new _PyLong_FromUnsignedChar. * That function, in turn, just calls PyLong_FromSize_t. * And that function begins: PyObject * PyLong_FromSize_t(size_t ival) { PyLongObject *v; size_t t; int ndigits = 0; if (ival < PyLong_BASE) return PyLong_FromLong((long)ival); // ... * So, it seems like after your patch we still end up calling PyLong_FromLong at each of these callsites, just after a couple more indirections than before. Given the magic of compilers and of hardware branch prediction, it wouldn't at all surprise me for those indirections to not make anything slower... but if the measurements are coming out *faster*, then I feel like something else must be going on. ;-) Ohhh, I see -- I bet it's that at _PyLong_FromUnsignedChar, the compiler can see that `is_small_int(ival)` is always true, so the whole function just turns into get_small_int. Whereas when compiling a call to PyLong_FromLong from some other file (other translation unit), it can't see that and can't make the optimization. Two questions, then: * How do the measurements look under LTO? I wonder if with LTO the linker is able to make the same optimization that this change helps the compiler make. * Is there a particular reason to specifically call PyLong_FromSize_t? Seems like PyLong_FromLong is the natural default (and what we default to in the rest of the code), and it's what this ends up calling anyway. -- nosy: +Greg Price ___ Python tracker <https://bugs.python.org/issue37837> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com