[issue46223] asyncio cause infinite loop during debug
New submission from aaron: When running code in debug mode, asyncio sometimes enters an infinite loop, as shown by the following traceback: ```
Current thread 0x7f1c15fc5180 (most recent call first):
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 58 in __repr__
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 139 in repr_instance
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 62 in repr1
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 52 in repr
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 40 in
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 40 in _format_args_and_kwargs
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 56 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 47 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 23 in _format_callback_source
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 32 in format_cb
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 37 in _format_callbacks
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 76 in _future_repr_info
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 139 in repr_instance
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 62 in repr1
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 52 in repr
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 38 in
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 38 in _format_args_and_kwargs
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 56 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 23 in _format_callback_source
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 51 in _repr_info
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 61 in __repr__
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 139 in repr_instance
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 62 in repr1
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 52 in repr
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 40 in
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 40 in _format_args_and_kwargs
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 56 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 47 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 23 in _format_callback_source
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 32 in format_cb
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 37 in _format_callbacks
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 76 in _future_repr_info
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 139 in repr_instance
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 62 in repr1
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 52 in repr
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 38 in
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 38 in _format_args_and_kwargs
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 56 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 23 in _format_callback_source
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 51 in _repr_info
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 61 in __repr__
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 139 in repr_instance
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 62 in repr1
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 52 in repr
  File "/root/minic
```
[issue46223] asyncio cause infinite loop during debug
aaron added the comment: "When running code in debug mode" means we're debugging the code. We have used both VS Code and PyCharm, with the same result. -- ___ Python tracker <https://bugs.python.org/issue46223> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46223] asyncio cause infinite loop during debug
aaron added the comment: Regarding adding the '@reprlib.recursive_repr' decorator to 'events.Handle.__repr__()': could you tell me which file I should change, and why? -- ___ Python tracker <https://bugs.python.org/issue46223> ___
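For context, `reprlib.recursive_repr()` replaces a re-entrant call with a placeholder string instead of recursing forever, which is exactly the failure mode in the traceback above. A minimal sketch of how the decorator behaves (the `Node` class is hypothetical, not from asyncio; the suggested change itself would go in Lib/asyncio/events.py, where Handle.__repr__() is defined):

```python
import reprlib

class Node:
    """Hypothetical class whose repr can reach itself through a cycle."""
    def __init__(self):
        self.other = None

    @reprlib.recursive_repr(fillvalue="<...>")
    def __repr__(self):
        return f"Node(other={self.other!r})"

a, b = Node(), Node()
a.other, b.other = b, a   # repr(a) -> repr(b) -> repr(a) without the guard
print(repr(a))            # Node(other=Node(other=<...>))
```

Without the decorator, building the repr would recurse until the interpreter aborts, which matches the repeating frames in the reported traceback.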
[issue8381] New Window Error
New submission from Aaron: Whenever I try to open a new window or open a saved file in IDLE (on a Mac), it freezes. I am running Snow Leopard on a very new Mac. -- components: IDLE messages: 102987 nosy: aaron.the.cow severity: normal status: open title: New Window Error type: crash ___ Python tracker <http://bugs.python.org/issue8381> ___
[issue8381] IDLE 2.6 freezes on OS X 10.6
Aaron added the comment: I just used the built-in Mac software. -- ___ Python tracker <http://bugs.python.org/issue8381> ___
[issue43306] Error in multiprocessing.Pool's initializer doesn't stop execution
Aaron added the comment: I ran into this bug answering this question on Stack Overflow: https://stackoverflow.com/questions/68890437/cannot-use-result-from-multiprocess-pool-directly I have minimized the code required to replicate the behavior, but it boils down to: when using "spawn" to create a multiprocessing pool, if an exception occurs during the bootstrapping phase of the new child, or during the initialization function with any start method, the worker is simply cleaned up and another takes its place (which will also fail). This creates an infinite loop of creating child workers, workers exiting due to an exception, and re-populating the pool with new workers.

```
import multiprocessing

multiprocessing.set_start_method("spawn")  # bootstrapping is only a problem with spawn

def task():
    print("task")

if __name__ == "__main__":
    with multiprocessing.Pool() as p:
        p.apply(task)
else:
    raise Exception("raise in child during bootstrapping phase")

# or

# import multiprocessing
# multiprocessing.set_start_method("fork")  # fork or spawn doesn't matter

def task():
    print("task")

def init():
    raise Exception("raise in child during initialization function")

if __name__ == "__main__":
    with multiprocessing.Pool(initializer=init) as p:
        p.apply(task)
```

If Pool._join_exited_workers could determine whether a worker exited before bootstrapping or before the initialization function completed, it would indicate a likely significant problem. I'm fine with exceptions in the worker target function not being re-raised in the parent; however, it seems the Pool should stop trying if it's failing to create new workers. -- nosy: +athompson6735 versions: +Python 3.9 -Python 3.8 ___ Python tracker <https://bugs.python.org/issue43306> ___
[issue43306] Error in multiprocessing.Pool's initializer doesn't stop execution
Aaron added the comment: What should the behavior be if an exception is raised in a pool worker during bootstrapping or during initialization-function execution? I think an exception should be raised in the process owning the Pool; in the fix I'm tinkering with, I currently just raise a RuntimeError. I can also see an argument for raising different exceptions (or having different behavior) for a bootstrapping error vs. an init-function error, but the implementation is more complicated. My current implementation simply creates a lock in _repopulate_pool_static, acquires it, and waits for the worker function to release it. By polling every 100 ms I also detect whether the process exited before releasing the lock, in which case I raise a RuntimeError. I just started testing this implementation, but I'll provide it for anyone else who wants to test / comment. -- Added file: https://bugs.python.org/file50230/pool.py ___ Python tracker <https://bugs.python.org/issue43306> ___
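A rough sketch of the handshake idea described above, using an Event in place of the lock (the helper name and structure are mine, not the attached patch; the demo uses a thread standing in for a pool worker, and the same shape works with multiprocessing.Process and multiprocessing.Event):

```python
import threading
import time

def wait_for_worker(worker, started, timeout=5.0, poll=0.1):
    # Poll until the worker signals that its initializer finished,
    # raising instead of looping forever if the worker dies first.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if started.wait(poll):
            return
        if not worker.is_alive():
            raise RuntimeError("worker exited before finishing initialization")
    raise RuntimeError("worker initialization timed out")

ok = threading.Event()
good = threading.Thread(target=ok.set)   # an "initializer" that succeeds
good.start()
wait_for_worker(good, ok)                # returns normally
good.join()
```

The polling loop is what turns "worker died during init" from silent re-population into a visible error in the parent, which is the behavior change being proposed.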
[issue4640] optparse doesn’t disallow adding one-dash long options (“-option”)
Aaron added the comment: I came across this bug report and was unable to reproduce the described behavior. I wrote a few test cases demonstrating that the behavior is indeed correct. They pass against both 2.5.2 (the version described in the report) and the latest 2.7. The relevant code is line 602 of optparse.py, in the function Option._set_opt_strings(). I believe this bug can be closed. -- keywords: +patch nosy: +hac.man Added file: http://bugs.python.org/file26338/test_optparse.py.diff ___ Python tracker <http://bugs.python.org/issue4640> ___
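For reference, the rejection of a one-dash long option can be seen directly; `Option._set_opt_strings()` raises `optparse.OptionError` for any option string longer than two characters that does not start with `--`:

```python
import optparse

parser = optparse.OptionParser()
try:
    # a one-dash "long" option, as described in the report title
    parser.add_option("-option", dest="opt")
except optparse.OptionError as exc:
    print("rejected:", exc)   # invalid long option string: must start with --
```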
[issue15882] _decimal.Decimal constructed from tuple
New submission from Aaron: I think I may have found a problem with the code that constructs Infinity from tuples in the C _decimal module.

# pure python (3.x or 2.x)
>>> decimal.Decimal( (0, (0, ), 'F'))
Decimal('Infinity')

# _decimal
>>> decimal.Decimal( (0, (0, ), 'F'))
Traceback (most recent call last):
  File "", line 1, in
decimal.InvalidOperation: []

Also, there is no unit test coverage for constructing these special values from tuples. I have provided some tests that pass with the existing pure-Python code and with the modifications to the _decimal C code. The unit tests can be applied to Python 2.7.x as well, if desired; they would go in the ExplicitConstructionTest.test_explicit_from_tuples() method. -- components: Extension Modules files: _decimal.diff keywords: patch messages: 170017 nosy: hac.man priority: normal severity: normal status: open title: _decimal.Decimal constructed from tuple versions: Python 3.3 Added file: http://bugs.python.org/file27144/_decimal.diff ___ Python tracker <http://bugs.python.org/issue15882> ___
[issue15882] _decimal.Decimal constructed from tuple
Aaron added the comment: I did not encounter this in a regular application. I do use the decimal module, and was excited to see the adoption of a faster C version, so I was just reading through the code to see how it worked. I can't think of a situation where I would need to construct a decimal from a tuple and not a string or some other numeric type, though. For what it's worth, I think that as long as construction from tuples is supported, Decimal(d.as_tuple()) should work for all Decimal objects d. -- ___ Python tracker <http://bugs.python.org/issue15882> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
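The round-trip property argued for above can be checked directly; a quick sketch (run against a current CPython, where `decimal` uses the C implementation by default and the fix is in place):

```python
import decimal

# construction of Infinity from a tuple (the failing case from the report)
inf = decimal.Decimal((0, (0,), 'F'))
print(inf)    # Infinity

# Decimal(d.as_tuple()) should work for special values too
d = decimal.Decimal('-Infinity')
assert decimal.Decimal(d.as_tuple()) == d
```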
[issue2716] Reimplement audioop because of copyright issues
Aaron added the comment: The license from http://sox.sourcearchive.com/documentation/12.17.7/g711_8c-source.html

/*
 * This source code is a product of Sun Microsystems, Inc. and is provided
 * for unrestricted use. Users may copy or modify this source code without
 * charge.
 *
 * SUN SOURCE CODE IS PROVIDED AS IS WITH NO WARRANTIES OF ANY KIND INCLUDING
 * THE WARRANTIES OF DESIGN, MERCHANTIBILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE.
 *
 * Sun source code is provided with no support and without any obligation on
 * the part of Sun Microsystems, Inc. to assist in its use, correction,
 * modification or enhancement.
 *
 * SUN MICROSYSTEMS, INC. SHALL HAVE NO LIABILITY WITH RESPECT TO THE
 * INFRINGEMENT OF COPYRIGHTS, TRADE SECRETS OR ANY PATENTS BY THIS SOFTWARE
 * OR ANY PART THEREOF.
 *
 * In no event will Sun Microsystems, Inc. be liable for any lost revenue
 * or profits or other special, indirect and consequential damages, even if
 * Sun has been advised of the possibility of such damages.
 *
 * Sun Microsystems, Inc.
 * 2550 Garcia Avenue
 * Mountain View, California 94043
 */

That seems compatible with Python's licensing, no? It seems like adding this license text to the file, and also to the documentation at http://docs.python.org/license.html#licenses-and-acknowledgements-for-incorporated-software, would make this a non-issue. Assessment of the module's contents and whether it should be rewritten or removed seems like a separate issue. I could write up a patch if people think this would solve the problem. -- nosy: +hac.man ___ Python tracker <http://bugs.python.org/issue2716> ___
[issue13212] json library is decoding/encoding when it should not
Aaron added the comment: I think it's worth pointing out that both Firefox and Chrome support the non-standard JSON that Python supports (serializing and deserializing basic types). I'm guessing that communicating with web browsers is the vast majority of JSON IPC. That is to say, supporting the de-facto standard implemented by web browsers may be better than adhering to the exact specifications of the RFC. Maybe someone else wants to check what IE, Safari, Opera, and the various phone browsers allow, as that might influence the discussion one way or another. -- nosy: +hac.man ___ Python tracker <http://bugs.python.org/issue13212> ___
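Concretely, the behavior in question is accepting bare scalars at the top level, which RFC 4627 did not allow (it required an object or array) but browsers' JSON.parse and Python's json module both accept; later specs (RFC 7159/8259) eventually legitimized this:

```python
import json

# a bare number or string at the top level, not wrapped in {} or []
print(json.loads("42"))        # 42
print(json.dumps("hello"))     # "hello"
```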
[issue22719] os.path.isfile & os.path.exists but in while loop
New submission from Aaron: When using os.path.isfile() and os.path.exists() in a while loop under certain conditions, os.path.isfile() returns True for paths that do not actually exist. Conditions: The folder "C:\Users\EAARHOS\Desktop\Python Review" exists, as do the files "C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py" and "C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py.bak". (Note that I also tested this on a path that contained no spaces, and got the same results.) Code:

>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> while os.path.isfile(bak_path):
...     bak_path += '.bak'
...     if not os.path.isfile(bak_path):
...         break
Traceback (most recent call last):
  File "", line 3, in
  File "C:\Installs\Python33\Lib\genericpath.py", line 29, in isfile
    st = os.stat(path)
ValueError: path too long for Windows
>>> os.path.isfile(r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py.bak.bak")
False
>>>
>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> while os.path.exists(bak_path):
...     bak_path += '.bak'
...     if not os.path.exists(bak_path):
...         break
Traceback (most recent call last):
  File "", line 3, in
  File "C:\Installs\Python33\Lib\genericpath.py", line 18, in exists
    st = os.stat(path)
ValueError: path too long for Windows
>>> os.path.exists(r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py.bak.bak")
False
>>>
>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> os.path.isfile(bak_path), os.path.exists(bak_path)
(True, True)
>>> bak_path += '.bak'
>>> os.path.isfile(bak_path), os.path.exists(bak_path)
(True, True)
>>> bak_path += '.bak'
>>> os.path.isfile(bak_path), os.path.exists(bak_path)
(True, True)
>>> bak_path
'C:\\Users\\EAARHOS\\Desktop\\Python Review\\baseExcel.py.bak.bak'
>>> temp = bak_path
>>> os.path.isfile(temp), os.path.exists(temp)
(True, True)
>>> os.path.isfile('C:\\Users\\EAARHOS\\Desktop\\Python Review\\baseExcel.py.bak.bak'), os.path.exists('C:\\Users\\EAARHOS\\Desktop\\Python Review\\baseExcel.py.bak.bak')
(False, False)
>>>

On the other hand, this code works as expected:

>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> while os.path.isfile(bak_path):
...     temp = bak_path + '.bak'
...     bak_path = temp
...
>>> bak_path
'C:\\Users\\EAARHOS\\Desktop\\Python Review\\baseExcel.py.bak.bak'
>>>
>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> while os.path.exists(bak_path):
...     temp = bak_path + '.bak'
...     bak_path = temp
...
>>> bak_path
'C:\\Users\\EAARHOS\\Desktop\\Python Review\\baseExcel.py.bak.bak'
>>>

-- components: Windows messages: 229936 nosy: hosford42, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.path.isfile & os.path.exists but in while loop type: behavior versions: Python 3.3 ___ Python tracker <http://bugs.python.org/issue22719> ___
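Setting the reported caching anomaly aside, the loop being built here, finding an unused backup name by appending '.bak', can be written so each candidate is checked exactly once (a sketch with a hypothetical helper name, not related to the bug itself):

```python
import os

def unused_backup_name(path):
    # Append '.bak' until the name no longer exists on disk.
    while os.path.exists(path):
        path += '.bak'
    return path

# a path that doesn't exist comes back unchanged
print(unused_backup_name("definitely-not-a-real-file.py"))
```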
[issue22719] os.path.isfile & os.path.exists bug in while loop
Changes by Aaron : -- title: os.path.isfile & os.path.exists but in while loop -> os.path.isfile & os.path.exists bug in while loop ___ Python tracker <http://bugs.python.org/issue22719> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22719] os.path.isfile & os.path.exists bug in while loop
Aaron added the comment: Interesting. It continues to reuse the last one's stats once the path is no longer valid.

>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=8162774324652726, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=29874, st_atime=1413389016, st_mtime=1413389016, st_ctime=1413388655)
>>> bak_path += '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>> bak_path += '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>> bak_path += '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>> bak_path += '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>>

On Fri, Oct 24, 2014 at 1:49 PM, eryksun wrote:
>
> eryksun added the comment:
>
> What do you get for os.stat?
>
> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
> print(os.stat(bak_path))
> bak_path += '.bak'
> print(os.stat(bak_path))
> bak_path += '.bak'
> print(os.stat(bak_path))  # This should raise FileNotFoundError
>
> --
> nosy: +eryksun
>
> ___
> Python tracker
> <http://bugs.python.org/issue22719>
> ___

-- ___ Python tracker <http://bugs.python.org/issue22719> ___
[issue22719] os.path.isfile & os.path.exists bug in while loop
Aaron added the comment: If I use a separate temp variable, the bug doesn't show, but if I use the same variable, even with + instead of +=, it still happens.

>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=8162774324652726, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=29874, st_atime=1413389016, st_mtime=1413389016, st_ctime=1413388655)
>>> temp = bak_path + '.bak'
>>> bak_path = temp
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>> temp = bak_path + '.bak'
>>> bak_path = temp
>>> print(os.stat(bak_path))
Traceback (most recent call last):
  File "", line 1, in
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\\Users\\EAARHOS\\Desktop\\Python Review\\baseExcel.py.bak.bak'
>>> bak_path = r"C:\Users\EAARHOS\Desktop\Python Review\baseExcel.py"
>>> bak_path = bak_path + '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>> bak_path = bak_path + '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>> bak_path = bak_path + '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>> bak_path = bak_path + '.bak'
>>> print(os.stat(bak_path))
nt.stat_result(st_mode=33206, st_ino=42502721483352490, st_dev=0, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1413389088, st_mtime=1413389088, st_ctime=1413388654)
>>>

-- ___ Python tracker <http://bugs.python.org/issue22719> ___
[issue22719] os.path.isfile & os.path.exists bug in while loop
Aaron added the comment: Python 3.3.0, Windows 7, both 64-bit. Has it been resolved with the newer version, then?

On Mon, Nov 3, 2014 at 11:15 PM, Zachary Ware wrote:
>
> Zachary Ware added the comment:
>
> Aaron, what version of Python are you using on what version of Windows?
> Also, 32 or 64 bit on both?
>
> I can't reproduce this with any Python 3.3.6 or newer on 64-bit Windows 8.1.
>
> --
> ___
> Python tracker
> <http://bugs.python.org/issue22719>
> ___

-- ___ Python tracker <http://bugs.python.org/issue22719> ___
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
New submission from Aaron Gokaslan: Hello, I am a maintainer with the PyBind11 project. We have been following the 3.11 development branch and have noticed an issue we are encountering with changes to the C-API. Particularly, we have an edge case in our overloading dispatch mechanism that we used to solve by inspecting the "self" argument in the co_varnames member of the Python frame object: (https://github.com/pybind/pybind11/blob/a224d0cca5f1752acfcdad8e37369e4cda42259e/include/pybind11/pybind11.h#L2380). However, in the new struct, the co_varnames object can now be null, and there doesn't appear to be any public API to populate it on the C-API side. Accessing it via the "inspect" module still works, but that requires us to run a Python code snippet in a potentially very hot code path: (https://github.com/pybind/pybind11/blob/a224d0cca5f1752acfcdad8e37369e4cda42259e/include/pybind11/pybind11.h#L2408). As such, we were hoping that either there is some new API change we have missed, or that there is some other modern (and hopefully somewhat stable) way to access the API so we can emulate the old behavior with the C-API. -- components: C API messages: 409100 nosy: Skylion007 priority: normal severity: normal status: open title: Get "self" args or non-null co_varnames from frame object with C-API type: enhancement versions: Python 3.11 ___ Python tracker <https://bugs.python.org/issue46166> ___
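The check being described, expressed in pure Python for clarity (the C-API version reads `co_varnames[0]` and `f_locals` from the caller's frame; the function and class names here are mine, not pybind11's):

```python
import sys

def caller_self():
    # Inspect the caller's frame: if its first positional variable is
    # named "self", return the bound object, else None.
    frame = sys._getframe(1)
    code = frame.f_code
    if code.co_argcount and code.co_varnames[0] == "self":
        return frame.f_locals["self"]
    return None

class Widget:
    def method(self):
        return caller_self()

w = Widget()
assert w.method() is w   # the dispatcher can now detect re-entry on w
```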
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment: We didn't want to read colocalsplus directly because we were worried about the stability of that approach and about code complexity / readability. Also, I wasn't aware that colocalsplus would work, or whether it was lazily populated as well. The functions CPython uses to extract the args from colocalsplus do not seem to be public and would need to be reimplemented by PyBind11, right? That seems very brittle as we try to support future Python versions and may break in the future. Having a somewhat stable C-API to query this information seems like it would be the best solution, but I am open to suggestions on how best to proceed. How would you all recommend PyBind11 proceed with supporting 3.11, if not via a C-API addition? The PyBind11 authors want to resolve this before the API becomes too locked down for 3.11. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment:

> `PyCodeObject_GetVariableName()` and `PyCodeObject_GetVariableKind()` work?

Some public getters such as these functions would be ideal.

> OOI, how do you cope with non-local self?

We only care about checking self to prevent an infinite recursion in our method dispatch code, so I am not sure a non-local self would be applicable in this case? Correct me if I am wrong. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment: I saw the latest Python 3.11 alpha 5 release notes on the frame API changes. Do the notes mean the only officially supported way of accessing co_varnames is now through the Python interface and the inspect module, i.e. by using PyObject_GetAttrString? Also, the documentation in the What's New is a bit unclear, as PyObject_GetAttrString(frame, "f_locals") doesn't work for a PyFrameObject*, only a PyObject*, and it doesn't describe how to get the PyObject* version of the frame object. The same problem also happens when trying to access the co_varnames field of the PyCodeObject*. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment: The frame object I am referring to was:

PyFrameObject *frame = PyThreadState_GetFrame(PyThreadState_Get());

This frame cannot be used with PyObject_GetAttrString. Is there any way to get the PyObject* associated with a PyFrameObject*? It seems weird that some functionality is just not accessible using the stable-ABI PyThreadState_GetFrame. To elaborate, I was referring to the migration guide in the changelog:

f_code: removed, use PyFrame_GetCode() instead. Warning: the function returns a strong reference, need to call Py_DECREF().
f_back: changed (see below), use PyFrame_GetBack().
f_builtins: removed, use PyObject_GetAttrString(frame, "f_builtins"). // this frame object actually has to be a PyObject*; the old one was a PyFrameObject*. Dropping this in does not work.
f_globals: removed, use PyObject_GetAttrString(frame, "f_globals").
f_locals: removed, use PyObject_GetAttrString(frame, "f_locals").
f_lasti: removed, use PyObject_GetAttrString(frame, "f_lasti").

I tried importing sys._getframe(), but that gave an attribute error, interestingly enough. Running a full code snippet works: https://github.com/pybind/pybind11/blob/96b943be1d39958661047eadac506745ba92b2bc/include/pybind11/pybind11.h#L2429, but it is really slow and we would like to avoid having to rely on it. Not to mention that relying on a function whose name starts with an underscore seems like something that really should be avoided. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
[issue1057] Incorrect URL with webbrowser and firefox under Gnome
New submission from Aaron Bingham: Under Gnome, Firefox will open the wrong URL when launched by webbrowser. For example, after running the following interactive session:

[EMAIL PROTECTED]:~> python
Python 2.5.1 (r251:54863, Jun 6 2007, 13:42:30)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import webbrowser
>>> webbrowser.open('http://www.python.org')
True

Firefox attempts to open the URL file:///home/bingham/%22http://www.python.org%22. This is caused by a bug in the Python standard library's webbrowser module that only affects machines running Gnome. On Gnome, webbrowser runs the command

gconftool-2 -g /desktop/gnome/url-handlers/http/command 2>/dev/null

to find the web browser, which prints out a browser command line like

/pkgs/Firefox/2.0/firefox "%s"

The quotes around "%s" are preserved when passing the command-line arguments. The quotes prevent Firefox from recognizing the URL, and Firefox falls back to treating it as a file name. The webbrowser module already handles extra quoting around the URL for the BROWSER environment variable, and this same treatment should be applied to the result of gconftool-2. The BROWSER environment variable issue, now fixed, is described at http://bugs.python.org/issue1684254. The present issue was discussed in an Ubuntu bug report (https://bugs.launchpad.net/ubuntu/+source/python2.5/+bug/83974) but was not resolved. -- components: Library (Lib) messages: 55421 nosy: bingham severity: normal status: open title: Incorrect URL with webbrowser and firefox under Gnome type: behavior versions: Python 2.5 __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1057> __
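The fix amounts to tokenizing the gconftool-2 output with shell quoting rules before substituting the URL; a sketch of the idea (not the actual patch):

```python
import shlex

cmdline = '/pkgs/Firefox/2.0/firefox "%s"'   # as returned by gconftool-2
url = 'http://www.python.org'

# naive whitespace split keeps the quotes, so the browser receives
# "http://www.python.org" including the literal quote characters
naive = [part.replace('%s', url) for part in cmdline.split()]
print(naive)    # ['/pkgs/Firefox/2.0/firefox', '"http://www.python.org"']

# shlex applies shell quoting rules first, stripping the quotes
fixed = [part.replace('%s', url) for part in shlex.split(cmdline)]
print(fixed)    # ['/pkgs/Firefox/2.0/firefox', 'http://www.python.org']
```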
[issue1662581] the re module can perform poorly: O(2**n) versus O(n**2)
Aaron Swartz added the comment: Just a note for those who think this is a purely theoretical issue: We've been using the python-markdown module on our web app for a while, only to notice the app has been repeatedly going down. After tracking down the culprit, we found that a speech from Hamlet passed to one of the Markdown regular expressions caused this exponential behavior, freezing up the app. -- nosy: +aaronsw _ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1662581> _
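A safely bounded illustration of the exponential behaviour (the pattern here is a textbook pathological case, not the actual Markdown regex from the report):

```python
import re
import time

# Nested quantifiers force the backtracking engine to try roughly 2**n
# ways of splitting the run of 'a's before giving up; the trailing 'b'
# guarantees the match fails, so all of that work is wasted.
pattern = re.compile(r'(a+)+$')

timings = []
for n in (10, 14, 18):
    subject = 'a' * n + 'b'
    start = time.perf_counter()
    assert pattern.match(subject) is None
    timings.append(time.perf_counter() - start)
# Each step of 4 in n multiplies the work by roughly 2**4; push n much
# higher and the match effectively never returns.
```

Keeping n small keeps this demo fast; the same pattern with n around 30 would already freeze a process for minutes, which is the behaviour described above.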
[issue13106] Incorrect pool.py distributed with Python 2.7 windows 32bit
New submission from Aaron Staley : The multiprocessing/pool.py distributed with the Python 2.7.2 Windows installer is different from the one distributed with the 64-bit Windows installer or the source tarball, and is buggy. Specifically, see Pool._terminate_pool:

def _terminate_pool(cls, taskqueue, inqueue, outqueue, pool,
                    worker_handler, task_handler, result_handler, cache):
    # this is guaranteed to only be called once
    debug('finalizing pool')
    worker_handler._state = TERMINATE
    task_handler._state = TERMINATE
    taskqueue.put(None)  # THIS LINE MISSING!

Without that line, termination may deadlock during Pool._help_stuff_finish. The consequence to the user is the interpreter not shutting down. -- components: Windows messages: 144934 nosy: Aaron.Staley priority: normal severity: normal status: open title: Incorrect pool.py distributed with Python 2.7 windows 32bit versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue13106> ___
[issue13106] Incorrect pool.py distributed with Python 2.7 windows 32bit
Aaron Staley added the comment: Never mind; looks like this functionality was moved to handle_workers. I had inadvertently been testing under a modified pool.py. Sorry for the inconvenience! -- resolution: -> invalid status: open -> closed ___ Python tracker <http://bugs.python.org/issue13106> ___
[issue13060] allow other rounding modes in round()
Aaron Robson added the comment: When I run into it I have to bodge around it with code like the one below. I've only ever used round-half-up; has anyone here ever used banker's rounding by choice? For reference here are the other options: http://en.wikipedia.org/wiki/Rounding#Tie-breaking

def RoundHalfUp(number):
    '''http://en.wikipedia.org/wiki/Rounding#Round_half_up
    0.5 and above round up else round down.
    (Only correct for non-negative numbers.)
    '''
    trunc = int(number)
    fractionalPart = number - trunc
    if fractionalPart < 0.5:
        return trunc
    else:
        ceil = trunc + 1
        return ceil

-- nosy: +AaronR ___ Python tracker <http://bugs.python.org/issue13060> ___
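For what it's worth, the decimal module already exposes these tie-breaking modes, so a round-half-up helper can be sketched on top of it (the helper name and signature below are mine, not part of any proposal):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x, places=0):
    """Round x to `places` decimal places, with ties (.5) rounding
    away from zero rather than to the nearest even digit."""
    exponent = Decimal(1).scaleb(-places)   # places=2 -> Decimal('0.01')
    # Going through str(x) avoids binary-float representation surprises
    # like Decimal(2.675) == 2.67499999...
    return float(Decimal(str(x)).quantize(exponent, rounding=ROUND_HALF_UP))
```

For example, round_half_up(2.5) gives 3.0 and round_half_up(2.675, 2) gives 2.68, whereas Python 3's built-in round() uses banker's rounding and returns 2 for round(2.5).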
[issue13332] execfile fixer produces code that does not close the file
Changes by Aaron Meurer : -- nosy: +Aaron.Meurer ___ Python tracker <http://bugs.python.org/issue13332> ___
[issue13666] datetime documentation typos
Aaron Maenpaa added the comment: This patch fixes the rzinfo typo as well as the GMT2 issue (GMT +2 should behave exactly the same as GMT +1 with regard to DST; its base offset should simply be +2 hours instead of +1). This does not, however, address the comment about the first line of tzinfo.utcoffset(). The fact that tzinfo.utcoffset() should return a timedelta or None is addressed later in the same paragraph, so I'm not sure the proposed change is an improvement. -- keywords: +patch nosy: +zacherates Added file: http://bugs.python.org/file24161/issue13666.diff ___ Python tracker <http://bugs.python.org/issue13666> ___
[issue13666] datetime documentation typos
Aaron Maenpaa added the comment: Looks like the issue of the first line of utcoffset() was also raised in issue 8810. -- ___ Python tracker <http://bugs.python.org/issue13666> ___
[issue12005] modulo result of Decimal differs from float/int
Aaron Maenpaa added the comment: Here is a patch that adds an explanation for the difference in behaviour to the FAQ section of the Decimal documentation. -- keywords: +patch nosy: +zacherates Added file: http://bugs.python.org/file24162/issue12005.diff ___ Python tracker <http://bugs.python.org/issue12005> ___
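The difference being documented is easy to demonstrate:

```python
from decimal import Decimal

# int/float % follows floor division: the result takes the sign of the
# divisor.
assert -7 % 4 == 1
assert 7 % -4 == -1

# Decimal's % truncates toward zero instead: the result takes the sign
# of the dividend, following the General Decimal Arithmetic specification.
assert Decimal(-7) % Decimal(4) == Decimal(-3)
assert Decimal(7) % Decimal(-4) == Decimal(3)
```

In both cases the invariant a == (a // b) * b + (a % b) holds; the two types simply pair the modulo with different division conventions.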
[issue13730] Grammar mistake in Decimal documentation
New submission from Aaron Maenpaa : In the sentence: "In contrast, numbers like 1.1 and 2.2 do not have an exact representations in binary floating point." there is a mismatch in number between "an" and "representations". I suggest removing "an" to make the whole thing plural. A patch is attached. -- assignee: docs@python components: Documentation files: plural.diff keywords: patch messages: 150813 nosy: docs@python, zacherates priority: normal severity: normal status: open title: Grammar mistake in Decimal documentation versions: Python 3.3 Added file: http://bugs.python.org/file24164/plural.diff ___ Python tracker <http://bugs.python.org/issue13730> ___
[issue13731] Awkward phrasing in Decimal documentation
New submission from Aaron Maenpaa : The paragraph: "The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants." ... has some awkward phrasing to my ear. I've attached a patch with a proposed alternative. -- assignee: docs@python components: Documentation files: rephrase.diff keywords: patch messages: 150814 nosy: docs@python, zacherates priority: normal severity: normal status: open title: Awkward phrasing in Decimal documentation versions: Python 3.3 Added file: http://bugs.python.org/file24165/rephrase.diff ___ Python tracker <http://bugs.python.org/issue13731> ___
[issue13587] Correcting the typos error in Doc/howto/urllib2.rst
Aaron Maenpaa added the comment: Here's a patch that makes the WWW-Authenticate headers in howto/urllib2 agree with RFC 2617. -- keywords: +patch nosy: +zacherates Added file: http://bugs.python.org/file24166/issue13587.diff ___ Python tracker <http://bugs.python.org/issue13587> ___
[issue13731] Awkward phrasing in Decimal documentation
Aaron Maenpaa added the comment: That's fine. I'm not particularly attached to that phrasing. The one thing I would push for is to add a comma to "... decimal is preferred in accounting applications which have strict equality invariants." ... since, as far as I can tell, "which have strict equality invariants" is supposed to be a parenthetical statement explaining why accounting applications prefer decimal arithmetic, rather than a constraint limiting the preference for decimal arithmetic to only those accounting applications that have "strict equality invariants". -- ___ Python tracker <http://bugs.python.org/issue13731> ___
[issue13050] RLock support the context manager protocol but this is not documented
Aaron Maenpaa added the comment: Here is a patch that adds a note about using Locks, RLocks, Conditions, and Semaphores as context managers to each of their descriptions, as well as a link to the "Using locks, conditions, and semaphores in the with statement" section. -- keywords: +patch nosy: +zacherates versions: +Python 3.3 Added file: http://bugs.python.org/file24167/issue13050.diff ___ Python tracker <http://bugs.python.org/issue13050> ___
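The behaviour the patch documents, in its shortest form (the shared counter is purely illustrative):

```python
import threading

counter = 0
lock = threading.RLock()

def bump():
    global counter
    with lock:          # acquire() on entry, release() on exit
        with lock:      # an RLock may be re-acquired by the thread holding it
            counter += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The nested `with lock:` is the RLock-specific part; a plain Lock would deadlock there, which is why documenting the context-manager support per class is useful.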
[issue13731] Awkward phrasing in Decimal documentation
Aaron Maenpaa added the comment: I can understand what was meant. You're welcome to close the issue. Sorry for the nitpick. -- ___ Python tracker <http://bugs.python.org/issue13731> ___
[issue12534] Tkinter doesn't support property attributes
New submission from Aaron Stevens : When using Tkinter in Python 2.6.6, it is impossible to use new-style properties, because the base classes (Misc, Pack, Place, and Grid) are not new-style classes. It is easily fixed by changing the class declarations, i.e.: class Misc: becomes class Misc(object): -- components: Tkinter messages: 140148 nosy: bheklilr priority: normal severity: normal status: open title: Tkinter doesn't support property attributes type: behavior versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue12534> ___
[issue12534] Tkinter doesn't support property attributes
Aaron Stevens added the comment: I forgot to add that this is a problem only when inheriting from a Tkinter widget, such as a Frame. -- ___ Python tracker <http://bugs.python.org/issue12534> ___
[issue12611] 2to3 crashes when converting doctest using reduce()
Changes by Aaron Meurer : -- nosy: +Aaron.Meurer ___ Python tracker <http://bugs.python.org/issue12611> ___
[issue12613] itertools fixer fails
Changes by Aaron Meurer : -- nosy: +Aaron.Meurer ___ Python tracker <http://bugs.python.org/issue12613> ___
[issue12616] zip fixer fails on zip()[:-1]
Changes by Aaron Meurer : -- nosy: +Aaron.Meurer ___ Python tracker <http://bugs.python.org/issue12616> ___
[issue12611] 2to3 crashes when converting doctest using reduce()
Aaron Meurer added the comment: Vladimir will need to confirm how to reproduce this exactly, but here is the corresponding SymPy issue: http://code.google.com/p/sympy/issues/detail?id=2605. The problem is with the sympy/ntheory/factor_.py file at https://github.com/sympy/sympy/blob/sympy-0.7.1.rc1/sympy/ntheory/factor_.py#L453 (linking to the file from our release candidate, as a workaround is likely to be pushed to master soon). Vladimir, can you confirm that this particular version of the file reproduces the problem? -- status: pending -> open ___ Python tracker <http://bugs.python.org/issue12611> ___
[issue12664] Path variable - Windows installer
New submission from Aaron Robson : One of the main barriers to getting a working development environment for me was discovering that I needed the Path variable on Windows, then learning what it was and how to set it up. I propose an option in the installer (perhaps even on by default) to let the user choose whether Python should be added to the Path. My apologies if this has been raised before (my searches in the tracker didn't turn up any similar problems); I am quite new to the issue tracker. -- components: Installation messages: 141468 nosy: AaronR priority: normal severity: normal status: open title: Path variable - Windows installer type: feature request versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2 ___ Python tracker <http://bugs.python.org/issue12664> ___
[issue3561] Windows installer should add Python and Scripts directories to the PATH environment variable
Changes by Aaron Robson : -- nosy: +AaronR ___ Python tracker <http://bugs.python.org/issue3561> ___
[issue12006] strptime should implement %V or %u directive from libc
Changes by Aaron Robson : -- nosy: +AaronR ___ Python tracker <http://bugs.python.org/issue12006> ___
[issue12942] Shebang line fixer for 2to3
New submission from Aaron Meurer : As suggested in this thread in the Python porting list (http://mail.python.org/pipermail/python-porting/2011-September/000231.html), it would be nice if 2to3 had a fixer that translated shebang lines from #! /usr/bin/env python to #! /usr/bin/env python3 Also relevant here is the draft PEP 394 (http://www.python.org/dev/peps/pep-0394/), which apparently is likely to be accepted. -- components: 2to3 (2.x to 3.0 conversion tool) messages: 143749 nosy: Aaron.Meurer priority: normal severity: normal status: open title: Shebang line fixer for 2to3 ___ Python tracker <http://bugs.python.org/issue12942> ___
[issue2090] __import__ with fromlist=
Aaron Sterling added the comment: FWIW, I also get this behavior on 2.6.5 and there are claims that it occurs on 2.6.4 and 3.1.1. see http://stackoverflow.com/questions/3745221/import-calls-init-py-twice/3745273#3745273 -- nosy: +Aaron.Sterling versions: +Python 2.6 -Python 2.7 ___ Python tracker <http://bugs.python.org/issue2090> ___
[issue2090] __import__ with fromlist=
Changes by Aaron Sterling : -- versions: +Python 2.7, Python 3.1 ___ Python tracker <http://bugs.python.org/issue2090> ___
[issue11314] Subprocess suffers 40% process creation overhead penalty
New submission from Aaron Sherman : I wrote some code a while back which used os.popen. I recently got a warning about popen being deprecated so I tried a test with the new subprocess module. In that test, subprocess.Popen appears to have a 40% process creation overhead penalty over os.popen, which really isn't small. It seems that the difference may be made up of some heavy mmap-related work that's happening in my version of python, and that might be highly platform specific, but the mmap/mremap/munmap calls being made in my sample subprocess code aren't being made at all in the os.popen equivalent. Now, before someone says, "process creation is trivial, so a 40% hit is acceptable because it's 40% of a trivial part of your execution time," keep in mind that many Python applications are heavily process-creation focused. In my case that means monitoring, but I could also imagine this having a substantial impact on Web services and other applications that spend almost all of their time creating child processes. For a trivial script, subprocess is fine as is, but for these demanding applications, subprocess represents a significant source of pain. Anyway my testing, results and conclusions are written up here: http://essays.ajs.com/2011/02/python-subprocess-vs-ospopen-overhead.html -- components: Library (Lib) messages: 129319 nosy: Aaron.Sherman priority: normal severity: normal status: open title: Subprocess suffers 40% process creation overhead penalty type: resource usage versions: Python 2.6, Python 2.7 ___ Python tracker <http://bugs.python.org/issue11314> ___
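A crude version of this microbenchmark can be reproduced on any POSIX box (the numbers will vary wildly by platform and Python version; `true` stands in for the trivial "exit 0" child):

```python
import os
import subprocess
import time

def bench(spawn, n=20):
    """Time n sequential child-process creations, waiting on each one."""
    start = time.perf_counter()
    for _ in range(n):
        spawn()
    return time.perf_counter() - start

# os.popen: popen(3) under the hood; .close() waits for the child.
t_os_popen = bench(lambda: os.popen('exit 0').close())

# subprocess: fork/exec managed from Python-level code.
t_subprocess = bench(lambda: subprocess.call(['true']))

print('os.popen:   %.4fs' % t_os_popen)
print('subprocess: %.4fs' % t_subprocess)
```

This only measures wall-clock spawn-and-wait time, which is the quantity the report's 40% figure refers to; it makes no attempt to separate out the mmap-related work mentioned above.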
[issue11314] Subprocess suffers 40% process creation overhead penalty
Aaron Sherman added the comment: "Python 3.2 has a _posixsubprocess: some parts of subprocess are implemented in C. Can you try it?" I don't have a Python 3 installation handy, but I can see what I can do tomorrow evening to get one set up and try it out. "disagree with the idea that spawning "exit 0" subprocesses is a performance critical operation ;)" How else would you performance test process creation overhead? By introducing as little additional overhead as possible, it's possible for me to get fairly close to measuring just the subprocess module's overhead. If you stop to think about it, though, this is actually a shockingly huge percent increase. In any process creation scenario I'm familiar with, its overhead should be so small that you could bump it up several orders of magnitude and still not compete with executing a shell and asking it to do anything, even just exit. And yet, here we are. 40% I understand that most applications won't be running massive numbers of external commands in parallel, and that's the only way this overhead will really matter (at least that I can think of). But in the two scenarios I mentioned (monitoring and Web services such as CGI, neither of which is particularly rare), this is going to make quite a lot of difference, and if you're going to deprecate os.popen, I would think that making sure your proposed replacement was at least nearly as performant would be standard procedure, no? "I think your analysis is wrong. These mmap() calls are for anonymous memory, most likely they are emitted by the libc's malloc() to get some memory from the kernel. In other words they will be blazingly fast." The mremap might be a bit of a performance hit, but it's true that these calls should not be substantially slowing execution... then again, they might indicate that there's substantial amounts of work being done for which memory allocation is required, and as such may simply be a symptom of the actual problem. 
-- ___ Python tracker <http://bugs.python.org/issue11314> ___
[issue11314] Subprocess suffers 40% process creation overhead penalty
Aaron Sherman added the comment: "That's why I asked for absolute numbers for the overhead difference." Did you not follow the link in my first post? I got pretty detailed, there. "os.popen just calls the popen(3) library call, which just performs a fork/execve and some dup/close in between. subprocess.Popen is implemented in Python, so it doesn't come as a surprise that it's slower in your example." Well, of course. I don't think I was ever trying to claim that os.popen vs. subprocess without a shell was going to compare favorably. I'm not advocating os.popen, here, I'm just trying to figure out where this massive overhead is coming from. I think the answer is just, "pure Python is fundamentally slower, and that's not a surprise." Now, if the 3.x subprocess work that was described here, gets back-ported into 2.x and is included with future releases, that will definitely serve to improve the situation, and might well render much of this moot (testing will tell). However, I do think that doing the performance testing before deprecating the previous interface would have been a good idea... -- ___ Python tracker <http://bugs.python.org/issue11314> ___
[issue11314] Subprocess suffers 40% process creation overhead penalty
Aaron Sherman added the comment: I think it's still safe to say that high performance applications which need to create many hundreds or thousands of children (e.g. large monitoring systems) will still need another solution that isn't subprocess. That being said, you're right that no one is going to care about the extra overhead of subprocess in a vacuum, and most applications fork one or a very small number of processes at a time and typically end up waiting for orders of magnitude more time for their output than they spend creating the process in the first place. As I said when I opened this issue, I wasn't terribly concerned with most applications. That being said, I can't really submit a full-blown monitoring system against this bug, so the best I could do would something that did lots of os.popens or subprocess.Popens in parallel in a contrived way and see how the performance scales as I tune "lots" to different values. Sadly, the time I have for doing that needs to be spent writing other code, so I'll leave this closed and let someone else raise the issue in the future if they run into it. I can always build a dispatcher in C and communicate with it via IPC to get around the immediate concern of scalability. -- ___ Python tracker <http://bugs.python.org/issue11314> ___
[issue10776] os.utime returns an error on NTFS-3G partition
New submission from Aaron Masover : I'm working with Anki (http://ankisrs.net/) on a linux NTFS-3G partition. Anki requires access to modification times in order to handle its backup files. This works fine on my ext3 partition, but on an NTFS partition accessed with NTFS-3G an error is returned: you...@yinghuochong:/storage/文件/anki/decks$ python -c 'import shutil,os; shutil.copyfile(u"\u6f22\u5b57.anki", "new.anki"); os.utime("new.anki", None)' you...@yinghuochong:/storage/文件/anki/decks$ python -c 'import shutil,os; shutil.copyfile(u"\u6f22\u5b57.anki", "new.anki"); os.utime("new.anki", (1293402264,1293402264))' Traceback (most recent call last): File "", line 1, in OSError: [Errno 1] Operation not permitted: 'new.anki' Note that passing numbers into os.utime returns an error. -- components: IO messages: 124684 nosy: Aaron.Masover priority: normal severity: normal status: open title: os.utime returns an error on NTFS-3G partition versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue10776> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10776] os.utime returns an error on NTFS-3G partition
Aaron Masover added the comment: The Anki author suggested that it was a python bug. However, that example command works on a drive set with different permissions, so this looks more like an NTFS-3G bug. -- status: open -> closed ___ Python tracker <http://bugs.python.org/issue10776> ___
[issue2228] Imaplib speedup patch
Aaron Kaplan added the comment: Let me clarify. Offlineimap used to ship a modified version of imaplib in its distribution, but eventually the author decided he no longer wanted to maintain his imaplib fork, so he dropped it and went with stock imaplib (at a significant performance penalty). The patch I submitted here is the difference between the forked imaplib circa 2007 and the upstream version it was forked from. The current version of offlineimap is not relevant to this issue, because it no longer contains any imaplib code. -- ___ Python tracker <http://bugs.python.org/issue2228> ___
[issue2909] struct.Struct.unpack to return a namedtuple for easier attribute access
New submission from Aaron Gallagher <[EMAIL PROTECTED]>: With the advent of collections.namedtuple, I thought that having a counterpart in the struct module would make dealing with unpacked data much easier. Per suggestion, this extends the behavior of _struct.Struct rather than adding a separate NamedStruct class. The required format might not be immediately obvious from the regexp. The format string is represented like this: "attribute_name1(attribute_format) attribute_name2(attribute_format2)" and so on. Formats given in parentheses without an attribute name are allowed, so that byte order and pad bytes can be specified. For example: "(!) x1(h) x2(h) (2x) y1(h) y2(h)" Suggestions and criticism are welcome. I think being able to have named attributes like this would simplify using the struct module a lot. -- components: Library (Lib) files: named-struct.patch keywords: patch messages: 67038 nosy: habnabit severity: normal status: open title: struct.Struct.unpack to return a namedtuple for easier attribute access versions: Python 3.0 Added file: http://bugs.python.org/file10368/named-struct.patch __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2909> __
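Today the same idea can be spelled by pairing a Struct with a namedtuple by hand; a sketch of the proposed example format (the Rect name and field names are illustrative, not from the patch):

```python
import struct
from collections import namedtuple

# "(!) x1(h) x2(h) (2x) y1(h) y2(h)" from the proposal corresponds to the
# plain struct format '!hh2xhh': network byte order, two shorts, two pad
# bytes, two more shorts.
Rect = namedtuple('Rect', 'x1 x2 y1 y2')
fmt = struct.Struct('!hh2xhh')

packed = fmt.pack(1, 2, 3, 4)
rect = Rect._make(fmt.unpack(packed))
# rect.x1, rect.y2, etc. are now attribute accesses instead of indexing.
```

The patch's contribution is to fold the name/format pairing into one string so the namedtuple is generated automatically rather than declared separately.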
[issue2909] struct.Struct.unpack to return a namedtuple for easier attribute access
Aaron Gallagher <[EMAIL PROTECTED]> added the comment: Okay, here's a new version of my patch. Instead of replacing the default functionality of struct.Struct, this patch now adds the functionality to a separate class called NamedStruct, so as to not break backwards compatibility. The coding style has been revised, and it now also raises a more descriptive error if the regex fails to parse. Also included: a unit test. -- versions: +Python 2.6 Added file: http://bugs.python.org/file10374/named-struct2.patch __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2909> __
[issue3119] pickle.py is limited by python's call stack
New submission from Aaron Gallagher <[EMAIL PROTECTED]>: Currently, pickle.py in the stdlib is limited by the python call stack. For deeply recursive data structures, the default recursion limit of 1000 is not enough. The patch attached modifies pickle.py to instead use a deque object as a call stack. Pickler.save and other methods that increase the recursion depth are now generators which may yield either another generator or None, where yielding a generator adds it to the call stack. -- components: Library (Lib) files: pickle.patch keywords: patch messages: 68262 nosy: habnabit severity: normal status: open title: pickle.py is limited by python's call stack type: behavior versions: Python 2.6, Python 3.0 Added file: http://bugs.python.org/file10638/pickle.patch ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3119> ___
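The limitation is easy to reproduce; pickling recurses once per nesting level, so any structure deeper than the recursion limit fails:

```python
import pickle

# Build a list nested far deeper than the default recursion limit.
deep = None
for _ in range(100_000):
    deep = [deep]

try:
    pickle.dumps(deep)
    hit_limit = False
except RecursionError:  # on the Python 2 line discussed here: RuntimeError
    hit_limit = True
```

This is the case the patch targets: by driving the pickling with an explicit stack of generators instead of native calls, depth becomes bounded by memory rather than by the interpreter's call stack.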
[issue2480] eliminate recursion in pickling
Aaron Gallagher <[EMAIL PROTECTED]> added the comment: I've provided an alternate implementation of this that works with very minimal modification to pickle.py. See issue 3119 for the patch. -- nosy: +habnabit ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2480> ___
[issue3119] pickle.py is limited by python's call stack
Aaron Gallagher <[EMAIL PROTECTED]> added the comment: Ah, I didn't know that a list would be as fast for appending and popping. I knew that lists were optimized for .append() and .pop(), but I didn't know that a list would be just as fast as a deque if it was just used as a stack. And I'll be happy to write unit tests if it can be pointed out to me how exactly they can be written. Should it just test to make sure pickling a deeply nested object hierarchy can be pickled without raising a RuntimeError? I tried to make this as transparent as possible of a change. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3119> ___
[issue3119] pickle.py is limited by python's call stack
Aaron Gallagher <[EMAIL PROTECTED]> added the comment: Alright, sorry this took so long. Hopefully this can still be included in 3.0. Included is a patch that no longer uses collections.deque and also adds a test case to test/test_pickle.py. The test catches RuntimeError and fails the unittest. I didn't see anything that would behave like the opposite of assertRaises except letting the exception propagate. Added file: http://bugs.python.org/file11168/pickle2.patch ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3119> ___
[issue4092] inspect.getargvalues return type not ArgInfo
New submission from Aaron Brady <[EMAIL PROTECTED]>: Python 2.6 (r26:66721, Oct 2 2008, 11:35:03) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. >>> import inspect >>> type( inspect.getargvalues( inspect.currentframe() ) ) Docs say: inspect.getargvalues(frame) ... Changed in version 2.6: Returns a named tuple ArgInfo(args, varargs, keywords, locals). The code defines an ArgInfo type, but doesn't instantiate it in the return, as shown here: return args, varargs, varkw, frame.f_locals -- components: Library (Lib) messages: 74595 nosy: castironpi severity: normal status: open title: inspect.getargvalues return type not ArgInfo type: behavior versions: Python 2.6 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4092> ___
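On current Python the documented behaviour holds, and a check is a few lines (the probe function is illustrative):

```python
import inspect

def probe(a, *args, **kwargs):
    # Inspect the probe's own frame from inside the call.
    return inspect.getargvalues(inspect.currentframe())

info = probe(1, 2, x=3)

# A real ArgInfo named tuple comes back, so both index and attribute
# access work, exactly as the 2.6 docs promised.
assert type(info).__name__ == 'ArgInfo'
assert info.args == ['a']
assert info.varargs == 'args'
assert info.locals['a'] == 1
```

The bug reported above was precisely that Python 2.6's implementation returned a plain tuple here despite defining the ArgInfo type.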
[issue7972] Have sequence multiplication call int() or return NotImplemented so that it can be overridden with __rmul__
New submission from Aaron Meurer : This works in Python 2.5 but not in Python 2.6. If you do [0]*5, it gives you [0, 0, 0, 0, 0]. I tried getting this to work with SymPy's Integer class, so that [0]*Integer(5) would return the same, but unfortunately, the sequence multiplication doesn't seem to return NotImplemented properly, which would allow it to be overridden in __rmul__. Overriding in regular __mul__ of course works fine. From sympy/core/basic.py (modified):

# This works fine
@_sympifyit('other', NotImplemented)
def __mul__(self, other):
    if type(other) in (tuple, list) and self.is_Integer:
        return int(self)*other
    return Mul(self, other)

# This has no effect.
@_sympifyit('other', NotImplemented)
def __rmul__(self, other):
    if type(other) in (tuple, list, str) and self.is_Integer:
        return other*int(self)
    return Mul(other, self)

In other words, with the above, Integer(5)*[0] works, but [0]*Integer(5) raises TypeError: can't multiply sequence by non-int of type 'Integer' just as it does without any changes. See also my branch at github with these changes http://github.com/asmeurer/sympy/tree/list-int-mul. Another option might be to just have list.__mul__(self, other) try calling int(other). SymPy has not yet been ported to Python 3, so I am sorry that I cannot test if it works there. -- messages: 99629 nosy: Aaron.Meurer severity: normal status: open title: Have sequence multiplication call int() or return NotImplemented so that it can be overridden with __rmul__ type: behavior versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue7972> ___
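The hook sequence repetition actually consults is __index__ (PEP 357), not __rmul__; defining it on an integer-like class makes [0]*Count(5) work without any special-casing in __mul__/__rmul__. A minimal sketch (Count is a hypothetical stand-in for SymPy's Integer):

```python
class Count:
    """Hypothetical integer-like wrapper.

    Sequence repetition ([x] * n, 'ab' * n) asks the right-hand operand
    for __index__ rather than dispatching to __rmul__, so this one
    method is enough to make the reported case work.
    """
    def __init__(self, n):
        self.n = n

    def __index__(self):
        return self.n

# Both orders work, with no __mul__/__rmul__ defined at all:
assert Count(5).__index__() == 5
assert [0] * Count(5) == [0, 0, 0, 0, 0]
```

__index__ also makes the object usable anywhere a lossless integer conversion is required (slicing, range(), etc.), which is why it is the cleaner fix compared to having list.__mul__ call int().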
[issue4453] MSI installer shows error message if "Compile .py files to bytecode" option is selected
Aaron Thomas added the comment:

I can verify this with all versions of Windows 7, and both the 32 and 64 bit versions of Python. I install this at my work on many machines, and every one of them crashes when trying to 'compile py scripts to bytecode' during install. I have to left click the setup file after installation (installation fails), and choose 'install' in the windows context menu, then choose 'repair python' for the installation to complete 'successfully'.

-- nosy: +Aaron.Thomas versions: +Python 2.5, Python 2.6

___ Python tracker <http://bugs.python.org/issue4453> ___
[issue8465] Backreferences vs. escapes: a silent failure solved
New submission from Aaron Sherman :

I tested this under 2.6 and 3.1. Under both, the common mistake that I'm sure many others have made, and which cost me quite some time today, was:

    re.sub(r'(foo)bar', '\1baz', 'foobar')

It's obvious, I'm sure, to many reading this that the second "r" was left out before the replacement spec. It's probably obvious that this is going to happen quite a lot, and there are many edge cases which are equally baffling to the uninitiated (e.g. \8, \418 and \)

In order to avoid this, I'd like to request that such usage be deprecated, leaving only numeric escapes of the form matched by r'\\[0-7][0-7][0-7]?(?!\d)' as valid, non-deprecated uses (e.g. \01 or \111 are fine). Let's look at what that would do: Right now, the standard library uses escape sequences with \n where n is a single digit in a handful of places like sndhdr.py and difflib.py. These are certainly not widespread enough to consider this a common usage, but certainly those few would have to change to add a leading zero before the digit.

OK, so the specific requested feature is that \xxx produces a warning where xxx is:

* any single digit or
* any invalid sequence of two or three digits (e.g. containing 8 or 9) or
* any sequence of 4 or more digits

... guiding the user to the more explicit \01, \x01 or, if they intended a literal backslash, the r notation.

If you wish to go a step further, I'd suggest adding a no-op escape \e such that \41\e1 would print "!1". Otherwise, there's no clean way to halt the interpretation of a digit-based escape sequence.

-- components: Regular Expressions, Unicode messages: 103640 nosy: Aaron.Sherman severity: normal status: open title: Backreferences vs. escapes: a silent failure solved type: feature request versions: Python 2.6, Python 3.1

___ Python tracker <http://bugs.python.org/issue8465> ___
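The silent failure described above is easy to demonstrate (a sketch of the mistake, not a proposed fix): without the second r prefix, '\1' in the replacement is already the control character chr(1) by the time re.sub sees it, so the backreference never happens and no error is raised:

```python
import re

# intended: a raw string, where \1 is a backreference to group 1
assert re.sub(r'(foo)bar', r'\1baz', 'foobar') == 'foobaz'

# the silent failure: in a plain string literal '\1' is chr(1),
# so re.sub sees a literal control character, not a backreference
assert re.sub(r'(foo)bar', '\1baz', 'foobar') == '\x01baz'
```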
[issue8465] Backreferences vs. escapes: a silent failure solved
Aaron Sherman added the comment:

Matthew, thank you for replying. I still think the primary issue is the potential for confusion between single digit escapes and backreferences, and the ease with which they could be addressed, but to cover what you said:

Quote: the normal way to handle "\41" + "1" is "\0411"

That might be the way dictated by the limitations of escape expansion as it is now, but it's entirely non-intuitive and seems more like the "exciting" edge cases (and obfuscated code opportunities) in other languages than something Python would be proud of. With \41\e1 you would actually be able to tell, visually, that the 1 does not get read by the code which reads the \41. This seems to me to be a serious win for maintainability.

--

___ Python tracker <http://bugs.python.org/issue8465> ___
[issue1792] o(n*n) marshal.dumps performance for largish objects with patch
Aaron Watters added the comment:

Facundo

1) the +1024 was an accelerator to jump up to over 1k at the first resize. I think it's a good idea or at least doesn't hurt.

2) Here is an example program:

    def test():
        from marshal import dumps
        from time import time
        testString = "abc"*1
        print "now testing"
        now = time()
        dump = dumps(testString)
        elapsed = time()-now
        print "elapsed", elapsed

    if __name__=="__main__":
        test()

Here are two runs: the first with the old marshal and the second with the patched marshal. The second is better than 2* faster than the first.

    arw:/home/arw/test> ~/apache2/htdocs/pythonsrc/Python/python_old mtest1.py
    now testing
    elapsed 4.13367795944
    arw:/home/arw/test> ~/apache2/htdocs/pythonsrc/Python/python mtest1.py
    now testing
    elapsed 1.7495341301
    arw:/home/arw/test>

The example that inspired this research was very complicated and involved millions of calls to dumps which caused a number of anomalies (system calls went berzerk for some reason, maybe paging).

-- Aaron Watters

On Jan 11, 2008 9:25 AM, Facundo Batista <[EMAIL PROTECTED]> wrote:
>
> Facundo Batista added the comment:
>
> Why not just double the size? The "doubling + 1024" address some
> specific issue? If so, it should be commented.
>
> Also, do you have an example of a marshal.dumps() that suffers from this
> issue?
>
> Thank you!
>
> --
> nosy: +facundobatista
>
> __
> Tracker <[EMAIL PROTECTED]>
> <http://bugs.python.org/issue1792>
> __

Added file: http://bugs.python.org/file9124/unnamed

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1792> __
[issue1792] o(n*n) marshal.dumps performance for largish objects with patch
New submission from Aaron Watters:

Much to my surprise I found that one of my applications seemed to be running slow as a result of marshal.dumps. I think the culprit is the w_more(...) function, which grows the marshal buffer in 1k units. This means that a marshal of size 100k will have 100 reallocations and string copies. Other parts of Python (and Java etc.) have a proportional reallocation strategy which reallocates a new size based on the existing size. This means a 100k marshal requires just 5 or so reallocations and string copies (n log n versus n**2 asymptotic performance). I humbly submit the following patch (based on python 2.6a0 source). I think it is a strict improvement on the existing code, but I've been wrong before (twice ;)).

-- Aaron Watters

-- components: Interpreter Core files: marshal.diff messages: 59710 nosy: aaron_watters severity: normal status: open title: o(n*n) marshal.dumps performance for largish objects with patch type: resource usage versions: Python 2.6 Added file: http://bugs.python.org/file9122/marshal.diff

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1792> __
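A back-of-the-envelope sketch (not the patch itself) of why proportional growth matters: counting how many reallocations a buffer needs to reach 100k when it grows by a fixed 1k each time versus a doubling-plus-1024 strategy like the one in the patch:

```python
def reallocs(total, grow):
    """Count reallocations needed to reach `total` bytes, starting at 1k."""
    size, count = 1024, 0
    while size < total:
        size = grow(size)
        count += 1
    return count

fixed = reallocs(100 * 1024, lambda s: s + 1024)        # old w_more: +1k each time
doubled = reallocs(100 * 1024, lambda s: s * 2 + 1024)  # proportional growth
print(fixed, doubled)  # 99 reallocations versus 6
```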
[issue1792] o(n*n) marshal.dumps performance for largish objects with patch
Aaron Watters added the comment:

also: I just modified the code to do iterations using increasingly large data sizes and I see the kind of very unpleasant behaviour for the old implementation (max time varies wildly from min time) that I saw in my more complex program. The new implementation doesn't have these problems. First the runs and then the modified code.

    arw:/home/arw/test>
    arw:/home/arw/test> ~/apache2/htdocs/pythonsrc/Python/python_old mtest1.py old
    old
    0 40 elapsed max= 2.28881835938e-05 min= 4.76837158203e-06 ratio= 4.8
    1 160 elapsed max= 1.59740447998e-05 min= 9.05990600586e-06 ratio= 1.76315789474
    2 640 elapsed max= 2.40802764893e-05 min= 2.19345092773e-05 ratio= 1.09782608696
    3 2560 elapsed max= 8.79764556885e-05 min= 3.981590271e-05 ratio= 2.20958083832
    4 10240 elapsed max= 0.000290155410767 min= 0.000148057937622 ratio= 1.95974235105
    5 40960 elapsed max= 0.000867128372192 min= 0.00060510635376 ratio= 1.43301812451
    6 163840 elapsed max= 0.00739598274231 min= 0.00339317321777 ratio= 2.17966554244
    7 655360 elapsed max= 0.0883929729462 min= 0.0139379501343 ratio= 6.34189189189
    8 2621440 elapsed max= 1.69851398468 min= 0.0547370910645 ratio= 31.0304028155
    9 10485760 elapsed max= 9.98945093155 min= 0.213104963303 ratio= 46.875730986
    10 41943040 elapsed max= 132.281101942 min= 0.834150075912 ratio= 158.581897625
    arw:/home/arw/test> ~/apache2/htdocs/pythonsrc/Python/python mtest1.py new
    new
    0 40 elapsed max= 2.19345092773e-05 min= 5.00679016113e-06 ratio= 4.38095238095
    1 160 elapsed max= 1.00135803223e-05 min= 9.05990600586e-06 ratio= 1.10526315789
    2 640 elapsed max= 3.19480895996e-05 min= 1.28746032715e-05 ratio= 2.48148148148
    3 2560 elapsed max= 5.69820404053e-05 min= 3.981590271e-05 ratio= 1.43113772455
    4 10240 elapsed max= 0.000186920166016 min= 0.000138998031616 ratio= 1.34476843911
    5 40960 elapsed max= 0.00355315208435 min= 0.000746965408325 ratio= 4.75678263645
    6 163840 elapsed max= 0.0032649040 min= 0.00304794311523 ratio= 1.07118272841
    7 655360 elapsed max= 0.0127630233765 min= 0.0122020244598 ratio= 1.04597588855
    8 2621440 elapsed max= 0.0511522293091 min= 0.0484230518341 ratio= 1.05636112082
    9 10485760 elapsed max= 0.198891878128 min= 0.187420129776 ratio= 1.06120873124
    10 41943040 elapsed max= 0.758435964584 min= 0.729014158249 ratio= 1.04035834696
    arw:/home/arw/test>

Above high ratio numbers indicate strange and unpleasant performance variance. For iteration 7 and higher the old implementation has a much worse max time performance than the new one. Here is the test code:

    def test():
        from marshal import dumps
        from time import time
        size = 10
        for i in range(11):
            size = size*4
            testString = "abc"*size
            #print "now testing", i, size
            minelapsed = None
            for j in range(11):
                now = time()
                dump = dumps(testString)
                elapsed = time()-now
                if minelapsed is None:
                    minelapsed = elapsed
                    maxelapsed = elapsed
                else:
                    minelapsed = min(elapsed, minelapsed)
                    maxelapsed = max(elapsed, maxelapsed)
            print i, size, "elapsed max=", maxelapsed, "min=", minelapsed, "ratio=", maxelapsed/minelapsed

    if __name__=="__main__":
        import sys
        print sys.argv[1]
        test()

-- Aaron Watters

On Jan 11, 2008 10:14 AM, Aaron Watters <[EMAIL PROTECTED]> wrote:
>
> Aaron Watters added the comment:
>
> Facundo
>
> 1) the +1024 was an accelerator to jump up to over 1k at the first resize.
> I think it's a good idea or at least doesn't hurt.
>
> 2) Here is an example program:
>
> def test():
>    from marshal import dumps
>    from time import time
>    testString = "abc"*1
>    print "now testing"
>    now = time()
>    dump = dumps(testString)
>    elapsed = time()-now
>    print "elapsed", elapsed
>
> if __name__=="__main__":
>    test()
>
> Here are two runs: the first with the old marshal and the second with the
> patched marshal. The second is
> better than 2* faster than the first.
>
> arw:/home/arw/test> ~/apache2/htdocs/pythonsrc/Python/python_old mtest1.py
> now testing
> elapsed 4.13367795944
> arw:/home/arw/test> ~/apache2/htdocs/pythonsrc/Python/python mtest1.py
> now testing
> elapsed 1.7495341301
> arw:/home/arw/test>
>
> The example that inspired this research was very complicated and involved
> millions of calls to dumps
> which caused a number of anomalies (system calls went berzerk for some
> reason, maybe paging).
>
> -- Aaron Watters
>
> On Jan 11, 2008 9:25 AM, Facundo Batista <[EMAIL PROTECTED]> wrote:
> >
> > Facundo Batista added the comment:
> >
> > Why not just double the size? The "
[issue1997] unicode and string compare should not cause an exception
New submission from Aaron Watters:

As I understand it comparisons between two objects should always work. I get this at the interpreter prompt:

    Python 2.6a0 (trunk, Jan 11 2008, 11:40:59) [GCC 3.4.6 20060404 (Red Hat 3.4.6-8)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> unichr(0x) < chr(128)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)
    >>>

I think the fix for this case is to do something arbitrary but consistent if possible?

-- components: Interpreter Core messages: 61976 nosy: aaron_watters severity: normal status: open title: unicode and string compare should not cause an exception type: behavior versions: Python 2.6

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1997> __
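For readers landing here later: Python 3 resolved this by never decoding implicitly. Equality across str and bytes is simply False, and ordering raises TypeError (a quick illustration, not from the original report):

```python
# str and bytes never compare equal in Python 3
assert ("\x80" == b"\x80") is False

# ordering comparisons between them raise TypeError instead of
# attempting an implicit ASCII decode as Python 2 did
try:
    "\x80" < b"\x80"
    outcome = "no error"
except TypeError:
    outcome = "TypeError"
assert outcome == "TypeError"
```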
[issue1997] unicode and string compare should not cause an exception
Aaron Watters added the comment:

Okay. I haven't looked but this should be well documented somewhere because I found it very surprising (it crashed a large run somewhere in the middle).

In the case of strings versus unicode I think it is possible to hack around this by catching the exceptional case and comparing character by character -- treating out of band characters as larger than all unicode characters. I don't see why this would cause any problems at any rate.

-- Aaron Watters

On Feb 1, 2008 6:47 PM, Guido van Rossum <[EMAIL PROTECTED]> wrote:
>
> Guido van Rossum added the comment:
>
> > As I understand it comparisons between two objects should
> > always work.
>
> Hi Aaron! Glad to see you're back.
>
> It used to be that way when you & Jim wrote the first Python book. :-)
>
> Nowadays, comparisons *can* raise exceptions. Marc-Andre has explained
> why. In 3.0, this particular issue will go away due to a different
> treatment of Unicode, but many more cases will raise TypeError when < is
> used. == and != will generally work, though there are no absolute
> guarantees.
>
> --
> nosy: +gvanrossum
> resolution: -> rejected
> status: open -> closed
>
> __
> Tracker <[EMAIL PROTECTED]>
> <http://bugs.python.org/issue1997>
> __

Added file: http://bugs.python.org/file9348/unnamed

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1997> __
[issue2228] Imaplib speedup patch
New submission from Aaron Kaplan:

In some versions of John Goerzen's program offlineimap, he includes a copy of imaplib.py with the attached changes. It results in a speedup of more than 50% compared to using the stock imaplib.py.

-- files: imaplib-patch messages: 63237 nosy: aaronkaplan severity: normal status: open title: Imaplib speedup patch type: resource usage Added file: http://bugs.python.org/file9597/imaplib-patch

__ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2228> __
[issue2573] Can't change the framework name on OS X builds
New submission from Aaron Gallagher <[EMAIL PROTECTED]>: There is currently no way in the configure script to specify an alternate name for Python.framework. If I want to completely separate versions of Python (e.g. for 3.0 alphas and/or Stackless), I have to manually edit configure.in and configure to change the framework name. It would be much more convenient if --with-framework could take an optional parameter of the framework name to use. -- components: Build, Macintosh messages: 65105 nosy: habnabit severity: normal status: open title: Can't change the framework name on OS X builds type: feature request versions: Python 2.6, Python 3.0 __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2573> __
[issue2573] Can't change the framework name on OS X builds
Aaron Gallagher <[EMAIL PROTECTED]> added the comment: Here's a patch that implements the necessary change. I'm not very good at autoconf, so it might need to be touched up. -- keywords: +patch Added file: http://bugs.python.org/file9977/framework.patch __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2573> __
[issue2573] Can't change the framework name on OS X builds
Aaron Gallagher <[EMAIL PROTECTED]> added the comment: Okay, here's the same patch but now with Mac/Makefile.in patched. I changed all references to Python to the framework name, because I believe it won't work properly otherwise. Added file: http://bugs.python.org/file9979/framework2.patch __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue2573> __
[issue4376] Nested ctypes 'BigEndianStructure' fails
New submission from Aaron Brady <[EMAIL PROTECTED]>: Nested 'BigEndianStructure' fails in 2.5 and 2.6: TypeError: This type does not support other endian. Example and traceback in attached file. -- assignee: theller components: ctypes files: ng36.py messages: 76171 nosy: castironpi, theller severity: normal status: open title: Nested ctypes 'BigEndianStructure' fails type: compile error versions: Python 2.5, Python 2.6 Added file: http://bugs.python.org/file12091/ng36.py ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4376> ___
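The attached file isn't reproduced in the archive, but the shape of the failure can be sketched like this (hypothetical structure and field names). On the affected versions the class definition itself raised the TypeError; whether it succeeds depends on the Python version, so the sketch just records the outcome:

```python
import ctypes

class Inner(ctypes.BigEndianStructure):
    _fields_ = [("a", ctypes.c_uint32)]

try:
    class Outer(ctypes.BigEndianStructure):
        # nesting one big-endian structure inside another is what
        # triggered "TypeError: This type does not support other endian"
        _fields_ = [("inner", Inner), ("b", ctypes.c_uint16)]
    outcome = "accepted"
except TypeError:
    outcome = "TypeError"
print(outcome)
```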
[issue4579] .read() and .readline() differ in failing
Aaron Gallagher <[EMAIL PROTECTED]> added the comment: I can't reproduce this on python 2.5.1, 2.5.2, or 2.6.0 on Mac OS 10.5.4. Both .read() and .readline() raise an EBADF IOError. 3.0.0 fails in the same way. -- nosy: +habnabit ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4579> ___
[issue4708] os.pipe should return inheritable descriptors (Windows)
New submission from Aaron Brady :

os.pipe should return inheritable descriptors on Windows. Patch below, test attached. The new pipe() still returns descriptors, which cannot themselves be inherited. However, their permissions are set correctly, so msvcrt.get_osfhandle and msvcrt.open_osfhandle can be used to obtain an inheritable handle. Docs should contain a note to the effect. 'On Windows, use msvcrt.get_osfhandle to obtain a handle to the descriptor which can be inherited. In a subprocess, use msvcrt.open_osfhandle to obtain a new corresponding descriptor.'

    --- posixmodule_orig.c	2008-12-20 20:01:38.296875000 -0600
    +++ posixmodule_new.c	2008-12-20 20:01:54.359375000 -0600
    @@ -6481,8 +6481,12 @@
         HANDLE read, write;
         int read_fd, write_fd;
         BOOL ok;
    +    SECURITY_ATTRIBUTES sAttribs;
         Py_BEGIN_ALLOW_THREADS
    -    ok = CreatePipe(&read, &write, NULL, 0);
    +    sAttribs.nLength = sizeof( sAttribs );
    +    sAttribs.lpSecurityDescriptor = NULL;
    +    sAttribs.bInheritHandle = TRUE;
    +    ok = CreatePipe(&read, &write, &sAttribs, 0);
         Py_END_ALLOW_THREADS
         if (!ok)
             return win32_error("CreatePipe", NULL);

-- components: Library (Lib), Windows files: os_pipe_test.py messages: 78136 nosy: castironpi severity: normal status: open title: os.pipe should return inheritable descriptors (Windows) type: behavior versions: Python 2.6, Python 2.7, Python 3.0, Python 3.1 Added file: http://bugs.python.org/file12408/os_pipe_test.py

___ Python tracker <http://bugs.python.org/issue4708> ___
[issue4708] os.pipe should return inheritable descriptors (Windows)
Aaron Brady added the comment:

This is currently accomplished in 'multiprocessing.forking' with a 'duplicate' function. Use (line #213):

    rfd, wfd = os.pipe()
    # get handle for read end of the pipe and make it inheritable
    rhandle = duplicate(msvcrt.get_osfhandle(rfd), inheritable=True)

Definition (line #192). Should it be included in the public interface and documented, or perhaps a public entry point to it made?

___ Python tracker <http://bugs.python.org/issue4708> ___
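Since Python 3.4 (PEP 446) this area changed shape: os.pipe descriptors are non-inheritable by default on all platforms, and os.set_inheritable/os.get_inheritable toggle the flag directly, which supersedes the msvcrt round-trip discussed above. A minimal sketch:

```python
import os

r, w = os.pipe()
# PEP 446: newly created descriptors are non-inheritable by default
assert os.get_inheritable(r) is False

# flip the flag explicitly for the end a child process should inherit
os.set_inheritable(r, True)
assert os.get_inheritable(r) is True

os.close(r)
os.close(w)
```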
[issue44493] Missing terminated NUL in the length of sockaddr_un
Aaron Gallagher <_...@habnab.it> added the comment: sigh.. adding myself to nosy here too in the hope that this gets any traction -- nosy: +habnabit ___ Python tracker <https://bugs.python.org/issue44493> ___
[issue38719] Surprising and possibly incorrect passing of InitVar to __post_init__ method of data classes
New submission from Aaron Ecay :

I have discovered that InitVars are passed in a surprising way to the __post_init__ method of python dataclasses. The following program illustrates the problem:

=
from dataclasses import InitVar, dataclass

@dataclass
class Foo:
    bar: InitVar[str]
    quux: InitVar[str]

    def __post_init__(self, quux: str, bar: str) -> None:
        print(f"bar is {bar}; quux is {quux}")

Foo(bar="a", quux="b")
=

The output (on python 3.7.3 and 3.8.0a3) is (incorrectly):

bar is b; quux is a

This behavior seems like a bug to me, do you agree? I have not looked into the reason why it behaves this way, but I suspect that the InitVar args are passed positionally, rather than as key words, to __post_init__. This requires the order of arguments in the definition of __post_init__ to be identical to the order in which they are specified in the class. I would expect the arguments to be passed as keywords instead, which would remove the ordering dependency. If there is agreement that the current behavior is undesirable, I can look into creating a patch to change it.

-- components: Library (Lib) messages: 356125 nosy: Aaron Ecay priority: normal severity: normal status: open title: Surprising and possibly incorrect passing of InitVar to __post_init__ method of data classes versions: Python 3.7

___ Python tracker <https://bugs.python.org/issue38719> ___
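The positional binding suspected in the report is easy to verify without reading the generated __init__; in this sketch a module-level dict records what __post_init__ actually received (same class as in the report, with a capture instead of a print):

```python
from dataclasses import InitVar, dataclass

captured = {}

@dataclass
class Foo:
    bar: InitVar[str]
    quux: InitVar[str]

    def __post_init__(self, quux: str, bar: str) -> None:
        # InitVars arrive positionally in field order, so the first
        # parameter (named quux here) receives the bar field's value
        captured["bar"] = bar
        captured["quux"] = quux

Foo(bar="a", quux="b")
print(captured)  # the names are crossed, as the report describes
```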
[issue14965] super() and property inheritance behavior
Change by Aaron Gallagher : -- nosy: +Aaron Gallagher nosy_count: 20.0 -> 21.0 pull_requests: +24811 stage: needs patch -> patch review pull_request: https://github.com/python/cpython/pull/26194 ___ Python tracker <https://bugs.python.org/issue14965> ___
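For context, the behavior this issue targets can be sketched in a few lines (class names hypothetical): reading through super() works for a property, but assigning through super() does not reach the parent's setter:

```python
class A:
    def __init__(self):
        self._x = 1

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

class B(A):
    @property
    def x(self):
        # reading through super() invokes A's getter
        return super().x * 2

    @x.setter
    def x(self, value):
        # assigning through super() does not invoke A's setter;
        # it raises AttributeError at runtime
        super().x = value

b = B()
assert b.x == 2
try:
    b.x = 10
    outcome = "assigned"
except AttributeError:
    outcome = "AttributeError"
print(outcome)
```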
[issue14965] super() and property inheritance behavior
Aaron Gallagher <_...@habnab.it> added the comment: @daniel.urban I'm attempting to move this patch along, but since the contributing process has changed in the years since your patch, you'll need to sign the CLA. Are you interested in picking this back up at all? I haven't been given any indication of how to proceed if I'm doing this on your behalf, but hopefully the core team will enlighten us. -- nosy: +habnabit ___ Python tracker <https://bugs.python.org/issue14965> ___
[issue14965] super() and property inheritance behavior
Aaron Gallagher <_...@habnab.it> added the comment: @daniel.urban would you kindly resubmit your patch as a PR to the cpython repo? I've learned out-of-band from someone else that putting patches on bpo is considered obsolete. you can use the PR I've submitted (https://github.com/python/cpython/pull/26194) and reset the author. I'd be happy to do it myself (giving you a branch that's all set up, so all you need to do is click the 'new PR' button) if you tell me what to set the author to. -- ___ Python tracker <https://bugs.python.org/issue14965> ___
[issue40199] Invalid escape sequence DeprecationWarnings don't trigger by default
Aaron Gallagher <_...@habnab.it> added the comment:

This is definitely not windows-specific. On macos:

    $ python3.9
    Python 3.9.4 (default, Apr 5 2021, 01:47:16)
    [Clang 11.0.0 (clang-1100.0.33.17)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> '\s'
    '\\s'

-- nosy: +habnabit

___ Python tracker <https://bugs.python.org/issue40199> ___
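The warning does exist; it is filtered out by default. Forcing the filter on makes it visible. A sketch (the warning category is DeprecationWarning in the versions discussed here and was later upgraded to SyntaxWarning, so the check accepts either):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # compile source text that contains the invalid escape '\s'
    compile(r"'\s'", "<demo>", "eval")

# the invalid escape was reported once the filter allowed it through
assert any(
    issubclass(w.category, (DeprecationWarning, SyntaxWarning))
    for w in caught
)
```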
[issue44455] compileall should exit nonzero for nonexistent directories
New submission from Aaron Meurer :

    $ ./python.exe -m compileall doesntexist
    Listing 'doesntexist'...
    Can't list 'doesntexist'
    $ echo $?
    0

It's standard for a command line tool that processes files to exit nonzero when given a directory that doesn't exist.

-- messages: 396087 nosy: asmeurer priority: normal severity: normal status: open title: compileall should exit nonzero for nonexistent directories

___ Python tracker <https://bugs.python.org/issue44455> ___
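A self-contained way to check the exit status from Python (the directory name is arbitrary; whether the status is 0 or 1 depends on whether the interpreter carries a fix for this report):

```python
import subprocess
import sys

proc = subprocess.run(
    [sys.executable, "-m", "compileall", "no_such_directory_12345"],
    capture_output=True,
    text=True,
)
# the report: compileall printed "Can't list ..." yet exited 0
print(proc.returncode)
```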
[issue16959] rlcompleter doesn't work if __main__ can't be imported
Aaron Meurer added the comment: A quick glance at the source shows that it still imports __main__ at the top-level. I have no idea how legitimate it is that the App Engine (used to?) makes it so that __main__ can't be imported. -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue16959> ___
[issue44603] REPL: exit when the user types exit instead of asking them to explicitly type exit()
Aaron Meurer added the comment:

When talking about making exit only work when typed at the interpreter, something to consider is the confusion that it can cause when there is a mismatch between the interactive interpreter and noninteractive execution, especially for novice users. I've seen beginner users add exit() to the bottom of Python scripts, presumably because the interpreter "taught" them that you have to end with that. Now imagine someone trying to use exit as part of control flow:

    if input("exit now? ") == "yes":
        exit

Unless exit is a full blown keyword, that won't work. And the result is yet another instance in the language where users become confused if they run across it, because it isn't actually consistent in the language model. There are already pseudo-keywords in the language, in particular, super(), but that's used to implement something which would be impossible otherwise. Exiting is not impossible otherwise, it just requires typing (). But that's how everything in the language works. I would argue it's a good thing to reinforce the idea that typing a variable by itself with no other surrounding syntax does nothing. This helps new users create the correct model of the language in their heads.

-- nosy: +asmeurer

___ Python tracker <https://bugs.python.org/issue44603> ___
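The control-flow confusion above can be demonstrated with any name, not just exit: evaluating a bare name is an expression whose value is discarded, with no side effect. A sketch using a local stand-in (since exit itself is only installed by the site module):

```python
def maybe_quit(answer):
    quit_marker = object()
    if answer == "yes":
        quit_marker  # evaluates the name and discards it; nothing happens
    return "still here"

# the branch is taken, but execution continues regardless
assert maybe_quit("yes") == "still here"
```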
[issue17792] Unhelpful UnboundLocalError due to del'ing of exception target
Aaron Smith added the comment:

I encountered similar behavior unexpectedly when dealing with LEGB scope of names. Take the following example run under Python 3.9.2:

    def doSomething():
        x = 10
        del x
        print(x)

    x = 5
    doSomething()

This produces an UnboundLocalError at print(x) even though "x" can still be found in the global scope. Indeed if you add print(globals()) before the print(x) line, you can see "x" listed. By contrast, LEGB scope behavior works as expected in this example:

    def doSomething():
        print(x)

    x = 5
    doSomething()

The former example yielding the UnboundLocalError when dealing with name scope feels like a bug that lines up with the original behavior described in this enhancement request, as I believe "x" is still a bound name in the global scope, but was explicitly deleted from the local scope.

-- nosy: +aaronwsmith

___ Python tracker <https://bugs.python.org/issue17792> ___
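The first example's behavior follows from compile-time scope analysis: because x is assigned somewhere in the function body, it is a local for the entire function, so after del the name is simply unbound and the global is never consulted. A runnable version of the example (catching the error rather than crashing):

```python
x = 5

def do_something():
    x = 10   # x is compiled as a local for the whole function body
    del x    # the local slot is now unbound; the global x is not a fallback
    try:
        return x
    except UnboundLocalError:
        return "UnboundLocalError"

# the global x is visible in globals() but never reached by the lookup
assert do_something() == "UnboundLocalError"
```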
[issue45473] Enum add "from_name" and "from_value" class methods
New submission from Aaron Koch :

Documentation: https://docs.python.org/3/library/enum.html#creating-an-enum

Current behavior:
SomeEnum[name] is used to construct an enum by name
SomeEnum(value) is used to construct an enum by value

Problem: As a user of enums, it is difficult to remember the mapping between parenthesis/square brackets and construct from name/construct from value.

Suggestion: Add two class methods to Enum:

    @classmethod
    def from_name(cls, name):
        return cls[name]

    @classmethod
    def from_value(cls, value):
        return cls(value)

Benefits: This is an additive change only, it doesn't change any behavior of the Enum class, so there are no backwards compatibility issues. Adding these aliases to the Enum class would allow readers and writers of enums to interact with them more fluently and with fewer trips to the documentation. Using these aliases would make it easier to write the code you intended and to spot bugs that might arise from the incorrect use of from_name or from_value.

-- messages: 403936 nosy: aekoch priority: normal severity: normal status: open title: Enum add "from_name" and "from_value" class methods type: enhancement

___ Python tracker <https://bugs.python.org/issue45473> ___
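The proposal can be tried today as a user-defined base class (a sketch using the names from the report; from_name/from_value are not part of the stdlib Enum API):

```python
from enum import Enum

class LookupEnum(Enum):
    @classmethod
    def from_name(cls, name):
        return cls[name]

    @classmethod
    def from_value(cls, value):
        return cls(value)

class Color(LookupEnum):
    RED = 1
    GREEN = 2

# today's spellings and the proposed explicit aliases agree
assert Color["RED"] is Color.RED and Color.from_name("RED") is Color.RED
assert Color(2) is Color.GREEN and Color.from_value(2) is Color.GREEN
```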
[issue45473] Enum add "from_name" and "from_value" class methods
Aaron Koch added the comment: Are there any other names that you would contemplate besides `from_name` and `from_value`? My reading of your response is that you are fundamentally opposed to the addition of class methods, since they would limit the space of possible instance methods/members. Is that a fair reading? If it is not, would you be open to different method names? Do you agree with the fundamental issue identified here: that the parentheses/square-bracket construction is difficult to read and makes implementation mistakes more likely, and that it would be good to have some way to make it explicit, at both read and write time, whether the enum is being constructed using the name or the value? One alternative to the class methods I might propose is to use a keyword argument in the __init__ function:

SomeEnum(name="foo")
SomeEnum(value="bar")

This would also solve the stated problem, but I suspect that messing with the init function introduces more limitations on the class than the classmethod solution. -- ___ Python tracker <https://bugs.python.org/issue45473> ___
[issue42109] Use hypothesis for testing the standard library, falling back to stubs
Change by Aaron Meurer : -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue42109> ___
[issue14965] super() and property inheritance behavior
Aaron Gallagher <_...@habnab.it> added the comment: I will note, Raymond, that I’ve wanted this for years before discovering this bpo issue, and I found it because you linked it on Twitter. ;) On Wed, Dec 8, 2021 at 19:08 Raymond Hettinger wrote: > > Raymond Hettinger added the comment: > > Another thought: Given that this tracker issue has been open for a decade > without resolution, we have evidence that this isn't an important problem > in practice. > > Arguably, people have been better off being nudged in another direction > toward better design or having been forced to be explicit about what method > is called and when. > > -- > > ___ > Python tracker > <https://bugs.python.org/issue14965> > ___ > -- ___ Python tracker <https://bugs.python.org/issue14965> ___
[issue36144] Dictionary addition. (PEP 584)
Aaron Hall added the comment: Another obvious way to do it, but I'm +1 on it. A small side point, however - PEP 584 reads:

> To create a new dict containing the merged items of two (or more) dicts, one
> can currently write:
> {**d1, **d2}
> but this is neither obvious nor easily discoverable. It is only guaranteed to
> work if the keys are all strings. If the keys are not strings, it currently
> works in CPython, but it may not work with other implementations, or future
> versions of CPython[2].
...
> [2] Non-string keys: https://bugs.python.org/issue35105 and
> https://mail.python.org/pipermail/python-dev/2018-October/155435.html

The references cited do not back up this assertion. Perhaps the intent is to reference the "cool/weird hack" dict(d1, **d2) (see https://mail.python.org/pipermail/python-dev/2010-April/099485.html and https://mail.python.org/pipermail/python-dev/2010-April/099459.html), which allowed any hashable keys in Python 2 but only strings in Python 3. If I see {**d1, **d2}, my expectation is that this is the new generalized unpacking, and I currently expect any keys to be allowed; the PEP should be updated to accurately reflect this to prevent future misunderstandings. -- nosy: +Aaron Hall ___ Python tracker <https://bugs.python.org/issue36144> ___
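The distinction between the two forms can be checked directly; the dict-display unpacking accepts any hashable keys, while the dict(d1, **d2) hack goes through keyword arguments and so rejects non-string keys in Python 3 (a small sketch, not part of the original message):

```python
d1 = {1: "a"}
d2 = {2: "b"}

# Generalized unpacking in a dict display: any hashable keys are allowed.
merged = {**d1, **d2}
assert merged == {1: "a", 2: "b"}

# The older dict(d1, **d2) hack expands d2 as keyword arguments,
# so non-string keys raise TypeError in Python 3.
try:
    dict(d1, **d2)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for non-string keys")
```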
[issue39854] f-strings with format specifiers have wrong col_offset
New submission from Aaron Meurer : This is tested on CPython master. The issue also occurs in older versions of Python.

>>> ast.dump(ast.parse('f"{x}"'))
"Module(body=[Expr(value=JoinedStr(values=[FormattedValue(value=Name(id='x', ctx=Load()), conversion=-1, format_spec=None)]))], type_ignores=[])"
>>> ast.dump(ast.parse('f"{x!r}"'))
"Module(body=[Expr(value=JoinedStr(values=[FormattedValue(value=Name(id='x', ctx=Load()), conversion=114, format_spec=None)]))], type_ignores=[])"
>>> ast.parse('f"{x}"').body[0].value.values[0].value.col_offset
3
>>> ast.parse('f"{x!r}"').body[0].value.values[0].value.col_offset
1

The col_offset for the variable x should be 3 in both cases. -- messages: 363375 nosy: asmeurer priority: normal severity: normal status: open title: f-strings with format specifiers have wrong col_offset versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue39854> ___
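The discrepancy can be checked with a short script (a sketch; on versions where this has been fixed, both offsets come out as 3):

```python
import ast

def name_offset(src):
    # Dig out the Name node inside the f-string's FormattedValue
    # and return its column offset in the source.
    tree = ast.parse(src)
    return tree.body[0].value.values[0].value.col_offset

print(name_offset('f"{x}"'))    # 3: "x" sits at column 3 of the source
print(name_offset('f"{x!r}"'))  # 1 on affected versions, 3 once fixed
```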
[issue39820] Bracketed paste mode for REPL
Aaron Meurer added the comment: Related issue https://bugs.python.org/issue32019 -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue39820> ___
[issue21821] The function cygwinccompiler.is_cygwingcc leads to FileNotFoundError under Windows 7
Aaron Meurer added the comment: Is find_executable() going to be extracted from distutils to somewhere else? It's one of those functions that is useful outside of packaging, and indeed I've seen it imported in quite a few codebases that aren't related to packaging. If so, the patch I mentioned could still be relevant for it (if it hasn't been fixed already). -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue21821> ___
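For context, the closest stdlib replacement for the common find_executable() use case is shutil.which, which also searches PATH (noted here for comparison; whether it covers every caller of find_executable is a separate question):

```python
import shutil

# shutil.which searches PATH like find_executable did, returning None
# on a miss rather than raising.
assert shutil.which("definitely-not-a-real-program-xyz") is None

# Looking up a real executable; the result depends on the local PATH,
# so no particular path is asserted here.
print(shutil.which("python3"))
```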
[issue42819] readline 8.1 enables the bracketed paste mode by default
Aaron Meurer added the comment: Instead of disabling it by default, why not keep it but emulate the old behavior by splitting and buffering the input lines? That way you still get some of the benefits of bracketed paste, i.e., faster pasting, but without the hard work of fixing the REPL to actually support native multiline editing plus exec'ing multiline statements (which would mean breaking the "simple" single-line design). -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue42819> ___
[issue39820] Bracketed paste mode for REPL: don't execute pasted command before ENTER is pressed explicitly
Aaron Meurer added the comment: To reiterate some points I made in the closed issues https://bugs.python.org/issue42819 and https://bugs.python.org/issue32019:

A simple "fix" would be to emulate the non-bracketed paste buffering. That is, accept the input using bracketed paste, but split it line by line and send that to the REPL. That would achieve some of the benefits of bracketed paste (faster pasting) without having to change how the REPL works.

For actually allowing multiline input in the REPL, one issue I see is that the so-called "single" compile mode is fundamentally designed around single-line evaluation. To support proper multiline evaluation, it would need to break from this model (which in my opinion is over-engineered). In one of my personal projects, I use a function along the lines of:

import ast

def eval_exec(code, g=None, l=None, *, filename="", noresult=None):
    if g is None:
        g = globals()
    if l is None:
        l = g
    p = ast.parse(code)
    expr = None
    res = noresult
    if p.body and isinstance(p.body[-1], ast.Expr):
        expr = p.body.pop()
    code = compile(p, filename, 'exec')
    exec(code, g, l)
    if expr:
        code = compile(ast.Expression(expr.value), filename, 'eval')
        res = eval(code, g, l)
    return res

This function automatically execs the code, but if the last part of it is an expression, it returns it (note that this is much more useful than simply printing it). Otherwise it returns a noresult marker (None by default). I think this sort of functionality in general would be useful in the standard library (much more useful than compile('single')), but even ignoring whether it should be a public function, this is the sort of thing that is needed for "proper" multiline execution in a REPL.

Terry mentioned that IDLE supports multiline already. But I tried pasting

a = 1
a

into IDLE (Python 3.9), and I get the same "SyntaxError: multiple statements found while compiling a single statement" error, suggesting it still has the same fundamental limitation.
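For concreteness, here is a usage sketch of the eval_exec helper quoted above (the helper is restated verbatim so the snippet is self-contained; the inputs are illustrative):

```python
import ast

def eval_exec(code, g=None, l=None, *, filename="", noresult=None):
    # Exec the code, but if the last statement is an expression,
    # compile and evaluate it separately and return its value;
    # otherwise return the noresult marker.
    if g is None:
        g = globals()
    if l is None:
        l = g
    p = ast.parse(code)
    expr = None
    res = noresult
    if p.body and isinstance(p.body[-1], ast.Expr):
        expr = p.body.pop()
    code = compile(p, filename, 'exec')
    exec(code, g, l)
    if expr:
        code = compile(ast.Expression(expr.value), filename, 'eval')
        res = eval(code, g, l)
    return res

ns = {}
result = eval_exec("a = 1\na + 1", ns)
assert result == 2        # the trailing expression's value is returned
assert ns["a"] == 1       # earlier statements were exec'd into ns
assert eval_exec("b = 2", ns) is None  # no trailing expression -> noresult
```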
Also, if it wasn't clear, I should note that this is independent of pasting. You can already write

def func():
    return 1
func()

manually in the interpreter or IDLE and it will give a syntax error. -- ___ Python tracker <https://bugs.python.org/issue39820> ___