New submission from Serhiy Storchaka: Parsing keyword arguments is much slower than parsing positional arguments. Parsing time can be larger than the useful execution time.
$ ./python -m timeit "b'a:b:c'.split(b':', 1)"
1000000 loops, best of 3: 0.638 usec per loop
$ ./python -m timeit "b'a:b:c'.split(b':', maxsplit=1)"
1000000 loops, best of 3: 1.64 usec per loop

The main culprit is that a Python string is created for every keyword name on every call. The proposed patch adds an alternative API that caches keyword names as Python strings in a special object. Argument Clinic is changed to use this API in generated files.

The effect of the optimization:

$ ./python -m timeit "b'a:b:c'.split(b':', maxsplit=1)"
1000000 loops, best of 3: 0.826 usec per loop

Invocations of PyArg_ParseTupleAndKeywords() in non-generated code are kept unchanged, since the new API is not stable yet. Later I'm going to cache parsed format strings and speed up parsing positional arguments too.

----------
components: Interpreter Core
files: faster_keyword_args_parse.patch
keywords: patch
messages: 270832
nosy: serhiy.storchaka
priority: normal
severity: normal
status: open
title: Faster parsing keyword arguments
type: performance
versions: Python 3.6
Added file: http://bugs.python.org/file43794/faster_keyword_args_parse.patch

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27574>
_______________________________________
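[Editorial sketch, for context] Below is a minimal hand-written extension function using the existing PyArg_ParseTupleAndKeywords() API that the message discusses, plus a comment showing what a cached-keywords variant could look like. The module and function names, and the cached-parser identifiers in the trailing comment (_PyArg_Parser, _PyArg_ParseTupleAndKeywordsFast), are illustrative assumptions, not taken from the attached patch.

#include <Python.h>

/* Conventional keyword-argument parsing.  On every call,
 * PyArg_ParseTupleAndKeywords() has to turn each C string in the
 * keywords array into a Python string in order to look it up in
 * kwargs -- the per-call cost described in the message above. */
static PyObject *
demo_split(PyObject *module, PyObject *args, PyObject *kwargs)
{
    static char *keywords[] = {"sep", "maxsplit", NULL};
    PyObject *sep = Py_None;
    Py_ssize_t maxsplit = -1;

    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|On:split",
                                     keywords, &sep, &maxsplit)) {
        return NULL;
    }
    /* ... real work would go here; just echo maxsplit back ... */
    return PyLong_FromSsize_t(maxsplit);
}

/* A cached-keywords variant, in the spirit of the proposed patch,
 * would hoist the keyword names into a static parser object that is
 * initialized once and reused on later calls.  The names below are
 * illustrative assumptions, not necessarily the patch's actual API:
 *
 *     static const char * const _keywords[] = {"sep", "maxsplit", NULL};
 *     static _PyArg_Parser _parser = {"|On:split", _keywords, 0};
 *     if (!_PyArg_ParseTupleAndKeywordsFast(args, kwargs, &_parser,
 *                                           &sep, &maxsplit)) {
 *         return NULL;
 *     }
 */

static PyMethodDef demo_methods[] = {
    {"split", (PyCFunction)demo_split, METH_VARARGS | METH_KEYWORDS, NULL},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef demo_module = {
    PyModuleDef_HEAD_INIT, "demo", NULL, -1, demo_methods
};

PyMODINIT_FUNC
PyInit_demo(void)
{
    return PyModule_Create(&demo_module);
}

The gain the message reports comes from building the keyword-name string objects once per function and reusing them across calls, instead of recreating them on every call.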