Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
TL;DR version: I'm now +1 on a string-based PEP 563, with one relatively small quibble regarding the future flag's name.

Putting that quibble first: could we adjust the feature flag to be either "from __future__ import lazy_annotations" or "from __future__ import str_annotations"? Every time I see "from __future__ import annotations" I think "But we've had annotations since 3.0, why would they need a future import?". Adding the "lazy_" or "str_" prefix makes the feature flag self-documenting: it isn't the annotations support that's new, it's the fact that the interpreter will avoid evaluating them at runtime by treating them as implicitly quoted strings at compile time.

See inline comments for clarifications on what I was attempting to propose in relation to thunks, and more details on why I changed my mind :)

On 9 November 2017 at 14:16, Guido van Rossum wrote:
> On Wed, Nov 8, 2017 at 5:49 PM, Nick Coghlan wrote:
>>
>> On 8 November 2017 at 16:24, Guido van Rossum wrote:
>> > I also don't like the idea that there's nothing you can do with a thunk
>> > besides calling it -- you can't meaningfully introspect it (not without
>> > building your own bytecode interpreter anyway).
>>
>> Wait, that wasn't what I was suggesting at all - with thunks exposing
>> their code object the same way a function does (i.e. as a `__code__`
>> attribute), the introspection functions in `dis` would still work on
>> them, so you'd be able to look at things like which variable names
>> they referenced, thus granting the caller complete control over *how*
>> they resolved those variable names (by setting them in the local
>> namespace passed to the call).
>
> I understood that they would be translated to `lambda: `. It seems you
> have a slightly more complex idea but if you're suggesting introspection
> through dis, that's too complicated for my taste.
Substituting in a lambda expression wouldn't work for the reasons you gave when you objected to that idea (there wouldn't be any way for typing.get_type_hints() to inject "vars(cls)" when evaluating the annotations for method definitions, and enabling a cell-based alternative would be a really intrusive change).

>> This is why they'd have interesting potential future use cases as
>> general purpose callbacks - every local, nonlocal, global, and builtin
>> name reference would implicitly be an optional parameter (or a
>> required parameter if the name couldn't be resolved as a nonlocal,
>> global, or builtin).
>
> Yeah, but that's scope creep for PEP 563. Łukasz and I are interested in
> gradually restricting the use of annotations to static typing with an
> optional runtime component. We're not interested in adding different use
> cases. (We're committed to backwards compatibility, but only until 4.0, with
> a clear deprecation path.)

Sorry, that was ambiguous wording on my part: the "potential future use cases" there related to thunks in general, not their use for annotations in particular. APIs like pandas.query are a more meaningful example of where thunks are potentially useful (and that's a problem I've been intermittently pondering since Fernando Perez explained it to me at SciPy a few years back - strings are an OK'ish workaround, but losing syntax highlighting, precompiled code object caching, and other benefits of real Python expressions means they *are* a workaround).
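The string-based behaviour under discussion can be sketched in a few lines (a minimal illustration of PEP 563 semantics, assuming Python 3.7+ where the `annotations` future import exists; the class name `C` is invented for the example):

```python
from __future__ import annotations

import typing

class C:
    # Under PEP 563 this forward reference needs no quotes, because the
    # annotation is never evaluated at class definition time.
    attr: C

# The raw annotation is stored as a string...
print(C.__annotations__['attr'])

# ...and typing.get_type_hints() evaluates it on demand, resolving names
# in the defining module's namespace (and, for methods, the class namespace).
print(typing.get_type_hints(C)['attr'])
```

This is where the vars(cls) injection mentioned above happens: get_type_hints() supplies the class namespace when it evaluates the stored strings, which is exactly what a plain lambda substitution could not do.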
>> Instead, thunks would offer all the same introspection features as
>> lambda expressions do, they'd just differ in the following ways:
>>
>> * the parameter list on their code objects would always be empty
>> * the parameter list for their __call__ method would always be "ns=None"
>> * they'd be compiled without CO_OPTIMIZED (the same as a class namespace)
>> * they'd look up their closure references using LOAD_CLASSDEREF (the
>>   same as a class namespace)
>
> I don't understand the __call__ with "ns=None" thing but I don't expect it
> matters.

It was an attempted shorthand for the way thunks could handle the method annotations use case in a way that regular lambda expressions can't: "thunk(vars(cls))" would be roughly equivalent to "exec(thunk.__code__, thunk.__globals__, vars(cls))", similar to the way class body evaluation works like "exec(body.__code__, body.__globals__, mcl.__prepare__())".

That doesn't make a difference to your decision in relation to PEP 563, though.

>> That leaves the door open to a future PEP that proposes thunk-based
>> annotations as part of proposing thunks as a new low level delayed
>> evaluation primitive.
>
> Sorry, that's not a door I'd like to leave open.

At this point, I'd expect any successful PEP for the thunks idea to offer far more compelling use cases than type annotations - the key detail for me is that even if PEP 563 says "Lazy evaluation as strings means that type annotations do not support lexical closures", injecting attributes into class namespaces will still offer a way for devs to emulate closure references if they really want them. It's also
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
On 9 November 2017 at 02:17, Barry Warsaw wrote:
> I suppose there are lots of ways to do this, but at least I’m pretty sure we
> all agree that end users shouldn’t see DeprecationWarnings, while developers
> should.

Agreed. Most of the debate seems to me to be around who is an end user and who is a developer (and whether someone can be both at the same time).

In my opinion, I am a developer of any code I write, but an end user of any code I get from others (whether that be a library or a full-blown application). However, the problem is that Python can't tell what code I wrote. Enabling warnings just for __main__ takes a conservative view of what counts as "code I wrote", while not being as conservative as the current approach (which is basically "assume I'm an end user unless I explicitly say I'm not"). My preference is to be conservative, so the proposed change is OK with me.

Paul

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
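For context, the behaviour under discussion is controlled by the warnings filter: DeprecationWarning has been silenced by default since Python 2.7/3.2, and the proposal is to re-enable it for code in __main__ only. A minimal sketch of the mechanics (the function name old_api is invented for the example):

```python
import warnings

def old_api():
    # Library code flags its own deprecation; stacklevel=2 attributes the
    # warning to the caller rather than to this helper.
    warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)

# With the end-user default, the warning is filtered out and nothing is
# recorded...
with warnings.catch_warnings(record=True) as silenced:
    warnings.simplefilter("ignore", DeprecationWarning)
    old_api()

# ...whereas a developer-oriented default surfaces it.
with warnings.catch_warnings(record=True) as shown:
    warnings.simplefilter("always", DeprecationWarning)
    old_api()

print(len(silenced), len(shown))  # 0 1
```

The same toggle is available from the command line via `python -W default::DeprecationWarning`, which is effectively what the proposal would apply automatically, but only to warnings raised from __main__.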
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
Guido van Rossum wrote:
> I did not assume totally opaque -- but code objects are not very
> introspection friendly (and they have no strong compatibility guarantees).

If I understand the proposal correctly, there wouldn't be any point in trying to introspect the lambdas/thunks/whatever. They're only there to provide a level of lazy evaluation. You would evaluate them and then introspect the returned data structure.

-- Greg
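Greg's "evaluate, then introspect the result" point can be sketched directly: treat each annotation as a zero-argument callable (standing in here for a compiler-generated thunk), call it, and inspect the returned object rather than the callable's bytecode. The names below are invented for the example:

```python
# A hand-written stand-in for a compiler-generated annotation thunk.
thunk = lambda: {"x": int, "y": str}

# No need to pick apart thunk.__code__; just evaluate the thunk and
# introspect the resulting data structure.
hints = thunk()
print(sorted(hints))      # ['x', 'y']
print(hints["x"] is int)  # True
```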
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
On 8 November 2017 at 19:21, Antoine Pitrou wrote:
> The idea that __main__ scripts should get special treatment here is
> entirely gratuitous.

When I'm writing an app in Python, very often my __main__ is just a stub that imports the actual functionality from another module to get the benefits of a pyc. So enabling deprecation warnings for __main__ only wouldn't result in me seeing any more warnings.

-- Greg
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
09.11.17 04:08, Raymond Hettinger wrote:
> On Nov 8, 2017, at 8:30 AM, Serhiy Storchaka wrote:
>> Macros Py_SETREF and Py_XSETREF were introduced in 3.6 and backported to all
>> maintained versions ([1] and [2]). Despite their names they are private. I
>> think that they are stable enough now and would be helpful in third-party
>> code. Are there any objections against adding them to the stable C API? [3]
>
> I have mixed feeling about this. You and Victor seem to really like these
> macros, but they have been problematic for me. I'm not sure whether it is a
> conceptual issue or a naming issue, but the presence of these macros impairs
> my ability to read code and determine whether the refcounts are correct. I
> usually end up replacing the code with the unrolled macro so that I can count
> the refs across all the code paths.

If the problem is with naming, what names do you prefer? This was already bikeshedded (I insisted on discussing names before introducing the macros), but maybe now you have better ideas?

The current code contains 212 usages of Py_SETREF and Py_XSETREF. Maybe 10% of them correctly used temporary variables before introducing these macros, and the macros just made the code shorter. But in the rest of the cases the refcount was decremented before setting the new value. This didn't always cause a problem, but it is too hard to prove that such code is safe in every concrete case. Unrolling all these invocations would make the code larger and more cumbersome, and it is hard to do automatically.

> The other issue is that when there are multiple occurrences of these macros
> for multiple variables, it interferes with my best practice of deferring all
> decrefs until the data structures are in a fully consistent state. Any one
> of these can cause arbitrary code to run. I greatly prefer putting all the
> decrefs at the end to increase my confidence that it is okay to run other
> code that might reenter the current code. Pure python functions effectively
> have this built-in because the locals all get decreffed at the end of the
> function when a return-statement is encountered. That practice helps me
> avoid hard to spot re-entrancy issues.

I agree with you. If you need to set two or more attributes synchronously, Py_SETREF will not help you. This should be clearly explained in the documentation. Several subsequent Py_SETREFs may be an error. When I created my patches for using Py_SETREF I encountered several such cases and used different code for them. Maybe some not completely correct code still remains, but in any case it is better now than before introducing Py_SETREF. But in many cases you need to set only one attribute, or the different attributes are not tightly related.

> Lastly, I think we should have a preference to not grow the stable C API.
> Bigger APIs are harder to learn and remember, not so much for you and Victor
> who use these frequently, but for everyone else who has to lookup all the
> macros whose function isn't immediately self-evident.

I agree with you again. But these macros are pretty helpful: they make it easy to write safer code. And they are more used than any other C API addition in 3.5, 3.6, and 3.7:

3.7:
Py_X?SETREF -- 216
Py_UNREACHABLE -- 65
Py_RETURN_RICHCOMPARE -- 15
PyImport_GetModule -- 28
PyTraceMalloc_(T|Unt)rack -- 9
PyOS_(BeforeFork|AfterFork_(Parent|Child)) -- 24
Py_tss_NEEDS_INIT -- 6
PyInterpreterState_GetID -- 3

3.6:
PySlice_Unpack -- 22
PySlice_AdjustIndices -- 22
PyErr_SetImportErrorSubclass -- 3
PyErr_ResourceWarning -- 6
PyOS_FSPath -- 9
Py_FinalizeEx -- 21
Py_MEMBER_SIZE -- 4

3.5:
PyCodec_NameReplaceErrors -- 3
PyErr_FormatV -- 6
PyCoro_New -- 3
PyCoro_CheckExact -- 14
PyModuleDef_Init -- 30
PyModule_FromDefAndSpec2? -- 10
PyModule_ExecDef -- 3
PyModule_SetDocString -- 4
PyModule_AddFunctions -- 3
PyNumber_MatrixMultiply -- 4
PyNumber_InPlaceMatrixMultiply -- 4
Py_DecodeLocale -- 25
Py_EncodeLocale -- 11

Some older macros:
Py_STRINGIFY -- 15
Py_ABS -- 54
Py_MIN -- 66
Py_MAX -- 56

The above numbers include the declaration and definition, hence subtract 2 or 3 per name to get the number of usages. Only Py_UNUSED (added in 3.4) beats them (291), because it is used in generated Argument Clinic code.
Re: [Python-Dev] Clarifying Cygwin support in CPython
On Wed, Nov 8, 2017 at 5:28 PM, Zachary Ware wrote:
> On Wed, Nov 8, 2017 at 8:39 AM, Erik Bray wrote:
>> a platform--in particular it's not clear when a buildbot is considered
>> "stable", or how to achieve that without getting necessary fixes
>> merged into the main branch in the first place.
>
> I think in this context, "stable" just means "keeps a connection to
> the buildbot master and doesn't blow up when told to build" :). As
> such, I'm ready to get you added to the fleet whenever you are.

"Doesn't blow up when told to build" is the tricky part, because there are a few tests that are known to cause the test suite process to hang until killed. It's not clear to me whether, even with the --timeout option, the test runner will kill hanging processes (I haven't actually tried this, so I'll double-check, but I'm pretty sure it does not). So until at least those issues are resolved, I'd hesitate to call it "stable".

Thanks,
Erik
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
Recently, Oren Milman fixed multiple bugs where an __init__() method was called twice. IMHO Py_SETREF() was nicely used in __init__():

https://github.com/python/cpython/commit/e56ab746a965277ffcc4396d8a0902b6e072d049
https://github.com/python/cpython/commit/c0cabc23bbe474d542ff8a4f1243f4ec3cce5549

While it's possible to rewrite the code *correctly* without Py_SETREF(), it would be much more verbose. Here the fix remains a single line:

- self->archive = filename;
+ Py_XSETREF(self->archive, filename);

Victor

2017-11-09 3:08 GMT+01:00 Raymond Hettinger :
>
>> On Nov 8, 2017, at 8:30 AM, Serhiy Storchaka wrote:
>>
>> Macros Py_SETREF and Py_XSETREF were introduced in 3.6 and backported to all
>> maintained versions ([1] and [2]). Despite their names they are private. I
>> think that they are enough stable now and would be helpful in third-party
>> code. Are there any objections against adding them to the stable C API? [3]
>
> I have mixed feeling about this. You and Victor seem to really like these
> macros, but they have been problematic for me. I'm not sure whether it is a
> conceptual issue or a naming issue, but the presence of these macros impairs
> my ability to read code and determine whether the refcounts are correct. I
> usually end up replacing the code with the unrolled macro so that I can count
> the refs across all the code paths.
>
> The other issue is that when there are multiple occurrences of these macros
> for multiple variables, it interferes with my best practice of deferring all
> decrefs until the data structures are in a fully consistent state. Any one
> of these can cause arbitrary code to run. I greatly prefer putting all the
> decrefs at the end to increase my confidence that it is okay to run other
> code that might reenter the current code. Pure python functions effectively
> have this built-in because the locals all get decreffed at the end of the
> function when a return-statement is encountered. That practice helps me
> avoid hard to spot re-entrancy issues.
>
> Lastly, I think we should have a preference to not grow the stable C API.
> Bigger APIs are harder to learn and remember, not so much for you and Victor
> who use these frequently, but for everyone else who has to lookup all the
> macros whose function isn't immediately self-evident.
>
> Raymond
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
2017-11-09 3:08 GMT+01:00 Raymond Hettinger :
> I greatly prefer putting all the decrefs at the end to increase my confidence
> that it is okay to run other code that might reenter the current code.

There are 3 patterns to update C attributes of an object:

(1)

    Py_XDECREF(obj->attr);   // can call Python code
    obj->attr = new_value;

or (2)

    old_value = obj->attr;
    obj->attr = new_value;
    Py_XDECREF(old_value);   // can call Python code

or (3)

    old_value = obj->attr;
    obj->attr = new_value;
    ...   // The assumption here is that nothing here
    ...   // can call arbitrary Python code

    // Finally, after setting all other attributes
    Py_XDECREF(old_value);   // can call Python code

Pattern (1) is likely to be vulnerable to reentrancy issues: Py_XDECREF() can call arbitrary Python code indirectly via the garbage collector, while the object being modified contains a *borrowed* reference instead of a *strong* reference, or can even refer to an object which was just destroyed.

Pattern (2) is better: the object always keeps a strong reference, *but* the modified attribute can be inconsistent with other attributes. At least, you prevent hard crashes.

Pattern (3) is likely the most correct way to write C code implementing a Python object... but it's harder to write such code correctly :-( You have to be careful not to leak a reference.

If I understood correctly, the purpose of the Py_SETREF() macro is not to replace (3) with (2), but to fix all incorrect code written as (1). If I recall correctly, Serhiy modified a *lot* of code written as (1) when he implemented Py_SETREF().

> Pure python functions effectively have this built-in because the locals all
> get decreffed at the end of the function when a return-statement is
> encountered. That practice helps me avoid hard to spot re-entrancy issues.

Unless you use a lock, all Python methods are written as (2): a different thread or a signal handler is likely to see the object as inconsistent when it is accessed between two instructions modifying the object's attributes. Example:

    def __init__(self, value):
        self.value = value
        self.double = value * 2

    def increment(self):
        self.value += 1
        # object inconsistent here
        self.double += 2

The increment() method is not atomic: if the object is accessed at "# object inconsistent here", the object is seen in an inconsistent state.

Victor
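Victor's point about pattern (1) can also be made concrete in pure Python: releasing the last reference to the old value can trigger arbitrary code (a finalizer here), so the safe ordering installs the new value first. A minimal sketch, relying on CPython's immediate refcounting (the class names Noisy and Holder are invented for the example):

```python
log = []

class Noisy:
    def __del__(self):
        # Stands in for the "arbitrary Python code" that Py_XDECREF can
        # trigger; in real C code this could reenter the object being updated.
        log.append("finalizer ran")

class Holder:
    def __init__(self):
        self.attr = Noisy()

    def replace(self, new):
        # Python analogue of the Py_SETREF ordering: install the new value
        # first, then drop the last reference to the old one, so any code
        # triggered by the teardown sees a consistent object.
        old = self.attr
        self.attr = new
        del old  # the old Noisy's refcount hits zero here (on CPython)

h = Holder()
h.replace(None)
print(log)  # ['finalizer ran'] on CPython
```

On PyPy or another implementation without reference counting, the finalizer would run at some later point, which is exactly why the C-level ordering matters on CPython.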
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
> On Nov 9, 2017, at 2:44 AM, Serhiy Storchaka wrote:
>
> If the problem is with naming, what names do you prefer? This already was
> bikeshedded (I insisted on discussing names before introducing the macros),
> but maybe now you have better ideas?

It didn't really seem like a bad idea until after you swept through the code with 200+ applications of the macro and I saw how unclear the results were. Even code that I wrote myself is now harder for me to grok (for example, the macro was applied 17 times to already correct code in itertools).

We used to employ a somewhat plain coding style that was easy to walk through, but the following examples seem opaque. I find it takes practice to look at any one of these and say that it is unequivocally correct (were the function error return arguments handled correctly, are the typecasts proper, at what point can a reentrant call occur, which is the source operand and which is the destination, is the macro using either of the operands twice, is the destination operand an allowable lvalue, do I need to decref the source operand afterwards, etc):

    Py_SETREF(((PyHeapTypeObject*)type)->ht_name, value)
    Py_SETREF(newconst, PyFrozenSet_New(newconst));
    Py_XSETREF(c->u->u_private, s->v.ClassDef.name);
    Py_SETREF(*p, t);
    Py_XSETREF(self->lineno, PyTuple_GET_ITEM(info, 1));
    Py_SETREF(entry->path, PyUnicode_EncodeFSDefault(entry->path));
    Py_XSETREF(self->checker, PyObject_GetAttrString(ob, "_check_retval_"));
    Py_XSETREF(fut->fut_source_tb, _PyObject_CallNoArg(traceback_extract_stack));

Stylistically, all of these seem awkward, and I think there is more to it than just the name. I'm not sure it is wise to pass complex inputs into a two-argument macro that makes an assignment and has a conditional refcount side-effect. Even now, one of the above looks to me like it might not be correct.

Probably, we're the wrong people to be talking about this. The proposal is to make these macros part of the official API so that they start to appear in source code everywhere. The question isn't whether the above makes sense to you and me; instead, it is whether other people can make heads or tails out of the above examples. As a result of making the macros official, will the Python world have a net increase in complexity or a decrease in complexity?

My personal experience with the macros hasn't been positive. Perhaps everyone else thinks it's fine. If so, I won't stand in your way.

Raymond
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
On Thu, 9 Nov 2017 04:22:20 -0800 Raymond Hettinger wrote:
>
> Probably, we're the wrong people to be talking about this. The proposal is
> to make these macros part of the official API so that it starts to appear in
> source code everywhere. The question isn't whether the above makes sense to
> you and me; instead, it is whether other people can make heads or tails out
> the above examples.

Generally I would advocate that anyone wanting to write a third-party C extension, but not very familiar with the C API and its quirks, use Cython instead. I'm not sure if that's an argument for the SETREF APIs to remain private or to become public :-)

Regards

Antoine.
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
Hum, to give more context to the discussion, the two discussed macros
are documented this way:
#ifndef Py_LIMITED_API
/* Safely decref `op` and set `op` to `op2`.
*
* As in case of Py_CLEAR "the obvious" code can be deadly:
*
* Py_DECREF(op);
* op = op2;
*
* The safe way is:
*
* Py_SETREF(op, op2);
*
* That arranges to set `op` to `op2` _before_ decref'ing, so that any code
* triggered as a side-effect of `op` getting torn down no longer believes
* `op` points to a valid object.
*
* Py_XSETREF is a variant of Py_SETREF that uses Py_XDECREF instead of
* Py_DECREF.
*/
#define Py_SETREF(op, op2) \
    do { \
        PyObject *_py_tmp = (PyObject *)(op); \
        (op) = (op2); \
        Py_DECREF(_py_tmp); \
    } while (0)

#define Py_XSETREF(op, op2) \
    do { \
        PyObject *_py_tmp = (PyObject *)(op); \
        (op) = (op2); \
        Py_XDECREF(_py_tmp); \
    } while (0)

#endif /* ifndef Py_LIMITED_API */
Victor
2017-11-09 13:22 GMT+01:00 Raymond Hettinger :
>
>> On Nov 9, 2017, at 2:44 AM, Serhiy Storchaka wrote:
>>
>> If the problem is with naming, what names do you prefer? This already was
>> bikeshedded (I insisted on discussing names before introducing the macros),
>> but may now you have better ideas?
>
> It didn't really seem like a bad idea until after you swept through the code
> with 200+ applications of the macro and I saw how unclear the results were.
> Even code that I wrote myself is now harder for me to grok (for example, the
> macro was applied 17 times to already correct code in itertools).
>
> We used to employ a somewhat plain coding style that was easy to walk
> through, but the following examples seem opaque. I find it takes practice to
> look at any one of these and say that it is unequivocally correct (were the
> function error return arguments handled correctly, are the typecasts proper,
> at what point can a reentrant call occur, which is the source operand and
> which is the destination, is the macro using either of the operands twice, is
> the destination operand an allowable lvalue, do I need to decref the source
> operand afterwards, etc):
>
> Py_SETREF(((PyHeapTypeObject*)type)->ht_name, value)
> Py_SETREF(newconst, PyFrozenSet_New(newconst));
> Py_XSETREF(c->u->u_private, s->v.ClassDef.name);
> Py_SETREF(*p, t);
> Py_XSETREF(self->lineno, PyTuple_GET_ITEM(info, 1));
> Py_SETREF(entry->path, PyUnicode_EncodeFSDefault(entry->path));
> Py_XSETREF(self->checker, PyObject_GetAttrString(ob, "_check_retval_"));
> Py_XSETREF(fut->fut_source_tb,
> _PyObject_CallNoArg(traceback_extract_stack));
>
> Stylistically, all of these seem awkward and I think there is more to it than
> just the name. I'm not sure it is wise to pass complex inputs into a
> two-argument macro that makes an assignment and has a conditional refcount
> side-effect. Even now, one of the above looks to me like it might not be
> correct.
>
> Probably, we're the wrong people to be talking about this. The proposal is
> to make these macros part of the official API so that it starts to appear in
> source code everywhere. The question isn't whether the above makes sense to
> you and me; instead, it is whether other people can make heads or tails out
> the above examples. As a result of making the macros official, will the
> Python world have a net increase in complexity or decrease in complexity?
>
> My personal experience with the macros hasn't been positive. Perhaps
> everyone else thinks it's fine. If so, I won't stand in your way.
>
>
> Raymond
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
09.11.17 14:22, Raymond Hettinger wrote:
> Stylistically, all of these seem awkward and I think there is more to it than
> just the name. I'm not sure it is wise to pass complex inputs into a
> two-argument macro that makes an assignment and has a conditional refcount
> side-effect. Even now, one of the above looks to me like it might not be
> correct.

If you have found incorrect code, please open an issue and provide a patch. But recently you rewrote correct code (Py_SETREF was not involved) in a more complicated way [1] and rejected my patch that gets rid of the duplication of this complicated code [2]. Please don't "fix" code that is not broken.

[1] https://bugs.python.org/issue26491
[2] https://bugs.python.org/issue31585

> Probably, we're the wrong people to be talking about this. The proposal is
> to make these macros part of the official API so that it starts to appear in
> source code everywhere. The question isn't whether the above makes sense to
> you and me; instead, it is whether other people can make heads or tails out
> the above examples. As a result of making the macros official, will the
> Python world have a net increase in complexity or decrease in complexity?

I'm afraid that these macros will be used in any case, even if they are not part of the official C API, because they are handy. The main purpose of documenting them officially is to explain in what cases these macros are appropriate and make the code more reliable, and in what cases they are not enough and more complex code should be used. This would be a lesson about correctly replacing references. I didn't write this in the source comment because it was intended for experienced Python core developers, and all usages are under our control and pass peer review.
Re: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
On 11/09/2017 01:49 AM, Greg Ewing wrote:
>> On 8 November 2017 at 19:21, Antoine Pitrou wrote:
>>> The idea that __main__ scripts should
>>> get special treatment here is entirely gratuitous.
>
> When I'm writing an app in Python, very often my __main__ is
> just a stub that imports the actual functionality from another
> module to get the benefits of a pyc. So enabling deprecation
> warnings for __main__ only wouldn't result in me seeing any
> more warnings.

IIUC, that would be as expected: you would see the warnings when running your test suite exercising that imported code (which should run with all warnings enabled), but not when running the app. Seems like a reasonable choice to me.

Tres.

--
=== Tres Seaver +1 540-429-0999 [email protected]
Palladion Software "Excellence by Design" http://palladion.com
Re: [Python-Dev] [python-committers] Enabling depreciation warnings feature code cutoff
On Nov 9, 2017, at 07:27, Tres Seaver wrote:
> IIUC, that would be as expected: you would see the warnings when running
> your test suite exercising that imported code (which should run with all
> warnings enabled), but not when running the app.
>
> Seems like a reasonable choice to me.

I’m coming around to that view too. FWIW, I definitely do want to see the DeprecationWarnings in libraries I use, even if I didn’t write them. That lets me help that package’s author identify them, maybe even provide a fix, and lets me evaluate whether maybe some other library is better suited to my needs. It probably does strike the right balance to see that in my own test suite only.

-Barry
Re: [Python-Dev] OrderedDict(kwargs) optimization?
Got it. Thanks!

On Wednesday, November 8, 2017, INADA Naoki wrote:
>> That'd be great for preserving kwargs' order after a pop() or a del?
>
> To clarify, order is preserved after pop in Python 3.6 (and maybe 3.7).
>
> There is discussion about breaking it to optimize for limited use cases,
> but I don't think it's worth discussing more until it demonstrates a real
> performance gain.
>
>> Is there an opportunity to support a fast cast to OrderedDict from a 3.6
>> dict? Can it just copy .keys() into the OrderedDict linked list? Or is
>> there more overhead to the transition?
>
> https://bugs.python.org/issue31265
>
> Regards,
>
> INADA Naoki
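The ordering behaviour being clarified above is easy to check directly: on CPython 3.6+, plain dicts (and therefore **kwargs) preserve insertion order, that order survives pop()/del, and passing the dict to OrderedDict simply re-inserts the items in iteration order. A small sketch (the helper name capture is invented for the example):

```python
from collections import OrderedDict

def capture(**kwargs):
    # On CPython 3.6+ kwargs preserves the call-site keyword order,
    # and that order survives a pop() or del.
    kwargs.pop('b')
    return kwargs

d = capture(a=1, b=2, c=3)
print(list(d))    # ['a', 'c']

# "Casting" to OrderedDict iterates the dict, so the order carries over;
# the items are re-inserted into the OrderedDict's linked list one by one.
od = OrderedDict(d)
od.move_to_end('a')
print(list(od))   # ['c', 'a']
```

The question in the thread is whether that item-by-item re-insertion could be replaced by a bulk copy of the dict's internal order, which is what the linked bpo issue discusses.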
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
On Nov 8, 2017, at 23:57, Nick Coghlan wrote:
> Putting that quibble first: could we adjust the feature flag to be
> either "from __future__ import lazy_annotations" or "from __future__
> import str_annotations"?
>
> Every time I see "from __future__ import annotations" I think "But
> we've had annotations since 3.0, why would they need a future
> import?".

+1 for lazy_annotations for the same reason.

-Barry
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
If we have to change the name I'd vote for string_annotations -- "lazy" has too many other connotations (e.g. it might cause people to think it's the thunks). I find str_annotations too abbreviated, and stringify_annotations is too hard to spell.

On Thu, Nov 9, 2017 at 11:39 AM, Barry Warsaw wrote:
> On Nov 8, 2017, at 23:57, Nick Coghlan wrote:
>
>> Putting that quibble first: could we adjust the feature flag to be
>> either "from __future__ import lazy_annotations" or "from __future__
>> import str_annotations"?
>>
>> Every time I see "from __future__ import annotations" I think "But
>> we've had annotations since 3.0, why would they need a future
>> import?".
>
> +1 for lazy_annotations for the same reason.
>
> -Barry

--
--Guido van Rossum (python.org/~guido)
Re: [Python-Dev] The current dict is not an "OrderedDict"
On 08Nov2017 10:28, Antoine Pitrou wrote: On Wed, 8 Nov 2017 13:07:12 +1000 Nick Coghlan wrote: On 8 November 2017 at 07:19, Evpok Padding wrote: > On 7 November 2017 at 21:47, Chris Barker wrote: >> if dict order is preserved in CPython, people WILL count on it! > > I won't, and if people do and their code breaks, they'll have only themselves > to blame. > Also, what proof do you have of that besides anecdotal evidence? ~27 calendar years of anecdotal evidence across a multitude of CPython API behaviours (as well as API usage in other projects). Other implementation developers don't say "CPython's runtime behaviour is the real Python specification" for the fun of it - they say it because "my code works on CPython, but it does the wrong thing on your interpreter, so I'm going to stick with CPython" is a real barrier to end user adoption, no matter what the language specification says. Yet, PyPy has no reference counting, and it doesn't seem to be a cause of concern. Broken code is fixed along the way, when people notice. I'd expect that this may be because that would merely cause temporary memory leakage or differently timed running of __del__ actions. Neither of which normally affects semantics critical to the end result of most programs. However, code which relies on an ordering effect which works in the usual case but (often subtly) breaks in some unusual case can be hard to debug, because (a) recognising the salient error situation may be hard to do and (b) reasoning about the failure is difficult when the language semantics are not what you thought they were. I think the two situations are not as parallel as you think. Cheers, Cameron Simpson (formerly [email protected])
Re: [Python-Dev] The current dict is not an "OrderedDict"
On Thu, Nov 9, 2017 at 1:46 PM, Cameron Simpson wrote: > On 08Nov2017 10:28, Antoine Pitrou wrote: >> >> On Wed, 8 Nov 2017 13:07:12 +1000 >> Nick Coghlan wrote: >>> >>> On 8 November 2017 at 07:19, Evpok Padding >>> wrote: >>> > On 7 November 2017 at 21:47, Chris Barker >>> > wrote: >>> >> if dict order is preserved in cPython , people WILL count on it! >>> > >>> > I won't, and if people do and their code break, they'll have only >>> > themselves >>> > to blame. >>> > Also, what proof do you have of that besides anecdotal evidence ? >>> >>> ~27 calendar years of anecdotal evidence across a multitude of CPython >>> API behaviours (as well as API usage in other projects). >>> >>> Other implementation developers don't say "CPython's runtime behaviour >>> is the real Python specification" for the fun of it - they say it >>> because "my code works on CPython, but it does the wrong thing on your >>> interpreter, so I'm going to stick with CPython" is a real barrier to >>> end user adoption, no matter what the language specification says. >> >> >> Yet, PyPy has no reference counting, and it doesn't seem to be a cause >> of concern. Broken code is fixed along the way, when people notice. > > > I'd expect that this may be because that would merely cause temporary > memory leakage or differently timed running of __del__ actions. Neither of > which normally affects semantics critical to the end result of most > programs. It's actually a major problem when porting apps to PyPy. The common case is servers that crash because they rely on the GC to close file descriptors, and then run out of file descriptors. IIRC this is the major obstacle to supporting OpenStack-on-PyPy. NumPy is currently going through the process to deprecate and replace a core bit of API [1] because it turns out to assume a refcounting GC. -n [1] See: https://github.com/numpy/numpy/pull/9639 https://mail.python.org/pipermail/numpy-discussion/2017-November/077367.html -- Nathaniel J. Smith -- https://vorpus.org
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
On 10 November 2017 at 01:45, Barry Warsaw wrote: > On Nov 9, 2017, at 07:27, Tres Seaver wrote: > >> IIUC, that would be as expected: you would see the warnings when running >> your test suite exercising that imported code (which should run with all >> warnings enabled), but not when running the app. >> >> Seems like a reasonable choice to me. > > I’m coming around to that view too. FWIW, I definitely do want to see the > DeprecationWarnings in libraries I use, even if I didn’t write them. That > lets me help that package’s author identify them, maybe even provide a fix, > and lets me evaluate whether maybe some other library is better suited to my > needs. It probably does strike the right balance to see that in my own test > suite only. Right, this was my reasoning as well: if someone has gone to the trouble of factoring their code out into a support library, it's reasonable to expect that they'll also write at least a rudimentary test suite for that code. (The one case where that argument falls down is when they only have an integration test suite, and hence run their application in a subprocess, rather than directly in the test runner. However, that's a question for test frameworks to consider: the case can be made that test runners should be setting PYTHONWARNINGS in addition to setting the warning filter in the current process) By contrast, I have quite a bit of __main__-only code, and I routinely use the REPL to check the validity of snippets of code that I plan to use (or advise someone else to use). Those are the cases where the status quo sometimes trips me up, because I forget that I'm *not* getting deprecation warnings. Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia
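Nick's parenthetical about integration suites can be made concrete: a warning filter installed in the test runner's own process does not reach an application run in a subprocess, but the PYTHONWARNINGS environment variable does, because the child interpreter reads it at startup. A sketch (the environment variable and filter syntax are real; the snippet itself is illustrative):

```python
import os
import subprocess
import sys

# A stand-in for "the application under test" emitting a deprecation.
code = "import warnings; warnings.warn('old API', DeprecationWarning)"

env = dict(os.environ)
env["PYTHONWARNINGS"] = "once::DeprecationWarning"  # inherited by the child

result = subprocess.run(
    [sys.executable, "-c", code],
    env=env, capture_output=True, text=True,
)
print("DeprecationWarning" in result.stderr)  # True
```

This is the mechanism an integration-test framework would use to give subprocessed applications the same warning visibility that in-process test runners already provide.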
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
On Nov 8, 2017 16:12, "Nick Coghlan" wrote: On 9 November 2017 at 07:46, Antoine Pitrou wrote: > > On 08/11/2017 at 22:43, Nick Coghlan wrote: >> >> However, between them, the following two guidelines should provide >> pretty good deprecation warning coverage for the world's Python code: >> >> 1. If it's in __main__, it will emit deprecation warnings at runtime >> 2. If it's not in __main__, it should have a test suite > > Nick, have you actually read the discussion and the complaints people > had with the current situation? Most of them *don't* specifically talk > about __main__ scripts. I have, and I've also re-read the discussions regarding why the default got changed in the first place. Behaviour up until 2.6 & 3.1: once::DeprecationWarning Behaviour since 2.7 & 3.2: ignore::DeprecationWarning With test runners overriding the default filters to set it back to "once::DeprecationWarning". Is this intended to be a description of the current state of affairs? Because I've never encountered a test runner that does this... Which runners are you thinking of? The rationale for that change was so that end users of applications that merely happened to be written in Python wouldn't see deprecation warnings when Linux distros (or the end user) updated to a new Python version. It had the downside that you had to remember to opt in to deprecation warnings in order to see them, which is a problem if you mostly use Python for ad hoc personal scripting. Proposed behaviour for Python 3.7+: once::DeprecationWarning:__main__ ignore::DeprecationWarning With test runners still overriding the default filters to set them back to "once::DeprecationWarning". This is a partial reversion back to the pre-2.7 behaviour, focused specifically on interactive use and ad hoc personal scripting. For ad hoc *distributed* scripting, the changed default encourages upgrading from single-file scripts to the zipapp model, and then minimising the amount of code that runs directly in __main__.py.
I expect this will be a sufficient change to solve the specific problem I'm personally concerned by, so I'm no longer inclined to argue for anything more complicated. Other folks may have other concerns that this tweak to the default filters doesn't address - they can continue to build their case for more complex options using this as the new baseline behaviour. I think most people's concern is that we've gotten into a state where DeprecationWarnings are largely useless in practice, because no one sees them. Effectively the norm now is that developers (both the Python core team and downstream libraries) think they're following some sensible deprecation cycle, but often they're actually making changes without any warning, they just wait a year to do it. It's not clear why we're bothering to spread deprecations across multiple releases -- which adds major overhead -- if in practice we aren't going to actually warn most people. Enabling them for another 1% of code doesn't really address this. As I mentioned above, it's also having the paradoxical effect of making it so that end-users are *more* likely to see deprecation warnings, since major libraries are giving up on using DeprecationWarning. Most recently it looks like pyca/cryptography is going to switch, partly as a result of this thread: https://github.com/pyca/cryptography/pull/4014 Some more ideas to throw out there: - if an envvar CI=true is set, then by default make deprecation warnings into errors. (This is an informal standard that lots of CI systems use. Error instead of "once" because most people don't look at CI output at all unless there's an error.) - provide some mechanism that makes it easy to have a deprecation warning that starts out as invisible, but then becomes visible as you get closer to the switchover point. (E.g. CPython might make the deprecation warnings that it issues be invisible in 3.x.0 and 3.x.1 but become visible in 3.x.2+.)
Maybe: # in warnings.py def deprecation_warning(library_version, visible_in_version, change_in_version, msg, stacklevel): ... Then a call like: deprecation_warning(my_library.__version__, "1.3", "1.4", "This function is deprecated", 2) issues an InvisibleDeprecationWarning if my_library.__version__ < 1.3, and a VisibleDeprecationWarning otherwise. (The stacklevel argument is mandatory because the usual default of 1 is always wrong for deprecation warnings.) -n
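Nathaniel's sketch can be fleshed out into something runnable. Everything below is hypothetical: neither warning class exists in the stdlib (the names come straight from his proposal, with NumPy's VisibleDeprecationWarning as prior art), and the version comparison is deliberately naive where a real library would use a proper version parser:

```python
import warnings


class InvisibleDeprecationWarning(DeprecationWarning):
    # Hypothetical: DeprecationWarning subclasses are hidden by default.
    pass


class VisibleDeprecationWarning(UserWarning):
    # Hypothetical: UserWarning subclasses are shown by default.
    pass


def _parse(version):
    # Naive "1.2.3" -> (1, 2, 3); enough for a sketch.
    return tuple(int(part) for part in version.split("."))


def deprecation_warning(library_version, visible_in_version,
                        change_in_version, msg, stacklevel):
    """Quiet early in the deprecation cycle, loud near the switchover."""
    if _parse(library_version) < _parse(visible_in_version):
        category = InvisibleDeprecationWarning
    else:
        category = VisibleDeprecationWarning
    warnings.warn("%s (behaviour changes in %s)" % (msg, change_in_version),
                  category, stacklevel=stacklevel + 1)
```

So `deprecation_warning("1.2", "1.3", "1.4", "This function is deprecated", 2)` stays invisible, while the same call with library version "1.3" or later becomes visible, which is the escalation-over-time behaviour the proposal is after.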
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
On 10 November 2017 at 11:32, Nathaniel Smith wrote: > Is this intended to be a description of the current state of affairs? > Because I've never encountered a test runner that does this... Which runners > are you thinking of? Ah, you're right, pytest currently still requires individual developers to opt-in, rather than switching the defaults: https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings That's not the intention - we expect test runners to switch the defaults the same way unittest does: https://docs.python.org/3/library/unittest.html#unittest.TextTestRunner So that's likely part of the problem. Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
On 10 November 2017 at 11:53, Nick Coghlan wrote: > On 10 November 2017 at 11:32, Nathaniel Smith wrote: >> Is this intended to be a description of the current state of affairs? >> Because I've never encountered a test runner that does this... Which runners >> are you thinking of? > > Ah, you're right, pytest currently still requires individual > developers to opt-in, rather than switching the defaults: > https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings > > That's not the intention - we expect test runners to switch the > defaults the same way unittest does: > https://docs.python.org/3/library/unittest.html#unittest.TextTestRunner Issue filed for pytest here: https://github.com/pytest-dev/pytest/issues/2908 I haven't checked nose2's behaviour. Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia
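The unittest behaviour Nick points at can be observed directly: a DeprecationWarning raised in an imported library module is dropped by the default filters, but unittest's runner resets the filters (to "default", unless -W was given on the command line) and reports it. A sketch using throwaway files (the module and test names here are made up):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

helper = textwrap.dedent("""
    import warnings

    def old_api():
        warnings.warn("old_api is deprecated", DeprecationWarning, stacklevel=2)
""")
tests = textwrap.dedent("""
    import unittest
    import helper

    class T(unittest.TestCase):
        def test_old_api(self):
            helper.old_api()
""")

with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "helper.py"), "w") as f:
        f.write(helper)
    with open(os.path.join(tmp, "test_helper.py"), "w") as f:
        f.write(tests)
    # The warning fires outside __main__, so a plain run would drop it;
    # unittest's runner re-enables it and prints it with the test output.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "test_helper"],
        cwd=tmp, capture_output=True, text=True,
    )

print("DeprecationWarning" in result.stderr)  # True
```

This is the default that the thread wants pytest (and nose2) to match.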
Re: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default
Ethan Furman writes: > Suffering from DeprecationWarnings is not "being hosed". Having > your script/application/framework suddenly stop working because > nobody noticed something was being deprecated is "being hosed". OK, so suffering from DeprecationWarnings is not "being hosed". Nevertheless, it's a far greater waste of my time (supervising students in business and economics with ~50% annual turnover) than is "suddenly stop working", even though it only takes 1-5 minutes each time to explain how to do whatever seems appropriate. "Suddenly stopped working", in fact, hasn't happened to me yet in that environment. It's not hard to understand why: the student downloads Python, and doesn't upgrade within the life cycle of the software they've written. It becomes obsolete upon graduation, and is archived, never to be used again. I don't know how common this kind of environment is, so I can't say it's terribly important, but AIUI Python should be pleasant to use in this context. Unfortunately I have no time to contribute code or even useful ideas to the end of making it more likely that Those Who Can Do Something (a) see the DeprecationWarning and (b) are made sufficiently itchy that they actually scratch, and that Those Who Cannot Do Anything, or are limited to suggesting that something be done, not see it. So I'll shut up now, having contributed this user story. Steve -- Associate Professor Division of Policy and Planning Science http://turnbull.sk.tsukuba.ac.jp/ Faculty of Systems and Information Email: [email protected] University of Tsukuba Tel: 029-853-5175 Tennodai 1-1-1, Tsukuba 305-8573 JAPAN
Re: [Python-Dev] Add Py_SETREF and Py_XSETREF to the stable C API
On 9 November 2017 at 22:35, Antoine Pitrou wrote: > On Thu, 9 Nov 2017 04:22:20 -0800 > Raymond Hettinger wrote: >> >> Probably, we're the wrong people to be talking about this. The proposal is >> to make these macros part of the official API so that it starts to appear in >> source code everywhere. The question isn't whether the above makes sense to >> you and me; instead, it is whether other people can make heads or tails out >> the above examples. > > Generally I would advocate that anyone wanting to write a third-party C > extension, but not very familiar with the C API and its quirks, use > Cython instead. I'm not sure if that's an argument for the SETREF APIs > to remain private or to become public :-) I'm with Antoine on this - we should be pushing folks writing extension modules towards code generators like Cython, cffi, SWIG, and SIP, support libraries like Boost::Python, or safer languages like Rust (which can then be wrapped with cffi), rather than encouraging more bespoke C/C++ extension modules with handcrafted refcount management. There's a reason the only parts of https://packaging.python.org/guides/packaging-binary-extensions/ that have actually been filled in are the ones explaining how to use a tool to write the extension module for you :) For me, that translates to being -1 on making these part of the public API - for code outside CPython, our message should consistently be "Do not roll your own refcount management, get a code generator or library to handle it for you". Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
On 10 November 2017 at 05:51, Guido van Rossum wrote: > If we have to change the name I'd vote for string_annotations -- "lazy" has > too many other connotations (e.g. it might cause people to think it's the > thunks). I find str_annotations too abbreviated, and stringify_annotations > is too hard to spell. Aye, I'd be fine with "from __future__ import string_annotations" - that's even more explicitly self-documenting than either of my suggestions. Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
So... Łukasz? On Thu, Nov 9, 2017 at 6:11 PM, Nick Coghlan wrote: > On 10 November 2017 at 05:51, Guido van Rossum wrote: > > If we have to change the name I'd vote for string_annotations -- "lazy" > has > > too many other connotations (e.g. it might cause people to think it's the > > thunks). I find str_annotations too abbreviated, and > stringify_annotations > > is too hard to spell. > > Aye, I'd be fine with "from __future__ import string_annotations" - > that's even more explicitly self-documenting than either of my > suggestions. > -- --Guido van Rossum (python.org/~guido)
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
Tres Seaver wrote: IIUC, that would be as expected: you would see the warnings when running your test suite exercising that imported code (which should run with all warnings enabled), but not when running the app. But then what benefit is there in turning on deprecation warnings automatically for __main__? -- Greg
Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff
On 10 November 2017 at 14:34, Greg Ewing wrote: > Tres Seaver wrote: >> >> IIUC, that would be as expected: you would see the warnings when running >> your test suite exercising that imported code (which should run with all >> warnings enabled), but not when running the app. > > But then what benefit is there in turning on deprecation > warnings automatically for __main__? Not all code has test suites, most notably: - code entered at the REPL - personal automation scripts - single file Python scripts (as opposed to structured applications) The tests for these are generally either "Did it do what I wanted?" or else a dry-run mode where it prints out what it *would* have done in normal operation. Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia
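Nick's answer can be demonstrated end to end: the very same warnings.warn call is reported when it fires in __main__ (the REPL/script case with no test suite) and dropped when it fires in an imported module (the case a test suite is expected to cover). A sketch (the module name is made up; the behaviour shown matches the defaults that eventually shipped in 3.7):

```python
import os
import subprocess
import sys
import tempfile

snippet = "import warnings; warnings.warn('old API', DeprecationWarning)"

with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "mylib.py"), "w") as f:
        f.write(snippet)

    # Fired directly in __main__ (the ad hoc script case): reported.
    in_main = subprocess.run([sys.executable, "-c", snippet],
                             capture_output=True, text=True)
    # Fired while importing a library module: ignored by the defaults.
    in_lib = subprocess.run([sys.executable, "-c", "import mylib"],
                            cwd=tmp, capture_output=True, text=True)

print("DeprecationWarning" in in_main.stderr)  # True
print("DeprecationWarning" in in_lib.stderr)   # False
```

The benefit of the __main__ carve-out is exactly this asymmetry: ad hoc code gets the warning for free, while library consumers are shielded unless their test suite (or a -W flag) opts back in.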
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
On 11/9/2017 9:11 PM, Nick Coghlan wrote: On 10 November 2017 at 05:51, Guido van Rossum wrote: If we have to change the name I'd vote for string_annotations -- "lazy" has too many other connotations (e.g. it might cause people to think it's the thunks). I find str_annotations too abbreviated, and stringify_annotations is too hard to spell. Aye, I'd be fine with "from __future__ import string_annotations" - that's even more explicitly self-documenting than either of my suggestions. I think this is the best proposed so far. -- Terry Jan Reedy
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
I didn't follow the discussion on the PEP but I was surprised to read "from __future__ import annotations" in an example. Annotations exist since Python 3.0, why would Python 3.7 require a future for them? Well, I was aware of the PEP, but I was confused anyway. I really prefer "from __future__ import string_annotations" ! Victor Le 10 nov. 2017 03:14, "Nick Coghlan" a écrit : > On 10 November 2017 at 05:51, Guido van Rossum wrote: > > If we have to change the name I'd vote for string_annotations -- "lazy" > has > > too many other connotations (e.g. it might cause people to think it's the > > thunks). I find str_annotations too abbreviated, and > stringify_annotations > > is too hard to spell. > > Aye, I'd be fine with "from __future__ import string_annotations" - > that's even more explicitly self-documenting than either of my > suggestions. > > Cheers, > Nick. > > -- > Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations
On 10 November 2017 at 16:42, Victor Stinner wrote: > I didn't follow the discussion on the PEP but I was surprised to read "from > __future__ import annotations" in an example. Annotations exist since Python > 3.0, why would Python 3.7 require a future for them? Well, I was aware of > the PEP, but I was confused anyway. > > I really prefer "from __future__ import string_annotations" ! At risk of complicating matters, I now see that this could be read as "annotations on strings", just as variable annotations are annotations on variable names, and function annotations are annotations on functions. If we decide we care about that possible misreading, then an alternative would be to swap the word order and use "from __future__ import annotation_strings". Cheers, Nick. P.S. I don't think this really matters either way, it just struck me that the reversed order might be marginally clearer, so it seemed worthwhile to mention it. -- Nick Coghlan | [email protected] | Brisbane, Australia
