Re: [Python-Dev] "python.exe is not a valid Win32 app"
Hi all,

On Tue, Dec 1, 2015 at 8:13 PM, Laura Creighton wrote:
> Python 3.5 is not supported on Windows XP. Upgrade your OS or
> stick with 3.4

Maybe this information should be written down somewhere more official? I can't find it on any of these pages:

https://www.python.org/downloads/windows/
https://www.python.org/downloads/release/python-350/
https://www.python.org/downloads/release/python-351/
https://docs.python.org/3/using/windows.html

It is found on the following page, to which googling "python 3.5 windows XP" does not point:

https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems

Instead, that Google query returns various threads on Stack Overflow and elsewhere where users wonder about that very question.

A bientôt,

Armin.

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no)
Hi,

I implemented more constant folding optimizations in my FAT Python project, but it looks like I made a subtle change to Python's semantics.

Replacing "not x == y" with "x != y" changes the behaviour of Python. For example, this optimization breaks test_unittest because unittest.mock._Call implements __eq__() but not __ne__().

Is it expected that "not x.__eq__(y)" can differ from "x.__ne__(y)"? Is that part of Python's semantics?

IMHO it's a bug in the unittest.mock module, but it's "acceptable" because "it just works" :-) So FAT Python must not replace "not x == y" with "x != y", to avoid breaking such code.

Should Python emit a warning when __eq__() is implemented but not __ne__()?

Should Python be modified to call "not __eq__()" when __ne__() is not implemented?

For me, this can be an annoying and subtle bug, hard to track down.

Victor
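The mismatch is easy to reproduce without mock. The class below is hypothetical (it is not the actual unittest.mock._Call code), but it has the same shape of bug: a custom __eq__() combined with a __ne__() inherited from a base class, so that "not x == y" and "x != y" disagree:

```python
class Base:
    def __ne__(self, other):
        return False  # base class supplies its own __ne__

class Weird(Base):
    def __eq__(self, other):
        return False  # custom __eq__, but __ne__ is inherited from Base

x, y = Weird(), Weird()
print(not x == y)  # True:  negation of the custom __eq__
print(x != y)      # False: Base.__ne__ wins; no delegation to __eq__
```

Because __ne__ is found on the base class, Python never falls back to negating __eq__, which is exactly why the "not x == y" -> "x != y" rewrite is unsafe.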
Re: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no)
Oh, I sent my email too quickly: I forgot to ask about the other operations. Currently, FAT implements the following optimizations:

* "not (x == y)" replaced with "x != y"
* "not (x != y)" replaced with "x == y"
* "not (x < y)" replaced with "x >= y"
* "not (x <= y)" replaced with "x > y"
* "not (x > y)" replaced with "x <= y"
* "not (x >= y)" replaced with "x < y"
* "not (x in y)" replaced with "x not in y"
* "not (x not in y)" replaced with "x in y"
* "not (x is y)" replaced with "x is not y"
* "not (x is not y)" replaced with "x is y"

I guess that the optimizations on the "in" and "is" operators are fine, but the optimizations on all the other operators must be removed so as not to break Python's semantics.

Python also has some funny objects like math.nan:

>>> math.nan != math.nan
True
>>> math.nan == math.nan
False
>>> math.nan < math.nan
False
>>> math.nan > math.nan
False
>>> math.nan <= math.nan
False
>>> math.nan >= math.nan
False
>>> math.nan != 1.0
True
>>> math.nan == 1.0
False
>>> math.nan <= 1.0
False
>>> math.nan < 1.0
False
>>> math.nan >= 1.0
False
>>> math.nan > 1.0
False

So "not (math.nan < 1.0)" is different from "math.nan >= 1.0"...

Victor

2015-12-15 14:04 GMT+01:00 Victor Stinner:
> Hi,
>
> I implemented more constant folding optimizations in my FAT Python
> project, but it looks like I made a subtle change in the Python
> semantics.
>
> Replacing "not x == y" with "x != y" changes the behaviour of Python.
> For example, this optimization breaks test_unittest because
> unittest.mock._Call implements __eq__() but not __ne__().
>
> Is it expected that "not x.__eq__(y)" can be different than
> "x.__ne__(y)"? Is it part of the Python semantics?
>
> IMHO it's a bug in the unittest.mock module, but it's "acceptable"
> because "it just works" :-) So FAT Python must not replace "not x ==
> y" with "x != y" to not break the code.
>
> Should Python emit a warning when __eq__() is implemented but not __ne__()?
>
> Should Python be modified to call "not __eq__()" when __ne__() is not
> implemented?
>
> For me, it can be an annoying and subtle bug, hard to track.
>
> Victor
Re: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no)
Hello,
The comparisons >=, <=, >, < cannot be optimized this way. Not every order
is a total order. For example, the sets a = {1, 2} and b = {2, 3} are
incomparable (in the sense that both a >= b and a <= b are False), and that
is not a pathological case.
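The point can be checked directly: for incomparable sets, negating one comparison is not the same as asserting the opposite comparison:

```python
a, b = {1, 2}, {2, 3}

print(a <= b)        # False: a is not a subset of b
print(a >= b)        # False: a is not a superset of b either

# So rewriting "not (a <= b)" as "a > b" changes the result:
print(not (a <= b))  # True
print(a > b)         # False
```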
Regards, Adam Bartoš
[Python-Dev] Urgent: Last call for the CfP of PythonFOSDEM 2016
Hi all,

Because the deadline is imminent and because we have received only a few proposals, we have extended the current deadline. The new submission deadline is 2015-12-20.

Call For Proposals
==================

This is the official call for sessions for the Python devroom at FOSDEM 2016. FOSDEM is the Free and Open source Software Developers' European Meeting, a free and non-commercial two-day weekend event that offers open source contributors a place to meet, share ideas and collaborate. It's the biggest event in Europe, with 5000+ hackers and 400+ speakers.

For this edition, Python will be represented by its community. If you want to talk with a lot of Python users, it's the place to be!

Important dates
===============

* Submission deadline: 2015-12-20
* Acceptance notifications: 2015-12-24

Practical
=========

* Talks will be 30 minutes long, including questions and answers.
* Presentations can be recorded and streamed; sending your proposal implies giving permission to be recorded.
* A mailing list for the Python devroom is available for discussions about devroom organisation. You can register at this address: https://lists.fosdem.org/listinfo/python-devroom

How to submit
=============

All submissions are made in the Pentabarf event planning tool at https://penta.fosdem.org/submission/FOSDEM16

When submitting your talk in Pentabarf, make sure to select the Python devroom as the Track. Of course, if you already have a user account, please reuse it.

Questions
=========

For any questions, please send an email to info AT python-fosdem DOT org.

Thank you for submitting your sessions, and see you soon in Brussels to talk about Python. If you want to stay informed about this edition, you can follow our Twitter account @PythonFOSDEM.

* FOSDEM 2016: https://fosdem.org/2016
* Python Devroom: http://python-fosdem.org
* Twitter: https://twitter.com/PythonFOSDEM

Thank you so much,

Stephane

--
Stéphane Wirtel - http://wirtel.be - @matrixise
Re: [Python-Dev] "python.exe is not a valid Win32 app"
In a message of Tue, 15 Dec 2015 11:46:03 +0100, Armin Rigo writes:
>Hi all,
>
>On Tue, Dec 1, 2015 at 8:13 PM, Laura Creighton wrote:
>> Python 3.5 is not supported on windows XP. Upgrade your OS or
>> stick with 3.4
>
>Maybe this information should be written down somewhere more official?
>I can't find it in any of these pages:
>
>https://www.python.org/downloads/windows/
>https://www.python.org/downloads/release/python-350/
>https://www.python.org/downloads/release/python-351/
>https://docs.python.org/3/using/windows.html
>
>It is found on the following page, to which googling "python 3.5
>windows XP" does not point:
>
>https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems
>
>Instead, the google query above returns various threads on
>stackoverflow and elsewhere where users wonder about that very
>question.
>
>A bientôt,
>
>Armin.

I already asked for that on the bug tracker, but maybe I picked the wrong issue tracker for that request.

So now I have made one here, too:
https://github.com/python/pythondotorg/issues/867

Laura
Re: [Python-Dev] "python.exe is not a valid Win32 app"
On Tue, 15 Dec 2015 15:41:35 +0100, Laura Creighton wrote:
> In a message of Tue, 15 Dec 2015 11:46:03 +0100, Armin Rigo writes:
> >Maybe this information should be written down somewhere more official?
> >I can't find it in any of these pages:
> >
> >https://www.python.org/downloads/windows/
> >https://www.python.org/downloads/release/python-350/
> >https://www.python.org/downloads/release/python-351/
> >https://docs.python.org/3/using/windows.html
> >
> >It is found on the following page, to which googling "python 3.5
> >windows XP" does not point:
> >
> >https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems

That's too bad, since that's the official place such info appears.

> I already asked for that on the bug tracker, but maybe I picked the wrong
> issue tracker for that request.
>
> So now I have made one here, too:
> https://github.com/python/pythondotorg/issues/867

IMO the second is the right one... Although the release managers sometimes adjust the web site, I think this is a web site issue and not a release management issue. I would think that we should have "supported versions" in the 'product description' for both Windows and OS X, but IMO the current way the releases are organized on the web site does not make that easy to achieve in a way that will be useful to end users. That said, I'm not sure whether or not there is a way we could add "supported versions" to the main docs that would make sense and be useful... Your bugs.python.org issue would be useful for discussing that.

--David
Re: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no)
On Tue, Dec 15, 2015 at 8:04 AM, Victor Stinner wrote:
> Is it expected that "not x.__eq__(y)" can be different than
> "x.__ne__(y)"? Is it part of the Python semantic?

In NumPy, `x != y` returns an array of bools, while `not x == y` creates an array of bools and then tries to convert it to a bool, which fails, because a non-singleton NumPy array is not allowed to be converted to a bool. But in the context of `if`, both `not x == y` and `x != y` will fail.

From the docs, on implementing comparison:
https://docs.python.org/3/reference/datamodel.html#object.__ne__

"""
By default, __ne__() delegates to __eq__() and inverts the result unless it is NotImplemented. There are no other implied relationships among the comparison operators, for example, the truth of (x<y or x==y) does not imply x<=y.
"""
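For readers without NumPy at hand, the behaviour described above can be mimicked in pure Python. This is only a sketch (not NumPy's actual implementation): __eq__/__ne__ return an elementwise result whose truth value is ambiguous, so `x != y` succeeds while `not x == y` raises:

```python
class ElementwiseResult:
    def __init__(self, flags):
        self.flags = flags
    def __bool__(self):
        # mimic NumPy: a multi-element result has no single truth value
        raise ValueError("truth value of a multi-element result is ambiguous")

class MiniArray:
    def __init__(self, values):
        self.values = values
    def __eq__(self, other):
        return ElementwiseResult([a == b for a, b in zip(self.values, other.values)])
    def __ne__(self, other):
        return ElementwiseResult([a != b for a, b in zip(self.values, other.values)])

x, y = MiniArray([1, 2]), MiniArray([1, 3])
print((x != y).flags)   # [False, True] -- elementwise, no bool() needed
try:
    not x == y          # `not` forces bool() on the elementwise result
except ValueError as exc:
    print(exc)
```

So for such types the two spellings are not interchangeable even when both __eq__ and __ne__ are defined consistently.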
[Python-Dev] Python for android - successfully cross-compiled without patches
A lot of talk, and patches, around how to cross-compile Python for Android...

Dear python-dev@,

I just want to say thanks to all of you for the high-quality cross-platform code.

Using the alternative Android NDK named CrystaX (home page: https://www.crystax.net ), which provides high-quality POSIX support in comparison with Google's, we managed to cross-compile Python 2.7 and 3.5 completely, without any patches applied.
[Python-Dev] Third milestone of FAT Python
On Sat, Dec 04, 2015 at 7:49 AM, Victor Stinner
wrote:
> Versioned dictionary
> ====================
>
> In the previous milestone of FAT Python, the versioned dictionary was a
> new type inherited from the builtin dict type which added a read-only
> __version__ property (a global "version" of the dictionary, incremented
> at each change), a getversion(key) method (the version of a single key),
> and support for weak references.
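A rough pure-Python approximation of the design quoted above (the real thing is a C type and also tracks per-key versions; this sketch only shows the global counter and weak-reference support, and only covers item assignment and deletion):

```python
import weakref

class VersionedDict(dict):
    # __slots__ keeps instances lean; '__weakref__' enables weak references
    __slots__ = ('_version', '__weakref__')

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._version = 0

    @property
    def __version__(self):
        return self._version  # read-only global version

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._version += 1  # incremented at each change

    def __delitem__(self, key):
        super().__delitem__(key)
        self._version += 1

d = VersionedDict()
d['x'] = 1
d['x'] = 2
del d['x']
print(d.__version__)     # 3: three mutations
ref = weakref.ref(d)     # works, unlike with a plain dict
```

A guard can then cache `d.__version__` and later re-check it: if the number is unchanged, no key was added, removed, or overwritten through these paths.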
I was thinking (as an alternative to versioning dicts) about a dictionary
which would be able to return name/value pairs, which would also be used
internally by the dictionary. This would be far less sensitive to irrelevant
changes in the scope dictionary, but would cost an extra pointer per key.
Here's how it would work:
pair = scope.item(name)
scope[name] = newval
assert pair.value is newval
assert pair.key is name
assert pair is scope.item(name)
# Alternatively, to only create pair objects when `item` is
called, have `==` compare the underlying pair.
del scope[name]
assert pair.key is None
# name-dicts can't have `None` keys
assert pair.value is None
# Alternatively, pair.value is scope.NULL
This dict will allow one to hold references to its entries (with the
caller promising not to change them, enforced by exceptions). You
won't have to keep looking up keys (unless the name is deleted), and
functions are allowed to change. For inlining, you can detect whether
the function has been redefined by testing the saved pair.value
against the saved function, and go into the slow path if needed (or
recompile the inlining).
I am not sure whether deleting from the dict and then re-adding the
same key should reuse the pair container. I think the only potential
issue for the Python version is memory use. There aren't going to be
THAT many names being deleted, right? So I say that deleted entries in
the scope dict should not be removed from the inner dict. I predict
that this will simplify a lot of other things, especially when
deleting and re-adding the same name: if you save a pair and it
becomes invalid, you don't have to do another lookup to make sure that
it's REALLY gone.
If memory is a real concern, deleted pairs can be weakrefed (and saved
in a second dict?) until they are reused. This way, pairs which aren't
saved by something outside will be removed.
As for implementation, a Python version of the idea has probably been
done already. Here are some details:
- set: Internally store d._d[k] = k,v.
- get: Reject k if d._d[k].key is None. (Names must be strings.)
- del: Set d._d[k].key = None and .val = d.NULL to invalidate this entry.
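A minimal Python sketch of those three rules (hypothetical code, just to make the proposal concrete; names like ScopeDict and NULL are mine, not from an existing implementation):

```python
class Pair:
    __slots__ = ('key', 'value')
    def __init__(self, key, value):
        self.key = key
        self.value = value

NULL = object()  # sentinel playing the role of d.NULL

class ScopeDict:
    def __init__(self):
        self._d = {}  # inner dict: name -> Pair

    def __setitem__(self, key, value):
        pair = self._d.get(key)
        if pair is None:
            self._d[key] = Pair(key, value)
        else:
            pair.key, pair.value = key, value  # reuse entry, even a deleted one

    def __getitem__(self, key):
        pair = self._d[key]
        if pair.key is None:  # entry was deleted
            raise KeyError(key)
        return pair.value

    def __delitem__(self, key):
        pair = self._d[key]
        if pair.key is None:
            raise KeyError(key)
        pair.key, pair.value = None, NULL  # invalidate in place

    def item(self, key):
        return self._d[key]  # stable handle callers may hold on to

scope = ScopeDict()
scope['f'] = 1
pair = scope.item('f')
scope['f'] = 2
print(pair.value)        # 2: the held pair sees the update
del scope['f']
print(pair.key is None)  # True: the held pair sees the deletion
```

Note how a saved pair stays valid across updates and deletions of the same name, which is exactly the property the inlining guard needs.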
For the CPython version, CPython's dict already stores its entries as
PyDictKeyEntry (hash, *key, *value), but those entries can move around
on resizing. Two possible implementations:
1. Fork dict to store {hash, *kv_pair}.
2. Use an inner dict (like in the Python suggestion) where values are
kv_pair. Write the indirection code in C, because scope dicts must be
fast.
For exposing a pair to Python code, here are two possibilities:
1. Make them Python objects in the first place.
2. Keep a second hash table in lockstep with the first (so that you
can do a lookup to find the index in the first, and then use that same
index with the second). In this table, store pair objects that have
been created. (They can be weakrefed, as before. Unless it's
impossible to weakref something you're returning?) This will save
memory for pairs that aren't ever exposed. If compact dictionaries are
implemented, the second hash table will be a second array instead.
To use this kind of scopedict, functions would have to store a list of
used names, which is memory overhead. But for what you're doing, some
overhead will be necessary anyway.
Re: [Python-Dev] Python for android - successfully cross-compiled without patches
Wow! Awesome! What specific ISA version(s) and/or device(s) have you tried?

On 12/15/15, Vitaly Murashev wrote:
> A lot of talks and patches around how to cross-compile python for android...
>
> Dear python-dev@,
> I just want to say thanks to all of you for the high quality cross-platform
> code.
>
> Using alternative Android NDK named CrystaX (home page -
> https://www.crystax.net ) which provides high quality posix support in
> comparison with google's one we managed to cross-compile python 2.7 and 3.5
> completely without any patches applied.

--
Regards,

Olemis - @olemislc

Apache™ Bloodhound contributor
http://issues.apache.org/bloodhound
http://blood-hound.net

Brython committer
http://brython.info
http://github.com/brython-dev/brython

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/
Re: [Python-Dev] Python for android - successfully cross-compiled without patches
Olemis Lang writes:
>
> Wow! Awesome! What specific ISA version(s) and/or device(s) have you tried?
>

Hi Olemis,

I'm Dmitry Moskalchuk, initial author and main contributor of the CrystaX NDK. I can provide details if needed.

Answering your question: I assume by ISA you mean "Instruction Set Architecture", is that right?

We've been running Python on ARMv7 (32-bit) and ARMv8 (64-bit) devices, as well as on x86 (32-bit) tablets. We'll run it on x86_64 and MIPS devices too, in time.

We'd like to include comprehensive testing of Python in the automatic regression testing of the CrystaX NDK, and we'd very much appreciate it if you or someone else could point us to documentation or examples of how to do that.

--
Dmitry Moskalchuk
Re: [Python-Dev] Python for android - successfully cross-compiled without patches
On Tue, 15 Dec 2015 at 10:48 Dmitry Moskalchuk wrote:
> We'd like to include comprehensive testing of Python in the automatic
> regression testing of the CrystaX NDK, and we'd very much appreciate it
> if you or someone else could point us to documentation or examples of
> how to do that.

If you want to run the CPython test suite you can look at https://docs.python.org/devguide/runtests.html .
[Python-Dev] async/await behavior on multiple calls
Howdy,
I'm experimenting with async/await in Python 3, and one very surprising
behavior has been what happens when calling `await` twice on an Awaitable.
In C#, Hack/HHVM, and the new async/await spec in ECMAScript 7, an awaitable
can be awaited multiple times and yields the same result each time. In Python,
calling `await` multiple times results in all subsequent awaits getting back
`None`. Here's a small example program:
import asyncio

async def echo_hi():
result = ''
echo_proc = await asyncio.create_subprocess_exec(
'echo', 'hello', 'world',
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.DEVNULL)
result = await echo_proc.stdout.read()
await echo_proc.wait()
return result
async def await_twice(awaitable):
print('first time is {}'.format(await awaitable))
print('second time is {}'.format(await awaitable))
loop = asyncio.get_event_loop()
loop.run_until_complete(await_twice(echo_hi()))
This makes writing composable APIs using async/await in Python very
difficult since anything that takes an `awaitable` has to know that it
wasn't already awaited. Also, since the behavior is radically different
than in the other programming languages implementing async/await it makes
adopting Python's flavor of async/await difficult for folks coming from a
language where it's already implemented.
In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that
can be awaited multiple times and either returns the result or throws any
thrown exceptions. It doesn't appear that the Awaitable class in Python
has a `result` or `exception` field but `asyncio.Future` does.
Would it make sense to shift from having `await` functions return a
*Future-like* object to returning a Future?
Thanks,
Roy
Re: [Python-Dev] async/await behavior on multiple calls
I think this goes back all the way to a debate we had when we were
discussing PEP 380 (which introduced 'yield from', on which 'await' is
built). In fact I believe that the reason PEP 380 didn't make it into
Python 2.7 was that this issue was unresolved at the time (the PEP author
and I preferred the current approach, but there was one vocal opponent who
disagreed -- although my memory is only about 60% reliable on this :-).
In any case, the problem is that in order to implement the behavior you're
asking for, the generator object would have to somehow hold on to its
return value, so that each time __next__ is called after it has already
terminated it can raise StopIteration with the saved return value. This
would extend the lifetime of the returned object indefinitely (until the
generator object itself is GC'ed) in order to handle a pretty obscure
corner case.
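The one-shot nature of a generator's return value is easy to see with plain generators, which is what coroutines are built on:

```python
def gen():
    return 42
    yield  # never reached; only makes this a generator function

g = gen()
try:
    next(g)
except StopIteration as exc:
    print(exc.value)            # 42: the return value rides on StopIteration

print(next(g, 'exhausted'))     # 'exhausted': the value is not kept around
```

Keeping the second call able to re-deliver 42 would require the generator object to store the value for its whole lifetime, which is the cost described above.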
I don't know how long you have been using async/await, but I wonder if it's
possible that you just haven't gotten used to the typical usage patterns?
In particular, your claim "anything that takes an `awaitable` has to know
that it wasn't already awaited" makes it sound like you're just using it in
an atypical way (perhaps because your model is based on other languages).
In typical asyncio code, one does not usually take an awaitable, wait for
it, and then return it -- one either awaits it and then extracts the
result, or one returns it without awaiting it.
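The usual way to get "await many times" semantics in asyncio is to wrap the coroutine in a Task/Future once and await that handle instead; awaiting an already-finished Future simply returns its stored result again. A small sketch (the coroutine and its value are made up for illustration):

```python
import asyncio

async def compute():
    return 42

async def main():
    task = asyncio.ensure_future(compute())  # schedule once, keep the handle
    first = await task
    second = await task  # a Future/Task may be awaited repeatedly
    print(first, second)

asyncio.run(main())
```

This is essentially the Task/AwaitableHandle/Promise behaviour Roy describes from the other languages, opted into explicitly.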
On Tue, Dec 15, 2015 at 11:56 AM, Roy Williams wrote:
> Howdy,
>
> I'm experimenting with async/await in Python 3, and one very surprising
> behavior has been what happens when calling `await` twice on an Awaitable.
> In C#, Hack/HHVM, and the new async/await spec in Ecmascript 7. In Python,
> calling `await` multiple times results in all future results getting back
> `None`. Here's a small example program:
>
>
> async def echo_hi():
> result = ''
> echo_proc = await asyncio.create_subprocess_exec(
> 'echo', 'hello', 'world',
> stdout=asyncio.subprocess.PIPE,
> stderr=asyncio.subprocess.DEVNULL)
> result = await echo_proc.stdout.read()
> await echo_proc.wait()
> return result
>
> async def await_twice(awaitable):
> print('first time is {}'.format(await awaitable))
> print('second time is {}'.format(await awaitable))
>
> loop = asyncio.get_event_loop()
> loop.run_until_complete(await_twice(echo_hi()))
>
> This makes writing composable APIs using async/await in Python very
> difficult since anything that takes an `awaitable` has to know that it
> wasn't already awaited. Also, since the behavior is radically different
> than in the other programming languages implementing async/await it makes
> adopting Python's flavor of async/await difficult for folks coming from a
> language where it's already implemented.
>
> In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that
> can be awaited multiple times and either returns the result or throws any
> thrown exceptions. It doesn't appear that the Awaitable class in Python
> has a `result` or `exception` field but `asyncio.Future` does.
>
> Would it make sense to shift from having `await` functions return a `
> *Future-like`* return object to returning a Future?
>
> Thanks,
> Roy
--
--Guido van Rossum (python.org/~guido)
Re: [Python-Dev] async/await behavior on multiple calls
Hi Roy and Guido,

On 2015-12-15 3:08 PM, Guido van Rossum wrote:
[..]
> I don't know how long you have been using async/await, but I wonder if
> it's possible that you just haven't gotten used to the typical usage
> patterns? In particular, your claim "anything that takes an `awaitable`
> has to know that it wasn't already awaited" makes it sound like you're
> just using it in an atypical way (perhaps because your model is based on
> other languages). In typical asyncio code, one does not usually take an
> awaitable, wait for it, and then return it -- one either awaits it and
> then extracts the result, or one returns it without awaiting it.

I agree. Holding on to a return value just so that a coroutine can return it again seems wrong to me.

However, since coroutines are now a separate type (although they share a lot of code with generators internally), maybe we can change them to throw an error when they are awaited more than once?

That would be better than letting them return `None`:

coro = coroutine()
await coro
await coro  # <- will raise RuntimeError

I'd also add a check that the coroutine isn't being awaited by more than one coroutine simultaneously (another, completely different issue; more on which here: https://github.com/python/asyncio/issues/288). This was fixed in asyncio in debug mode, but ideally we should fix this in the interpreter core.

Yury
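For readers of the archive: in recent CPython versions this suggestion is indeed the observed behaviour. A second await on the same coroutine object raises RuntimeError instead of returning None (the coroutine name here is illustrative):

```python
import asyncio

async def coroutine():
    return 'spam'

async def main():
    coro = coroutine()
    print(await coro)   # 'spam'
    try:
        await coro      # awaiting the same coroutine object again
    except RuntimeError as exc:
        print('RuntimeError:', exc)

asyncio.run(main())
```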
Re: [Python-Dev] Python for android - successfully cross-compiled without patches
On 15/12/15 22:33, Brett Cannon wrote:
> If you want to run the CPython test suite you can look at
> https://docs.python.org/devguide/runtests.html .

Thanks Brett, I'll look into it.

--
Dmitry Moskalchuk
Re: [Python-Dev] async/await behavior on multiple calls
Both of Yury's suggestions sound reasonable.

On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov wrote:
> I agree. Holding a return value just so that coroutine can return it again
> seems wrong to me.
>
> However, since coroutines are now a separate type (although they share a lot
> of code with generators internally), maybe we can change them to throw an
> error when they are awaited on more than one time?
[..]
> I'd also add a check that the coroutine isn't being awaited by more than one
> coroutine simultaneously (another, completely different issue, more on which
> here: https://github.com/python/asyncio/issues/288). This was fixed in
> asyncio in debug mode, but ideally, we should fix this in the interpreter
> core.

--
Thanks,
Andrew Svetlov
Re: [Python-Dev] Third milestone of FAT Python
2015-12-15 12:23 GMT+01:00 Franklin? Lee:
> I was thinking (as an alternative to versioning dicts) about a
> dictionary which would be able to return name/value pairs, which would
> also be internally used by the dictionary. This would be way less
> sensitive to irrelevant changes in the scope dictionary, but cost an
> extra pointer to each key.

Do you have an estimate of the cost of the "extra pointer"? Its impact on memory and CPU? dict is a really important type for the performance of Python. If you make dict slower, I'm sure that Python overall will be slower.

> del scope[name]
> assert pair.key is None

It looks tricky to keep the dict and the pair objects consistent, especially in terms of atomicity. You will need to keep a reference to the pair object in the dict entry, which will also make the dict larger (use more memory), right?

> You won't have to keep looking up keys (unless the name is deleted), and
> functions are allowed to change. For inlining, you can detect whether
> the function has been redefined by testing the saved pair.value
> against the saved function, and go into the slow path if needed (or
> recompile the inlining).

For builtin functions, I also need to detect when a key is created in the global namespace. How do you handle this case with pairs?

> If memory is a real concern, deleted pairs can be weakrefed (and saved
> in a second dict?) until they are reused. This way, pairs which aren't
> saved by something outside will be removed.

Supporting weak references also has a cost on the memory footprint...

For FAT Python, not being able to detect quickly when a new key is created is a blocking point.

Victor
Re: [Python-Dev] async/await behavior on multiple calls
Agreed. (But let's hear from the OP first.)

On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov wrote:
> Both Yury's suggestions sounds reasonable.
>
> On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov wrote:
> > Hi Roy and Guido,
> >
> > On 2015-12-15 3:08 PM, Guido van Rossum wrote:
> > [..]
> >>
> >>
> >> I don't know how long you have been using async/await, but I wonder if
> >> it's possible that you just haven't gotten used to the typical usage
> >> patterns? In particular, your claim "anything that takes an `awaitable`
> >> has to know that it wasn't already awaited" makes me sound that you're
> >> just using it in an atypical way (perhaps because your model is based
> >> on other languages). In typical asyncio code, one does not usually take
> >> an awaitable, wait for it, and then return it -- one either awaits it
> >> and then extracts the result, or one returns it without awaiting it.
> >
> > I agree. Holding a return value just so that coroutine can return it
> > again seems wrong to me.
> >
> > However, since coroutines are now a separate type (although they share
> > a lot of code with generators internally), maybe we can change them to
> > throw an error when they are awaited on more than one time?
> >
> > That should be better than letting them return `None`:
> >
> >     coro = coroutine()
> >     await coro
> >     await coro  # <- will raise RuntimeError
> >
> > I'd also add a check that the coroutine isn't being awaited by more
> > than one coroutine simultaneously (another, completely different issue,
> > more on which here: https://github.com/python/asyncio/issues/288). This
> > was fixed in asyncio in debug mode, but ideally, we should fix this in
> > the interpreter core.
> >
> > Yury
> > ___
> > Python-Dev mailing list
> > [email protected]
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> > https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
>
> --
> Thanks,
> Andrew Svetlov
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido van Rossum (python.org/~guido)
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Third milestone of FAT Python
More thoughts. (Stealing your style of headers.)
Just store a pointer to value
=============================
Instead of having the inner dict store k_v pairs.
In C, the values in our hash tables will be:
struct refcell {
    PyObject *value;  // NULL if deleted
};
It's not necessary to store the key. I think I only had it so I could
mark it None in the Python implementation, to denote a deleted key.
But a deleted entry could just have `cell.value is ScopeDict.NULL` (C:
cell.value == NULL).
The scope dict will own all values which don't have exposed
references, and all exposed references (which own the value they
reference).
(Alternatively, store the value directly in the hash table. If
something asks for a reference to it, replace the value with a
PyObject that refers to it, and flag that entry in the auxiliary hash
table.)
This might be what PyCellObject is, which is how I realized that I
didn't need the key: https://docs.python.org/3.5/c-api/cell.html
Deleting from scope
===================
When deleting a key, don't remove the key from the inner dict, and
just set the reference to NULL.
Outside code should never get the raw C `refcell`, only a Python
object. This makes it possible to clean up unused references when the
dict expands or contracts: for each `refcell`, if it has no Pair
object or its Pair object is not referenced by anything else, and if
its value is NULL (i.e. deleted), don't store it in the new hash
table.
Get pairs before their keys are defined
=======================================
When the interpreter compiles a function, it can request references
which _don't exist yet_. The scope dict would simply create the entry
in its inner dict and fill it in when needed. That means that each
name only needs to be looked up when a function is created.
scope = Scopedict()
f = scope.ref('f')
scope['f'] = float
f.value('NaN')
This would be a memory issue if many functions are created with typo'd
names. But
- You're not making a gigantic number of functions in the first place.
- You'll eventually remove these unused entries when you resize the
inner dict. (See previous section.)
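The two ideas above (store only a pointer-sized cell per name, and hand out references before the name is defined) can be sketched in pure Python. This is only an illustrative model — `ScopeDict` and `RefCell` are hypothetical names, the real thing would live at the C level, and using `None` as the deleted marker (standing in for C's `NULL`) is a toy simplification:

```python
class RefCell:
    # The exposed reference object; its .value plays the role of the
    # C refcell's PyObject *value (None here stands in for NULL).
    __slots__ = ('value',)
    def __init__(self):
        self.value = None

class ScopeDict:
    # Toy model: the inner dict maps names to RefCells, deletion just
    # nulls the cell instead of removing the key.
    def __init__(self):
        self._cells = {}
    def ref(self, name):
        # May be called before the name is ever assigned.
        return self._cells.setdefault(name, RefCell())
    def __setitem__(self, name, value):
        self.ref(name).value = value
    def __getitem__(self, name):
        cell = self._cells.get(name)
        if cell is None or cell.value is None:   # absent or "deleted"
            raise KeyError(name)
        return cell.value
    def __delitem__(self, name):
        self[name]                  # raise KeyError if absent/deleted
        self._cells[name].value = None

scope = ScopeDict()
f = scope.ref('f')      # reference obtained before 'f' is defined
scope['f'] = float      # filled in later; f sees the update
print(f.value('NaN'))   # nan
```

A lookup through `f.value` never re-hashes the name: the cell is fetched once, at "compile" time, and only the pointer dereference happens per use.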
I was concerned about which scope would be responsible for creating
the entry, but it turns out that if you use a name in a function,
every use of that name has to be for the same scope. So the following
causes a NameError:
def f():
    def g(x):
        return abs(x)
    for i in range(30):
        print(g(i))
        if i == 10:
            def abs(x):
                return "abs" + str(x)
        if i == 20:
            del abs
and you can tell which scope to ask for the reference during function
compilation.
Does this simplify closures?
============================
(I haven't yet looked at Python's closure implementation.)
A refcell (C struct) will be exposed as a RefCell (PyObject), which
owns it. This means that RefCell is reference-counted, and if
something saved a reference to it, it will persist even after its
owning dict is deleted. Thus, when a scope dict is deleted, each
refcell without a RefCell object is deleted (and its value is
DecRef'd), and the other ones just have their RefCell object decrement
a reference.
Then closures are free: each inner function has refcounted references
to the cells that it uses, and it doesn't need to know whether its
parent is alive.
(The implementation of closures involves cell objects.)
Overhead
========
If inner functions are being created a lot, that's extra work. But I
guess you should expect a lot of overhead if you're doing such a
thing.
Readonly refs
=============
It might be desirable to have refs that are allowed to write (e.g.
from `global` and `nonlocal`) and refs that aren't.
The CellObject would just hold a count for the number of writing refs,
separate from the number of refs. This might result in some
optimizations for values with no writing refs. For example, it's
possible to implement copying of dicts as a shallow copy if it's KNOWN
that any modification would result in a call to its set/del functions,
which would initiate a deep copy.
On Tue, Dec 15, 2015 at 3:29 PM, Victor Stinner
wrote:
> 2015-12-15 12:23 GMT+01:00 Franklin? Lee :
>> I was thinking (as an alternative to versioning dicts) about a
>> dictionary which would be able to return name/value pairs, which would
>> also be internally used by the dictionary. This would be way less
>> sensitive to irrelevant changes in the scope dictionary, but cost an
>> extra pointer to each key.
>
> Do you have an estimation of the cost of the "extra pointer"? Impact
> on memory and CPU. dict is really a very important type for the
> performance of Python. If you make dict slower, I'm sure that Python
> overall will be slower.
I'm proposing it as a subclass.
> It looks tricky to keep the dict and the pair objects consistent,
> especially in term of atomaticity. You will need to keep a reference
> to the pair object in the dict entry, which will also make the dict
> larger (use more memory), right?
Re: [Python-Dev] Third milestone of FAT Python
2015-12-15 22:10 GMT+01:00 Franklin? Lee :
> (Stealing your style of headers.)

I'm using reStructured Text, it's not really a new style :-)

> Overhead
>
> If inner functions are being created a lot, that's extra work. But I
> guess you should expect a lot of overhead if you're doing such a
> thing.

Sorry, I didn't read carefully your email, but I don't think that it's
acceptable to make Python namespaces slower. In FAT mode, we need
versioned dictionaries for module namespace, type namespace, global
namespace, etc.

>> Do you have an estimation of the cost of the "extra pointer"? Impact
>> on memory and CPU. dict is really a very important type for the
>> performance of Python. If you make dict slower, I'm sure that Python
>> overall will be slower.
>
> I'm proposing it as a subclass.

Please read the "Versionned dictionary" section of my email:
https://mail.python.org/pipermail/python-dev/2015-December/142397.html

I explained why using a subclass doesn't work in practice.

Victor
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] async/await behavior on multiple calls
I think there may be somewhat of a language barrier here. OP appears to be
mixing the terms of coroutines and futures. The behavior OP describes is
that of promises or async tasks in other languages.
Consider a JS promise that has been resolved:
promise.then(function (value) {...});
promise.then(function (value) {...});
Both of the above will execute the callback function with the resolved
value regardless of how much earlier the promise was resolved. This is not
entirely different from how Futures work in Python when using
'add_done_callback'.
The code example from OP, however, is showing the behaviour of awaiting a
coroutine twice rather than awaiting a Future twice. Both objects are
awaitable but both exhibit different behaviour when awaited multiple times.
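That difference is easy to demonstrate. The sketch below uses the modern `asyncio.run()` entry point; note that on current CPython the second await of a bare coroutine raises RuntimeError (the behavior Yury suggests elsewhere in this thread was later adopted), whereas in 3.5.0 it produced `None`:

```python
import asyncio

async def compute():
    return 42

async def main():
    # A Future/Task can be awaited repeatedly once it has a result.
    task = asyncio.ensure_future(compute())
    assert await task == 42
    assert await task == 42   # same result again, no re-execution

    # A bare coroutine object cannot be awaited twice.
    coro = compute()
    assert await coro == 42
    try:
        await coro
    except RuntimeError:
        print('second await of a coroutine raises RuntimeError')

asyncio.run(main())
```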
A scenario I believe deserves a test is what happens in the asyncio
coroutine scheduler when a promise is awaited multiple times. The current
__await__ behaviour is to return self only when not done and then to return
the value after resolution for each subsequent await. The Task, however,
requires that it must be a Future emitted from the coroutine and not a
primitive value. Awaiting a resolved future should result
On Tue, Dec 15, 2015, 14:44 Guido van Rossum wrote:
> Agreed. (But let's hear from the OP first.)
>
> On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov wrote:
>
>> Both Yury's suggestions sounds reasonable.
>>
>> On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov
>> wrote:
>> > Hi Roy and Guido,
>> >
>> > On 2015-12-15 3:08 PM, Guido van Rossum wrote:
>> > [..]
>> >>
>> >>
>> >> I don't know how long you have been using async/await, but I wonder if
>> >> it's possible that you just haven't gotten used to the typical usage
>> >> patterns? In particular, your claim "anything that takes an
>> `awaitable` has
>> >> to know that it wasn't already awaited" makes me sound that you're just
>> >> using it in an atypical way (perhaps because your model is based on
>> other
>> >> languages). In typical asyncio code, one does not usually take an
>> awaitable,
>> >> wait for it, and then return it -- one either awaits it and then
>> extracts
>> >> the result, or one returns it without awaiting it.
>> >
>> >
>> > I agree. Holding a return value just so that coroutine can return it
>> again
>> > seems wrong to me.
>> >
>> > However, since coroutines are now a separate type (although they share
>> a lot
>> > of code with generators internally), maybe we can change them to throw
>> an
>> > error when they are awaited on more than one time?
>> >
>> > That should be better than letting them return `None`:
>> >
>> > coro = coroutine()
>> > await coro
>> > await coro # <- will raise RuntimeError
>> >
>> >
>> > I'd also add a check that the coroutine isn't being awaited by more
>> than one
>> > coroutine simultaneously (another, completely different issue, more on
>> which
>> > here: https://github.com/python/asyncio/issues/288). This was fixed in
>> > asyncio in debug mode, but ideally, we should fix this in the
>> interpreter
>> > core.
>> >
>> > Yury
>> > ___
>> > Python-Dev mailing list
>> > [email protected]
>> > https://mail.python.org/mailman/listinfo/python-dev
>> > Unsubscribe:
>> >
>> https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
>>
>>
>>
>> --
>> Thanks,
>> Andrew Svetlov
>>
> ___
>> Python-Dev mailing list
>> [email protected]
>> https://mail.python.org/mailman/listinfo/python-dev
>>
> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
> ___
> Python-Dev mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/kevinjacobconway%40gmail.com
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] async/await behavior on multiple calls
Thanks for the insight Guido.
I've mostly used async/await inside of HHVM/Hack, and used Guava/Java
Futures extensively in the past so I found this behavior to be quite
surprising. I'd like to use Awaitables to represent a DAG of work that
needs to get done. For example, I used to be one of the maintainers of
Buck (a build tool similar to Bazel) and we used a collection of futures
for building all of our dependencies. For each rule, we'd effectively:
dependency_results = await asyncio.gather(*dependencies)
# Proceed with building.
Rules were free to depend on the same dependency, and since the Future
would just return the same result each time it was awaited, things just
worked.
Similarly when building up the results for say a web request, I effectively
want to construct a DAG of work that needs to get done and then just await
on that DAG in a similar manner without having to enforce that the DAG is
actually a tree. I can of course write a function to wrap everything in
Futures, but this seems to be against the spirit of async/await.
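The manual wrapping Roy mentions can be sketched like this (hypothetical rule names): each shared node of the DAG is wrapped in a Task via `asyncio.ensure_future()`, so several dependents can await it, while a bare coroutine could only be awaited once:

```python
import asyncio

async def build(name, deps):
    # Await all dependencies, then "build" this rule.
    results = await asyncio.gather(*deps)
    return (name, results)

async def main():
    # Diamond DAG: b and c both depend on a; d depends on b and c.
    # Wrapping each shared node in a Task lets it be awaited from
    # several places, each getting the same cached result.
    a = asyncio.ensure_future(build('a', []))
    b = asyncio.ensure_future(build('b', [a]))
    c = asyncio.ensure_future(build('c', [a]))
    return await build('d', [b, c])

print(asyncio.run(main()))
```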
Thanks,
Roy
On Tue, Dec 15, 2015 at 12:08 PM, Guido van Rossum wrote:
> I think this goes back all the way to a debate we had when we were
> discussing PEP 380 (which introduced 'yield from', on which 'await' is
> built). In fact I believe that the reason PEP 380 didn't make it into
> Python 2.7 was that this issue was unresolved at the time (the PEP author
> and I preferred the current approach, but there was one vocal opponent who
> disagreed -- although my memory is only about 60% reliable on this :-).
>
> In any case, problem is that in order to implement the behavior you're
> asking for, the generator object would have to somehow hold on to its
> return value so that each time __next__ is called after it has already
> terminated it can raise StopIteration with the saved return value. This
> would extend the lifetime of the returned object indefinitely (until the
> generator object itself is GC'ed) in order to handle a pretty obscure
> corner case.
>
> I don't know how long you have been using async/await, but I wonder if
> it's possible that you just haven't gotten used to the typical usage
> patterns? In particular, your claim "anything that takes an `awaitable` has
> to know that it wasn't already awaited" makes me sound that you're just
> using it in an atypical way (perhaps because your model is based on other
> languages). In typical asyncio code, one does not usually take an
> awaitable, wait for it, and then return it -- one either awaits it and then
> extracts the result, or one returns it without awaiting it.
>
> On Tue, Dec 15, 2015 at 11:56 AM, Roy Williams wrote:
>
>> Howdy,
>>
>> I'm experimenting with async/await in Python 3, and one very surprising
>> behavior has been what happens when calling `await` twice on an Awaitable,
>> unlike in C#, Hack/HHVM, and the new async/await spec in ECMAScript 7. In
>> Python, calling `await` multiple times results in all subsequent awaits
>> getting back `None`. Here's a small example program:
>>
>>
>> async def echo_hi():
>>     result = ''
>>     echo_proc = await asyncio.create_subprocess_exec(
>>         'echo', 'hello', 'world',
>>         stdout=asyncio.subprocess.PIPE,
>>         stderr=asyncio.subprocess.DEVNULL)
>>     result = await echo_proc.stdout.read()
>>     await echo_proc.wait()
>>     return result
>>
>> async def await_twice(awaitable):
>>     print('first time is {}'.format(await awaitable))
>>     print('second time is {}'.format(await awaitable))
>>
>> loop = asyncio.get_event_loop()
>> loop.run_until_complete(await_twice(echo_hi()))
>>
>> This makes writing composable APIs using async/await in Python very
>> difficult since anything that takes an `awaitable` has to know that it
>> wasn't already awaited. Also, since the behavior is radically different
>> than in the other programming languages implementing async/await it makes
>> adopting Python's flavor of async/await difficult for folks coming from a
>> language where it's already implemented.
>>
>> In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that
>> can be awaited multiple times and either returns the result or throws any
>> thrown exceptions. It doesn't appear that the Awaitable class in Python
>> has a `result` or `exception` field but `asyncio.Future` does.
>>
>> Would it make sense to shift from having `await` functions return a
>> *Future-like* return object to returning a Future?
>>
>> Thanks,
>> Roy
>>
>>
>>
>> ___
>> Python-Dev mailing list
>> [email protected]
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
https://mail.python.org
Re: [Python-Dev] async/await behavior on multiple calls
On Tue, Dec 15, 2015 at 4:39 PM, Roy Williams wrote:
> Thanks for the insight Guido.
>
> I've mostly used async/await inside of HHVM/Hack, and used Guava/Java
> Futures extensively in the past so I found this behavior to be quite
> surprising. I'd like to use Awaitables to represent a DAG of work that
> needs to get done. For example, I used to be one of the maintainers of
> Buck (a build tool similar to Bazel) and we used a collection of futures
> for building all of our dependencies. For each rule, we'd effectively:
>
> dependency_results = await asyncio.gather(*dependencies)
> # Proceed with building.
>
> Rules were free to depend on the same dependency and since the Future
> would just return the same result when resolved more than one time things
> just worked.
>
> Similarly when building up the results for say a web request, I
> effectively want to construct a DAG of work that needs to get done and then
> just await on that DAG in a similar manner without having to enforce that
> the DAG is actually a tree. I can of course write a function to wrap
> everything in Futures, but this seems to be against the spirit of
> async/await.
>
Why would that be against the spirit? It's the only thing that will work
the way you're asking, and there is in fact already a function that does
this (asyncio.ensure_future()).
> Thanks,
> Roy
>
> On Tue, Dec 15, 2015 at 12:08 PM, Guido van Rossum
> wrote:
>
>> I think this goes back all the way to a debate we had when we were
>> discussing PEP 380 (which introduced 'yield from', on which 'await' is
>> built). In fact I believe that the reason PEP 380 didn't make it into
>> Python 2.7 was that this issue was unresolved at the time (the PEP author
>> and I preferred the current approach, but there was one vocal opponent who
>> disagreed -- although my memory is only about 60% reliable on this :-).
>>
>> In any case, problem is that in order to implement the behavior you're
>> asking for, the generator object would have to somehow hold on to its
>> return value so that each time __next__ is called after it has already
>> terminated it can raise StopIteration with the saved return value. This
>> would extend the lifetime of the returned object indefinitely (until the
>> generator object itself is GC'ed) in order to handle a pretty obscure
>> corner case.
>>
>> I don't know how long you have been using async/await, but I wonder if
>> it's possible that you just haven't gotten used to the typical usage
>> patterns? In particular, your claim "anything that takes an `awaitable` has
>> to know that it wasn't already awaited" makes me sound that you're just
>> using it in an atypical way (perhaps because your model is based on other
>> languages). In typical asyncio code, one does not usually take an
>> awaitable, wait for it, and then return it -- one either awaits it and then
>> extracts the result, or one returns it without awaiting it.
>>
>> On Tue, Dec 15, 2015 at 11:56 AM, Roy Williams
>> wrote:
>>
>>> Howdy,
>>>
>>> I'm experimenting with async/await in Python 3, and one very surprising
>>> behavior has been what happens when calling `await` twice on an Awaitable,
>>> unlike in C#, Hack/HHVM, and the new async/await spec in ECMAScript 7. In
>>> Python, calling `await` multiple times results in all subsequent awaits
>>> getting back `None`. Here's a small example program:
>>>
>>>
>>> async def echo_hi():
>>>     result = ''
>>>     echo_proc = await asyncio.create_subprocess_exec(
>>>         'echo', 'hello', 'world',
>>>         stdout=asyncio.subprocess.PIPE,
>>>         stderr=asyncio.subprocess.DEVNULL)
>>>     result = await echo_proc.stdout.read()
>>>     await echo_proc.wait()
>>>     return result
>>>
>>> async def await_twice(awaitable):
>>>     print('first time is {}'.format(await awaitable))
>>>     print('second time is {}'.format(await awaitable))
>>>
>>> loop = asyncio.get_event_loop()
>>> loop.run_until_complete(await_twice(echo_hi()))
>>>
>>> This makes writing composable APIs using async/await in Python very
>>> difficult since anything that takes an `awaitable` has to know that it
>>> wasn't already awaited. Also, since the behavior is radically different
>>> than in the other programming languages implementing async/await it makes
>>> adopting Python's flavor of async/await difficult for folks coming from a
>>> language where it's already implemented.
>>>
>>> In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise
>>> that can be awaited multiple times and either returns the result or throws
>>> any thrown exceptions. It doesn't appear that the Awaitable class in
>>> Python has a `result` or `exception` field but `asyncio.Future` does.
>>>
>>> Would it make sense to shift from having `await` functions return a
>>> *Future-like* return object to returning a Future?
>>>
>>> Thanks,
>>> Roy
>>>
>>>
>>>
>>> ___
>>> Python-Dev mailing list
>>> [email protected]
>>> https://mail.pyt
Re: [Python-Dev] Third milestone of FAT Python
I realized yet another thing, which will reduce overhead: the original
array can store values directly, and you maintain the refs by
repeatedly updating them when moving refs around. RefCells will point
to a pointer to the value cell (which already exists in the table).
- `getitem` will be almost the same as a normal dict: it has to
check if value is valid (which it already checked, but it will be
invalid a lot more often).
- `setitem` the same as a normal dict (since the RefCells will just
point to the _address_ of the value pointer in the main table), except
that the dict will be bigger, and compaction/expansion has more
overhead. No creation of refcells here.
- `delitem` will just null the value pointer, which shouldn't cost
much more, if it doesn't cost less.
- Expansion and compaction will cost more, since we have to copy
over the RefCell pointers and also check whether they should be
copied.
- Deletion of the dict will cost more, due to the additional logic
for deciding what to delete, and the RefCells can no longer point into
the entry table. They would have to point at the value (requiring
logic, or the replacement of a function pointer) or at a new allocated
object (requiring an allocation of sizeof(PyObject*) bytes).
On Tue, Dec 15, 2015 at 5:38 PM, Victor Stinner
wrote:
> Sorry, I didn't read carefully your email, but I don't think that it's
> acceptable to make Python namespaces slower. In FAT mode, we need
> versionned dictionaries for module namespace, type namespace, global
> namespace, etc.
It was actually more "it might be a problem" than "it will be a
problem". I don't know if the overhead will be high enough to worry
about. It might be dominated by whatever savings there would be by not
having to look up names more than once. (Unless builtins get mixed
with globals? I think that's solvable, though. It's just that the
solutions I can think of have different tradeoffs.)
I am confident that the time overhead and the savings will beat the
versioning dict. The versioning dict method has to save a reference to
the variable value and a reference to the name, and regularly test
whether the dict has changed. This method only has to save a reference
to a reference to the value (though it might need the name to allow
debugging), doesn't care if it's changed, will be an identity (to
NULL?) test if it's deleted (and only if it's not replaced after), and
absolutely doesn't care if the dict had other updates (which might
increase the version number).
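The contrast between the two guard strategies can be sketched in Python. This is a toy model only: in the real design the scope dict would write through the cell on assignment, and both mechanisms live at the C level; `VersionedDict` and `RefCell` here are illustrative names:

```python
class VersionedDict(dict):
    # (a) the versioning-dict approach: ANY write bumps the version,
    # so a guard must re-check even after unrelated changes.
    version = 0
    def __setitem__(self, key, value):
        self.version += 1
        super().__setitem__(key, value)

class RefCell:
    # (b) the refcell approach: the guard watches one binding only.
    def __init__(self, value):
        self.value = value

ns = VersionedDict(len=len)
cell = RefCell(ns['len'])   # the scope would keep this cell updated

v0 = ns.version
ns['unrelated'] = 1         # irrelevant change to the namespace

assert ns.version != v0     # version guard: invalidated, must re-check
assert cell.value is len    # refcell guard: still valid, nothing to do
```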
>>> Do you have an estimation of the cost of the "extra pointer"? Impact
>>> on memory and CPU. dict is really a very important type for the
>>> performance of Python. If you make dict slower, I'm sure that Python
>>> overall will be slower.
>>
>> I'm proposing it as a subclass.
>
> Please read the "Versionned dictionary" section of my email:
> https://mail.python.org/pipermail/python-dev/2015-December/142397.html
>
> I explained why using a subclass doesn't work in practice.
I've read it again. By subclass, I mean that it implements the same
interface. But at the C level, I want to have it be a fork(?) of the
current dict implementation. As for `exec`, I think it might be okay
for it to be slower at the early stages of this game.
Here's the lookup function for a string-only dict (used both for
setting and getting):
https://github.com/python/cpython/blob/master/Objects/dictobject.c#L443
I want to break that up into two parts:
- Figure out the index of the {hash, *key, *val} entry in the array.
- Do whatever to it. (In the original: make *value_addr point to the
value pointer.)
I want to do this so that I can use that index to point into ANOTHER
array, which will be the metadata for the refcells (whatever it ends
up being). This will mean that there's no second lookup. This has to
be done at the C level, because the dict object doesn't expose the
index of the {hash, *key, *val} entries on lookup.
If you don't want to make it a subclass, then we can propose a new
function `get_ref` (or something) for dict's C API (probably a hard
sell), which returns RefCell objects, and an additional pointer in
`dict` to the RefCells table (so a total of two pointers). When
`get_ref` is first called, it will
- calloc the RefCell table (which will be the same length as the entry table)
- replace all of the dict's functions with ones that know how to deal
with the RefCells,
- replace itself with a function that knows how to return these refs.
- call its replacement.
If the dict never gets RefCells, you only pay a few pointers in size,
and a few creation/deletion values. This is possible now that the
dictionary itself will store values as normal.
There might be more necessary. For example, the replaced functions
might need to keep pointers to their originals (so that you can slip
additional deep C subclasses in). And it might be nice if the
`get_index` function could be internally relied upon by the C-level
subclasses, because "keeping a metadata ta
Re: [Python-Dev] async/await behavior on multiple calls
@Kevin correct, that's the point I'd like to discuss. Most other
mainstream languages that implement async/await expose the programming
model with Tasks/Futures/Promises as opposed to coroutines. PEP 492 states
'Objects with __await__ method are called Future-like objects in the rest
of this PEP.', but their behavior differs from that of Futures in this core
way. Given that most other languages have standardized around async
returning a Future as opposed to a coroutine, I think it's worth exploring
why Python differs.
There's a lot of benefits to making the programming model coroutines
without a doubt. It's absolutely brilliant that I can just call code
annotated with @asyncio.coroutine and have it just work. Code using the
old @asyncio.coroutine/yield from syntax should absolutely stay the same.
Similarly, since ES7 async/await is backed by Promises, it'll just work for
any existing code out there using Promises.
My proposal would be to automatically wrap the return value from an `async`
function or any object implementing `__await__` in a future with
`asyncio.ensure_future()`. This would allow async/await code to behave in
a similar manner to other languages implementing async/await and would
remain compatible with existing code using asyncio.
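The proposal can be prototyped in user code today as a decorator — a sketch only, not part of any API, and `auto_future` is a hypothetical name:

```python
import asyncio
import functools

def auto_future(fn):
    # Sketch of the proposal: wrap the coroutine from every call in a
    # Task (a Future) so the result can be awaited repeatedly.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return asyncio.ensure_future(fn(*args, **kwargs))
    return wrapper

@auto_future
async def fetch():
    return 'hello'

async def main():
    handle = fetch()                 # a Task, not a bare coroutine
    assert await handle == 'hello'
    assert await handle == 'hello'   # second await is fine
    print('awaited twice, ok')

asyncio.run(main())
```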
What are your thoughts?
Thanks,
Roy
On Tue, Dec 15, 2015 at 3:35 PM, Kevin Conway
wrote:
> I think there may be somewhat of a language barrier here. OP appears to be
> mixing the terms of coroutines and futures. The behavior OP describes is
> that of promised or async tasks in other languages.
>
> Consider a JS promise that has been resolved:
>
> promise.then(function (value) {...});
>
> promise.then(function (value) {...});
>
> Both of the above will execute the callback function with the resolved
> value regardless of how much earlier the promise was resolved. This is not
> entirely different from how Futures work in Python when using
> 'add_done_callback'.
>
> The code example from OP, however, is showing the behaviour of awaiting a
> coroutine twice rather than awaiting a Future twice. Both objects are
> awaitable but both exhibit different behaviour when awaited multiple times.
>
> A scenario I believe deserves a test is what happens in the asyncio
> coroutine scheduler when a promise is awaited multiple times. The current
> __await__ behaviour is to return self only when not done and then to return
> the value after resolution for each subsequent await. The Task, however,
> requires that it must be a Future emitted from the coroutine and not a
> primitive value. Awaiting a resolved future should result
>
> On Tue, Dec 15, 2015, 14:44 Guido van Rossum wrote:
>
>> Agreed. (But let's hear from the OP first.)
>>
>> On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov <
>> [email protected]> wrote:
>>
>>> Both Yury's suggestions sounds reasonable.
>>>
>>> On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov
>>> wrote:
>>> > Hi Roy and Guido,
>>> >
>>> > On 2015-12-15 3:08 PM, Guido van Rossum wrote:
>>> > [..]
>>> >>
>>> >>
>>> >> I don't know how long you have been using async/await, but I wonder if
>>> >> it's possible that you just haven't gotten used to the typical usage
>>> >> patterns? In particular, your claim "anything that takes an
>>> `awaitable` has
>>> >> to know that it wasn't already awaited" makes me sound that you're
>>> just
>>> >> using it in an atypical way (perhaps because your model is based on
>>> other
>>> >> languages). In typical asyncio code, one does not usually take an
>>> awaitable,
>>> >> wait for it, and then return it -- one either awaits it and then
>>> extracts
>>> >> the result, or one returns it without awaiting it.
>>> >
>>> >
>>> > I agree. Holding a return value just so that coroutine can return it
>>> again
>>> > seems wrong to me.
>>> >
>>> > However, since coroutines are now a separate type (although they share
>>> a lot
>>> > of code with generators internally), maybe we can change them to throw
>>> an
>>> > error when they are awaited on more than one time?
>>> >
>>> > That should be better than letting them return `None`:
>>> >
>>> > coro = coroutine()
>>> > await coro
>>> > await coro # <- will raise RuntimeError
>>> >
>>> >
>>> > I'd also add a check that the coroutine isn't being awaited by more
>>> than one
>>> > coroutine simultaneously (another, completely different issue, more on
>>> which
>>> > here: https://github.com/python/asyncio/issues/288). This was fixed
>>> in
>>> > asyncio in debug mode, but ideally, we should fix this in the
>>> interpreter
>>> > core.
>>> >
>>> > Yury
>>> > ___
>>> > Python-Dev mailing list
>>> > [email protected]
>>> > https://mail.python.org/mailman/listinfo/python-dev
>>> > Unsubscribe:
>>> >
>>> https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com
>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Andrew Svetlov
>>>
Re: [Python-Dev] async/await behavior on multiple calls
On Dec 15, 2015, at 05:29 PM, Roy Williams wrote:

> @Kevin correct, that's the point I'd like to discuss. Most other
> mainstream languages that implement async/await expose the programming
> model with Tasks/Futures/Promises as opposed to coroutines. PEP 492
> states 'Objects with __await__ method are called Future-like objects in
> the rest of this PEP.' but their behavior differs from that of Futures
> in this core way. Given that most other languages have standardized
> around async returning a Future as opposed to a coroutine, I think it's
> worth exploring why Python differs.

I'll just note something I've mentioned before, when a bunch of us sprinted
on an asyncio based smtp server.

The asyncio library documentation *really* needs a good overview and/or
tutorial. These are difficult concepts to understand, and it seems like
bringing experience from other languages may not help (and may even hinder)
understanding of Python's model. After a while, you get it, but I think it
would be good to help folks get there sooner, especially if you're new to
the whole area.

Maybe those of you who have been steeped in asyncio for a long time could
write that up? I don't think I'm the right person to do that, but I'd be
very happy to review it.

Cheers,
-Barry
Re: [Python-Dev] async/await behavior on multiple calls
On Dec 15, 2015, at 17:29, Roy Williams wrote:
>
> My proposal would be to automatically wrap the return value from an
> `async` function or any object implementing `__await__` in a future with
> `asyncio.ensure_future()`. This would allow async/await code to behave
> in a similar manner to other languages implementing async/await and
> would remain compatible with existing code using asyncio.

Two questions:

Is it possible (and at all reasonable) to write code that actually depends
on getting raw coroutines from async?

If not, is there any significant performance impact for code that works
with raw coroutines and doesn't need real futures to get them wrapped in
futures anyway?
Re: [Python-Dev] async/await behavior on multiple calls
Roy,

On 2015-12-15 8:29 PM, Roy Williams wrote:
[..]
> My proposal would be to automatically wrap the return value from an
> `async` function or any object implementing `__await__` in a future with
> `asyncio.ensure_future()`. This would allow async/await code to behave
> in a similar manner to other languages implementing async/await and
> would remain compatible with existing code using asyncio.
>
> What's your thoughts?

Other languages, such as JavaScript, have a notion of event loop
integrated on a very deep level. In Python, there is no centralized event
loop, and asyncio is just one way of implementing one.

In asyncio, Future objects are designed to inter-operate with an event
loop (that's also true for JS Promises), which means that in order to
automatically wrap Python coroutines in Futures, we'd have to define the
event loop deep in Python core. Otherwise it's impossible to implement
'Future.add_done_callback', since there would be nothing that calls the
callbacks on completion.

To avoid adding a built-in event loop, PEP 492 introduced coroutines as an
abstract language concept. David Beazley, for instance, doesn't like
Futures, and his new framework 'curio' does not have them at all.

I highly doubt that we want to add a generalized event loop in Python
core, define a generalized Future interface, and make coroutines return
it. It's simply too much work with no clear wins.

Now, your initial email highlights another problem:

    coro = coroutine()
    print(await coro)  # will print the result of coroutine
    await coro         # prints None

This is a bug that needs to be fixed. We have two options:

1. Cache the result when the coroutine object is awaited the first time.
Return the cached result when the coroutine object is awaited again.

2. Raise an error if the coroutine object is awaited more than once.

The (1) option would solve your problem. But it also introduces new
complexity: the GC of the result will be delayed; more importantly, some
users will wonder whether we cache the result or run the coroutine again.
It's just not obvious.

The (2) option is Pythonic and simple to understand/debug, IMHO. In this
case, the best way for you to solve your initial problem would be to have
a decorator around your tasks. The decorator should wrap coroutines with
Futures (with asyncio.ensure_future) and everything will work as you
expect.

Thanks,
Yury
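[Editorial aside: the `asyncio.ensure_future` workaround Yury describes can be sketched directly. This is a minimal illustration against current CPython, which ultimately adopted behaviour along the lines of option (2): re-awaiting a bare coroutine raises RuntimeError, while a Task (a Future subclass) can be awaited repeatedly.]

```python
import asyncio

async def compute():
    await asyncio.sleep(0)
    return 42

async def main():
    # Wrapping the coroutine in a Task (a Future subclass) makes it safe
    # to await more than once: a finished Future simply re-raises
    # StopIteration carrying its stored result.
    task = asyncio.ensure_future(compute())
    first = await task
    second = await task

    # A bare coroutine object, by contrast, is exhaustible; current
    # CPython raises RuntimeError on the second await.
    coro = compute()
    await coro
    try:
        await coro
        reawait_ok = True
    except RuntimeError:
        reawait_ok = False
    return first, second, reawait_ok

print(asyncio.run(main()))  # (42, 42, False)
```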
Re: [Python-Dev] async/await behavior on multiple calls
I agree with Barry. We need more material that introduces the community to
the new async/await syntax and the new concepts they bring. We borrowed
the words from other languages but not all of their behaviours. With
coroutines in particular, we can do a better job of describing the
differences between them and the previous generator-coroutines, the rules
regarding what - if anything - is emitted from a '.send()', and how await
resolves to a value. If you read through the asyncio Task code enough
you'll figure it out, but we can't expect the community as a whole to
learn the language, or asyncio, that way.

Back to the OP's issue. The behaviour you are seeing, of None being the
value of an exhausted coroutine, is consistent with that of an exhausted
generator. Pushing the iterator with __next__() or .send() after
completion results in a StopIteration being raised with a value of None,
regardless of what the final yielded/returned value was. Futures can be
awaited multiple times because their __iter__/__await__ method causes them
to raise StopIteration with the resolved value.

I think the list is trying to tell you that awaiting a coro multiple times
is simply not a valid case in Python because coroutines are exhaustible
resources. In asyncio, they are primarily a helpful mechanism for shipping
promises to the Task wrapper. In virtually all cases the pattern is:

> await some_async_def()

and almost never:

> coro = some_async_def()
> await coro

On Tue, Dec 15, 2015 at 9:34 PM Yury Selivanov wrote:
[..]
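[Editorial aside: Kevin's point about exhausted generators is easy to verify: the first StopIteration carries the return value, and every subsequent push raises StopIteration with a value of None.]

```python
def gen():
    yield 1
    return "final"

g = gen()
assert next(g) == 1

# First exhaustion: StopIteration carries the generator's return value.
try:
    next(g)
except StopIteration as exc:
    assert exc.value == "final"

# Pushing the already-exhausted generator again: StopIteration again,
# but this time with a value of None -- the return value is gone.
try:
    next(g)
except StopIteration as exc:
    assert exc.value is None
```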
Re: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no)
On 15 December 2015 at 23:11, Victor Stinner wrote:
> I guess that the optimizations on "in" and "is" operators are fine,
> but optimizations on all other operations must be removed to not break
> the Python semantics.

Right, this is why we have functools.total_ordering as a class decorator
to "fill in" the other comparison implementations based on the ones in the
class body.

Cheers,
Nick.

--
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
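[Editorial aside: a minimal sketch of functools.total_ordering filling in the derived comparisons; the `Version` class is hypothetical. Note that __ne__ is not supplied by the decorator: in Python 3, the default __ne__ already negates __eq__ unless __eq__ returns NotImplemented.]

```python
import functools

@functools.total_ordering
class Version:
    """Only __eq__ and __lt__ are defined; the decorator derives the rest."""
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        return self.n == other.n
    def __lt__(self, other):
        return self.n < other.n

assert Version(1) < Version(2)
assert Version(2) <= Version(2)   # derived by total_ordering
assert Version(3) >= Version(2)   # derived by total_ordering
assert Version(1) != Version(2)   # Python 3's default __ne__ negates __eq__
```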
Re: [Python-Dev] "python.exe is not a valid Win32 app"
On 16 December 2015 at 01:14, R. David Murray wrote:
> That said, I'm not sure whether or not there is a way we could add
> "supported versions" to the main docs that would make sense and be
> useful... your bugs.python.org issue would be useful for discussing
> that.

Having a "minimum supported version" for Windows and Mac OS X in the
"using" guide would likely make sense. For Linux, supported versions are
handled by redistributors, so the most we could do is offer guidance to
folks on checking their version and ensuring they're looking at the right
documentation.

Cheers,
Nick.

--
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
Re: [Python-Dev] async/await behavior on multiple calls
On 16 December 2015 at 11:41, Barry Warsaw wrote:
> The asyncio library documentation *really* needs a good overview and/or
> tutorial. These are difficult concepts to understand and it seems like
> bringing experience from other languages may not help (and may even
> hinder) understanding of Python's model. After a while, you get it, but
> I think it would be good to help folks get there sooner, especially if
> you're new to the whole area.
>
> Maybe those of you who have been steeped in asyncio for a long time
> could write that up? I don't think I'm the right person to do that, but
> I'd be very happy to review it.

One smaller step that may be helpful is changing the titles of a couple of
the sections from:

* 18.5.4. Transports and protocols (low-level API)
* 18.5.5. Streams (high-level API)

to:

* 18.5.4. Transports and protocols (callback based API)
* 18.5.5. Streams (coroutine based API)

That's based on a sample size of one, though (a friend for whom light
dawned once I explained that low-level=callbacks and
high-level=coroutines), which is why I hadn't written a patch for it.

Cheers,
Nick.

--
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
Re: [Python-Dev] Third milestone of FAT Python
On Wednesday 16 December 2015, Franklin? Lee wrote:
> I am confident that the time overhead and the savings will beat the
> versioning dict. The versioning dict method has to save a reference to
> the variable value and a reference to the name, and regularly test
> whether the dict has changed.

The performance of guards matters less than the performance of regular
usage of dicts. If we have to make a choice, I prefer a "slow" guard with
no impact on regular dict methods. It's very important that enabling FAT
mode doesn't kill performance. Remember that FAT Python is a static
optimizer and so can only optimize some patterns, not all Python code.

In my current implementation, a lookup is only needed when a guard is
checked if the dict was modified. The dict version doesn't change if a
mutable object was modified in place, for example. I didn't benchmark, but
I expect that the lookup is avoided in most cases. You should try FAT
Python and implement statistics before going too far with your idea.

> I've read it again. By subclass, I mean that it implements the same
> interface. But at the C level, I want to have it be a fork(?) of the
> current dict implementation. As for `exec`, I think it might be okay
> for it to be slower at the early stages of this game.

Be careful: dict methods are hardcoded in the C code. If your type is not
a subtype, there is a risk of crashes. I fixed issues in Python/ceval.c,
but it's not enough. You may also have to fix issues in third party C
extensions which only expect dict for namespaces.

Victor
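[Editorial aside: the fast-path/slow-path guard logic Victor describes can be sketched in pure Python. `VersionedDict` and `GuardDict` below are illustrative stand-ins, not the actual FAT Python API, and the sketch only bumps the version on `__setitem__`/`__delitem__` (a real implementation would also cover `update()`, `pop()`, etc.).]

```python
class VersionedDict(dict):
    """Toy dict that counts mutations (illustrative, not FAT Python's C type)."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.version = 0
    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.version += 1
    def __delitem__(self, key):
        super().__delitem__(key)
        self.version += 1

class GuardDict:
    """Cheaply check whether a watched name changed since guard creation."""
    def __init__(self, ns, key):
        self.ns = ns
        self.key = key
        self.version = ns.version
        self.value = ns.get(key)
    def check(self):
        # Fast path: unchanged version means no lookup is needed at all.
        if self.ns.version == self.version:
            return True
        # Slow path: re-lookup; an unrelated key may have been modified.
        if self.ns.get(self.key) is self.value:
            self.version = self.ns.version  # re-arm the fast path
            return True
        return False

ns = VersionedDict(len=len)
guard = GuardDict(ns, "len")
assert guard.check()        # fast path
ns["x"] = 1                 # unrelated change bumps the version
assert guard.check()        # slow path, but 'len' is unchanged
ns["len"] = min             # the watched name itself changed
assert not guard.check()    # guard fails: the optimization must bail out
```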
