New submission from Joongi Kim :
I'm now tracking the recent addition and discussion of TaskGroup and
cancellation scopes. It's interesting! :)
I would like to suggest a different mode of operation for asyncio.TaskGroup,
which I have named "PersistentTaskGroup".
AFAI
New submission from Joongi Kim :
Along with bpo-46843 and the new asyncio.TaskGroup API, I would like to suggest
the addition of a context-based TaskGroup feature.
Currently asyncio.create_task() just creates a new task directly attached to
the event loop, while asyncio.TaskGroup.create_task
Joongi Kim added the comment:
The main benefit is that any legacy code that I cannot modify can be upgraded
to TaskGroup-based code, which offers better machinery for exception
handling and propagation.
There may be different ways to approach this issue: allow replacing the task
factory in
Joongi Kim added the comment:
Conceptually, it is similar to replacing malloc using LD_PRELOAD or
LD_LIBRARY_PATH manipulation. When I cannot modify the executable/library
binaries, this allows replacing the functionality of specific functions.
If we could assign a specific (persistent) task
Joongi Kim added the comment:
It is also useful for writing debugging/monitoring code for asyncio applications.
For instance, we could "group" tasks from different libraries and count them, as
in the sketch below.
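Here is a minimal monitoring sketch of that "group and count" idea. Since the
stdlib has no contextual grouping yet, it approximates groups with task name
prefixes (the prefix convention is an assumption for illustration):

    import asyncio

    async def monitor_task_counts(interval: float = 5.0) -> None:
        # Periodically count currently running tasks, grouped by a name prefix
        # such as "aiohttp:..." or "mylib:..." (hypothetical naming convention).
        while True:
            counts: dict[str, int] = {}
            for task in asyncio.all_tasks():
                prefix = task.get_name().split(":", 1)[0]
                counts[prefix] = counts.get(prefix, 0) + 1
            print("task counts by group:", counts)
            await asyncio.sleep(interval)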
Joongi Kim added the comment:
My proposal is to opt in to the task group binding for asyncio.create_task() under a
specific context, without changing the default behavior.
Joongi Kim added the comment:
An example would be:
    tg = asyncio.TaskGroup()
    ...
    async with tg:
        with asyncio.TaskGroupBinder(tg):  # just a hypothetical API
            asyncio.create_task(...)       # equivalent to tg.create_task(...)
            await some_library.some_work() # all tasks are bound to tg
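A rough sketch of how the hypothetical TaskGroupBinder could be implemented on
top of a ContextVar and a thin wrapper around asyncio.create_task(); all names
here are hypothetical and only illustrate the proposal:

    import asyncio
    import contextlib
    import contextvars

    _current_tg: contextvars.ContextVar = contextvars.ContextVar(
        "current_tg", default=None)

    @contextlib.contextmanager
    def task_group_binder(tg):
        # Bind tg as the "current" task group for the duration of the with-block.
        token = _current_tg.set(tg)
        try:
            yield tg
        finally:
            _current_tg.reset(token)

    def create_task(coro, **kwargs):
        # Route task creation to the bound task group, if any.
        tg = _current_tg.get()
        if tg is not None:
            return tg.create_task(coro, **kwargs)
        return asyncio.create_task(coro, **kwargs)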
Joongi Kim added the comment:
Ah, and this use case also requires that TaskGroup have an option like
`return_exceptions=True`, which makes it not cancel sibling tasks upon
unhandled exceptions, as I suggested in PersistentTaskGroup (bpo-46843
Joongi Kim added the comment:
Ok, let me be clear: patching asyncio.create_task() to support this opt-in
contextual task group binding is not the ultimate goal of this issue. If it
becomes possible to override/extend the task factory at runtime with any event
loop implementation, then it
Joongi Kim added the comment:
So I have more things in mind.
Basically, PersistentTaskGroup resembles TaskGroup (a minimal sketch follows this list) in that:
- It has the same "create_task()" method.
- It has an explicit "cancel()" or "shutdown()" method.
- Exiting of the context manager means th
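A minimal self-contained sketch of such a class, assuming only the properties
listed above (create_task(), an explicit shutdown(), and an async context
manager); the details are illustrative, not a finished design:

    import asyncio

    class PersistentTaskGroup:
        def __init__(self) -> None:
            self._tasks: set[asyncio.Task] = set()

        def create_task(self, coro) -> asyncio.Task:
            task = asyncio.ensure_future(coro)
            self._tasks.add(task)
            task.add_done_callback(self._on_task_done)
            return task

        def _on_task_done(self, task: asyncio.Task) -> None:
            # Report unhandled errors loudly, but never cancel siblings.
            self._tasks.discard(task)
            if not task.cancelled() and task.exception() is not None:
                asyncio.get_running_loop().call_exception_handler({
                    "message": "unhandled exception in PersistentTaskGroup task",
                    "exception": task.exception(),
                    "task": task,
                })

        async def shutdown(self) -> None:
            # Cancel whatever is still running and wait for everything to settle.
            tasks = list(self._tasks)
            for task in tasks:
                task.cancel()
            await asyncio.gather(*tasks, return_exceptions=True)

        async def __aenter__(self) -> "PersistentTaskGroup":
            return self

        async def __aexit__(self, *exc_info) -> None:
            await self.shutdown()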
Joongi Kim added the comment:
I think people may ask "why in stdlib?".
My reasons are:
- We are adding new asyncio APIs in 3.11 such as TaskGroup, so I think it is a
good time to add another one, as long as it does not break existing stuff.
- I believe that long-running tas
Joongi Kim added the comment:
Example use cases:
* Implement an event iteration loop to fetch events and dispatch handlers
depending on the event type (e.g., WebSocket connections, message queues, etc.);
a sketch follows below.
- https://github.com/aio-libs/aiohttp/pull/2885
- https://github.com/lablup
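A sketch of that event-iteration pattern, with hypothetical fetch_event() and
handlers objects; each handler runs as a separate task so that a slow or failing
handler does not block the dispatch loop:

    import asyncio

    async def dispatch_loop(fetch_event, handlers) -> None:
        background: set[asyncio.Task] = set()
        while True:
            event = await fetch_event()       # e.g., read from a WebSocket or queue
            if event is None:                 # hypothetical "connection closed" signal
                break
            handler = handlers.get(event["type"])
            if handler is None:
                continue
            task = asyncio.create_task(handler(event))
            background.add(task)              # keep a strong reference
            task.add_done_callback(background.discard)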
Joongi Kim added the comment:
Some search results from cs.github.com for the query "asyncio task weakset";
these usages may be replaced/simplified with PersistentTaskGroup:
-
https://github.com/Textualize/textual/blob/38efc821737e3158a8c4c7ef8ecfa953dc7c0ba8/src/textual/message_p
Joongi Kim added the comment:
@yselivanov @asvetlov
I think this API suggestion would require more refinement and in-depth
discussion, and it may be better to go through the PEP writing and review
process. Or I might need to have a separate discussion thread somewhere else
(maybe
Change by Joongi Kim :
--
nosy: +achimnol
Joongi Kim added the comment:
@gvanrossum As you mentioned, the event loop currently plays the role of the
top-level task group already, even without introducing yet another top-level
task. For instance, asyncio.run() includes necessary shutdown procedures to
cancel all belonging
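For reference, a condensed sketch of the cleanup step that asyncio.run()
performs (see Lib/asyncio/runners.py), which is what makes the loop behave like
an implicit top-level group; error reporting is omitted here for brevity:

    import asyncio

    def _cancel_all_tasks(loop: asyncio.AbstractEventLoop) -> None:
        # Cancel every task that is still pending and wait for them to finish,
        # gathering exceptions instead of letting them propagate.
        to_cancel = asyncio.all_tasks(loop)
        if not to_cancel:
            return
        for task in to_cancel:
            task.cancel()
        loop.run_until_complete(
            asyncio.gather(*to_cancel, return_exceptions=True))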
Joongi Kim added the comment:
This particular experience,
https://github.com/lablup/backend.ai-agent/pull/331, has actually motivated me
to suggest PersistentTaskGroup.
The program subscribes to the event stream of the Docker daemon using aiohttp as an
asyncio task, and this should be kept
Joongi Kim added the comment:
I ended up with the following conclusions:
- The new abstraction should not cancel sibling tasks or itself upon an unhandled
exception, but should loudly report such errors (and the fallback error handler
should be customizable); a sketch follows this list.
- Nesting task groups will give additional
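A sketch of that "report loudly, do not cancel siblings" policy as a
done-callback with a customizable fallback handler; the handler signature here
is an assumption for illustration:

    import asyncio
    import logging

    log = logging.getLogger(__name__)

    def make_error_reporter(handler=None):
        # Returns a done-callback that forwards unhandled exceptions to `handler`
        # (or logs them by default) instead of aborting the surrounding group.
        def _callback(task: asyncio.Task) -> None:
            if task.cancelled():
                return
            exc = task.exception()
            if exc is None:
                return
            if handler is not None:
                handler(exc)
            else:
                log.error("Unhandled exception in %r", task, exc_info=exc)
        return _callback

    # usage: task.add_done_callback(make_error_reporter())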
Joongi Kim added the comment:
Here is another story.
When handling message queues in distributed applications, I frequently use the
following pattern for graceful shutdown (sketched after this list):
* Use a sentinel object to signal the end of queue.
* Enqueue the sentinel object when:
- The server is shutting
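A sketch of that sentinel pattern with a hypothetical handle() coroutine: the
consumer drains the queue until it sees the sentinel, so the producer side can
signal "no more work" without cancelling the consumer task:

    import asyncio

    _SENTINEL = object()

    async def consumer(q: asyncio.Queue, handle) -> None:
        while True:
            item = await q.get()
            if item is _SENTINEL:
                break                 # graceful stop; cleanup can run after this
            await handle(item)

    async def request_shutdown(q: asyncio.Queue) -> None:
        await q.put(_SENTINEL)        # enqueue the sentinel when shutting down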
Joongi Kim added the comment:
I have added more about my stories in bpo-46843.
I think the suggestion of implicit taskgroup binding has little point with the
current asyncio.TaskGroup on its own, but it would be more meaningful with
PersistentTaskGroup.
So, if we treat PersistentTaskGroup as a "n
Joongi Kim added the comment:
Updated the title to reduce confusion.
--
title: Context-based TaskGroup for legacy libraries -> Implicit binding of
PersistentTaskGroup (or virtual event loops)
Joongi Kim added the comment:
Another case:
https://github.com/lablup/backend.ai-manager/pull/533
https://github.com/lablup/backend.ai-agent/pull/341
When shutting down the application, I'd like to explicitly cancel the shielded
tasks, while keeping them shielded before shutdown.
So I ins
Joongi Kim added the comment:
Good to hear that TaskGroup already uses WeakSet.
When all tasks finish, PersistentTaskGroup should not finish but should keep
waiting for future tasks, unless explicitly cancelled or shut down. Could this
also be configured with asyncio.TaskGroup?
I'm also ok with add
Joongi Kim added the comment:
> As for errors in siblings aborting the TaskGroup, could you apply a wrapper
> to the scheduled coroutines to swallow and log any errors yourself?
Yes, this could be the simplest way to implement PersistentTaskGroup if TaskGroup
supports "persistent
Joongi Kim added the comment:
> And just a question: I'm just curious about what happens if belonging tasks
> see the cancellation raised from their inner tasks. Sibling tasks should not
> be cancelled, and the outer task group should not be cancelled, unless the
> task
Joongi Kim added the comment:
Short summary:
PersistentTaskGroup shares the following with TaskGroup:
- It uses WeakSet to keep track of child tasks.
- After exiting the async context manager scope (or the shutdown procedure), it
ensures that all tasks are complete or cancelled
Joongi Kim added the comment:
I have updated the PersistentTaskGroup implementation, referring to
asyncio.TaskGroup, and added more detailed test cases, which work with the
latest Python 3.11 GitHub checkout.
https://github.com/achimnol/aiotools/pull/36/files
Please have a look at the class
New submission from Joongi Kim :
The __repr__() method in asyncio.TaskGroup does not include self._name.
I think this is a simple oversight, because asyncio.Task includes the task name
in its __repr__(). :wink:
https://github.com/python/cpython/blob/345572a1a02/Lib/asyncio/taskgroups.py#L28-L42
Joongi Kim added the comment:
Ah, I confused myself with the aiotools.TaskGroup code (originated from EdgeDB's
TaskGroup) while browsing both the aiotools and stdlib asyncio.TaskGroup sources.
The naming facility seems to have been intentionally removed when ported to the stdlib.
So I am closin
Joongi Kim added the comment:
I have released a new version of aiotools with rewritten TaskGroup and
PersistentTaskGroup implementations.
https://aiotools.readthedocs.io/en/latest/aiotools.taskgroup.html
aiotools.TaskGroup has small additions to asyncio.TaskGroup: a naming API and
`current_taskgroup
New submission from Joongi Kim :
This is just an idea: ContextVar.set() and ContextVar.reset() look naturally
mappable to the "with" statement.
For example:
    a = ContextVar('a')
    token = a.set(1234)
    ...
    a.reset(token)
could be naturally rewritten as:
    a = ContextVar(
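The same idea can be sketched today with a small helper built only on existing
APIs; the exact proposed syntax (e.g., a method returning a context manager) is
left open here:

    import contextlib
    from contextvars import ContextVar

    @contextlib.contextmanager
    def assigned(var: ContextVar, value):
        # Pair ContextVar.set() with ContextVar.reset() around a with-block.
        token = var.set(value)
        try:
            yield token
        finally:
            var.reset(token)

    a = ContextVar('a')
    with assigned(a, 1234):
        ...                           # a.get() == 1234 inside the block
    # a is reset to its previous state afterwards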
Joongi Kim added the comment:
After checking out PEP-567 (https://www.python.org/dev/peps/pep-0567/),
I'm adding njs to the nosy list.
--
nosy: +njs
New submission from Joongi Kim :
This is a rough, early idea suggestion about adding io_uring as an alternative I/O
multiplexing mechanism in Python (maybe in selectors and asyncio). io_uring is a
relatively new I/O mechanism introduced in Linux kernel 5.1.
https://lwn.net/Articles/776703
Joongi Kim added the comment:
Ah, yes, but one year has passed, so this may be another chance to discuss its
adoption, as new advances like tokio_uring have become available.
Joongi Kim added the comment:
As in the previous discussion, instead of tackling the stdlib right away, it would
be nice to evaluate the approach using third-party libraries, such as trio and/or
async-tokio, or maybe a new library.
I have a strong feeling that we need to improve the async file I/O
Change by Joongi Kim :
--
keywords: +patch
nosy: +Joongi Kim
nosy_count: 6.0 -> 7.0
pull_requests: +27160
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/28850
Joongi Kim added the comment:
It is also generating a deprecation warning:
> /opt/python/3.8.0/lib/python3.8/asyncio/queues.py:48: DeprecationWarning: The
> loop argument is deprecated since Python 3.8, and scheduled for removal in
> Python 3.10.
> self._finished = locks.Eve
Joongi Kim added the comment:
I just encountered this issue when doing "sys.exit(1)" in a Click-based CLI
program that internally uses an asyncio event loop wrapped in a context
manager, on Python 3.8.2.
Using uvloop or adding "time.sleep(0.1)" before "sys.e
Joongi Kim added the comment:
And I suspect that this issue is something similar to what I did in a recent
janus PR:
https://github.com/aio-libs/janus/blob/ec8592b91254971473b508313fb91b01623f13d7/janus/__init__.py#L84
to give a chance for specific callbacks to execute via an extra context
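My understanding of that workaround, sketched here as an assumption about the
PR's intent: run one extra loop iteration so already-scheduled callbacks get a
chance to execute before the loop goes away:

    import asyncio

    def close_loop_gracefully(loop: asyncio.AbstractEventLoop) -> None:
        # One extra checkpoint lets callbacks scheduled with call_soon() run
        # before the loop is closed.
        loop.run_until_complete(asyncio.sleep(0))
        loop.close()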
Change by Joongi Kim :
--
nosy: +achimnol
Joongi Kim added the comment:
From the given example, if I add "await q.aclose()" after "await
q.asend(123456)", it does not leak memory.
This is a good example showing that we should always wrap async generators with
an explicit "aclosing" context mana
Joongi Kim added the comment:
I've searched the Python documentation, and the docs must be updated to
explicitly state the necessity of aclose().
refs:
https://docs.python.org/3/reference/expressions.html#asynchronous-generator-functions
https://www.python.org/dev/peps/pep-0525/
I'
Change by Joongi Kim :
--
nosy: +njs
Change by Joongi Kim :
--
nosy: +Joongi Kim
nosy_count: 6.0 -> 7.0
pull_requests: +20687
pull_request: https://github.com/python/cpython/pull/21545
Change by Joongi Kim :
--
pull_requests: +22115
pull_request: https://github.com/python/cpython/pull/23217
Change by Joongi Kim :
--
nosy: +achimnol
Joongi Kim added the comment:
I strongly agree with having a distinction between CancelledError and other
general exceptions, though I don't have concrete ideas on good, unobtrusive ways
to achieve this.
If I write my code carefully I could control most cancellations explicitly,
but it is still
Change by Joongi Kim :
--
keywords: +patch
pull_requests: +5035
stage: -> patch review
New submission from Joongi Kim :
I installed Python 3.6.4 for macOS by downloading it from the official site
(www.python.org) and then tried installing 3.6.5 using pyenv.
The installation process then hangs here:
https://user-images.githubusercontent.com/555156/38078784-57e44462-3378-11e8
Joongi Kim added the comment:
I found that the reason was that my Python 3.6.4, installed via the official
installer, has "root:wheel" ownership while pyenv runs with my plain user
privileges. Using the chown command to change the ownership to
"joongi:admin" and retryi
Change by Joongi Kim :
--
nosy: +achimnol
Joongi Kim added the comment:
I like the trio-style instrumentation API because it could be used for more generic
purposes, not only for statistics.
This stats or instrumentation API would greatly help me utilize external
monitoring services such as Datadog in my production deployments
Change by Joongi Kim :
--
components: asyncio
nosy: achimnol, asvetlov, njs, yselivanov
priority: normal
severity: normal
status: open
title: Closing async generator while it is running does not raise an exception
type: behavior
versions: Python 3.6
New submission from Joongi Kim :
Here is the minimal example code:
https://gist.github.com/achimnol/965a6aecf7b1f96207abf11469b68965
Just run this code using "python -m pytest -s test.py" to see what happens.
(My setup uses Python 3.6.4 and pytest 3.3.2 on macOS High Sierra 10.13.2