twisteroid ambassador added the comment:
Well, this is unexpected: the same code running on Linux is throwing
mysterious GeneratorExit-related exceptions as well. I'm not sure whether this
is the same problem, but this one has a clearer traceback. I will attach the
full error log, bu
twisteroid ambassador added the comment:
I have attached a script that should be able to reproduce this problem. It's
not a minimal reproduction, but hopefully easy enough to trigger.
The script is a SOCKS5 proxy server listening on localhost:1080. In its current
form it does not nee
Change by twisteroid ambassador :
--
versions: +Python 3.9
___
Python tracker
<https://bugs.python.org/issue39116>
___
___
Python-bugs-list mailing list
Unsub
twisteroid ambassador added the comment:
This problem still exists on Python 3.9 and latest Windows 10.
I tried to catch the GeneratorExit and turn it into a normal Exception, and
things only got weirder from there. Often, several lines later, another await
statement would raise another
New submission from twisteroid ambassador :
I have been getting these strange exceptions since Python 3.8 on my Windows 10
machine. The external symptoms are many errors like "RuntimeError: aclose():
asynchronous generator is already running" and "Task was destroyed but it is
twisteroid ambassador added the comment:
With regards to the failing test, it looks like the test basically boils down
to testing whether loop.getaddrinfo('fe80::1%1', 80, type=socket.SOCK_STREAM)
returns (AF_INET6, SOCK_STREAM, *, *, ('fe80::1', 80, 0, 1)).
This feels like a dangerou
twisteroid ambassador added the comment:
AFAIK the reason why scope id is required for IPv6 is that every IPv6
interface has its own link-local address, and all these addresses are in
the same subnet, so without an additional scope id there’s no way to tell
from which interface an address can
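For illustration (assuming interface index 1 exists, which is usually the loopback interface on Linux), the scope id travels as the fourth element of the IPv6 sockaddr tuple:

```python
import socket

# 'fe80::1%1' names the link-local address fe80::1 on interface index 1.
# getaddrinfo() hands the scope id back as the 4th element of the IPv6
# sockaddr tuple, so two interfaces' identical addresses stay distinct.
infos = socket.getaddrinfo('fe80::1%1', 80, type=socket.SOCK_STREAM)
family, socktype, proto, canonname, sockaddr = infos[0]
# sockaddr is e.g. ('fe80::1', 80, 0, 1): (address, port, flowinfo, scope_id)
```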
twisteroid ambassador added the comment:
The difference is because you grabbed and print()ed the exceptions yourself in
Script 2, while in Script 1 you let Python's built-in unhandled exception
handler (sys.excepthook) print the traceback for you.
If you want a traceback, then you ne
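For example (a minimal sketch, not the scripts from this issue):

```python
import traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    # print(exc) shows only the message ("division by zero"); formatting
    # the exception's __traceback__ reproduces what sys.excepthook prints.
    lines = traceback.format_exception(type(exc), exc, exc.__traceback__)
    print(''.join(lines), end='')
```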
twisteroid ambassador added the comment:
The child watchers are documented now, see here:
https://docs.python.org/3/library/asyncio-policy.html#process-watchers
Sounds like FastChildWatcher
https://docs.python.org/3/library/asyncio-policy.html#asyncio.FastChildWatcher
is exactly what you
twisteroid ambassador added the comment:
Duplicate of issue35545, I believe.
--
nosy: +twisteroid ambassador
___
Python tracker
<https://bugs.python.org/issue36
twisteroid ambassador added the comment:
I feel like once you lay out all the requirements: taking futures from an
(async) generator, limiting the number of concurrent tasks, getting completed
tasks to one consumer "as completed", and an implicit requirement that back
pressur
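A hedged sketch of how those requirements might compose (limited_as_completed is an illustrative name, not an asyncio API):

```python
import asyncio

async def limited_as_completed(aws, limit):
    # Illustrative sketch: pull awaitables from `aws` at most `limit` at a
    # time and yield results in completion order.  The source iterator is
    # only advanced when a slot frees up, which gives implicit back pressure.
    pending = set()
    source = iter(aws)
    exhausted = False
    while pending or not exhausted:
        while not exhausted and len(pending) < limit:
            try:
                pending.add(asyncio.ensure_future(next(source)))
            except StopIteration:
                exhausted = True
        if not pending:
            break  # empty source
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            yield task.result()

async def demo():
    async def work(i):
        await asyncio.sleep(0.01 * i)
        return i
    gen = limited_as_completed((work(i) for i in range(5)), 2)
    return [r async for r in gen]

results = asyncio.run(demo())
```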
twisteroid ambassador added the comment:
I wrote a recipe on this idea:
https://gist.github.com/twisteroidambassador/f35c7b17d4493d492fe36ab3e5c92202
Untested, feel free to use it at your own risk.
--
___
Python tracker
<https://bugs.python.
twisteroid ambassador added the comment:
There is a way to distinguish whether a task is being cancelled from the
"inside" or "outside", like this:
async def task1func():
    task2 = asyncio.create_task(task2func())
    try:
        await asyncio.wait((tas
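Completing the truncated snippet above into a runnable sketch (task2func, the sleep duration, and the string result are illustrative assumptions):

```python
import asyncio

async def task2func():
    await asyncio.sleep(3600)  # stand-in for real work

async def task1func():
    task2 = asyncio.create_task(task2func())
    try:
        # If task1 is cancelled from *outside*, CancelledError is raised
        # right here; task2 merely finishing or failing just completes the
        # wait().  That difference lets the two cases be told apart.
        await asyncio.wait((task2,))
    except asyncio.CancelledError:
        task2.cancel()  # being cancelled from outside: clean up and obey
        raise

async def main():
    task1 = asyncio.create_task(task1func())
    await asyncio.sleep(0)   # let task1 reach its await
    task1.cancel()           # cancel from "outside"
    try:
        await task1
    except asyncio.CancelledError:
        return 'outside-cancelled'

result = asyncio.run(main())
```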
twisteroid ambassador added the comment:
Oh wait, there's also this in asyncio docs for loop.sock_connect:
Changed in version 3.5.2: address no longer needs to be resolved. sock_connect
will try to check if the address is already resolved by calling
socket.inet_pton(). I
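The documented check can be sketched like this (is_numeric_address is an illustrative helper, not part of asyncio's API):

```python
import socket

def is_numeric_address(host, family=socket.AF_INET):
    # Mirrors the documented check: a literal address parses with
    # inet_pton(), while a hostname that still needs resolving does not.
    try:
        socket.inet_pton(family, host)
        return True
    except OSError:
        return False
```

So is_numeric_address('127.0.0.1') is True while is_numeric_address('example.com') is False, meaning the latter still needs a getaddrinfo() round trip.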
twisteroid ambassador added the comment:
I just noticed that in the socket module, an AF_INET address tuple is allowed
to have an unresolved host name. Quote:
A pair (host, port) is used for the AF_INET address family, where host is a
string representing either a hostname in Internet domain
twisteroid ambassador added the comment:
Hi Emmanuel,
Are you referring to my PR 11403? I don't see where IPv6 uses separate
parameters.
--
___
Python tracker
<https://bugs.python.org/is
Change by twisteroid ambassador :
--
pull_requests: +10786, 10787, 10788, 10789
___
Python tracker
<https://bugs.python.org/issue35545>
Change by twisteroid ambassador :
--
pull_requests: +10790, 10791
stage: -> patch review
___
Python tracker
<https://bugs.python.org/issue33678>
twisteroid ambassador added the comment:
Looks like this bug is also caused by using _ensure_resolved() more than once
for a given host+port, so it can probably be fixed together with
https://bugs.python.org/issue35545 .
Masking sock.type should not be necessary anymore since
https
twisteroid ambassador added the comment:
Also I believe it's a good idea to change the arguments of _ensure_resolved()
from (address, *, ...) to (host, port, *, ...), and go through all its usages,
making sure we're not mixing host + port with address tuples everywhere i
twisteroid ambassador added the comment:
I think the root cause of this bug is a bit of confusion.
The "customer-facing" asyncio API, create_connection(), takes two arguments:
host and port. The lower-level API that actually deals with connecting sockets,
socket.con
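The mismatch between the two shapes can be illustrated with getaddrinfo(), which converts host + port into the sockaddr tuples the socket layer consumes (host and port values here are arbitrary):

```python
import socket

# create_connection() takes host and port separately; socket.connect()
# wants one fully resolved sockaddr tuple.  getaddrinfo() bridges the two.
host, port = 'localhost', 8080
family, socktype, proto, _, sockaddr = socket.getaddrinfo(
    host, port, type=socket.SOCK_STREAM)[0]
# sockaddr is e.g. ('127.0.0.1', 8080); that tuple, not host + port,
# is what socket.connect() actually consumes.
```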
twisteroid ambassador added the comment:
I don't have a Mac, so I have not tested Ronald's workaround. Assuming it
works, we will have to either i) implement platform-specific behavior and only
apply IPV6_V6ONLY on macOS for each AF_INET6 socket created, or ii) apply it to
al
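Option ii) might look like this (a sketch of the idea, not the committed fix):

```python
import socket

# Set IPV6_V6ONLY explicitly on every AF_INET6 socket, so the differing
# OS defaults (Linux, Windows, macOS) stop mattering: the socket can then
# never bind to or accept IPv4(-mapped) addresses.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
v6only = sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY)
sock.close()
```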
Change by twisteroid ambassador :
--
keywords: +patch
pull_requests: +10471
stage: -> patch review
___
Python tracker
<https://bugs.python.org/issu
twisteroid ambassador added the comment:
IMO macOS is at fault here, for even allowing an IPv6 socket to bind to an IPv4
address. ;-)
I have given some thought about this issue when writing my happy eyeballs
library. My current solution is closest to Neil's first suggestion, i.e. each
Change by twisteroid ambassador :
--
pull_requests: +9174
___
Python tracker
<https://bugs.python.org/issue34769>
twisteroid ambassador added the comment:
I’m now convinced that the bug we’re fixing and the original bug with debug
mode off are two separate bugs. With the fix in place and debug mode off, I’m
still seeing the original buggy behavior. Bummer.
In my actual program, I have an async
Change by twisteroid ambassador :
--
keywords: +patch
pull_requests: +9099
stage: -> patch review
___
Python tracker
<https://bugs.python.org/issu
twisteroid ambassador added the comment:
I'll open a PR with your diff soon, but I don't have a reliable unit test yet.
Also, it does not seem to fix the old problem with debug mode off. :-( I had
hoped that the problem with debug mode off is nothing more than
_asyncgen_finalize
twisteroid ambassador added the comment:
I have finally managed to reproduce this one reliably. The issue happens when
i) async generators are not finalized immediately and must be garbage collected
in the future, and ii) the garbage collector happens to run in a different
thread than the
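One way to avoid relying on garbage collection at all is to finalize async generators explicitly (a generic sketch, not the library code from this issue):

```python
import asyncio

closed = False

async def agen():
    global closed
    try:
        yield 1
        yield 2
    finally:
        closed = True  # finalization observed

async def main():
    gen = agen()
    try:
        async for value in gen:
            break  # abandon the generator early
    finally:
        # Awaiting aclose() finalizes the generator on the event loop,
        # instead of leaving it to a later GC pass that may run in a
        # different thread than the loop's.
        await gen.aclose()

asyncio.run(main())
```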
New submission from twisteroid ambassador :
When testing my happy eyeballs library, I occasionally run into issues with
async generators seemingly not finalizing. After setting loop.set_debug(True),
I have been seeing log entries like these:
Exception ignored in:
Traceback (most recent
twisteroid ambassador added the comment:
As an aside, I'm wondering whether it makes sense to add a blanket "assert
exception handler has not been called" condition to ProactorEventLoop's tests,
or even other asyncio tests. It looks like ProactorEventLoop is mor
twisteroid ambassador added the comment:
No problem. Running the attached test script on latest master, Windows 10 1803,
several errors like this are logged:
Exception in callback
_ProactorBaseWritePipeTransport._loop_writing(<_OverlappedF...events.py:479>)
handle: )
crea
twisteroid ambassador added the comment:
Well, I opened the PR, it shows up here, but there's no reviewer assigned.
--
___
Python tracker
<https://bugs.python.org/is
Change by twisteroid ambassador :
--
pull_requests: +7571
___
Python tracker
<https://bugs.python.org/issue31647>
twisteroid ambassador added the comment:
Turns out my typo when preparing the pull request had another victim: the
changelog entries in the documentation currently link to the wrong issue. I'll
make a PR to fix that typo; since it's just documentation, hopefully it can
still get i
Change by twisteroid ambassador :
--
pull_requests: +7500
___
Python tracker
<https://bugs.python.org/issue33833>
New submission from twisteroid ambassador :
When running the built-in regression tests, although
test_sendfile_close_peer_in_the_middle_of_receiving on ProactorEventLoop
completes successfully, an InvalidStateError is logged.
Console output below
Change by twisteroid ambassador :
--
keywords: +patch
pull_requests: +7251
stage: -> patch review
___
Python tracker
<https://bugs.python.org/issu
Change by twisteroid ambassador :
--
type: -> behavior
___
Python tracker
<https://bugs.python.org/issue33833>
New submission from twisteroid ambassador :
Sometimes when a socket transport under ProactorEventLoop is writing while the
peer closes the connection, asyncio logs an AssertionError.
Attached is a script that fairly reliably reproduces the behavior on my
computer.
This is caused by
Change by twisteroid ambassador :
--
pull_requests: +6867
___
Python tracker
<https://bugs.python.org/issue33530>
Change by twisteroid ambassador :
--
keywords: +patch
pull_requests: +6785
stage: -> patch review
___
Python tracker
<https://bugs.python.org/issue31647>
Change by twisteroid ambassador :
--
pull_requests: +6786
___
Python tracker
<https://bugs.python.org/issue31467>
twisteroid ambassador added the comment:
I was about to write a long comment asking what the appropriate behavior should
be, but then saw that _ProactorSocketTransport already handles the same issue
properly, so I will just change _SelectorSocketTransport to do the same thing
Change by twisteroid ambassador :
--
pull_requests: +6783
___
Python tracker
<https://bugs.python.org/issue31467>
Change by twisteroid ambassador :
--
keywords: +patch
pull_requests: +6734
stage: -> patch review
___
Python tracker
<https://bugs.python.org/issue33630>
___
_
Change by twisteroid ambassador :
--
keywords: +patch
pull_requests: +6733
stage: -> patch review
___
Python tracker
<https://bugs.python.org/issue33530>
___
_
New submission from twisteroid ambassador :
Add a Happy Eyeballs implementation to asyncio, based on work in
https://github.com/twisteroidambassador/async_stagger .
Current plans:
- Add 2 keyword arguments to loop.create_connection and asyncio.open_connection.
* delay: Optional[float
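For reference, these keyword arguments shipped in Python 3.8 as happy_eyeballs_delay and interleave; a hypothetical usage sketch (the host and delay value are illustrative):

```python
import asyncio
import inspect

# The `delay` parameter from the plan above landed as happy_eyeballs_delay
# on loop.create_connection() (and is forwarded by asyncio.open_connection).
params = inspect.signature(
    asyncio.base_events.BaseEventLoop.create_connection).parameters

async def fetch():
    # 0.25 s is the connection-attempt delay recommended by RFC 8305.
    # Calling this needs network access; it is only defined here.
    reader, writer = await asyncio.open_connection(
        'example.com', 80, happy_eyeballs_delay=0.25)
    writer.close()
    await writer.wait_closed()
```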
twisteroid ambassador added the comment:
I would like to comment on the last observation about current_task().cancel().
I also ran into this corner case recently.
When a task is cancelled from outside, by virtue of there *being something
outside doing the cancelling*, the task being
Change by twisteroid ambassador :
--
nosy: +giampaolo.rodola, haypo
___
Python tracker
<https://bugs.python.org/issue31647>
twisteroid ambassador added the comment:
This issue is somewhat related to issue27223, in that both are caused by using
self._sock after it has already been assigned None when the connection is
closed. It seems like Transports in general may need better protection from
this kind of behavior
New submission from twisteroid ambassador :
Currently, if one attempts to do write_eof() on a StreamWriter after the
underlying transport is already closed, an AttributeError is raised:
Traceback (most recent call last):
File "\scratch_3.py", line 34, in main_coro
writer
New submission from twisteroid ambassador:
In docs / Library Reference / asyncio / Transports and Protocols, it is
mentioned that "asyncio currently implements transports for TCP, UDP, SSL, and
subprocess pipes. The methods available on a transport depend on the
transport’s kind."