Vincent Michel added the comment:
My team ran into this issue while developing a FUSE application too.
In an effort to help this issue move forward, I tried to list all occurrences
of the `isatty` C function in the CPython code base. I found 14 of them.
9 of them are directly related to
Vincent Michel added the comment:
Here's a possible patch that fixes the 3 unprotected calls to `isatty`
mentioned above. It successfully passes the test suite. I can submit a PR with
this patch if necessary.
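The idea behind protecting an `isatty` call can be sketched from Python with `ctypes`. This is an illustrative, Unix-only helper (hypothetical, not the actual C patch): `isatty()` legitimately sets `errno` (e.g. to `ENOTTY`) on non-tty descriptors, so a caller that relies on `errno` must save and restore it around the call.

```python
import ctypes
import os

# Load the C library with errno tracking enabled (Unix-only sketch).
libc = ctypes.CDLL(None, use_errno=True)
libc.isatty.argtypes = [ctypes.c_int]
libc.isatty.restype = ctypes.c_int

def isatty_preserving_errno(fd):
    # Hypothetical helper mirroring the patch's idea: save errno,
    # call isatty(), then restore errno so the caller never sees
    # the ENOTTY (or other) value isatty may have set.
    saved = ctypes.get_errno()
    try:
        return bool(libc.isatty(fd))
    finally:
        ctypes.set_errno(saved)
```

On a pipe, `isatty()` returns false and would normally set `errno` to `ENOTTY`; the wrapper leaves `errno` exactly as it found it.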
--
keywords: +patch
Added file: https://bugs.python.org/file5027
Change by Vincent Michel :
--
nosy: +vxgmichel
nosy_count: 2.0 -> 3.0
pull_requests: +26670
pull_request: https://github.com/python/cpython/pull/28250
___
Python tracker
<https://bugs.python.org/issu
Change by Vincent Michel :
--
pull_requests: +26671
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/28250
Change by Vincent Michel :
--
pull_requests: -26670
Python tracker
<https://bugs.python.org/issue44129>
Vincent Michel added the comment:
There are a couple of reasons why I did not make changes to the
stdstream-related functions.
The first one is that a PR with many changes is less likely to get reviewed and
merged than a PR with fewer changes. The second one is that it's hard for
New submission from Vincent Michel :
On Windows, the timestamps produced by time.time() often end up being equal
because of the 15 ms resolution:
>>> time.time(), time.time()
(1580301469.6875124, 1580301469.6875124)
The problem I noticed is that a value produced by time_n
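The float resolution at play can be checked directly with `math.ulp` (a sketch using the timestamp from the example above):

```python
import math

# Around the 2020 epoch (~1.58e9 seconds), the spacing between adjacent
# Python floats is 2**-22 seconds, i.e. roughly 238 nanoseconds:
t = 1580301469.6875124
spacing = math.ulp(t)

# So a float timestamp cannot distinguish moments closer than ~238 ns,
# and two time.time() calls inside one 15 ms Windows clock tick can
# return the exact same float, as shown in the report.
```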
Vincent Michel added the comment:
I thought about it a bit more and I realized there is no way to recover the
time in hundreds of nanoseconds from the float produced by `time.time()` (since
the Windows time currently takes 54 bits and will take 55 bits in 2028).
That means `time()` and
Change by Vincent Michel :
Added file: https://bugs.python.org/file48881/comparing_errors.py
Python tracker
<https://bugs.python.org/issue39484>
Vincent Michel added the comment:
Thanks for your answers, that was very informative!
> >>> a/10**9
> 1580301619.9061854
> >>> a/1e9
> 1580301619.9061852
>
> I'm not sure which one is "correct".
Originally, I thought `a/10**9` was more p
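The accuracy of the two conversions can be compared exactly with `fractions.Fraction`, using the nanosecond value quoted in this thread:

```python
from fractions import Fraction

a = 1580301619906185300  # nanosecond timestamp from the discussion

exact = Fraction(a, 10**9)
via_int = a / 10**9   # a single, correctly-rounded true division
via_flt = a / 1e9     # a is first rounded to a float, then divided

# Python's int/int true division is correctly rounded, so it can never
# be less accurate than the double-rounded float path:
err_int = abs(Fraction(via_int) - exact)
err_flt = abs(Fraction(via_flt) - exact)
```

For this particular value the two results actually differ in their last digit, matching the REPL output quoted above.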
Vincent Michel added the comment:
> The problem is that there is a double rounding in [...]
Actually `float(x) / 1e9` and `x / 1e9` seem to produce the same results:
```
import time
import itertools
now = time.time
```
Vincent Michel added the comment:
@serhiy.storchaka
> 1580301619906185300/10**9 is more accurate than 1580301619906185300/1e9.
I don't know exactly what `F` represents in your example but here is what I get:
>>> r = 15
Vincent Michel added the comment:
@mark.dickinson
> To be clear: the following is flawed as an accuracy test, because the
> *multiplication* by 1e9 introduces additional error.
Interesting, I completely missed that!
But did you notice that the full conversion might still perform
Change by Vincent Michel :
Added file: https://bugs.python.org/file48883/comparing_conversions.py
Change by Vincent Michel :
--
pull_requests: +14550
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/14755
New submission from Vincent Michel:
Calling `config_parser.read` with `'test'` is equivalent to:
    config_parser.read(['test'])
while calling `config_parser.read` with `b'test'` is treated as:
    config_parser.read([116, 101, 115, 116])
which means py
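The root cause can be shown in a few lines: `read()` special-cases `str` into a single-element list, but `bytes` falls through that check, and iterating over bytes yields one integer per byte (a sketch of the dispatch logic, not the actual configparser source):

```python
filenames = b'test'

# str is wrapped into a list; bytes is not, so it gets iterated directly:
if isinstance(filenames, str):
    filenames = [filenames]

# Iterating bytes yields ints, so each "filename" ends up being a byte value:
as_list = list(filenames)
```

Each per-byte "filename" then fails to open, so the bytes path silently parses nothing.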
Changes by Vincent Michel :
--
pull_requests: +3417
Python tracker
<http://bugs.python.org/issue31307>
Changes by Vincent Michel :
--
pull_requests: +3418
Python tracker
<http://bugs.python.org/issue29627>
New submission from Vincent Michel :
As far as I can tell, this issue is different than:
https://bugs.python.org/issue34730
I noticed `async_gen.aclose()` raises a GeneratorExit exception if the async
generator finalization awaits and silences a failing unfinished future (see
example.py
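The shape of the pattern under discussion, reduced to a minimal sketch (this version finalizes cleanly; the report concerns what happens when the awaited operation silences a failing future):

```python
import asyncio

async def agen():
    try:
        yield 1
    finally:
        # Finalization that awaits: aclose() resumes the generator with
        # GeneratorExit at the yield, then runs this finally block.
        await asyncio.sleep(0)

async def main():
    g = agen()
    first = await g.__anext__()
    await g.aclose()  # drives the finally block to completion
    return first

result = asyncio.run(main())
```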
Change by Vincent Michel :
Added file: https://bugs.python.org/file47973/test.py
Python tracker
<https://bugs.python.org/issue35409>
Change by Vincent Michel :
--
keywords: +patch
Added file: https://bugs.python.org/file47974/patch.diff
Change by Vincent Michel :
--
keywords: +patch
pull_requests: +12333
stage: -> patch review
Python tracker
<https://bugs.python.org/issue31062>
Vincent Michel added the comment:
I ran into this issue too so I went ahead and created a pull request
(https://github.com/python/cpython/pull/12370).
--
nosy: +vxgmichel
versions: +Python 3.7, Python 3.8
New submission from Vincent Michel :
It's currently not possible to receive replies from multicast UDP with asyncio,
as reported in the following issue:
https://github.com/python/asyncio/issues/480
That's because asyncio connects the UDP socket to the broadcast address,
causing a
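A sketch of the workaround direction: omitting `remote_addr` when creating the datagram endpoint leaves the socket unconnected, so the OS does not filter out datagrams arriving from other source addresses (loopback and port 0 here are illustrative):

```python
import asyncio

class ReplyProtocol(asyncio.DatagramProtocol):
    def datagram_received(self, data, addr):
        # With an unconnected socket, replies from *any* peer arrive here.
        print(addr, data)

async def main():
    loop = asyncio.get_running_loop()
    # No remote_addr: the socket stays unconnected instead of being
    # connected to the broadcast address.
    transport, protocol = await loop.create_datagram_endpoint(
        ReplyProtocol, local_addr=('127.0.0.1', 0))
    local = transport.get_extra_info('sockname')
    transport.close()
    return local

addr = asyncio.run(main())
```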
New submission from Vincent Michel :
I'm not sure whether it is intended or not, but I noticed a change in the
behavior of `StreamReader` between version 3.7 and 3.8.
Basically, reading some received data from a closed TCP stream using
`StreamReader.read` might hang forever, under ce
Vincent Michel added the comment:
Hi Andrew!
I reverted the commit associated with the following PR, and the hanging issue
disappeared:
https://github.com/python/cpython/pull/9201
I'll look into it.
--
type: -> behavior
Vincent Michel added the comment:
I found the culprit:
https://github.com/python/cpython/blob/a05bef4f5be1bcd0df63ec0eb88b64fdde593a86/Lib/asyncio/streams.py#L350
The call to `_untrack_reader` is performed too soon. Closing the transport
causes `protocol.connection_lost()` to be "called
Change by Vincent Michel :
--
pull_requests: +9528
stage: -> patch review
Python tracker
<https://bugs.python.org/issue35065>
New submission from Vincent Michel:
The bytearray type is a mutable object that supports the read-write buffer
interface. The fcntl.ioctl() function is supposed to handle mutable objects
(such as array.array) for the system calls, in order to pass objects that are
more than 1024 bytes long.
The
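The buffer-interface distinction the report relies on can be checked with `memoryview` (a small sketch; the 2048-byte size is just illustrative):

```python
# bytearray exposes a writable buffer, like array.array, so a system call
# can write its result back into the object in place:
buf = bytearray(2048)
writable = not memoryview(buf).readonly

# bytes, by contrast, only supports the read-only buffer interface:
readonly = memoryview(b'\x00' * 2048).readonly
```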
Vincent Michel added the comment:
While I was working on the documentation update, I realized that what we called
`run_coroutine_threadsafe` is actually a thread-safe version of
`ensure_future`. What about renaming it to `ensure_future_threadsafe`? It might
be a bit late since
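The `ensure_future` analogy shows up in how the function is used. A minimal sketch (the loop-in-a-thread setup here is illustrative, not taken from the report):

```python
import asyncio
import threading

# A loop running in a dedicated thread, the documented use case.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def coro():
    return 42

# Like ensure_future, this schedules the coroutine on a loop, but it is
# safe to call from another thread and returns a
# concurrent.futures.Future usable from synchronous code.
future = asyncio.run_coroutine_threadsafe(coro(), loop)
result = future.result(timeout=5)

loop.call_soon_threadsafe(loop.stop)
```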
Vincent Michel added the comment:
I attached the first version of the documentation for
`run_coroutine_threadsafe`. The `Concurrency and multithreading` section also
needs to be updated but I could already use some feedback.
Also, I think we should add a `try-except` in the callback function
Vincent Michel added the comment:
> The docs look good.
Should I add a note to explain why the loop argument has to be explicitly
passed? (there is a note at the beginning of the `task functions` section
stating "In the functions below, the optional loop argument ...")
> Wha
Vincent Michel added the comment:
I attached a patch that should sum up all the points we discussed.
I replaced the `call_soon_threadsafe` example with:
    loop.call_soon_threadsafe(callback, *args)
because I couldn't find a simple, specific usage. Let me know if you think of a
better ex
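The generic form above can be exercised in a few lines (a sketch; `callback` and the arguments are hypothetical placeholders):

```python
import asyncio

results = []

def callback(*args):
    # Runs inside the event loop's thread.
    results.extend(args)

async def main():
    loop = asyncio.get_running_loop()
    # The generic form from the docs discussion:
    loop.call_soon_threadsafe(callback, 1, 2, 3)
    await asyncio.sleep(0)  # yield to the loop so the callback runs

asyncio.run(main())
```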
Vincent Michel added the comment:
I agree with Yury's ideas about the implementation of this feature. However, it
is a bit confusing to have `asyncio.get_event_loop` defined as:
    def get_event_loop():
        policy = get_event_loop_policy()
        return policy.get_running_loop