Hrvoje Nikšić added the comment:
> Any suggestions on what needs to be done for current revisions?
Hi! I'm the person who submitted this issue back in 2013. Let's take a look at
how things are in Python 3.10:
Python 3.10.2 (main, Jan 13 2022, 19:06:22) [GCC 10.3.0] on lin
Hrvoje Nikšić added the comment:
Justin, thanks for updating the PR! I'll take another look at the code.
--
___
Python tracker
<https://bugs.python.org/is
Hrvoje Nikšić added the comment:
Hi, thanks for providing a PR. One thing I noticed is that the implementation
in the PR yields results of the futures from the generator. This issue proposes
a generator that instead yields the futures passed to as_completed. This is
needed not just for
New submission from Hrvoje Nikšić :
Originally brought up on StackOverflow,
https://stackoverflow.com/questions/60799366/nested-async-comprehension :
This dict comprehension parses and works correctly:
async def bar():
    return {
        n: await foo(n) for n in [1, 2, 3]
    }
But making
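A minimal runnable sketch of the working case (foo here is a placeholder coroutine assumed for illustration):

```python
import asyncio

async def foo(n):
    # placeholder coroutine, assumed for illustration
    return n * 10

async def bar():
    # await inside a dict comprehension parses and runs correctly
    return {n: await foo(n) for n in [1, 2, 3]}

print(asyncio.run(bar()))  # {1: 10, 2: 20, 3: 30}
```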
Hrvoje Nikšić added the comment:
> The only way I could see this to work as intended without making any changes
> to threading would be to optionally use daemon threads and avoid joining the
> threads in `executor.shutdown()` if `wait_at_exit` is set to False in the
> constru
Hrvoje Nikšić added the comment:
Thanks for the clarification, I didn't know about the change to non-daemon
threads.
I still think this patch is useful, and won't harm general use because it is
opt-in, just like daemon threads themselves. I suggest updating the PR to
specify n
Hrvoje Nikšić added the comment:
> I don't think there's much ThreadPoolExecutor can do. If you drop the
> references to the threads, they still exist and they will still be waited upon
> interpreter exit.
ThreadPoolExecutor introduces additional waiting of its own, and i
Change by Hrvoje Nikšić :
--
title: Event loop implementation docs advertise set_event_loop -> Event loop
implementation docs advertise set_event_loop which doesn't work with asyncio.run
New submission from Hrvoje Nikšić :
The docs of SelectorEventLoop and ProactorEventLoop contain examples that call
asyncio.set_event_loop:
selector = selectors.SelectSelector()
loop = asyncio.SelectorEventLoop(selector)
asyncio.set_event_loop(loop)
But this won't hav
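The mismatch can be demonstrated with a short sketch: asyncio.run() creates a fresh event loop internally, so the loop installed with set_event_loop() is never the one that actually runs the coroutine.

```python
import asyncio
import selectors

async def current_loop():
    return asyncio.get_running_loop()

selector = selectors.SelectSelector()
loop = asyncio.SelectorEventLoop(selector)
asyncio.set_event_loop(loop)

# asyncio.run() ignores the installed loop and creates its own:
running = asyncio.run(current_loop())
print(running is loop)  # False
loop.close()
```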
New submission from Hrvoje Nikšić :
This is a followup on issue38178. While testing the new code, I noticed that my
change introduced a bug, where the code still attempts to pass "loop" when
constructing an EchoClientProtocol. A pull request is attached.
Also, I've noticed that
Change by Hrvoje Nikšić :
--
keywords: +patch
pull_requests: +15806
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/16202
Hrvoje Nikšić added the comment:
Raymond, no problem; I guess I assumed that the authors are following the bug
tracker (or have possibly moved on and are inactive).
I also had reason to believe the change to be non-controversial, since it is in
line with Yury's own recommendations, e.g.
Change by Hrvoje Nikšić :
--
keywords: +patch
pull_requests: +15769
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/16159
New submission from Hrvoje Nikšić :
The EchoClientProtocol example receives a "loop" argument, which is not used at
all in the TCP example, and is used to create a future in the UDP example. In
modern asyncio code the explicit loop arguments are no longer used since the
loop can
Hrvoje Nikšić added the comment:
@asvetlov The idea of the new flag is to disable any subsequent waiting for
futures after ThreadPoolExecutor.shutdown(wait=False) returns.
Currently the additional waiting is implemented using "atexit", so I assumed it
referred to process
Change by Hrvoje Nikšić :
--
keywords: +patch
pull_requests: +13161
stage: -> patch review
Hrvoje Nikšić added the comment:
@matrixise I've signed the CLA in the meantime, and it's now confirmed by
https://check-python-cla.herokuapp.com/
Thanks for the pointer.
Hrvoje Nikšić added the comment:
Ok, found it, and I've now updated the github name on my bpo account. I'll
gladly sign the CLA if needed, I thought it wasn't necessary for small changes
based on previous experience. Please advise whether it
Hrvoje Nikšić added the comment:
> Are you interested in writing a patch?
Yes - I wanted to check if there is interest in the feature before I commit
time to write the patch, documentation, tests, etc. (And also I wanted to check
if there's a better way to do it.)
In any case, th
Hrvoje Nikšić added the comment:
How do I connect the accounts?
Please note that I've previously submitted PRs which have been accepted, e.g.
https://bugs.python.org/issue34476 and https://bugs.python.org/issue35465
Hrvoje Nikšić added the comment:
I agree with the last comment. The disowning functionality is only used in
specific circumstances, so it's perfectly fine to keep the functionality as a
shutdown flag. I also agree that the change cannot be *unconditional*, for
backward compatibility i
Hrvoje Nikšić added the comment:
Also, the docstring of asyncio.Lock still states:
    When more than one coroutine is blocked in acquire() waiting for
    the state to turn to unlocked, only one coroutine proceeds when a
    release() call resets the state to unlocked; first coroutine which
New submission from Hrvoje Nikšić :
At interpreter shutdown, Python waits for all pending futures of all executors
to finish. There seems to be no way to disable the wait for pools that have
been explicitly shut down with pool.shutdown(wait=False). The attached script
demonstrates the issue
New submission from Hrvoje Nikšić :
It seems impossible to correctly close() an asyncio Process on which terminate
has been invoked. Take the following coroutine:
async def test():
    proc = await asyncio.create_subprocess_shell(
        "sleep 1", stdout=asyncio.subpr
Hrvoje Nikšić added the comment:
Done, https://github.com/python/cpython/pull/11145
New submission from Hrvoje Nikšić :
In https://stackoverflow.com/q/53704709/1600898 a StackOverflow user asked how
the add_signal_handler event loop method differs from the signal.signal
normally used by Python code.
The add_signal_handler documentation is quite brief - if we exclude the
Hrvoje Nikšić added the comment:
I didn't start working on the PR, so please go ahead if you're interested.
One small suggestion: If you're implementing this, please note that the
proof-of-concept implementation shown in the description is not very efficient
because each cal
Hrvoje Nikšić added the comment:
If there is interest in this, I'd like to attempt a PR for a sync/async variant
of as_completed.
Note that the new docs are *much* clearer, so the first (documentation) problem
from the description is now fixed. Although the documentation is still brief
Hrvoje Nikšić added the comment:
Agreed about the special case. Minor change suggestion:
`sleep()` always suspends the current task, allowing other tasks to run.
That is, replace "switches execution to another [task]" because there might not
be other tasks or they mi
Hrvoje Nikšić added the comment:
The issue is because the current documentation *doesn't* say that
"`asyncio.sleep()` always pauses the current task and switches execution to
another one", it just says that it "blocks for _delay_ seconds".
With that desc
Hrvoje Nikšić added the comment:
That's exactly it, thanks! I have no idea how I missed it, despite looking (I
thought) carefully.
But yes, they should be linked from
https://docs.python.org/3/library/stdtypes.html . Just as currently there is
https://docs.python.org/3/library/stdtypes
New submission from Hrvoje Nikšić :
Coroutine objects have public methods such as send, close, and throw, which do
not appear to be documented. For example, at
https://stackoverflow.com/q/51975658/1600898 a StackOverflow user asks how to
abort an already created (but not submitted) coroutine
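For reference, a minimal sketch of the undocumented methods in question; send() steps the coroutine manually, and close() disposes of one that will never be awaited:

```python
async def job():
    return 42

coro = job()

# stepping a coroutine manually with send(); it finishes by
# raising StopIteration carrying the return value
try:
    coro.send(None)
except StopIteration as exc:
    print(exc.value)  # 42

# a second coroutine that is abandoned instead: close() disposes
# of it, avoiding the "coroutine was never awaited" warning
coro2 = job()
coro2.close()
```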
New submission from Hrvoje Nikšić :
When an SSL handshake fails in asyncio, an exception traceback is logged to
standard error even if the application code catches the exception. This logging
cannot be suppressed, except by providing a custom exception handler for the
whole event loop. The
Hrvoje Nikšić added the comment:
Also, the "Create a coroutine ..." wording in the current documentation is a
bit strange - sleep() is already marked as a coroutine, and documentation of
other coroutines simply states their effect in
New submission from Hrvoje Nikšić :
Looking at the implementation and at the old issue at
https://github.com/python/asyncio/issues/284 shows that asyncio.sleep
special-cases asyncio.sleep(0) to mean "yield control to the event loop"
without incurring additional overhead of sleepin
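The special case can be observed directly; in this sketch two tasks take turns, and each await asyncio.sleep(0) hands control back to the event loop (the task names are illustrative):

```python
import asyncio

order = []

async def worker(name):
    for _ in range(2):
        order.append(name)
        # sleep(0) is special-cased to yield to the event loop
        # immediately, without setting up a timer
        await asyncio.sleep(0)

async def main():
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())
print(order)
```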
Hrvoje Nikšić added the comment:
I would definitely not propose or condone sacrificing performance.
Part of the reason why I suggested the check is that it can be done efficiently
- it is literally a comparison of two integers, both of which are obtained
trivially. I would hope that it
Hrvoje Nikšić added the comment:
> I'd be OK with this if the performance penalty is within 0.5% in
> microbenchmarks for asyncio & uvloop.
@yselivanov Are you referring to specific microbenchmarks published somewhere,
or the general "echo server"
New submission from Hrvoje Nikšić :
Looking at StackOverflow's python-asyncio tag[1], it appears that it's a very
common mistake for users to invoke asyncio functions or methods from a thread
other than the event loop thread. In some cases this happens because the user
is car
Hrvoje Nikšić added the comment:
Another option occurred to me: as_completed could return an object that
implements both synchronous and asynchronous iteration protocol:
class as_completed:
    def __init__(self, fs, *, loop=None, timeout=None):
        self.__fs = fs
        self.__loop = loop
Hrvoje Nikšić added the comment:
"""
At the moment this can be done but it will cancel all the coroutines with any
exception that is raised and at some occasions this may not be desired.
"""
Does wait() really "cancel all the coroutines"? The documen
Hrvoje Nikšić added the comment:
Deprecating Event.wait would be incorrect because Event was designed to mimic
the threading.Event class which has a (blocking) wait() method[1].
Supporting `await event` is still worthwhile, though.
[1]
https://docs.python.org/2/library/threading.html
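For context, a minimal sketch of the blocking-style method that mirrors threading.Event.wait():

```python
import asyncio

async def main():
    event = asyncio.Event()
    results = []

    async def waiter():
        # mirrors threading.Event.wait(): block until set() is called
        await event.wait()
        results.append("woken")

    task = asyncio.create_task(waiter())
    await asyncio.sleep(0)  # let the waiter start and block
    event.set()
    await task
    return results

print(asyncio.run(main()))  # ['woken']
```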
Change by Hrvoje Nikšić :
--
type: -> enhancement
Hrvoje Nikšić added the comment:
Of course, `yield from done` would actually have to be `for future in done:
yield future`, since async generators don't support yield from.
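A sketch of the shape being described (the helper name iterate_completed is illustrative, not from the PR):

```python
import asyncio

async def iterate_completed(fs):
    # async generators do not support "yield from", so the
    # completed futures are yielded one at a time instead
    done, _pending = await asyncio.wait(fs)
    for future in done:
        yield future

async def main():
    tasks = [asyncio.create_task(asyncio.sleep(0, result=i)) for i in (1, 2)]
    results = []
    async for fut in iterate_completed(tasks):
        results.append(fut.result())
    return sorted(results)

print(asyncio.run(main()))  # [1, 2]
```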
New submission from Hrvoje Nikšić :
Judging by questions on the StackOverflow python-asyncio tag[1][2], it seems
that users find it hard to understand how to use as_completed correctly. I have
identified three issues:
* It's somewhat sparingly documented.
A StackOverflow user ([2]) d
Change by Hrvoje Nikšić :
--
components: +asyncio
nosy: +asvetlov, yselivanov
New submission from Hrvoje Nikšić :
Looking at a StackOverflow question[1], I was unable to find a way to correctly
close an event loop that uses run_in_executor() for long-running tasks.
The question author tried to implement the following scenario:
1. create some tasks that use
Hrvoje Nikšić added the comment:
The issue is also present in Python 3.7.0b1.
--
versions: +Python 3.7
Hrvoje Nikšić added the comment:
I encountered this bug while testing the code in this StackOverflow answer:
https://stackoverflow.com/a/48565011/1600898
The code at the end of the answer runs on Python 3.5, but fails on 3.6 with the
"unexpected keyword argument 'manager_own
Hrvoje Nikšić added the comment:
I am of course willing to sign the CLA (please send further instructions via
email), although I don't know how useful my original patch is, given that it
caches the null context manager.
Hrvoje Nikšić added the comment:
For what it's worth, we are still using our own null context manager function
in critical code. We tend to avoid contextlib.ExitStack() for two reasons:
1) it is not immediately clear from looking at the code what ExitStack() means.
(Unlik
Hrvoje Nikšić added the comment:
> Can you suggest a couple of sentences you would have like to have
> seen, and where?
Thanks, I would suggest adding something like this to the documentation of
ast.parse:
"""
``parse`` raises ``SyntaxError`` if the compiled s
Hrvoje Nikšić added the comment:
> The appropriate fix would probably be to add a sentence to the
> `ast.PyCF_ONLY_AST` documentation to say that some syntax errors
> are only detected when compiling the AST to a code object.
Yes, please. I'm not saying the current behavior is w
New submission from Hrvoje Nikšić:
Our application compiles snippets of user-specified code using the compile
built-in with ast.PyCF_ONLY_AST flag. At this stage we catch syntax errors and
perform some sanity checks on the AST. The AST is then compiled into actual
code using compile() and run
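The two-stage behavior described here can be sketched as follows (the snippet text is illustrative):

```python
import ast

# Stage 1: parsing with PyCF_ONLY_AST succeeds -- the statement is
# grammatically valid, so no SyntaxError is raised here
tree = compile("return 42", "<snippet>", "exec", ast.PyCF_ONLY_AST)

# Stage 2: compiling the AST to a code object performs further
# checks, and only then reports "'return' outside function"
try:
    compile(tree, "<snippet>", "exec")
    raised = False
except SyntaxError:
    raised = True
print(raised)  # True
```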
Hrvoje Nikšić added the comment:
You can simplify pickle_lambda in the test by using marshal.dumps(code_obj) and
marshal.loads(code_obj) to dump and load the code object without going through
its entire guts. It would be a shame to have to change a pickle test just
because some detail of the
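The suggested marshal round-trip, sketched (the helper name is illustrative):

```python
import marshal

def roundtrip_code(code_obj):
    # marshal can serialize a code object wholesale, without
    # enumerating its individual fields
    return marshal.loads(marshal.dumps(code_obj))

code = compile("x + 1", "<test>", "eval")
restored = roundtrip_code(code)
print(eval(restored, {"x": 41}))  # 42
```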
Hrvoje Nikšić added the comment:
Barun, Serhiy, thanks for picking this up so quickly.
I would further suggest avoiding the use of a fixed buffer in abspath
(_getfullpathname, but only abspath seems to call it). Other filesystem calls
are using the interface where PyArg_ParseTuple allocates the
Hrvoje Nikšić added the comment:
The problem can be encountered and easily reproduced by calling os.path
functions, such as os.path.abspath, with a sufficiently large string on Windows:
>>> os.path.abspath("a" * 1024)
Traceback (most recent call last):
File "", l
New submission from Hrvoje Nikšić:
The documentation for the "es#" format (and the "et#" that derives from it)
documents the possibility of providing an already allocated buffer. Buffer
overflow is detected and handled as follows: "If the buffer is not large
enough,
Hrvoje Nikšić added the comment:
Note that defaulting to unsafe math in extensions will make *their* use of the
Py_NAN macro break under icc.
If we go that route ("-fp-model strict" for Python build, but not for
extensions), we should also apply the attached patch that defines
Hrvoje Nikšić added the comment:
Using -fp-model strict (or other appropriate icc flag) seems like a reasonable
resolution.
It should likely also be applied to Python 3.x, despite the version field of
this issue. (Even if float('nan') happens to work in current 3.x, internal
Hrvoje Nikšić added the comment:
Mark:
> > If Python requires strict IEEE 754 floating-point,
>
> It doesn't (at the moment).
Does this imply that my patch is a better fix than requiring the builder to
specify -fp-model strict to icc?
For our use case either solut
Hrvoje Nikšić added the comment:
The compilation was performed with the default flags, i.e. without -fp-model
strict or similar.
If Python requires strict IEEE 754 floating-point, it should probably be
mentioned at a prominent place in the documentation. Administrators building
Python with
Hrvoje Nikšić added the comment:
sys.float_repr_style is 'short'.
I haven't actually tried this in Python 3.5, but I did note the same definition
of Py_NAN (which is used elsewhere in the code), so I tagged the issue as
likely also
New submission from Hrvoje Nikšić:
On a Python compiled with Intel C compiler, float('nan') returns 0.0. This
behavior can be reproduced with icc versions 11 and 12.
The definition of Py_NAN in include/pymath.h expects `HUGE_VAL * 0.0` to
compile to a NaN value on IEEE754
Hrvoje Nikšić added the comment:
Indeed, that works, thanks. Here is the updated patch for review, passing all
tests.
--
Added file: http://bugs.python.org/file31908/exitstack.diff
Hrvoje Nikšić added the comment:
Here is the updated patch, with a very minor improvement (no longer
unnecessarily holds on to original exc_info), and with new tests. The tests
test for the non-suppression of exit-exception (which fails without the fix)
and for the correct suppression of body
Hrvoje Nikšić added the comment:
Nick, thanks for the review. Do you need me to write the patch for the test
suite along with the original patch?
New submission from Hrvoje Nikšić:
While using contextlib.ExitStack in our project, we noticed that its __exit__
method suppresses the exception raised in any context manager's __exit__
except the outermost one. Here is a test case to reproduce the problem:
reproduce the problem:
clas
Hrvoje Nikšić added the comment:
Thanks for pointing out the make_header(decode_header(...)) idiom, which I was
indeed not aware of. It solves the problem perfectly.
I agree that it is a doc bug. While make_header is documented on the same
place as decode_header and Header itself, it is not
Hrvoje Nikšić added the comment:
An example of the confusion that lack of a clear "convert to unicode" method
creates is illustrated by this StackOverflow question:
http://stackoverflow.com/q/15516958/1600898
Changes by Hrvoje Nikšić :
--
type: -> behavior
New submission from Hrvoje Nikšić:
The __unicode__ method is documented to "return the header as a Unicode
string". For this to be useful, I would expect it to decode a string such as
"=?gb2312?b?1eLKx9bQzsSy4srUo6E=?=" into a Unicode string that can be displayed
to the
Hrvoje Nikšić added the comment:
Could this patch please be committed to Python? We have just run into this
problem in production, where our own variant of AttrDict was shown to be
leaking.
It is possible to work around the problem by implementing explicit __getattr__
and __setattr__, but
Hrvoje Nikšić added the comment:
Is there anything else I need to do to have the patch reviewed and applied?
I am in no hurry since we're still using 2.x, I'd just like to know if more
needs to be done on my part to move the issue forward. My last Python patch
was accepted quite
Hrvoje Nikšić added the comment:
Here is a more complete patch that includes input from Nick, as well as the
patch to test_contextlib.py and the documentation.
For now I've retained the function-returning-singleton approach for consistency
and future extensibility.
--
key
Hrvoje Nikšić added the comment:
I considered using a variable, but I went with the factory function for two
reasons: consistency with the rest of contextlib, and equivalence to the
contextmanager-based implementation.
Another reason is that it leaves the option of adding optional parameters
Hrvoje Nikšić added the comment:
That is what we are using now, but I think a contextlib.null() would be useful
to others, i.e. that it is a useful idiom to adopt. Specifically, I would
like to discourage the "duplicated code" idiom from the report, which I've seen
all
Hrvoje Nikšić added the comment:
Thank you for your comments.
@Michael: I will of course write tests and documentation if there is indication
that the feature will be accepted for stdlib.
@Antoine: it is true that a null context manager can be easily defined, but it
does require a separate
Changes by Hrvoje Nikšić :
--
components: +Library (Lib)
type: -> feature request
New submission from Hrvoje Nikšić :
I find that I frequently need the "null" (no-op) context manager. For example,
in code such as:
with transaction or contextlib.null():
...
Since there is no easy expression to create a null context manager, we must
resort to workarounds, su
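For the record, this proposal eventually landed as contextlib.nullcontext in Python 3.7; the pattern above can then be written as (the transaction value is an assumed stand-in):

```python
from contextlib import nullcontext

transaction = None  # assume no transaction is active here

# nullcontext() is a no-op stand-in when there is nothing to enter
with transaction or nullcontext():
    result = "body ran"
print(result)  # body ran
```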
Hrvoje Nikšić added the comment:
Here is a small test case that demonstrates the problem, expected behavior and
actual behavior:
{{{
for ev in xml.etree.cElementTree.iterparse(StringIO('rubbish'),
                                           events=('start', 'end')):
    print ev
}}}
The above code sho
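A sketch of the same experiment with the modern ElementTree module (the document text here is illustrative: a well-formed prefix followed by a mismatched closing tag); whether the buffered events are delivered before the exception is exactly the point of the report:

```python
import io
import xml.etree.ElementTree as ET

# well-formed elements first, then a mismatched closing tag
doc = "<root><a/></b>"
delivered = []
error_seen = False
try:
    for ev, elem in ET.iterparse(io.StringIO(doc), events=("start", "end")):
        delivered.append((ev, elem.tag))
except ET.ParseError:
    error_seen = True
print(error_seen, delivered)
```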
Hrvoje Nikšić added the comment:
Yes, and I use it in the second example, but the buffer interface
doesn't really help with adding new elements into the array.
New submission from Hrvoje Nikšić :
The array.array type is an excellent type for storing a large amount of
"native" elements, such as integers, chars, doubles, etc., without
involving the heavy machinery of numpy. It's both blazingly fast and
reasonably efficient with memory.
Hrvoje Nikšić <[EMAIL PROTECTED]> added the comment:
Note that the item retrieved by PyList_GET_ITEM must be increffed before
being passed to the function. Otherwise mutating the list can remove
the item from the list and destroy the underlying object, in which case
the current maxit
New submission from Hrvoje Nikšić <[EMAIL PROTECTED]>:
__cmp__ is apparently still documented at
http://docs.python.org/dev/3.0/reference/datamodel.html#object.__cmp__ .
If it is going away for 3.0, it should be removed from the
documentation as well.
--
assignee: georg.
Hrvoje Nikšić <[EMAIL PROTECTED]> added the comment:
I think preserving integer width is a good idea because it saves us from
having to throw overflow errors when unpickling to machines with
different width of C types. The cost is that pickling/unpickling the
array might change the a
Hrvoje Nikšić <[EMAIL PROTECTED]> added the comment:
Unfortunately dumping the internal representation of non-long arrays
won't work, for several reasons. First, it breaks when porting pickles
between platforms of different endianness such as Intel and SPARC.
Then, it ignores the c
Hrvoje Nikšić <[EMAIL PROTECTED]> added the comment:
I guess it went unnoticed due to prevalence of little-endian 32-bit
machines. With 64-bit architectures becoming more and more popular,
this might become a bigger issue.
Raymond, why do you think fixing this bug would complicate port
New submission from Hrvoje Nikšić <[EMAIL PROTECTED]>:
In some cases it is unfortunate that any error in the XML chunk seen by
the buffer prevents the events generated before the error from being
delivered. For example, in some cases valid XML is embedded in a larger
file or stream, and
Hrvoje Nikšić <[EMAIL PROTECTED]> added the comment:
Here is an example that directly demonstrates the bug. Pickling on x86_64:
Python 2.5.1 (r251:54863, Mar 21 2008, 13:06:31)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
Type "help", "copyright",
New submission from Hrvoje Nikšić <[EMAIL PROTECTED]>:
It would seem that pickling arrays directly exposes the underlying
machine words, making the pickle non-portable to platforms with
different layout of array elements. The guts of array.__reduce__ look
like this:
if (array->
Hrvoje Nikšić added the comment:
Here is a patch, as per description above.
Added file: http://bugs.python.org/file9160/addpatch
Hrvoje Nikšić added the comment:
I agree that a leak would very rarely occur in practice, but since there
is a straightforward fix, why not apply it? If nothing else, the code
in the core should be an example of writing leak-free Python/C code, and
a fix will also prevent others from wasting
New submission from Hrvoje Nikšić:
PyModule_AddObject has somewhat strange reference-counting behavior in
that it *conditionally* steals a reference. In case of error it doesn't
change the reference to the passed object, but in case of success it
steals it. This means that, as wr
Hrvoje Nikšić added the comment:
Thanks for the quick review. I considered guarding the include with
#ifdef as well, but I concluded it's not necessary for the following
reasons:
1. a large number of existing tests already simply include
(the makedev test, sizeof(off_t) test, IPv6-re
New submission from Hrvoje Nikšić:
The printf("%zd", ...) configure test fails on Linux, although it
supports the %zd format. config.log reveals that the test tests for %zd
with Py_ssize_t, which is (within the test) typedeffed to ssize_t. But
the appropriate system header is not i