[issue38306] High level API for loop.run_in_executor(None, ...)?
Paul Martin added the comment:

run_in_executor doesn't necessarily create a new thread each time, so create_thread would be misleading. run_in_thread might be better.

--
nosy: +primal

Python tracker <https://bugs.python.org/issue38306>
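A minimal sketch of what such a run_in_thread helper could look like, assuming it simply wraps loop.run_in_executor(None, ...) and uses functools.partial to support keyword arguments (the name and signature are illustrative, not an agreed API):

    import asyncio
    import functools

    async def run_in_thread(func, *args, **kwargs):
        # Run a blocking callable in the default ThreadPoolExecutor and wait
        # for its result without blocking the event loop.
        loop = asyncio.get_running_loop()
        call = functools.partial(func, *args, **kwargs)
        return await loop.run_in_executor(None, call)

    # Usage inside a coroutine (some_blocking_function is a placeholder):
    #     result = await run_in_thread(some_blocking_function, arg, key=value)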
[issue37759] Polish whatsnew for 3.8
Paul Martin added the comment:

Should singledispatchmethod and cached_property be added?

--
nosy: +primal

Python tracker <https://bugs.python.org/issue37759>
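For reference, both were added to functools in Python 3.8; a couple of minimal illustrative examples:

    from functools import cached_property, singledispatchmethod

    class Circle:
        def __init__(self, radius):
            self.radius = radius

        @cached_property
        def area(self):
            # Computed on first access, then cached on the instance.
            return 3.14159 * self.radius ** 2

    class Negator:
        @singledispatchmethod
        def neg(self, arg):
            raise NotImplementedError("cannot negate this type")

        @neg.register
        def _(self, arg: int):
            # Dispatch on the type of the first non-self argument.
            return -arg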
[issue38471] _ProactorDatagramTransport: If close() is called when write buffer is not empty, the remaining data is not sent and connection_lost is not called
New submission from Paul Martin:

Expected behaviour for DatagramTransport (from _SelectorDatagramTransport):

- transport.close() is called. If there is data in the write buffer, don't call connection_lost yet.
- When all data is written and the buffer is empty, check whether the transport is closing and, if so, call connection_lost.

However, for _ProactorDatagramTransport, if close is called with data in the buffer, _loop_writing returns immediately, so it never gets to the point of sending the remaining data or calling connection_lost. The code for calling connection_lost inside _loop_writing is completely unreachable, because the method immediately returns if the connection has been lost.

--
components: Windows, asyncio
messages: 354626
nosy: asvetlov, paul.moore, primal, steve.dower, tim.golden, yselivanov, zach.ware
priority: normal
severity: normal
status: open
title: _ProactorDatagramTransport: If close() is called when write buffer is not empty, the remaining data is not sent and connection_lost is not called
type: behavior
versions: Python 3.8

Python tracker <https://bugs.python.org/issue38471>
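A simplified, self-contained sketch of the expected ordering described above (this is not the CPython implementation; the class and helper names are made up for illustration):

    import collections

    class DatagramTransportSketch:
        def __init__(self):
            self._buffer = collections.deque()   # queued (data, addr) pairs
            self._closing = False

        def sendto(self, data, addr=None):
            if not self._closing:
                self._buffer.append((data, addr))

        def close(self):
            self._closing = True
            if not self._buffer:
                # Nothing left to send, so the protocol can be notified now.
                self._call_connection_lost(None)
            # Otherwise the write loop keeps draining and closes at the end.

        def _on_send_complete(self):
            # Invoked after each queued datagram has been sent.
            if self._buffer:
                self._buffer.popleft()
            if not self._buffer and self._closing:
                self._call_connection_lost(None)   # buffer drained: now close

        def _call_connection_lost(self, exc):
            print("connection_lost called:", exc)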
[issue38471] _ProactorDatagramTransport: If close() is called when write buffer is not empty, the remaining data is not sent and connection_lost is not called
Change by Paul Martin:

--
versions: +Python 3.9

Python tracker <https://bugs.python.org/issue38471>
[issue38471] _ProactorDatagramTransport: If close() is called when write buffer is not empty, the remaining data is not sent and connection_lost is not called
Change by Paul Martin:

--
keywords: +patch
pull_requests: +16341
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/16779

Python tracker <https://bugs.python.org/issue38471>
[issue38471] _ProactorDatagramTransport: If close() is called when write buffer is not empty, the remaining data is not sent and connection_lost is not called
Change by Paul Martin:

--
pull_requests: +16408
pull_request: https://github.com/python/cpython/pull/16863

Python tracker <https://bugs.python.org/issue38471>
[issue32309] Implement asyncio.run_in_executor shortcut
Paul Martin added the comment:

I don't think changing the default executor is a good approach. What happens if two or more thread pools are running at the same time? In that case they will use the same default executor anyway, so creating a new executor each time seems like a waste. Shutting down the default executor seems unnecessary and could impact lower-level code which is using it. The default executor is shut down at the end of asyncio.run anyway.

I also think it would be good to have a synchronous entry point, and not require a context manager. Having a ThreadPool per class instance would be a common pattern.

    class ThreadPool:
        def __init__(self, timeout=None):
            self.timeout = timeout
            self._loop = asyncio.get_event_loop()
            self._executor = concurrent.futures.ThreadPoolExecutor()

        async def close(self):
            await self._executor.shutdown(timeout=self.timeout)

        async def __aenter__(self):
            return self

        async def __aexit__(self, *args):
            await self.close()

        def run(self, func, *args, **kwargs):
            call = functools.partial(func, *args, **kwargs)
            return self._loop.run_in_executor(self._executor, call)

I'm not sure if a new ThreadPoolExecutor really needs to be created for each ThreadPool though.

--
nosy: +primal

Python tracker <https://bugs.python.org/issue32309>
[issue32309] Implement asyncio.run_in_executor shortcut
Paul Martin added the comment:

Run method should be:

    async def run(self, func, *args, **kwargs):
        call = functools.partial(func, *args, **kwargs)
        return await self._loop.run_in_executor(None, call)

--

Python tracker <https://bugs.python.org/issue32309>
[issue32309] Implement asyncio.run_in_executor shortcut
Paul Martin added the comment:

Good points. I made a mistake in run. It should be:

    async def run(self, func, *args, **kwargs):
        call = functools.partial(func, *args, **kwargs)
        return await self._loop.run_in_executor(self._executor, call)

Also, in this case run awaits and returns the result. Yury suggested earlier just to return the future and not await. I have no strong opinion either way. The above example does seem higher level, but Yury's example is more flexible.

I agree that shutdown_default_executor and _do_shutdown should be changed to accept an executor argument so that any executor can be shut down asynchronously. So the loop API would have a shutdown_executor method, and shutdown_default_executor would just call shutdown_executor with the default executor as argument. That would be a good first step.

--

Python tracker <https://bugs.python.org/issue32309>
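A rough sketch of the shutdown_executor idea discussed above: shut a given executor down without blocking the event loop by running the blocking shutdown(wait=True) call in a helper thread and signalling completion through a future. The function name and its placement outside the loop API are assumptions for illustration, not an existing asyncio method:

    import asyncio
    import threading

    async def shutdown_executor(loop, executor):
        future = loop.create_future()

        def _do_shutdown():
            try:
                executor.shutdown(wait=True)       # blocking call, off the loop
                loop.call_soon_threadsafe(future.set_result, None)
            except Exception as exc:
                loop.call_soon_threadsafe(future.set_exception, exc)

        thread = threading.Thread(target=_do_shutdown)
        thread.start()
        try:
            await future
        finally:
            thread.join()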
[issue44663] Possible bug in datetime utc
Paul Martin added the comment:

The difference between the two is the difference between your local time and UTC.

datetime.now(timezone.utc) returns the current time in UTC and is timezone-aware, so timestamp() can work out the seconds since the epoch taking the timezone into account.

datetime.utcnow() returns the current UTC time but is not timezone-aware. When the timestamp() method is run, the value is interpreted as a local time.

This is explained in the docs, but perhaps it should be made clearer that datetime.utcnow().timestamp() is incorrect and would cause bugs. I'm not sure about changing the behaviour of utcnow to return a timezone-aware datetime, as it could cause hard-to-detect bugs in existing code. But I did have issues recently where I was using utcnow until I went back and read the docs and changed to datetime.now(timezone.utc). So it's probably a common trap to fall into.

From the docs:

"Naive datetime instances are assumed to represent local time.

Note
There is no method to obtain the POSIX timestamp directly from a naive datetime instance representing UTC time. If your application uses this convention and your system timezone is not set to UTC, you can obtain the POSIX timestamp by supplying tzinfo=timezone.utc:

    timestamp = dt.replace(tzinfo=timezone.utc).timestamp()

or by calculating the timestamp directly:

    timestamp = (dt - datetime(1970, 1, 1)) / timedelta(seconds=1)

Warning
Because naive datetime objects are treated by many datetime methods as local times, it is preferred to use aware datetimes to represent times in UTC. As such, the recommended way to create an object representing the current time in UTC is by calling datetime.now(timezone.utc)."

--
nosy: +primal

Python tracker <https://bugs.python.org/issue44663>
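A small illustration of the difference (run on a machine whose local timezone is not UTC; on a UTC machine the two timestamps coincide):

    from datetime import datetime, timezone

    aware = datetime.now(timezone.utc)   # timezone-aware current UTC time
    naive = datetime.utcnow()            # current UTC time, but naive

    # Unambiguous: the aware object carries its own UTC offset.
    print(aware.timestamp())

    # Interpreted as *local* time, so this is off by the local UTC offset.
    print(naive.timestamp())

    # Documented workaround for a naive datetime known to be in UTC.
    print(naive.replace(tzinfo=timezone.utc).timestamp())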
[issue40454] DEBUG kw to asyncio.run overrides DEBUG mode set elsewhere
New submission from Paul Martin:

According to the docs:

"There are several ways to enable asyncio debug mode:

- Setting the PYTHONASYNCIODEBUG environment variable to 1.
- Using the -X dev Python command line option.
- Passing debug=True to asyncio.run().
- Calling loop.set_debug()."

My understanding of this is that any of the above methods can be used to enable debug mode. However, if asyncio.run is used, then whatever value for debug is passed to asyncio.run (or False by default) overrides debug mode being set elsewhere. So, when asyncio.run is used, the only way to enable debug mode is to pass debug=True to asyncio.run; the other methods won't work.

One solution might be to change this line in asyncio/runners.py:

    loop.set_debug(debug)

to

    loop.set_debug(debug or coroutines._DEBUG)

so asyncio.run won't disable debug mode if it's already been set elsewhere.

--
components: asyncio
messages: 367779
nosy: asvetlov, primal, yselivanov
priority: normal
severity: normal
status: open
title: DEBUG kw to asyncio.run overrides DEBUG mode set elsewhere
versions: Python 3.7, Python 3.8, Python 3.9

Python tracker <https://bugs.python.org/issue40454>
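A short illustration of the reported behaviour, assuming PYTHONASYNCIODEBUG=1 (or -X dev) is set in the environment:

    import asyncio

    async def main():
        # Reports the debug setting of the loop created by asyncio.run().
        print(asyncio.get_running_loop().get_debug())

    asyncio.run(main())               # prints False: the env var is overridden
    asyncio.run(main(), debug=True)   # prints True: only this currently works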
[issue37817] create_pipe_connection and start_serving_pipe not documented
New submission from Paul Martin:

I found these two methods in the windows_events code for asyncio. Is there a reason why they don't seem to be documented, and are not included in AbstractServer? They provide a good Windows alternative to create_unix_server & create_unix_connection for inter-process communication.

--
components: asyncio
messages: 349371
nosy: asvetlov, primal, yselivanov
priority: normal
severity: normal
status: open
title: create_pipe_connection and start_serving_pipe not documented
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

Python tracker <https://bugs.python.org/issue37817>
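A rough sketch of how these ProactorEventLoop methods appear to be used, based on reading windows_events.py; since they are undocumented, the signatures and the list-wrapped return value of start_serving_pipe shown here are assumptions taken from that code:

    import asyncio

    ADDRESS = r'\\.\pipe\example-pipe'   # hypothetical named-pipe address

    class EchoProtocol(asyncio.Protocol):
        def connection_made(self, transport):
            self.transport = transport

        def data_received(self, data):
            self.transport.write(data)

    async def main():
        loop = asyncio.get_running_loop()   # assumed to be a ProactorEventLoop
        # Server side: appears to return a list holding a PipeServer object.
        [server] = await loop.start_serving_pipe(EchoProtocol, ADDRESS)
        # Client side: analogous to create_unix_connection.
        transport, protocol = await loop.create_pipe_connection(
            asyncio.Protocol, ADDRESS)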