[issue33811] asyncio accepting connection limit
New submission from Lisa Guo :

https://bugs.python.org/issue27906 describes a situation where the connection accept rate is too slow for use cases where spikes of incoming connections warrant fast accepting. The fix for that issue was to accept socket connections in a tight loop until "backlog" connections have been accepted. This doesn't work very well in a web server scenario where many processes listen on the same socket: for better load balancing among processes, each process should not accept up to "backlog" connections. It would be ideal if this were a separate argument in the server configuration, so that the application can decide how many connections it is willing to accept per loop iteration, independently of the backlog parameter passed to the listen() system call. (A rough illustration follows this message.)

Let me know if this makes sense.

Lisa

--
messages: 319116
nosy: lguo
priority: normal
severity: normal
status: open
title: asyncio accepting connection limit
type: behavior
versions: Python 3.8

___
Python tracker <https://bugs.python.org/issue33811>
___
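For illustration, here is a minimal sketch of the idea, assuming a non-blocking listening socket; accept_batch() and its max_accept parameter are hypothetical names, not part of asyncio or the issue27906 patch. The point is that the per-wakeup accept cap is chosen independently of the listen() backlog, so leftover connections stay in the kernel backlog for other worker processes.

    # Minimal sketch of the proposal; accept_batch() and max_accept are
    # hypothetical, not part of asyncio or the issue27906 patch.
    import socket

    def accept_batch(listen_sock: socket.socket, handle_connection, max_accept: int = 16):
        """Accept at most max_accept pending connections in one pass.

        listen_sock must be non-blocking; anything not accepted here stays
        in the kernel backlog where other worker processes can pick it up.
        """
        for _ in range(max_accept):
            try:
                conn, addr = listen_sock.accept()
            except BlockingIOError:
                break  # nothing left to accept right now
            conn.setblocking(False)
            handle_connection(conn, addr)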
[issue33811] asyncio accepting connection limit
Lisa Guo added the comment:

One rough idea would look like this: https://github.com/python/cpython/compare/master...lguo2020:fix-issue-33811?expand=1

Another option is to associate it with the loop:

    loop.set_max_accept(2)

and then later:

    self._loop._start_serving(., max_accept=self._loop._max_accept)

--

___
Python tracker <https://bugs.python.org/issue33811>
___
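A sketch of the second option, assuming a hypothetical set_max_accept() method (neither it nor _max_accept exists in asyncio today); the internal accept path would then read self._loop._max_accept instead of the backlog when deciding how many accept() calls to attempt per wakeup.

    # Hypothetical API sketch only: set_max_accept() and _max_accept are not
    # part of asyncio; this just shows where the setting could live.
    import asyncio

    class TunableAcceptLoop(asyncio.SelectorEventLoop):
        """Event loop with a configurable per-wakeup accept cap (illustrative)."""

        def __init__(self):
            super().__init__()
            self._max_accept = 16  # default number of accept() calls per readiness event

        def set_max_accept(self, n: int) -> None:
            """Change the cap; the accept path would consult self._max_accept."""
            self._max_accept = n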
[issue33840] connection limit on listening socket in asyncio
New submission from Lisa Guo :

I'd like to reopen the discussion on pause_server/resume_server that took place here: https://groups.google.com/forum/?utm_source=digest&utm_medium=email#!topic/python-tulip/btGHbh5kUUM with PR: https://github.com/python/asyncio/pull/448

We would like to set a max_connection parameter on a listening socket. Whenever the number of open accepted sockets reaches this limit, the server stops accepting new connections until a user request has been served, the response sent back, and the connection closed. This is useful for a web application, where accepting and processing more user requests in flight isn't necessarily better for performance. It would be great if this value could also be changed dynamically.

Some more detailed behavior (see the sketch after this message):
- it can be either a per-loop parameter or a per-server parameter
- the number of currently open accepted connections is counted against this limit
- if the max connection limit is reached, the listening socket is removed from the loop, so new connections back-pressure into the kernel and other processes can take them
- when the total number of accepted connections drops below the limit, the listening socket is put back in the loop
- it can be reconfigured dynamically, but this has no effect on already accepted connections (useful for graceful shutdown)

Lisa

--
components: asyncio
messages: 319347
nosy: asvetlov, lguo, yselivanov
priority: normal
severity: normal
status: open
title: connection limit on listening socket in asyncio
type: enhancement
versions: Python 3.8

___
Python tracker <https://bugs.python.org/issue33840>
___
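The sketch below illustrates the proposed behavior using only the existing add_reader()/remove_reader() loop APIs; LimitedAcceptor and all its names are made up for illustration, not an asyncio API. Accepting pauses once max_connections sockets are open and resumes when one of them closes, so pending connections back-pressure into the kernel backlog. Lowering the limit at runtime only affects new accepts, matching the graceful-shutdown point above.

    # Illustrative sketch only; LimitedAcceptor is not an asyncio API.
    import asyncio

    class LimitedAcceptor:
        """Accept on a non-blocking listening socket, capped at max_connections open sockets."""

        def __init__(self, loop, listen_sock, handle_client, max_connections):
            self._loop = loop
            self._sock = listen_sock       # non-blocking, already listening
            self._handle = handle_client   # async def handle_client(conn, addr), e.g. using loop.sock_recv()
            self._max = max_connections    # can be changed at runtime
            self._open = 0
            self._accepting = False

        def start(self):
            self._resume()

        def set_max_connections(self, n):
            # Takes effect for new accepts only; already accepted connections are unaffected.
            self._max = n

        def _resume(self):
            if not self._accepting:
                self._loop.add_reader(self._sock, self._on_readable)
                self._accepting = True

        def _pause(self):
            if self._accepting:
                self._loop.remove_reader(self._sock)
                self._accepting = False

        def _on_readable(self):
            try:
                conn, addr = self._sock.accept()
            except BlockingIOError:
                return
            conn.setblocking(False)
            self._open += 1
            task = self._loop.create_task(self._handle(conn, addr))
            task.add_done_callback(self._connection_done)
            if self._open >= self._max:
                self._pause()  # leave further connections in the kernel backlog

        def _connection_done(self, task):
            self._open -= 1
            if not self._accepting and self._open < self._max:
                self._resume()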
[issue33840] connection limit on listening socket in asyncio
Lisa Guo added the comment:

Hi Yury, no, I'm not familiar with the other frameworks (libuv doesn't have this). I'll need to look into it. If anybody else knows, please comment as well.

--

___
Python tracker <https://bugs.python.org/issue33840>
___