I have two Python "applications" (more like scripts, they're only about 80 lines each) that are dumbed-down HTTP servers: they accept a connection, read everything from the socket until "\r\n\r\n", respond with "HTTP/1.1 200 OK\r\n\r\nHello World!" and then close the connection.
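In sketch form, the per-connection handling in both versions boils down to something like this (a simplification of the behaviour described above, not the exact code from the paste):

RESPONSE = b"HTTP/1.1 200 OK\r\n\r\nHello World!"

def handle(conn):
    # read until the blank line that ends the request headers
    data = b""
    while b"\r\n\r\n" not in data:
        chunk = conn.recv(4096)
        if not chunk:        # client closed the connection early
            break
        data += chunk
    conn.sendall(RESPONSE)
    conn.close()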
There is a multiplexing single-threaded version and a multithreaded version. When I run Apache Bench with something like "ab -n 5000 -c 5 http://localhost/", everything works fine (about ~1500 req/s each). But when I raise the concurrency (the -c option in Apache Bench) above the backlog passed to socket.listen(10) (ten in this case) on the server socket, something strange happens: the multiplexing version still works fine, pulling ~1400 req/s or thereabouts, but in the multithreaded version, as soon as the concurrency level goes above the socket.listen(10) backlog (say 11 or 100), performance drops through the floor to something like ~10 req/s. As long as the concurrency level stays less than or equal to the listen() backlog, everything works fine.

I've been staring at this problem for a day now, unable to figure it out. I don't think I have any long locking times on the connection queue shared between the threads (and I can't see how they would only appear when the concurrency level is raised), and I just can't see the reason.

Here's the code: http://paste2.org/p/89679 - about 80 lines, and it runs as it is, no external deps.

Any ideas/tips are greatly appreciated.

Regards,
Fredrik
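P.S. In case the paste link goes stale: the multithreaded version is structured roughly like this (a simplified sketch in Python 3 syntax, with made-up names and buffer sizes; the actual paste differs in details):

import socket
import threading
import queue

RESPONSE = b"HTTP/1.1 200 OK\r\n\r\nHello World!"
connections = queue.Queue()   # the connection queue shared between threads

def worker():
    while True:
        conn = connections.get()
        data = b""
        while b"\r\n\r\n" not in data:   # same recv loop as sketched above
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
        conn.sendall(RESPONSE)
        conn.close()

def serve(port=8080, num_workers=5):
    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", port))
    server.listen(10)             # the backlog the benchmark trips over
    while True:
        conn, _addr = server.accept()
        connections.put(conn)

if __name__ == "__main__":
    serve()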