Thanks for checking it out, Filip.

> I'm running this on what will be 6.0.19, meaning 6.0.x/trunk
Yes, running from the trunk yields very different numbers. Looking into it more, 6.0.18 didn't honor the pollerThreadCount setting.

Results (all tests were run for around 3000 samples):

6.0.18, 2 acceptors and 2 pollers configured (really only 1 poller was used, because 6.0.18 didn't honor the poller setting):
  avg 20s, min 10s, max 60s

6.0.19:
  2 acceptors, 2 pollers:    avg 15s, min 10s, max 32s
  10 acceptors, 10 pollers:  avg 13s, min 10s, max 53s
  50 acceptors, 50 pollers:  avg 11s, min 10s, max 27s
  1 acceptor, 50 pollers:    avg 11s, min 10s, max 32s

So it seems that in my app, where timely timeouts are important, raising the number of pollers helps.

> Timeouts happen when the poller thread is free, and the time has passed.

Ok, so the results above make sense: having more poller threads increases the likelihood that one will be free when my timeout expires, so the timeout gets serviced more quickly.

What I don't understand is the connection between non-comet HTTP requests and comet requests. Running the same test as above, but without the non-comet HTTP requests (setting the number of threads in the HttpTest thread group to 0 and upping the comet threads to 200), on 6.0.18 I get:
  avg 10.3s, min 10.0s, max 13s

A non-comet request shouldn't be tying up a poller thread, should it? So why would non-comet requests delay the delivery of comet timeouts?

Peter
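P.S. For anyone trying to reproduce this, the Connector I'm tuning looks roughly like the one below. The port and timeout values are just placeholders, not my exact settings; the attributes that matter for these tests are acceptorThreadCount and pollerThreadCount on the NIO connector:

    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               acceptorThreadCount="2"
               pollerThreadCount="2"
               connectionTimeout="20000" />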
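And here is a stripped-down sketch of the kind of comet servlet the test hits. It isn't my actual servlet (the class name and the fixed 10-second value are just for illustration), but it shows where the poller-serviced timeout comes from: the servlet calls setTimeout() on BEGIN, and the poller later delivers an ERROR/TIMEOUT event, which is the latency measured above.

    import java.io.IOException;
    import java.io.InputStream;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;

    import org.apache.catalina.CometEvent;
    import org.apache.catalina.CometProcessor;

    // Mapped in web.xml like any other servlet; Tomcat treats it as a comet
    // servlet because it implements CometProcessor.
    public class CometTimeoutServlet extends HttpServlet implements CometProcessor {

        public void event(CometEvent event) throws IOException, ServletException {
            if (event.getEventType() == CometEvent.EventType.BEGIN) {
                // Ask the poller to time this connection out after 10 seconds.
                event.setTimeout(10 * 1000);
            } else if (event.getEventType() == CometEvent.EventType.READ) {
                // Drain anything the client sent so the poller doesn't keep
                // signalling READ for the same data.
                InputStream is = event.getHttpServletRequest().getInputStream();
                byte[] buf = new byte[512];
                while (is.available() > 0) {
                    is.read(buf);
                }
            } else if (event.getEventType() == CometEvent.EventType.ERROR
                    && event.getEventSubType() == CometEvent.EventSubType.TIMEOUT) {
                // The timeout fired; write a response and complete the request.
                event.getHttpServletResponse().getWriter().println("timed out");
                event.close();
            } else if (event.getEventType() == CometEvent.EventType.END
                    || event.getEventType() == CometEvent.EventType.ERROR) {
                event.close();
            }
        }
    }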