2 remarks about all your stress testing efforts:

A) TIME_WAIT

When not using HTTP Keep-Alive, under high load the size of the TCP hash table and how efficiently the system can look up TCP connections can limit the throughput you can reach. More precisely, depending on the exact way the connection is shut down, you get TIME_WAIT states for the finished connections (without HTTP Keep-Alive this can be one such connection per request). Most systems get slow once the number of those connections reaches something around 30000. E.g. if you are doing 2000 requests per second without HTTP Keep-Alive and the combination of web server and stress test tool leads to TIME_WAITs, your table might reach that critical size after 15 seconds (2000 * 15 = 30000).

The amount of time a system waits before it destroys TIME_WAIT connections varies: Solaris 4 minutes, but tunable down to 5 seconds; Linux 1 minute (I think) and still not tunable (a free pizza to the first one who can show me how to tune the time interval it takes for a TCP connection to be moved out of TIME_WAIT); on Windows I think also 1 minute, but tunable.

So not using HTTP Keep-Alive will very likely limit the achievable throughput quickly when going up in concurrency. The small sketch below shows the difference on the client side.
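Here is a minimal, hypothetical Java sketch (URL, path and request count are made up) using plain HttpURLConnection. With the JDK defaults the loop keeps reusing one socket; if you uncomment the "Connection: close" line, every request burns a fresh socket and you can watch the TIME_WAIT entries pile up with netstat while it runs.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveProbe {

    public static void main(String[] args) throws Exception {
        // Hypothetical target; point it at whatever you are stress testing.
        URL url = new URL("http://localhost:8080/test.html");
        int requests = 10000;

        byte[] buffer = new byte[8192];
        long start = System.currentTimeMillis();

        for (int i = 0; i < requests; i++) {
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            // "Connection: close" forces a fresh socket per request and leaves
            // one TIME_WAIT entry behind for each of them. The default
            // (keep-alive) lets the JDK reuse the underlying socket.
            // con.setRequestProperty("Connection", "close");
            InputStream in = con.getInputStream();
            // The response body must be read completely, otherwise the JDK
            // cannot put the connection back into its keep-alive cache.
            while (in.read(buffer) != -1) {
                // discard
            }
            in.close();
        }

        long elapsed = System.currentTimeMillis() - start;
        System.out.println(requests + " requests in " + elapsed + " ms ("
                + (requests * 1000L / Math.max(elapsed, 1)) + " req/s)");
    }
}

The same question applies to your stress test tool: check whether it really keeps connections open, because many tools only do so when explicitly told to.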
B) Resource usage

You are doing a first interesting analysis, namely the base throughput you can reach using concurrency one. Now the throughput we can reach with a given setup always depends on the first bottleneck we hit. For big files it might be the network bandwidth. For small files it might be CPU, or maybe the I/Os per second we can do on the interface. So it is also interesting to compare the resource usage. Some resources are harder to measure (like memory), but the one easy-to-measure resource is CPU. So you get some more information when you also measure CPU per request. Think of it as maximum speed versus gas efficiency.

Regards,

Rainer
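P.S. Just to make the "gas efficiency" idea from B) concrete, here is a tiny back-of-the-envelope sketch. All the figures in it are invented examples; in a real test you would take the average CPU utilization from vmstat/sar/top on the server while the load runs and the request count from your stress test tool. The point is only the arithmetic: total CPU seconds burned during the test interval divided by the number of completed requests.

public class CpuCostPerRequest {

    public static void main(String[] args) {
        double testDurationSeconds = 60.0;    // length of the measurement interval
        double avgCpuUtilization   = 0.45;    // 45% average utilization across all CPUs
        int    cpuCount            = 2;       // CPUs/cores in the server
        long   completedRequests   = 120000;  // reported by the stress test tool

        // Total CPU seconds burned by the box during the interval.
        double cpuSecondsTotal = testDurationSeconds * avgCpuUtilization * cpuCount;

        // CPU milliseconds per request: the "gas efficiency" figure.
        double cpuMillisPerRequest = cpuSecondsTotal * 1000.0 / completedRequests;

        System.out.printf("Throughput:      %.1f req/s%n",
                completedRequests / testDurationSeconds);
        System.out.printf("CPU per request: %.3f ms%n", cpuMillisPerRequest);
    }
}

Two setups can show the same throughput at concurrency one and still differ a lot in this number, which gives you a hint about the headroom left when you go up in concurrency.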