I'm curious about your comment that Coyote/APR is performing on par
with httpd; from the results in your first message, the difference
looked pretty big. Or are you saying that test wasn't using APR?
Also, I'd be curious about the big disparity between the 16MiB file
and the other 1MiB-32MiB files... All of them are relatively
consistent at the KiB/sec rates you show, but suddenly there's a huge
burst of speed on the 16MiB file (for httpd). So I'd be really curious
to understand why that large disparity shows up there. I'd like to see
your results using TC 6.0.18 and APR as well.

Also, just idle curiosity: does HTTPS affect the performance
difference between the two at all? Even though these are static files,
it would probably show whether there are any SSL handling differences
between Tomcat and Apache.

--
Robin D. Wilson
Director of Web Development
KingsIsle Entertainment, Inc.
WORK: 512-623-5913
CELL: 512-426-3929
www.KingsIsle.com

-----Original Message-----
From: Christopher Schultz [mailto:ch...@christopherschultz.net]
Sent: Monday, May 18, 2009 10:24 AM
To: Tomcat Users List
Subject: Re: Apache httpd vs Tomcat static content performance

Chuck,

On 5/18/2009 10:33 AM, Peter Crowther wrote:
>> From: Christopher Schultz [mailto:ch...@christopherschultz.net]
>> 1. Is the number of requests (100,000) sufficient? It seems to take
>> forever on this machine... my Coyote tests took longer than
>> overnight.
>
> You want enough tests that they're sensitive to statistically
> significant differences that you're interested in finding. The tests
> shouldn't be dominated by end effects - startup and shutdown. I'd be
> more inclined to run *multiple* tests - 3 is about the minimum - to
> make sure that your single test hasn't been messed up by something
> unexpected. I'd expect a few minutes per test to be enough to ignore
> end effects; I'd be far more inclined to run 10 2-minute tests than
> 1 20-minute test, for example.

Well, these tests are taking a long time. The size of each file
obviously has an effect on the time it takes to serve 100,000
requests. I'm running my Coyote/APR tests now (which I'll throw out,
because it's an old version of tcnative AND my site is actively being
used right now), and the 4KiB tests took 54.987 seconds while the
512KiB tests took 344.414 seconds. (I'm happy to say that Coyote/APR
is performing on par with httpd, which seems pretty obvious since they
should be running roughly the same code.)

I suppose I could gauge each test so it would take (roughly) a certain
amount of time (say, 10 minutes). At least then I'd know how long the
entire battery would take :)
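Something like this back-of-the-envelope script is roughly what I have
in mind for sizing the runs (just a sketch: the only real numbers are
the two Coyote/APR timings above; the target time and labels are
arbitrary):

    # Size each test so it runs for roughly a fixed wall-clock time,
    # based on a calibration run at each file size. The two timings
    # are the Coyote/APR numbers above; everything else is made up.
    TARGET_SECONDS = 600           # aim for ~10 minutes per test
    CALIBRATION_REQUESTS = 100000  # requests in each calibration run

    calibration = {                # seconds per calibration run
        "4KiB": 54.987,
        "512KiB": 344.414,
    }

    for size, seconds in calibration.items():
        per_request = seconds / CALIBRATION_REQUESTS
        n = int(TARGET_SECONDS / per_request)
        print("%s: ~%d requests for a %ds test"
              % (size, n, TARGET_SECONDS))

That way every entry in the battery takes about the same wall-clock
time, so the total length of the run is predictable up front.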
>> 2. Is a concurrency of 1 okay? I thought about it, and testing the
>> ability of the OS to schedule processes and threads doesn't seem
>> like it adds anything to the data.
>
> Depends. *Exactly* what are you testing? If it's "who can serve the
> most bytes per second / requests per second", a concurrency of 1
> isn't appropriate - you want to see what happens as you approach
> saturation, which is unlikely to happen with a single thread. If
> it's "who can serve load without horrible lock contention in the
> system", same answer.

Okay. My original test plan included concurrencies of 1, 2, 4, 8, and
16. I think I'll just do 1 and 16, and maybe another one if I get the
time. Maybe I should just get a faster server :)

>> Below is the data I've collected so far. I'll publish everything on
>> my blog, including graphs, etc., once it's finished. (Strange that
>> httpd dramatically increased its transfer rate when requesting the
>> 16MiB file!)
>
> Looks interesting. Is there any way of finding out what the
> rate-limiting factor is in each case - CPU, memory bandwidth, memory
> capacity, disk bandwidth?

That's a good question... if the disk can't read the data any faster,
then the server can't serve the bytes any faster (unless caching is
being used, I suppose, but this is supposed to be an out-of-the-box
config). Since this is a relatively old server (1500MHz 32-bit AMD
Athlon), I'm surely being limited by just about everything except
memory capacity (it doesn't take much memory to serve static content).

I can easily get memory timing information, and I suspect my memory
timing will significantly beat the throughput of the TCP stack (shared
memory be damned). I can also benchmark my disk, I suppose. Since I
already have the transfer rates for the HTTP responses, I can simply
check whether the hardware is significantly faster than the server, to
rule out any real hardware difficulties.

On the other hand, since both servers are running on the same
hardware, the playing field is at least level: they are already
performing differently from each other, so I think I have a decent
basis for comparison even without doing a detailed hardware analysis.

Thanks for your thoughts,
-chris
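P.S. For the disk benchmark, a crude sequential-read test like this
sketch would probably do (the path and chunk size are placeholders,
and the OS page cache will flatter repeat runs unless the file is
larger than RAM or the cache is dropped between runs):

    import time

    PATH = "/tmp/bigfile"  # placeholder: any large file on the disk
    CHUNK = 1024 * 1024    # read 1 MiB at a time

    total = 0
    start = time.time()
    with open(PATH, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.time() - start

    print("%d bytes in %.3f s = %.1f MiB/s"
          % (total, elapsed, total / elapsed / 2 ** 20))

If that number comfortably beats the per-file transfer rates in the
table, the disk isn't the limiting factor.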