One interesting methodology question is whether taking an instantaneous value (even an instantaneous average) is really helpful. What would probably be more interesting (to me) is to see what different levels of simulated load cause: 1, 10, 100, 1000 requests per second. What is the curve of performance degradation on a single machine, a multi-core single machine, multiple machines, etc., under geometric traffic growth?
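
Something like this rough sketch is the kind of thing I mean - the URL, rates, and duration are made up, and a real harness would also track errors and percentiles:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class LoadCurve {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/page1"; // hypothetical target
        for (int rate : new int[] {1, 10, 100, 1000}) { // geometric growth
            System.out.printf("rate=%d/s avg=%.2f ms%n", rate, run(url, rate, 30));
        }
    }

    // Offer 'rate' requests per second for 'seconds'; return mean latency in ms.
    static double run(String url, int rate, int seconds) throws Exception {
        ScheduledExecutorService timer = Executors.newScheduledThreadPool(2);
        ExecutorService workers = Executors.newCachedThreadPool();
        AtomicLong totalNanos = new AtomicLong();
        AtomicLong count = new AtomicLong();
        ScheduledFuture<?> tick = timer.scheduleAtFixedRate(() -> workers.submit(() -> {
            try {
                long t0 = System.nanoTime();
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                try (InputStream in = conn.getInputStream()) {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) { /* drain the response */ }
                }
                totalNanos.addAndGet(System.nanoTime() - t0);
                count.incrementAndGet();
            } catch (Exception e) {
                // a real harness would count failures separately
            }
        }), 0, 1_000_000_000L / rate, TimeUnit.NANOSECONDS);
        Thread.sleep(seconds * 1000L);
        tick.cancel(false);
        timer.shutdown();
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
        return count.get() == 0 ? 0.0 : totalNanos.get() / 1e6 / count.get();
    }
}

The key difference from a saturation test is that the offered rate is fixed, so you see latency as a function of load rather than a single number at whatever rate the client happens to achieve.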

The reason this is interesting is that sometimes one thing will perform very quickly at one tier, but poorly at another, and something slower will scale more gracefully.

The other methodology problem is that the simplest page merely shows overhead, but there may be efficiencies in the handling of complex pages, first load, subsequent loads, etc., that will be different for the different frameworks. So it would be worth building the simple case, then a moderate one, then a heavy application, all with similar fundamental architectures (say, Hibernate + database using the same persistence strategy - heck, possibly even reusing the same DAOs, if possible). This would give a sense of whether the 25 ms vs. 7 ms difference you're seeing turns into something more like 225 vs. 207, or 375 vs. 332, under a heavy application - i.e., whether the framework overhead is a roughly constant ~18 ms per request - or whether the proportion scales with it. It could even invert, depending on the subtleties of the application.

Benchmarking is hard. ;)

cheers,
Christian.

On 11-May-09, at 13:26, Neil Curzon wrote:

Hi all,

I've recently taken up benchmarking Tapestry 5.0.18 against Wicket 1.3.5 and
Stripes 1.5.1. For fun, I also threw in an implementation in Model 2
Servlet/JSP. The first results were a little surprising to me (Tapestry did
not come close to winning), and I'm wondering if anybody could comment on my
methodology.

I have 5 simple pages that use the same simple layout, whose middle body
component has a dynamic bit (the current time) to prevent any kind of
low-level caching. In Tapestry, this looked like this:

Layout.tml:
<html xmlns:t="http://tapestry.apache.org/schema/tapestry_5_0_0.xsd">
   <h1>Here's the Layout Beginning</h1>
       <t:body/>
   <h2>Here's the layout End</h2>
</html>

Page1.tml
<div t:type="layout" xmlns:t="
http://tapestry.apache.org/schema/tapestry_5_0_0.xsd";>
   Page 1 Dynamic content: ${currentTime}
</div>
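
For reference, each template needs a corresponding class; minimal sketches
that would back these templates (package names are made up, the real classes
may differ):

Layout.java (under the application's components package):
package com.example.app.components; // hypothetical package

// A markup-only component: an empty class is enough.
public class Layout {
}

Page1.java (under the application's pages package):
package com.example.app.pages; // hypothetical package

import java.util.Date;

public class Page1 {
    // The template's ${currentTime} binds to this property.
    public Date getCurrentTime() {
        return new Date();
    }
}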

In Wicket:
Layout.html:
<html>
   <wicket:border>
   <h1>Here's the Layout Beginning</h1>
       <wicket:body/>
   <h2>Here's the layout End</h2>
   </wicket:border>
</html>

Page1.html:
<span wicket:id = "layout">
   Page 1 Dynamic content: <span wicket:id="dynamic"/>
</span>
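
The Java side looks roughly like this (a sketch inferred from the markup,
not necessarily the exact classes):

Layout.java:
import org.apache.wicket.markup.html.border.Border;

// Border component matching Layout.html.
public class Layout extends Border {
    public Layout(String id) {
        super(id);
    }
}

Page1.java:
import java.util.Date;

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.model.AbstractReadOnlyModel;

public class Page1 extends WebPage {
    public Page1() {
        Layout layout = new Layout("layout");
        add(layout);
        // In Wicket 1.3, children placed inside the border's body in markup
        // are added to the border component itself.
        layout.add(new Label("dynamic", new AbstractReadOnlyModel() {
            public Object getObject() {
                return new Date().toString(); // fresh value on every render
            }
        }));
    }
}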

Tapestry and Wicket were both configured to run in production mode before the
benchmarking. Each request went to one of the 5 pages randomly (each page was
similar to the above). I used 20 threads in parallel, each performing 10,000
such requests. The client was a raw-socket Java program writing the HTTP GET
and reading the entire response.
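
In outline, the client does something like the following (a sketch, not the
exact program; host, port, and paths are placeholders):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.Random;

public class BenchClient implements Runnable {
    private static final String HOST = "localhost"; // placeholder
    private static final int PORT = 8080;           // placeholder
    private static final int THREADS = 20;
    private static final int REQUESTS = 10_000;

    public static void main(String[] args) throws Exception {
        Thread[] threads = new Thread[THREADS];
        long start = System.nanoTime();
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new BenchClient());
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%.0f requests/s%n",
                THREADS * REQUESTS * 1000.0 / elapsedMs);
    }

    public void run() {
        Random random = new Random();
        byte[] buf = new byte[8192];
        try {
            for (int i = 0; i < REQUESTS; i++) {
                int page = 1 + random.nextInt(5); // one of the 5 pages
                try (Socket socket = new Socket(HOST, PORT)) {
                    OutputStream out = socket.getOutputStream();
                    // HTTP/1.0 so the server closes the connection and
                    // "read until EOF" consumes exactly one response.
                    out.write(("GET /page" + page + " HTTP/1.0\r\n"
                            + "Host: " + HOST + "\r\n\r\n").getBytes("US-ASCII"));
                    out.flush();
                    InputStream in = socket.getInputStream();
                    while (in.read(buf) != -1) { /* drain full response */ }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The results are as follows: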

Tapestry 5.0.18:
Requests per second:    776
Average resp time (ms): 25.41075

Wicket 1.3.5:
Requests per second:    2574
Average resp time (ms): 7.72404

Wicket was the only framework that outperformed (slightly) the JSP/Servlet
solution. I found these results surprising, as it was my perception that
Tapestry would scale more easily than Wicket. Instead, I found Tapestry to
perform about on par with Stripes.

Is my methodology flawed somehow? How could I improve it? Any input would be
greatly appreciated.

Thanks
Neil

Christian Edward Gruber
e-mail: christianedwardgru...@gmail.com
weblog: http://www.geekinasuit.com/

