I have been doing a series of benchmarks - the latest one is at Tapestry
Load Testing - Struts vs Tapestry - Round 2
<http://blog.gidley.co.uk/2009/05/tapestry-load-testing-struts-vs.html>.
My results show something that may be relevant here.
When I started this I had a really simple application (like yours) and
Struts was beating Tapestry (though not by as much as you are seeing);
however, as I have increased the complexity, Tapestry has overtaken the
Struts app.

Benchmarking is hard (as was stated earlier), and benchmarking a framework
is even harder. The main issue is that in a real application 90% of the
time will be spent in your code (e.g. accessing the db) and not in the
framework.
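
To make that concrete, here is a back-of-the-envelope sketch in plain
Java. The 25.4ms and 7.7ms averages are the figures reported later in
this thread; the 200ms of database time per request is a purely
hypothetical assumption for illustration. It shows how a fixed
per-request cost compresses the relative difference between frameworks:

```java
// How a fixed per-request cost (e.g. DB access) compresses the relative
// difference between two frameworks. Framework-only averages are from
// this thread; the 200 ms DB cost is a hypothetical assumption.
public class OverheadMath {
    public static void main(String[] args) {
        double tapestryMs = 25.4; // measured framework-only average (ms)
        double wicketMs = 7.7;    // measured framework-only average (ms)
        double dbMs = 200.0;      // hypothetical database cost per request (ms)

        double bare = round2(tapestryMs / wicketMs);
        double withDb = round2((tapestryMs + dbMs) / (wicketMs + dbMs));

        System.out.println("Framework-only ratio: " + bare + "x");
        System.out.println("With-DB ratio: " + withDb + "x");
    }

    static double round2(double v) {
        return Math.round(v * 100) / 100.0; // two decimal places
    }
}
```

A 3x-plus gap in framework overhead shrinks to under 10% once the
hypothetical DB cost dominates - which is the point about real
applications above.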

A couple of things you probably want to check:

   - Did you run Tapestry in production mode? It makes a huge difference
   :)
   - As Howard suggested, you need the page pool configured so the soft
   limit is as big as the number of concurrent threads.
   - Are you measuring the page + objects, or just the page? If you are
   including objects you will mainly be measuring the container's speed.
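
For the first two items, the knobs would look something like this in your
application module. This is only a sketch: the symbol names are what I
believe Tapestry 5.0.x uses (production mode plus the page-pool limits),
so verify them against the docs for your version, and the values assume
the 20 benchmark threads from the original post:

```java
import org.apache.tapestry5.ioc.MappedConfiguration;

// Sketch of an AppModule contribution for Tapestry 5.0.x.
// Symbol names are assumed - check your version's configuration docs.
public class AppModule {
    public static void contributeApplicationDefaults(
            MappedConfiguration<String, String> configuration) {
        // Production mode turns off live class reloading and dev-time checks
        configuration.add("tapestry.production-mode", "true");
        // Size the page pool to match the 20 concurrent benchmark threads
        configuration.add("tapestry.page-pool.soft-limit", "20");
        configuration.add("tapestry.page-pool.hard-limit", "40");
        // How long (ms) a request waits for a pooled page before creating one
        configuration.add("tapestry.page-pool.soft-wait", "1000");
    }
}
```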

I think your numbers do look high - but it does depend on what you ran
this on. In my earlier test, Tapestry 5 Load Testing - Struts vs Tapestry
- Round 1
<http://blog.gidley.co.uk/2009/05/tapestry-5-load-testing-struts-vs.html>,
a similar application was getting responses in 15ms under a very similar
load. I was using a dual-core virtual machine in our company's cloud.

One thing I will say is that I wouldn't pick your framework on
performance. It needs to perform 'enough' and not stink (i.e. have no
substantial bottlenecks) - tests which I think Tapestry passes easily -
but you really should focus on usability and ease of coding.


Ben Gidley

www.gidley.co.uk
b...@gidley.co.uk


On Mon, May 11, 2009 at 7:03 PM, Howard Lewis Ship <hls...@gmail.com> wrote:

> My take is that T5's model of rendering to the DOM will always impact
> performance.  I am surprised the numbers are skewed as much as they
> are. Still, at all levels (not just rendering) T5 is doing way more
> than the other frameworks.
>
> There's also a lot of room for tuning; you might want to increase the
> page pool cache to match the number of request processing threads;
> it's quite possible that the T5 application was spending a lot of its
> time waiting for active pages to return to the pool (the soft limit,
> and soft wait). Expanding the hard and soft limits to match the
> available threads will remove a lot of that.
>
> Again, adding even a tiny amount of DB access might compress the
> difference between Wicket and Tapestry to a non-issue.
>
> On Mon, May 11, 2009 at 10:39 AM, Christian Edward Gruber
> <christianedwardgru...@gmail.com> wrote:
> > One interesting methodology question is whether taking an instantaneous
> > value (even an instantaneous average) is really helpful.  What would
> > probably be more interesting (to me) is to see what different levels of
> > simultaneous simulated requests per second cause: 1/s, 10/s, 100/s,
> > 1000/s.  What is the curve of performance degradation on a single
> > machine, a multi-core single machine, multiple machines, etc., with
> > geometric traffic growth?
> >
> > The reason this is interesting is that sometimes one thing will perform
> > very quickly at one tier, but poorly at another, and something slower
> > will scale more gracefully.
> >
> > The other methodology problem is that the simplest page merely shows
> > overhead, but there may be efficiencies in the handling of complex
> > pages, first-load, subsequent load, etc., that will be different for
> > the different frameworks.  So you'd actually want the simple case,
> > then a moderate, then a heavy application, all with similar
> > fundamental architectures (say, hibernate + database using the same
> > persistence strategy - heck, possibly even reusing the same DAOs, if
> > possible).  This would give a sense of whether the differences you're
> > seeing of 25/7 turn into more like 225/207, or 375/332 under a heavy
> > application, or whether the proportion scales.  It could even invert,
> > depending on the subtleties of the application.
> >
>
> What he said :-)
>
> > Benchmarking is hard. ;)
>
> What he said :-)
>
> >
> > cheers,
> > Christian.
> >
> > On 11-May-09, at 13:26 , Neil Curzon wrote:
> >
> >> Hi all,
> >>
> >> I've recently taken up benchmarking Tapestry 5.0.18 against Wicket
> >> 1.3.5 and Stripes 1.5.1. For fun, I also threw in an implementation
> >> in Model 2 Servlet/JSP. The first results were a little surprising to
> >> me (Tapestry did not come close to winning), and I'm wondering if
> >> anybody could comment on my methodology.
> >>
> >> I have 5 simple pages that use the same simple layout, whose middle
> >> body component has a dynamic bit (the current time), to prevent any
> >> kind of low-level caching. In Tapestry, this looked like this:
> >>
> >> Layout.tml:
> >> <html xmlns:t="http://tapestry.apache.org/schema/tapestry_5_0_0.xsd">
> >>   <h1>Here's the Layout Beginning</h1>
> >>       <t:body/>
> >>   <h2>Here's the layout End</h2>
> >> </html>
> >>
> >> Page1.tml
> >> <div t:type="layout" xmlns:t="
> >> http://tapestry.apache.org/schema/tapestry_5_0_0.xsd">
> >>   Page 1 Dynamic content: ${currentTime}
> >> </div>
> >>
> >> In Wicket:
> >> Layout.html:
> >> <html>
> >>   <wicket:border>
> >>   <h1>Here's the Layout Beginning</h1>
> >>       <wicket:body/>
> >>   <h2>Here's the layout End</h2>
> >>   </wicket:border>
> >> </html>
> >>
> >> Page1.html:
> >> <span wicket:id="layout">
> >>   Page 1 Dynamic content: <span wicket:id="dynamic"/>
> >> </span>
> >>
> >> Tapestry and Wicket were both configured to run in production mode
> >> before the benchmarking. Each request went to 1 of 5 different pages
> >> randomly (each page was similar to the above). I used 20 threads in
> >> parallel, each performing 10,000 such requests. The client was a raw
> >> socket Java program writing the HTTP GET and reading the entire
> >> response. The results are as follows:
> >>
> >> Tapestry 5.0.18:
> >> Requests per second:    776
> >> Average resp time (ms): 25.41075
> >>
> >> Wicket 1.3.5:
> >> Requests per second:    2574
> >> Average resp time (ms): 7.72404
> >>
> >> Wicket was the only framework that outperformed (slightly) the
> >> JSP/Servlet solution. I found these results surprising, as it was my
> >> perception that Tapestry would scale more easily than Wicket.
> >> Instead, I found Tapestry to perform about on par with Stripes.
> >>
> >> Is my methodology flawed somehow? How could I improve it? Any input
> >> would be greatly appreciated.
> >>
> >> Thanks
> >> Neil
> >
> > Christian Edward Gruber
> > e-mail: christianedwardgru...@gmail.com
> > weblog: http://www.geekinasuit.com/
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscr...@tapestry.apache.org
> > For additional commands, e-mail: users-h...@tapestry.apache.org
> >
> >
>
>
>
> --
> Howard M. Lewis Ship
>
> Creator of Apache Tapestry
> Director of Open Source Technology at Formos
>
>
>
