Thanks for your input, everyone.

I don't exactly expect the web framework to be the bottleneck at 700+
requests per second, but I'm keeping these things in mind too. I think it
should be possible to reason about the web framework independently of the
back end; in our case we'll be connecting the exact same business logic /
persistence layer either way.

So I tried out Tapestry 5.1.0.5, and I'm noticing some really weird behavior
that's also present in 5.0.18.

I start out by benchmarking serial requests: only one thread making 100
random requests. Tapestry is really quick here:

Requests per second:    392.15686274509807
Average resp time (ms): 2.15
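
For reference, the client side of the serial run is essentially just a
timing loop along these lines (a simplified sketch: the page URLs here are
placeholders, and the real harness picks from a larger list):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Random;

    // Simplified sketch of the single-threaded run: 100 random page
    // requests, req/s from the overall wall time, average from the
    // per-request timings.
    public class SerialBench {
        static final String[] PAGES = {
            "http://localhost:8080/page1",
            "http://localhost:8080/page2"
        };

        public static void main(String[] args) throws Exception {
            Random random = new Random();
            int requests = 100;
            long totalRespNanos = 0;
            long start = System.nanoTime();
            for (int i = 0; i < requests; i++) {
                long t0 = System.nanoTime();
                URL url = new URL(PAGES[random.nextInt(PAGES.length)]);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                InputStream in = conn.getInputStream();
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain the response body */ }
                in.close();
                conn.disconnect();
                totalRespNanos += System.nanoTime() - t0;
            }
            double wallSeconds = (System.nanoTime() - start) / 1e9;
            System.out.println("Requests per second:    " + (requests / wallSeconds));
            System.out.println("Average resp time (ms): " + (totalRespNanos / 1e6 / requests));
        }
    }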

I can run this many times in a row with approximately the same result.
Then I switch to benchmarking 10 threads, each making 100 random requests:

Requests per second:    456.62100456621005
Average resp time (ms): 19.161
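
The 10-thread run is the same loop fanned out over a fixed-size thread
pool, roughly like this (again a simplified sketch with placeholder URLs):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Random;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Simplified sketch of the 10-thread run: each worker makes 100 random
    // requests; req/s uses the overall wall time, the average uses the
    // per-request timings accumulated across all workers.
    public class ParallelBench {
        static final String[] PAGES = {
            "http://localhost:8080/page1",
            "http://localhost:8080/page2"
        };

        // Makes one GET to a random page, drains the body, and returns
        // the elapsed time in nanoseconds.
        static long timeOneRequest(Random random) throws Exception {
            long t0 = System.nanoTime();
            URL url = new URL(PAGES[random.nextInt(PAGES.length)]);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            InputStream in = conn.getInputStream();
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* drain the response body */ }
            in.close();
            conn.disconnect();
            return System.nanoTime() - t0;
        }

        public static void main(String[] args) throws Exception {
            final int requestsPerThread = 100;
            final AtomicLong totalRespNanos = new AtomicLong();
            int threads = 10;
            ExecutorService pool = Executors.newFixedThreadPool(threads);

            long start = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                pool.submit(new Runnable() {
                    public void run() {
                        Random random = new Random();
                        for (int i = 0; i < requestsPerThread; i++) {
                            try {
                                totalRespNanos.addAndGet(timeOneRequest(random));
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);

            double wallSeconds = (System.nanoTime() - start) / 1e9;
            int total = threads * requestsPerThread;
            System.out.println("Requests per second:    " + (total / wallSeconds));
            System.out.println("Average resp time (ms): " + (totalRespNanos.get() / 1e6 / total));
        }
    }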

But now, it gets weird. When I go back to one thread making 100 requests:

Requests per second:    48.4027105517909
Average resp time (ms): 19.74

The numbers are very similar in 5.1 and 5.0, and the behavior is the same:
a spike in parallel traffic permanently degrades the single-threaded
response time. Only a server restart seems to get it back to the 2 ms
response time. When I run the same serial, parallel, serial test in the
same environment with Wicket, there is no such problem, so I think Tapestry
is the cause here.

Setting the soft and hard limits to 40 and 50 respectively did not change
this behavior. Google suggested this is the way to do it:

    configuration.add("tapestry.page-pool.soft-limit", "40");
    configuration.add("tapestry.page-pool.hard-limit", "50");

Is that correct?
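
For context, those two lines sit in the usual contributeApplicationDefaults
method of my AppModule, roughly like this (simplified, with the rest of the
module and the package declaration left out):

    import org.apache.tapestry5.ioc.MappedConfiguration;

    public class AppModule {
        public static void contributeApplicationDefaults(
                MappedConfiguration<String, String> configuration) {
            // Page pool sizing: soft limit of 40 page instances, hard limit of 50
            configuration.add("tapestry.page-pool.soft-limit", "40");
            configuration.add("tapestry.page-pool.hard-limit", "50");
        }
    }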

I'm benchmarking in Jetty, and I'm not sure what its default thread pool
size is, but on the client side it's never more than 10 threads, so I
thought that 40/50 would be enough?
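
If the server-side pool turns out to matter, I suppose I could pin it
explicitly instead of guessing; with embedded Jetty 6 I assume that would
look something like this (untested sketch, port and webapp path are
placeholders):

    import org.mortbay.jetty.Server;
    import org.mortbay.jetty.webapp.WebAppContext;
    import org.mortbay.thread.QueuedThreadPool;

    // Untested sketch: run the app under embedded Jetty with an explicit
    // thread pool size, so the server side of the benchmark is known.
    public class BenchServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);

            QueuedThreadPool pool = new QueuedThreadPool();
            pool.setMaxThreads(50); // comfortably above the 10 client threads
            server.setThreadPool(pool);

            WebAppContext webapp = new WebAppContext();
            webapp.setContextPath("/");
            webapp.setWar("src/main/webapp"); // exploded Tapestry webapp
            server.setHandler(webapp);

            server.start();
            server.join();
        }
    }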

Neil
