I have been doing lots of Tomcat as well and helped some people at Orange scale some of their mobile provisioning stuff. It scales. But one would scale Pharo just the same.
I remember some AJP module for Pharo/Squeak, and
http://book.seaside.st/book/advanced/deployment/deployment-apache/mod-proxy-ajp
Lots of params in there, but it can help scale things and do session affinity.
https://tomcat.apache.org/tomcat-7.0-doc/config/ajp.html

Phil

On Fri, Dec 16, 2016 at 1:50 PM, Esteban Lorenzano <esteba...@gmail.com> wrote:
> Hi,
>
> On 16 Dec 2016, at 10:41, Norbert Hartl <norb...@hartl.name> wrote:
>
> We are talking about really high numbers of requests/s. The odds that you
> get into this kind of scaling trouble are usually close to zero. It means
> you would need to build an application that has really many users. Most
> projects we know end up using a single image for everything.
>
>
> Amen to everything, but to this in particular. 1000 /concurrent/ requests is
> a HUGE number of requests that most applications will never need.
>
> Remember, concurrent does not mean simultaneous, but rather within the same
> time span… which means that in any fraction of time you measure you can
> count 1000 requests being processed (no matter whether that's 1ms, 1s or
> 1m)… When I was designing web applications all the time, the calculation I
> usually did was: take the number of users I expect to have, grouped by time
> peaks, then divide that by 50/s (this was an "obscure" heuristic I got from
> the even more obscure general observation that people spend much more time
> looking at a monitor than clicking a mouse).
>
> For example: to serve an application to 1000 users,
>
> - let's consider 80% are connected at peak times = 800 users whom I need to
> serve
> - = roughly 40 requests per second…
>
> so in general a couple of Tomcats would be OK (because at the time I was
> working in Java).
> … or 4 Pharos.
>
> Now, as I always said at the time: these are estimations that are meant to
> calm the nerves of customers (the ones who pay for the projects) or my
> project managers (who didn't know much about systems anyway)…
> and they just worked as "pain-killers", because since I really cannot know
> how long a request will take, I cannot measure anything.
> Even worse: I'm assuming all requests take the same time, which is
> absolute nonsense.
>
> But well, since people (both customers and managers) always asked that
> question, I made up that number based on my own observation (20 years of
> experience, not so bad) that "in general, a Tomcat can handle about 40
> req/s and Seaside can handle something around 15… and you always need to
> budget a bit more because of Murphy's law". Fun fact: the estimation was in
> general correct :P
>
> In conclusion: if you *really* need to serve 1000 concurrent users, you'll
> probably have the budget to make it right :)
>
> Esteban
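Just to make that back-of-the-envelope concrete, here is my reading of it. The
20-second click interval is an assumption on my part (Esteban's divisor of 50
would give roughly 16 req/s instead), but it is the reading that lines up with
his 40 req/s and "4 Pharos":

    1000 users x 80% connected at peak       = 800 active users
    800 active users / ~20 s between clicks  ≈ 40 req/s
    40 req/s / ~15 req/s per Seaside image   ≈ 2.7 images, rounded up
                                               plus Murphy headroom ≈ 4 Pharos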
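And for the mod_proxy_ajp setup mentioned at the top, a rough sketch of what a
sticky-session balancer could look like on the Apache side. The worker names,
ports and paths below are hypothetical, it assumes mod_proxy, mod_proxy_ajp and
mod_proxy_balancer are loaded, and JSESSIONID is just the Tomcat convention:
adjust the session key to whatever your app actually uses.

    # hypothetical httpd.conf fragment, one AJP worker per backend image
    <Proxy balancer://myapp>
        BalancerMember ajp://127.0.0.1:8009 route=node1
        BalancerMember ajp://127.0.0.1:8010 route=node2
        # keep each session on the worker that created it
        ProxySet stickysession=JSESSIONID|jsessionid lbmethod=byrequests
    </Proxy>

    ProxyPass        /myapp balancer://myapp/myapp
    ProxyPassReverse /myapp balancer://myapp/myapp

The Seaside book chapter linked above covers the same idea in more detail, and
the Tomcat AJP page documents the connector-side parameters.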