> On 17 Dec 2016, at 19:39, Michael J. Forster <m...@sharedlogic.ca> wrote:
> 
> On 16 December 2016 at 03:15, jtuc...@objektfabrik.de
> <jtuc...@objektfabrik.de> wrote:
>> Sven,
>> 
>> Am 16.12.16 um 10:05 schrieb Sven Van Caekenberghe:
>>> 
>>> I did not say we are the fastest, far from it. I absolutely do not want to
>>> go into a contest; there is no point in doing so.
>> 
>> Absolutely right.
>>> 
>>> 
>>> (The dw-bench page was meant to be generated dynamically on each request
>>> without caching, did you do that too ?).
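
To be concrete, "generated dynamically on each request, without caching" means
the handler rebuilds the whole page every time it is hit. A minimal sketch of
that kind of handler on top of Zinc's ZnServer (the port and the page content
here are made up for illustration; this is not the actual dw-bench code):

  | server |
  server := ZnServer startDefaultOn: 1701.
  server onRequestRespond: [ :request |
      "Build the HTML from scratch for every incoming request, no caching."
      ZnResponse ok: (ZnEntity html: (String streamContents: [ :html |
          html << '<html><body><h1>Generated at '.
          html << DateAndTime now printString.
          html << '</h1><table>'.
          1 to: 10 do: [ :row |
              html << '<tr>'.
              1 to: 10 do: [ :col |
                  html << '<td>'; << (row * col) printString; << '</td>' ].
              html << '</tr>' ].
          html << '</table></body></html>' ])) ].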
>>> 
>>> My point was: Pharo is good enough for most web applications. The rest of
>>> the challenge is standard software architecture, design and development. I
>>> choose to do that in Pharo because I like it so much. It is perfectly fine
>>> by me that 99.xx % of the world makes other decisions, for whatever reason.
>> 
>> Exactly. Smalltalk and Seaside are perfectly suited for web applications and
>> are not per se extremely slow or anything.
>> Raw benchmarks are some indicator, but whether a web application is fast or
>> slow depends much more on your application's architecture than on the
>> underlying HTTP handling.
>> 
>> The important question is not "how fast can Smalltalk serve a number of
>> bytes?" but "how fast can your application do whatever is needed to put
>> those bytes together?".
>> 
>> So your benchmarks show that Smalltalk can serve stuff more than fast enough
>> for almost all situations (let's be honest, most of us will never have to
>> serve thousands of concurrent users - of course I hope I am wrong ;-) ).
>> The rest is application architecture, infrastructure and avoiding stupid
>> errors. Nothing Smalltalk specific.
>> 
>> 
>> Joachim
>> 
> [...]
> 
> 
> In our benchmarking and production experience, Pharo--even with
> Seaside--has fared well against Common Lisp, Java, Ruby, and Erlang
> web applications in terms of _page delivery speed_. Erlang positively
> embarrasses the others at handling concurrent requests, and it does so
> extremely cost-effectively in terms of hardware. Pharo (and Seaside)
> does likewise, at the other end of the spectrum, when developing
> sophisticated application workflow.
> 
> And that's the inflection point--a painful one--for us: to enjoy such
> effective time to market and ease of maintenance, to grow, and then to have
> to trade it all away just to scale to 500+ concurrent users on a single
> t2.medium instance and keep hardware costs in check.

I think I understand your point, and in some specific situations that might be 
true. But if you can only afford to pay $35 a month for your hardware, how low 
must your income be ? Are you in a commercially viable enterprise then ?

For a couple of thousand dollars you can get the equivalent of tens, if not up 
to a hundred, of those instances. And that is still much less than office rent, 
let alone one employee.

The challenge today is not the cost of cloud hardware; it is simply building 
and operating your application. That is assuming you can sell enough of it to 
make a living.

> Mike

