Re: [Pharo-users] real world pharo web application set ups

2016-12-15 Thread jtuc...@objektfabrik.de

Victor,

On 14.12.16 at 19:23, Vitor Medina Cruz wrote:


If I tell you that my current estimate is that a Smalltalk image
with Seaside will not be able to handle more than 20 concurrent
users, in many cases even less. 



Seriously? That is kind of a low number; I would expect more for each 
image. Certainly it depends on many things, but it is still very low 
for a rough estimate. Why do you say that?


seriously, I think 20 is very optimistic for several reasons.

One, you want to be fast and responsive for every single user, so there 
is absolutely no point in going too close to any limit. It's easy to 
lose users by providing a bad experience.


Second, in a CRUD application, you mostly work a lot with DB queries. 
And you connect to all kinds of stuff and do I/O. Some of these things 
simply block the VM. Even if that is only for 0.3 seconds, you postpone 
processing for each "unaffected" user by these 0.3 seconds, so this adds 
up to significant delays in response time. And if you do some heavy DB 
operations, 0.3 seconds is not a terribly bad estimate. Add to that the 
materialization and stuff within the Smalltalk image.
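
To see why a single blocking step hurts everybody, here is a minimal 
Pharo sketch (my illustration, not from the original mail): all green 
threads share one OS thread, so a long synchronous step in one process 
delays every other process at the same priority.

  "Two same-priority worker processes; the busy loop stands in for a
  blocking DB call. 'light ran' is only logged after 'heavy done',
  because Pharo does not time-slice processes of equal priority."
  | log heavy light |
  log := OrderedCollection new.
  heavy := [ log add: 'heavy start'.
      1 to: 10000000 do: [ :i | i sqrt ].
      log add: 'heavy done' ] newProcess.
  light := [ log add: 'light ran' ] newProcess.
  heavy resume.
  light resume.
  (Delay forSeconds: 2) wait.
  log inspect.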


Seaside adapters usually start off green threads for each request. But 
there are things that need to be serialized (like in a critical: block). 
So in reality, users block each other way more often than you'd like.
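
For illustration, a hedged sketch of that serialization (the Mutex and 
the simulated work are invented for this example, not taken from a real 
Seaside app):

  "Three 'requests' funnel through one Mutex: each waits for the
  previous one to leave the critical section, so they run strictly
  one at a time (~900 ms in total instead of ~300 ms)."
  | lock |
  lock := Mutex new.
  1 to: 3 do: [ :i |
      [ lock critical: [
          Transcript show: 'request ', i printString, ' enters'; cr.
          (Delay forMilliseconds: 300) wait. "simulated shared work"
          Transcript show: 'request ', i printString, ' leaves'; cr ] ]
              fork ]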


So if you asked me to give a more realistic estimate, I'd correct 
myself down to a number between 5 and probably a maximum of 10 users. 
Everything else means you must use all those fancy tricks and tools 
people mention in this thread.
So what you absolutely need to do is start with an estimate of 5 
concurrent users per image and look for ways to distribute work among 
servers/images so that these blocking situations are kept to a minimum 
(for example, as sketched below). If you find your software works much 
better, congratulate yourself and stack up new machines more slowly 
than initially estimated.
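
One common way to do that distribution (a sketch under my own 
assumptions: Seaside 3's Zinc adaptor, made-up port numbers, and a 
load balancer with session affinity in front): run several identical 
images, each listening on its own port.

  "In each image's startup script; give every image its own port
  (8081, 8082, ...) and let the load balancer pin each session to
  one image."
  ZnZincServerAdaptor startOn: 8081.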



Before you turn around and say: Smalltalk is unsuitable for the web, 
let's take a brief look at what concurrent users really means. 
Concurrent users are users that request some processing from the server 
at the very same time (maybe within an interval of 200-400 msec). This 
is not the same as 5 people being currently logged on to the server and 
requesting something now and then. 5 concurrent users can be 20, 50, or 
100 users who are logged in at the same time.
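
A back-of-the-envelope way to connect the two numbers (my own 
illustration using Little's law; the 30-second think time is an 
assumption):

  "100 logged-in users, one request every 30 s each, 0.3 s of server
  work per request: requests in flight = arrival rate * service time."
  | loggedIn thinkTime serviceTime |
  loggedIn := 100.
  thinkTime := 30.
  serviceTime := 0.3.
  (loggedIn / thinkTime) * serviceTime
      "=> 1.0 concurrent request on average"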


Then there is this sad "share all vs. share nothing" argument. In 
Seaside you keep all your objects alive (read from the DB and 
materialized) between web requests. In share-nothing, you read 
everything back from disk/DB whenever a request comes in. This also 
takes time and resources (and possibly blocks the server for the blink 
of an eye or two). You exchange RAM for CPU cycles and I/O. It is 
extremely hard to predict which works better, and I guess nobody ever 
ran A/B tests. It's all just theoretical bla bla and guesses about what 
definitely must be better in one's world.


Why do I come up with this share-everything stuff? Because it usually 
means that each user that is logged on holds onto a load of objects on 
the server side (session storage), like their user account, shopping 
cart, settings, last purchases, account information and whatnot. That's 
easily a list of a few thousand objects (even if they are only proxies) 
that take up space and want to be inspected by the garbage collector. So 
each connected user not only needs CPU cycles whenever they send a 
request to the server, but also uses RAM. In our case, this can easily 
be 5-10 MB of objects per user. Add to that the shadow copies that your 
persistence mechanism needs for undo and stuff, and all the data Seaside 
needs for continuations etc., and each logged-on user needs 15, 20 or 
more MB of object space. Connect ten users and you have 150-200 MB. That 
is not a problem per se, but it also means there is some hard limit, 
especially in a 32-bit world. You don't want your server to slow down 
because it cannot allocate new memory, or can't find contiguous slots 
for stuff and GCs all the time.
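
For readers who don't know Seaside: that per-user state typically lives 
in a WASession subclass, roughly like this (a sketch; the class and 
variable names are invented):

  "One instance per logged-in user, kept alive between requests;
  this is exactly the RAM the paragraph above is counting."
  WASession subclass: #ShopSession
      instanceVariableNames: 'user cart settings recentPurchases'
      classVariableNames: ''
      package: 'MyShop-Web'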


To sum up, I think the number of influencing factors is way too high to 
really give a good estimate. Our experience (based on our mix of 
computation and I/O) says that 5 concurrent users per image is doable 
without negative impact on other users. Some operations take so much 
time that you really need to move them out of the front-facing image and 
distribute the work to backend servers. More than 5 is probably 
possible, but chances are that there are operations that will affect all 
users, and with every additional user there is a growing chance that you 
have 2 or more requesting the very same operation within a very short 
interval. This will make things worse and worse.


So I trust in you guys having lots of cool tools around and knowing 
loads of tricks to wring much more power out of a single Smalltalk 
image, but you also need to take a look at your productivity a…

Re: [Pharo-users] real world pharo web application set ups

2016-12-15 Thread Sven Van Caekenberghe
Joachim,

> On 15 Dec 2016, at 11:43, jtuc...@objektfabrik.de wrote:
> 
> seriously, I think 20 is very optimistic for several reasons. (...)

Re: [Pharo-users] [Pharo-dev] [ANN] Pharo Association has a new Website!

2016-12-15 Thread Marcus Denker

> On 12 Dec 2016, at 17:42, Marcus Denker wrote:
> 
>> 
>> On 8 Dec 2016, at 15:59, Marcus Denker wrote:
>> 
>>>> Accepting Bitcoin payments would be a plus ;-) (https://bitpay.com/tour)
>> 
>> Hi,
>> 
>> I have set up BitPay. This means we can now accept Bitcoin. To pay by
>> Bitcoin, for now please select “offline” payment and then send a mail to
>> associat...@pharo.org, and we will generate a Bitcoin invoice. (We can
>> integrate it a bit better later.)
>> 
> 
> Hello,
> 
> I have now
>   - added a link to donate via BitPay to
> https://association.pharo.org/Donate
>   - added payment links to the invoice that the system generates
>   for paying the membership.

Hi,

Sadly I had to remove everything, as BitPay is not compatible with the legal
structure of the association right now (dealing with US-based companies with
respect to money is getting very hard these days).

So for us, using BitPay is no option at this point.

Marcus



Re: [Pharo-users] [Pharo-dev] [ANN] Pharo Association has a new Website!

2016-12-15 Thread Marcus Denker
> 
> Sadly I had to remove everything, as BitPay is not compatible with the legal
> structure of the association right now. (...)
> 

I will explore http://coinwidget.com over the next weeks; this seems to
be much closer to what we actually need.

Marcus



Re: [Pharo-users] real world pharo web application set ups

2016-12-15 Thread Vitor Medina Cruz
>
> > On 14 Dec 2016, at 23:29, Vitor Medina Cruz wrote:
> >
> > Pharo doesn't have non-blocking I/O?
>
> It certainly does at the networking level, but some native code interfaces
> might not act so nicely.


Hmm, I asked because on those occasions I experienced slow I/O processing
and the image seemed to freeze. There are some situations where not even a
Ctrl+. can interrupt the work that is freezing it. As I understand it, I/O
can be interleaved with other work, but not in all cases, because some
native code keeps the thread blocked; is that correct? For high-CPU-usage
procedures, if the process does not explicitly yield execution, the image
will be blocked until the end of its execution, right? Also, to take
advantage of multiple cores, must one use multiple images or the C++
library that Dimitris talked about? Or is there another way to spawn OS
threads or processes inside an image?
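
To make the question concrete, a minimal Pharo sketch (illustrative 
only, not from the thread): a CPU-bound loop starves every process at 
the same priority until it finishes, unless it yields explicitly or is 
forked at a lower priority.

  "Without the Processor yield, this loop blocks all same-priority
  processes until it completes. Alternatively, fork it with
  forkAt: Processor userBackgroundPriority to keep it below the rest."
  [ 1 to: 10000000 do: [ :i |
      i sqrt.
      i \\ 100000 = 0 ifTrue: [ Processor yield ] ] ] fork.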


On Wed, Dec 14, 2016 at 8:51 PM, Sven Van Caekenberghe wrote:

> (...)
>
> > On Wed, Dec 14, 2016 at 6:59 PM, Ramon Leon wrote:
> > On 12/14/2016 12:09 PM, Esteban A. Maringolo wrote:
> > Can you extend on suspending the UI process? I never did that.
> >
> > I feed my images a start script on the command line
> >
> > pharo-vm-nox \
> > -vm-sound-null -vm-display-null \
> > /var/pharo/app.image \
> > /var/pharo/startScript
> >
> > startScript containing one line (among others) like so...
> >
> > Project uiProcess suspend.
> >
> > I'm on an older Pharo, but I presume the newer ones are the same or
> similar. No sense in wasting CPU on a UI in a headless image.
> >
> > Won't the idle use add up?
> >
> > Sure, eventually, but you don't run more than 2 or so per core, so
> that'll never be a problem. You shouldn't be running 5 images on a single
> core, let alone more.
> >
> > In my case I served up to 20 concurrent users (out of ~100 total) with
> > only 5 images. Plus another two images for the REST API. In a dual
> > core server.
> >
> > That's barely a server, most laptops these days have more cores. Rent a
> virtual server with a dozen or more cores, then you can run a few images
> per core without the idle mattering at all and run 2 dozen images in total
> per 12 core server.
> >
> > Scale by adding cores and RAM, allowing you to run more images per box;
> or scale by running more boxes. Ultimately, you need to spread the load
> across many, many cores.
> >
> > --
> > Ramon Leon


Re: [Pharo-users] real world pharo web application set ups

2016-12-15 Thread Vitor Medina Cruz
Joachim,


seriously, I think 20 is very optimistic for several reasons. (...)


Whoa! Thanks for the careful and insightful response, I really appreciate
that! :)

On Thu, Dec 15, 2016 at 12:00 PM, Sven Van Caekenberghe wrote:

> Joachim,
>
> > On 15 Dec 2016, at 11:43, jtuc...@objektfabrik.de wrote:
> >
> > seriously, I think 20 is very optimistic for several reasons. (...)