Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread volkert

Sven,

Compare with an Erlang VM (Cowboy) on a standard PC, i5-4570 CPU @ 3.20GHz × 4, on Linux ...


Concurrent requests: 8

$ ab -k -c 8 -n 10240 http://127.0.0.1:8080/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1024 requests
Completed 2048 requests
Completed 3072 requests
Completed 4096 requests
Completed 5120 requests
Completed 6144 requests
Completed 7168 requests
Completed 8192 requests
Completed 9216 requests
Completed 10240 requests
Finished 10240 requests


Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /
Document Length:        7734 bytes

Concurrency Level:      8
Time taken for tests:   0.192 seconds
Complete requests:      10240
Failed requests:        0
Keep-Alive requests:    10143
Total transferred:      80658152 bytes
HTML transferred:       79196160 bytes
Requests per second:    53414.29 [#/sec] (mean)
Time per request:       0.150 [ms] (mean)
Time per request:       0.019 [ms] (mean, across all concurrent requests)
Transfer rate:          410871.30 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.2      0       3
Waiting:        0    0   0.2      0       3
Total:          0    0   0.2      0       3

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      1
  98%      1
  99%      1
 100%      3 (longest request)


And here with 1000 concurrent requests ...

$ ab -k -c 1000 -n 10240 http://127.0.0.1:8080/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1024 requests
Completed 2048 requests
Completed 3072 requests
Completed 4096 requests
Completed 5120 requests
Completed 6144 requests
Completed 7168 requests
Completed 8192 requests
Completed 9216 requests
Completed 10240 requests
Finished 10240 requests


Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /
Document Length:        7734 bytes

Concurrency Level:      1000
Time taken for tests:   0.225 seconds
Complete requests:      10240
Failed requests:        0
Keep-Alive requests:    10232
Total transferred:      80660288 bytes
HTML transferred:       79196160 bytes
Requests per second:    45583.23 [#/sec] (mean)
Time per request:       21.938 [ms] (mean)
Time per request:       0.022 [ms] (mean, across all concurrent requests)
Transfer rate:          350642.85 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   3.3      0      23
Processing:     0    6  16.1      0     198
Waiting:        0    6  16.1      0     198
Total:          0    7  18.0      0     211

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      2
  75%      6
  80%     10
  90%     21
  95%     32
  98%     47
  99%    108
 100%    211 (longest request)



On 15.12.2016 at 15:00, Sven Van Caekenberghe wrote:

Joachim,


On 15 Dec 2016, at 11:43, jtuc...@objektfabrik.de wrote:

Vitor,

On 14.12.16 at 19:23, Vitor Medina Cruz wrote:

If I tell you that my current estimate is that a Smalltalk image with Seaside 
will not be able to handle more than 20 concurrent users, in many cases even 
less.

Seriously? That is kind of a low number, I would expect more for each image. 
Certainly it depends on many things, but it is still very low for a rough 
estimate. Why do you say that?

Seriously, I think 20 is very optimistic, for several reasons.

One, you want to be fast and responsive for every single user, so there is 
absolutely no point in going too close to any limit. It's easy to lose users by 
providing a bad experience.

Second, in a CRUD application you mostly work a lot with DB queries. And you connect to 
all kinds of stuff and do I/O. Some of these things simply block the VM. Even if that is 
only for 0.3 seconds, you postpone processing for each "unaffected" user by 
these 0.3 seconds, so this adds up to significant delays in response time. And if you do 
some heavy DB operations, 0.3 seconds is not a terribly bad estimate. Add to that the 
materialization and related work within the Smalltalk image.

Seaside adapters usually start off green threads for each request. But there 
are things that need to be serialized (like in a critical block). So in 
reality, users block each other way more often than you'd like.
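
A minimal sketch of that kind of serialization, using Pharo's Mutex; the 
shared counter is only an illustrative stand-in for whatever state the 
request handlers have to touch one at a time:

| lock sharedCounter |
lock := Mutex new.
sharedCounter := 0.
"Each simulated request runs in its own Pharo process (green thread),
 but the critical section forces them through it one at a time."
1 to: 8 do: [ :i |
	[ lock critical: [ sharedCounter := sharedCounter + 1 ] ] fork ].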

So if you asked me to give a more realistic estimation, I'd correct myself down 
to a number between 5 and probably a maximum of 10 users. Everything else means 
you must use all th

Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread Sven Van Caekenberghe
I did not say we are the fastest, far from it. I absolutely do not want to go 
into a contest; there is no point in doing so.

(The dw-bench page was meant to be generated dynamically on each request, 
without caching. Did you do that too?)
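
(For what it is worth, a minimal sketch of serving a dynamically generated 
page with Zinc, the HTTP server library that ships with Pharo; every hit 
rebuilds the HTML instead of returning a cached string, and the page content 
here is made up purely for illustration.)

ZnServer startDefaultOn: 8080.
ZnServer default onRequestRespond: [ :request |
	ZnResponse ok: (ZnEntity html: (String streamContents: [ :out |
		out
			nextPutAll: '<html><body><h1>Generated at ';
			nextPutAll: DateAndTime now printString;
			nextPutAll: '</h1></body></html>' ])) ].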

My point was: Pharo is good enough for most web applications. The rest of the 
challenge is standard software architecture, design and development. I choose 
to do that in Pharo because I like it so much. It is perfectly fine by me that 
99.xx % of the world makes other decisions, for whatever reason.

> On 16 Dec 2016, at 09:57, volkert wrote:
> 
> [...]

Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread jtuc...@objektfabrik.de

Sven,

On 16.12.16 at 10:05, Sven Van Caekenberghe wrote:

I did not say we are the fastest, far from it. I absolutely do not want to go 
into a contest, there is no point in doing so.

Absolutely right.


(The dw-bench page was meant to be generated dynamically on each request 
without caching, did you do that too ?).

My point was: Pharo is good enough for most web applications. The rest of the 
challenge is standard software architecture, design and development. I choose 
to do that in Pharo because I like it so much. It is perfectly fine by me that 
99.xx % of the world makes other decisions, for whatever reason.
Exactly. Smalltalk and Seaside are perfectly suited for web applications 
and are not per se extremely slow or anything.
Raw benchmarks are one indicator, but whether a web application is fast 
or slow depends much more on your application's architecture than on the 
underlying HTTP handling.


The important question is not "how fast can Smalltalk serve a number of 
bytes?" but "how fast can your application do whatever is needed to put 
those bytes together?".


So your benchmarks show that Smalltalk can serve content more than fast 
enough for almost all situations (let's be honest, most of us will never 
have to serve thousands of concurrent users - of course I hope I am 
wrong ;-) ). The rest is application architecture, infrastructure and 
avoiding stupid errors. Nothing Smalltalk specific.



Joachim



--

---
Objektfabrik Joachim Tuchel  mailto:jtuc...@objektfabrik.de
Fliederweg 1 http://www.objektfabrik.de
D-71640 Ludwigsburg  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0 Fax: +49 7141 56 10 86 1




Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread Norbert Hartl
I'm still not sure what we are talking about. There are so many opinions about 
totally different things. 

These benchmarks don't say much. Since Sven and you ran them on different 
machines, the numbers are hard to compare. That is not so important, because 
you cannot draw many conclusions from a micro benchmark anyway. What Sven has 
shown is that there is no limit in Pharo per se that prevents it from handling 
1k requests per second and more.

From these 1k req/s to Joachim's 5 req/s is a big difference. You can always 
assume there is something blocking the VM, or that a synchronous I/O call takes a lot of 
time. But that is not helpful either, because it is an edge case, just like Sven's test 
with an app that does nothing. I would even say that it is not that easy to 
produce a situation like the one Joachim describes. If you have that kind of problem, 
then I'm pretty sure the reasons are mostly not Pharo related. Sure, if it comes 
to blocking I/O then it is Pharo's fault, because it cannot do async I/O yet. 
But a slow database query is not Pharo's fault, and you will experience the 
exact same thing in any other runtime. 

Whatever it is, there is no other way than to measure your exact use case 
and find the bottlenecks that prevent your app from being able to handle 1000 
concurrent requests. While I agree with a lot of points mentioned in this 
thread, I cannot share the general notion that you just reduce the number 
of requests per image and "just" use more images and more machines. That is not 
true. 
The moment you cannot deal with all your requests in a single image, you are in 
trouble. As soon as there is a second image, you need to make sure there is no 
volatile shared state between those images. You need to take care then. 
Scaling up using more images and more machines shifts the problem to the database, 
because it is a central component that is not easy to scale. But again, that is 
not Pharo's fault either.

So I would state two things:

- We are talking about really high numbers of requests/s. The odds of you 
getting into this kind of scaling trouble are usually close to zero. It means you 
need to build an application that has really many users. Most projects we 
know end up using a single image for everything. 
- Whenever you have performance problems in your application architecture, I'm 
pretty sure Pharo is not at the top of the list of bottlenecks. 

So yes, you can handle pretty huge numbers using Pharo. 

Norbert

> On 16.12.2016 at 09:57, volkert wrote:
> 
> [...]

Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread Volkert
Come on, I am only interested in what setups Pharo is currently used in (as 
mentioned in my initial question).
This gives me a feeling for whether my requirements are close to the requirements 
found in current Pharo-based systems ... if I am completely outside the population 
of current Pharo systems, that is for me a good indication not to bet on it ...

On 16.12.2016 10:41, Norbert Hartl wrote:

[...]

Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread Sven Van Caekenberghe

> On 16 Dec 2016, at 11:33, Volkert wrote:
> 
> [...]

Well, yes.

Norbert's conclusion (last two points) was spot on.

You can do in the order of 1K req/s on a single image. If you want more, you 
need to scale (horizontally). Either you don't share state and you can do that 
easily. Or you do share state and you will have to build something custom 
(perfectly doable, but you will have to architect/design for that, preferably 
upfront).
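
(A minimal sketch of what "more images" means in practice, assuming Zinc's 
ZnServer: each image runs its own HTTP listener on its own port, and an 
external load balancer such as nginx or HAProxy, not shown here, spreads the 
requests over them.)

"image 1 listens on 8081, image 2 on 8082, and so on;
 a front-end proxy distributes the requests across the ports"
ZnServer startDefaultOn: 8081.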

Note that the initial versions of all successful web apps that now serve 
millions of people on thousands of servers all started with very simple, 
inferior technology stacks. Make something great first, scale later.

> On 16.12.2016 10:41, Norbert Hartl wrote:
>> [...]

Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread Norbert Hartl

> On 16.12.2016 at 11:50, Sven Van Caekenberghe wrote:
> 
> [...]
> 
> Note that the initial versions of all successful web apps that now serve 
> millions of people on thousands of servers all started with very simple, 
> inferior technology stacks. Make something great first, scale later.

So true! Because there is nothing that "just scales". The perfect solution for 
100 concurrent requests is likely to be very different from the perfect solution 
for 1000 concurrent requests, etc. And expect to discover bottlenecks you 
couldn't really anticipate. 

Norbert


Re: [Pharo-users] [Pharo-dev] Pharo poster

2016-12-16 Thread Vitor Medina Cruz
That is soo cool!!

Shouldn't Database be joined as a subset of Data?

On Sat, Dec 10, 2016 at 1:09 PM, stepharong  wrote:

>
> tx
> I added the two files to the media folder on file.pharo.org
>
> On Sat, 10 Dec 2016 12:12:36 +0100, Cyril Ferlicot D. <
> cyril.ferli...@gmail.com> wrote:
>
> On 10/12/2016 12:08, stepharong wrote:
>>
>>> Hi
>>>
>>> I'm brainstorming about a Pharo poster and I remember that one person
>>> produced a poster with all the Pharo technologies.
>>> I'm sure that I saved it somewhere but I do not know where. I tried to find
>>> it with Google but no luck so far.
>>>
>>> Stef
>>>
>>>
>>>
>> Hi!
>>
>> See the attached files:
>>
>> http://forum.world.st/Pharo-family-update-td4857661.html
>>
>>
>
> --
> Using Opera's mail client: http://www.opera.com/mail/
>
>


Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread p...@highoctane.be
That, just that.
There is something in Pharo that I just do not experience elsewhere.

Phil

On Fri, Dec 16, 2016 at 10:05 AM, Sven Van Caekenberghe 
wrote:

>
> I choose to do that in Pharo because I like it so much. It is perfectly
> fine by me that 99.xx % of the world makes other decisions, for whatever
> reason.
>
>


Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread Esteban Lorenzano
Hi, 

> On 16 Dec 2016, at 10:41, Norbert Hartl  wrote:
> 
>  We are talking about really high numbers of requests/s. The odds you are 
> getting in this kind of scaling trouble are usually close to zero. It means 
> you need to generate an application that has really many users. Most projects 
> we know end up using a single image for everything. 

Amen to everything, but to this in particular. 1000 /concurrent/ requests is a 
HUGE number of requests that most applications will never need. 

Remember, concurrent does not mean simultaneous but within the same lapse of time… 
which means that in any fraction of time you measure you can count 1000 requests 
being processed (no matter whether that's 1ms, 1s or 1m)… When I was designing 
web applications all the time, the calculation I usually did was: take the number of 
users I expect to have, grouped by peak times, then divide that per 50/s (this was 
an “obscure” heuristic I got from some even more obscure general observation that 
has much to do with the fact that people spend much more time looking at a monitor 
than clicking a mouse). 

For example: to serve an application to 1000 users,

- let’s consider 80% are connected at peak times = 800 users whom I need to serve
- = roughly 40 requests per second… 

so in general a couple of Tomcats would be OK (because at the time I was 
working in Java). 
… or 4 Pharos. 
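
A back-of-the-envelope version of that estimate as a Pharo snippet, a sketch 
only; the 80%, 40 req/s and 15 req/s figures are the ones quoted in this mail, 
and the extra image is the Murphy's-law margin mentioned further down:

| users peakUsers peakLoad pharoCapacity imagesNeeded |
users := 1000.
peakUsers := (users * 0.8) rounded.	"800 users connected at peak times"
peakLoad := 40.	"roughly 40 requests per second for them"
pharoCapacity := 15.	"req/s one Seaside/Pharo image is assumed to handle"
imagesNeeded := (peakLoad / pharoCapacity) ceiling + 1.	"3, plus 1 as safety margin = 4"
Transcript show: peakUsers printString , ' peak users -> ' ,
	imagesNeeded printString , ' Pharo images'; cr.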

Now, as I always said at the time: these are estimations that are meant to calm 
the stress of customers (the ones who pay for the projects) or my project 
managers (who didn’t know much about systems anyway)… 
and they just worked as “pain-killers”, because since I really cannot know how 
long a request will take, I cannot really measure anything. 
Even worse: I’m assuming all requests take the same time, which is absolute 
nonsense. 

But well, since people (both customers and managers) always asked that question, 
I made up that number based on my own observation (20 years of experience, not 
so bad) that “in general, a Tomcat can handle about 40 req/s and Seaside can 
handle something around 15… and you always need to calculate a bit more because 
of Murphy’s law”. Fun fact: the estimation was in general correct :P

In conclusion: if you *really* need to serve 1000 concurrent users, you’ll 
probably have the budget to make it right :)

Esteban

[Pharo-users] NeoJSON

2016-12-16 Thread stepharong

Hi Sven,

I'm trying to adapt the Teapot library example to get a simple item 
collector (as a project for a future book), so that I can learn and use it 
for my PS2/PS3 game collection :)

Now when I set up Teapot to emit JSON I get a NeoJSONMappingNotFound.
In the library example Attila does not manipulate objects but dictionaries.
So I imagine that I have to do something :)
I read the NeoJSON chapter but I did not find the solution.

Should I implement
	neoJsonOn:
on my domain objects?

Stef

--
Using Opera's mail client: http://www.opera.com/mail/



Re: [Pharo-users] real world pharo web application set ups

2016-12-16 Thread p...@highoctane.be
I have been doing lots of Tomcat as well and helped some people at Orange
scale some of their mobile provisioning stuff.
It scales. But one would scale Pharo just the same.

I remember some AJP module for Pharo/Squeak and
http://book.seaside.st/book/advanced/deployment/deployment-apache/mod-proxy-ajp

Lots of params in there, but it can help scale things and do session affinity.

https://tomcat.apache.org/tomcat-7.0-doc/config/ajp.html

Phil


On Fri, Dec 16, 2016 at 1:50 PM, Esteban Lorenzano 
wrote:

> [...]


Re: [Pharo-users] NeoJSON

2016-12-16 Thread stepharong
I looked at the NeoJSON code and I do not see how I can specify a mapping at 
the class level, because I cannot control the JSON writer creation.
So I should probably shortcut everything at the neoJsonOn: level.

Stef



Hi sven

I'm trying to adapt the teapot library example to get a simple item  
collector (as a project for a future book) and

so that I can learn and use it for my PS2/PS3 game collection :)

Now when I set up teapot to emit JSON I get an NeoJSONMappingNotFound
In the library example attila does not manipulate objects but  
dictionaries.

So I imagine that I have to do something :)
I read the NeoJSON chapter but I did not find the solution.

Should I implement
neoJsonOn:
on my domain?

Stef




--
Using Opera's mail client: http://www.opera.com/mail/



Re: [Pharo-users] NeoJSON

2016-12-16 Thread Sven Van Caekenberghe
Stef,

> On 16 Dec 2016, at 15:00, stepharong  wrote:
> 
> Hi sven
> 
> I'm trying to adapt the teapot library example to get a simple item collector 
> (as a project for a future book) and
> so that I can learn and use it for my PS2/PS3 game collection :)
> 
> Now when I set up teapot to emit JSON I get an NeoJSONMappingNotFound
> In the library example attila does not manipulate objects but dictionaries.
> So I imagine that I have to do something :)
> I read the NeoJSON chapter but I did not find the solution.
> 
> Should I implement
>   neoJsonOn:
> on my domain?

Section 5 of
https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/NeoJSON/NeoJSON.html
explains most of this.

You either add the mapping to the writer, builder style, or you add it to the 
class side of your model objects as #neoJsonMapping: (search for implementors 
as examples). I see that this second aspect is not well explained in the book.

In the simplest case, the following is enough:

neoJsonMapping: mapper
	mapper for: self do: [ :mapping |
		mapping mapInstVars: #(id width height data) ]
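
The other option, adding the mapping to the writer itself in builder style, 
would look roughly like this (a sketch that uses Point as a stand-in class):

String streamContents: [ :stream |
	| writer |
	writer := NeoJSONWriter on: stream.
	writer for: Point do: [ :mapping | mapping mapInstVars: #(x y) ].
	writer nextPut: 3 @ 4 ]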

But it gets a bit more complicated with inheritance.

As a last resort you could also override #neoJsonOn:.

Now, this is all for the writer side. Reading is harder because JSON has no 
type info (that is what STON adds, among other things), so you have to tell the reader 
what (static) type you want the parser to create. This is based on the same 
mapping. This is what #nextAs: does.

If you have a more complex graph with collection values, you need to type all 
of them 'statically'. There are some examples in the unit tests.
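
For completeness, a small round-trip sketch along those lines, assuming a 
GameItem class that already defines the class-side #neoJsonMapping: shown 
above (the class name is simply the one used elsewhere in this thread):

| json item |
json := NeoJSONWriter toString: GameItem new.	"uses the class-side mapping"
item := (NeoJSONReader on: json readStream) nextAs: GameItem.	"reading needs the target type"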

Sven

> Stef
> 
> -- 
> Using Opera's mail client: http://www.opera.com/mail/
> 




Re: [Pharo-users] NeoJSON

2016-12-16 Thread Sven Van Caekenberghe

> On 16 Dec 2016, at 15:10, stepharong  wrote:
> 
> I looked at the Neo code and I do not see how I can specify a mapping at the 
> class level
> because I cannot control the json writer creation.

See my previous message (sent at the same time ;-) 

<< add it to the class side of your model objects as #neoJsonMapping: (search 
for implementors as examples). >>

> So I should probably shortcut everything at the neoJsonOn: level.
> 
> Stef




Re: [Pharo-users] NeoJSON

2016-12-16 Thread stepharong

Ok I see.

I was doing

GameCollection >> neoJsonOn: neoJSONWriter
	neoJSONWriter writeObject: games


GameItem >> neoJsonOn: neoJSONWriter
	self class instanceVariables
		do: [ :each |
			neoJSONWriter writeObject: (self instVarNamed: each) ]


Now it did not work, because ByteString is not covered, and I found that 
strange.


I will use your solution.

I read section 5 before, so there is something missing there.
I will see how I can add an example.

Maybe we should improve the class comment of the mapper to state these class 
methods.


Stef



Re: [Pharo-users] NeoJSON

2016-12-16 Thread Sven Van Caekenberghe

> On 16 Dec 2016, at 15:23, stepharong  wrote:
> 
> [...]
> 
> May be we should improve the class comment of mapper stating this class 
> methods.

It is right there, in NeoJSONMapper's class comment

...
A mapping can be specified explicitely on a mapper, or can be resolved using 
the #neoJsonMapping: class method.
...

This is the superclass of both NeoJSONReader and NeoJSONWriter.

But we should add it to the book chapter too.

> Stef




Re: [Pharo-users] NeoJSON

2016-12-16 Thread stepharong

Ok, it works :)

And I read the class comment but I did not see it.

Now, do you have an idea why I got this other missing class mapping for 
ByteString?


Stef

On Fri, 16 Dec 2016 15:25:40 +0100, Sven Van Caekenberghe   
wrote:





[...]

--
Using Opera's mail client: http://www.opera.com/mail/



Re: [Pharo-users] NeoJSON

2016-12-16 Thread Sven Van Caekenberghe

> On 16 Dec 2016, at 15:47, stepharong  wrote:
> 
> Ok it works :)

Good.

> And I read the class comment but I do not see it.

Strange.

In Neo-JSON-Core-SvenVanCaekenberghe.37, in the class comment of NeoJSONMapper, 
last paragraph before the examples.

> Now do you have an idea why I got this other missing class mapping for 
> bytestring?

You must have done something wrong ;-)

NeoJSONWriter toString: { 'string'. #symbol. 1. Float pi }.

'["string","symbol",1,3.141592653589793]'

> Stef
> 




Re: [Pharo-users] NeoJSON

2016-12-16 Thread stepharong



Strange.

In Neo-JSON-Core-SvenVanCaekenberghe.37 in the class comment of  
NeoJSONMapper last paragraph before the examples.


Sure, I just meant that it was not visible enough.
We should add, for example:

XXX >> neoJsonMapping: aMapper
	aMapper for: self do: [ :mapping |
		mapping mapInstVars:
			#(#title #kind #hasDoc #grade #stars #isCollectorEdition #paidPrice #language #zone) ]






Now do you have an idea why I got this other missing class mapping for  
bytestring?


You must have done something wrong ;-)

NeoJSONWriter toString: { 'string'. #symbol. 1. Float pi }.

'["string","symbol",1,3.141592653589793]'


I do not think so, because my objects are totally trivial.
That is why I was surprised.


Here are two:

exampleKlonoa
	^ self new
		title: 'Klonoa';
		ps2;
		threeStar;
		grade: '(16/15)';
		paidPrice: 1


exampleWildArm
	^ self new
		title: 'Wild Arm';
		ps2;
		threeStar;
		grade: '(16/15)';
		paidPrice: 1

Only strings, numbers and symbols, so this ByteString error looked strange.





Stef


--
Using Opera's mail client: http://www.opera.com/mail/