I tested on Windows both the "hello world" and a complete app's index page, 
and the results are quite similar. At least on Windows, it seems that 
having only a few threads, even in a relatively high-concurrency 
environment, leads to faster response times (and I don't just mean requests 
served per second; I'm also talking about new requests served in n 
seconds).

My point is: if a normal web app ships responses in at most 1 second 
(anything slower would be a pain in the *** for the users navigating those 
pages), then having the user wait 5 seconds because his request has been 
queued (because only a few threads are actually serving pages), or having 
the user wait 7 seconds because the server is "busy" switching threads, 
both amount to the user waiting n seconds.
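To make that concrete, here is a back-of-the-envelope sketch of the average queueing delay. The numbers (10 ms per request, 4 ms of switching overhead) are made up for illustration, not my actual measurements:

```python
# Rough average wait when requests are effectively served one at a time.
# All numbers are illustrative, not measured.

def avg_wait_serial(n_requests, service_time):
    """Average wait if one worker serves n_requests back to back.
    Request i (0-based) waits i * service_time before being served."""
    total_wait = sum(i * service_time for i in range(n_requests))
    return total_wait / n_requests

# 1000 queued requests, 10 ms each, a single worker:
print(avg_wait_serial(1000, 0.010))  # ~5.0 s average delay

# With more threads, the GIL still serializes the CPU work, and any
# per-request switching overhead only inflates the effective service time:
def avg_wait_threaded(n_requests, service_time, overhead):
    return avg_wait_serial(n_requests, service_time + overhead)

print(avg_wait_threaded(1000, 0.010, 0.004))  # ~7.0 s: slower, not faster
```

Either way the user ends up waiting seconds, which is the point above.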
Tests seem to show that on this computer, with 1000 concurrent requests 
served by a few threads (down to just one), users would wait less on 
average than if they were served by 10 to 20 threads (and this is the bit 
that has me a little confused). This happens both with super-fast "hello 
world" responses and with a complete "index" page (a complex db query, some 
math, a little bit of markup, no session; it returns the response (40.2 KB 
of HTML) in something like 800 ms). BTW, as always, the more "real" the app 
is, the more the gap between Tornado and Rocket/CherryPy narrows. 
Motor seems to handle concurrency better, as long as it isn't "pushed" too 
hard (beyond that it stops responding).

Knowing that the server is holding back the response to user A:
- because it has put the request in its queue and forgotten about it (it is 
processing requests coming from user B, who requested the page earlier), or
- because it is currently processing A's request along with 20 others 
coming from users [B-Z],

is fine, but then again, "academically" I expected it to behave better 
with 10 threads than with 1.
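The "threads don't make CPU-bound work faster" part is easy to reproduce on CPython with a quick sketch (exact timings will of course vary by machine):

```python
import threading
import time

def burn(n):
    # CPU-bound busy loop; the GIL lets only one thread execute it at a time.
    x = 0
    for i in range(n):
        x += i
    return x

N = 2_000_000

# One thread doing all the work.
t0 = time.perf_counter()
burn(N)
serial = time.perf_counter() - t0

# Ten threads splitting the same total amount of work.
t0 = time.perf_counter()
threads = [threading.Thread(target=burn, args=(N // 10,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - t0

print(f"serial:   {serial:.3f}s")
print(f"threaded: {threaded:.3f}s")  # typically no faster, often slower
```

So for a workload like this, 10 threads buy no speed over 1; any thread-switching overhead is pure cost.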

I'm beginning to think that ab on Windows doesn't behave the way it's 
supposed to, but then again ab.exe has shipped for years with the Apache 
win32 build.

PS, back to the thread: Motor looks good on Windows too; it's just not as 
stable as Rocket or CherryPy.

On Wednesday, October 3, 2012 at 5:49:56 PM UTC+2, Massimo Di Pierro 
wrote:
>
> Threads in Python are slower than a single thread on multicore machines 
> (any modern computer). So 2 threads on 2 cores are almost twice as slow 
> instead of twice as fast, because of the GIL. Yet there are advantages. If a 
> thread blocks (because it is streaming data or doing a computation), a 
> multithreaded server is still responsive. A non-threaded server can only do 
> one thing at a time.
>
> In some languages, on multicore machines, concurrency means speed. In 
> Python this is not true for threads.
>
> One can tweak things to produce faster benchmarks for simple apps by 
> serializing all requests, but this is not good in a real-life scenario where 
> you need concurrency.
>
>
