On Sep 11, 11:51 pm, Timbo <tfarr...@swgen.com> wrote:
> According to the Performance section of their documentation, they
> recommend running one instance of Tornado per processor core on your
> server and then joining them together behind an nginx reverse proxy.
> Looking at the graph, this makes the top bar an apples-to-oranges
> comparison with the rest of the servers in the comparison. We call that
> propaganda.
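
For context, the setup described above amounts to something like the sketch
below: several copies of a minimal Tornado application, one per core, each
listening on its own port, with nginx load-balancing across them. The port
numbers and the launch command are assumptions for illustration, not details
taken from their benchmark.

# Rough sketch of "one Tornado instance per processor core": start one copy
# per core, each on its own port, e.g.
#   python app.py 8001 & python app.py 8002 & ...
# and point an nginx "upstream" block at those ports.
import sys
import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 8001  # illustrative port
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(port)
    tornado.ioloop.IOLoop.instance().start()
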
>
> So basically it's ~33% faster than Apache/mod_wsgi. That's not a big
> deal. Everyone knows that Apache is the work-horse, not the race-
> horse.
That claim is also dubious.
They are comparing Django running on top of Apache/mod_wsgi against
their own lightweight framework. The fair comparison would be Django on
top of Tornado versus Django on top of Apache/mod_wsgi. Django is quite
heavyweight, so it is not surprising that they could show something
running faster. At the very least, they could have shown results for a
basic WSGI hello world application on top of Apache/mod_wsgi instead.
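
For reference, the baseline suggested here is just the standard WSGI hello
world. A minimal version is below; the file name and the Apache/mod_wsgi
configuration needed to mount it are deployment details left out of the sketch.

# Minimal WSGI "hello world" application of the sort that could be benchmarked
# under Apache/mod_wsgi as a fairer baseline than a full Django stack.
def application(environ, start_response):
    output = b'Hello World!'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(output)))])
    return [output]
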
Their results also look wrong in that they show Django running faster
than web.py. The web.py package is known for being lightweight and
capable of better throughput than Django, so showing web.py as slower
casts even more suspicion on their claims.
Finally, running WSGI on top of Tornado will have the same flaws as
running the nginx version of mod_wsgi, as I have previously explained in:
http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html
This doesn't mean their server isn't good for purpose-built,
event-driven systems, but it would likely suck for a real WSGI
application.
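
To make the blocking issue concrete: Tornado provides a WSGIContainer that
runs a WSGI application inside its single-threaded event loop, so anything
slow in the application stalls every other connection until it returns. A
minimal sketch, with a deliberately slow handler standing in for real work:

# WSGI on top of Tornado: the app runs inside the single-threaded IOLoop,
# so while one request sleeps here, no other request is serviced at all.
import time
import tornado.httpserver
import tornado.ioloop
import tornado.wsgi

def slow_app(environ, start_response):
    time.sleep(5)  # stands in for a slow SQL query or remote call
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'done']

if __name__ == "__main__":
    container = tornado.wsgi.WSGIContainer(slow_app)
    http_server = tornado.httpserver.HTTPServer(container)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

With two requests issued at the same time against this server, the second one
only gets its response after the first one's sleep has finished.
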
Graham
> Also note that the CherryPy numbers are for the full CherryPy framework,
> not just the wsgiserver that web2py uses.
>
> It's great that Facebook has found a setup that works for them. But
> it's probably a bad setup (i.e. overly complicated) for the average
> web2py user. One of web2py's virtues is ease of use.
>
> Not everything that sparkles is gold. =)
>
> -tim
>
> On Sep 11, 8:28 am, mdipierro <mdipie...@cs.depaul.edu> wrote:
>
>
>
> > There are two things that do not convince me.
>
> > - For a complex web app the time spent in the web server is negligible
> > compared to the time spent performing SQL queries. I guess those tests
> > were for a minimal hello world app.
>
> > - If I understand this (and please correct me), Tornado is not
> > multithreaded. It is a well-known fact that non-multithreaded servers
> > are faster, but they are not always the best choice for the job. In
> > particular, they delay all other connections when one connection takes
> > a long time to process (see the sketch below).
>
> > Massimo
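
To illustrate the trade-off in Massimo's second point: a conventional
multithreaded WSGI server lets a slow request sit in its own thread while
other requests keep being served. A rough standard-library sketch (the /slow
path and the 5-second sleep are invented for the example):

# Simple multithreaded WSGI server: one slow request does not hold up others,
# because each request is handled in its own thread.
import time
from wsgiref.simple_server import make_server, WSGIServer
try:
    from socketserver import ThreadingMixIn    # Python 3
except ImportError:
    from SocketServer import ThreadingMixIn    # Python 2

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    daemon_threads = True  # don't keep the process alive for worker threads

def app(environ, start_response):
    if environ.get('PATH_INFO') == '/slow':
        time.sleep(5)  # only this request's thread waits
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

if __name__ == '__main__':
    make_server('', 8000, app, server_class=ThreadingWSGIServer).serve_forever()
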
>
> > On Sep 11, 7:36 am, "Sebastian E. Ovide" <sebastianov...@gmail.com>
> > wrote:
>
> > > according to that benchmark, CherryPy is by far the slowest...
>
> > > what about web2py (Apache/mod_wsgi)?
>
> > > It would be nice if we could place it ahead of Django (Apache/mod_wsgi).
>
> > > On Fri, Sep 11, 2009 at 12:56 PM, JorgeR <jorgeh...@gmail.com> wrote:
>
> > > > do you mean cherrypy vs tornado?
>
> > > > On Sep 11, 3:48 am, "Sebastian E. Ovide" <sebastianov...@gmail.com>
> > > > wrote:
> > > > > wow... according to
> > > > > http://www.tornadoweb.org/documentation#performance it performs very
> > > > > well...
>
> > > > > do we have any numbers for web2py to compare with theirs?
>
> > > > > On Fri, Sep 11, 2009 at 2:09 AM, Joe Barnhart <joe.barnh...@gmail.com>
> > > > > wrote:
>
> > > > > > Looks kinda like Twisted to me, but without the generality of other
> > > > > > protocols. But it supports epoll on Linux (and Mac?). It *can* support
> > > > > > WSGI but you lose the cool asynchronous stuff, so why do it?
>
> > > > > > In short, it sounds like an excellent solution for someone else's
> > > > > > problem!
>
> > > > > > -- Joe B.
>
> > > > > > On Thu, Sep 10, 2009 at 5:48 PM, Anand Vaidya
> > > > > > <anandvaidya...@gmail.com> wrote:
>
> > > > > >> Facebook has released Tornado Server: http://www.tornadoweb.org/
>
> > > > > >> Any comments?
>
> > > > > >> Regards
> > > > > >> Anand
>
> > > > > >> Tornado is an open source version of the scalable, non-blocking web
> > > > > >> server and tools that power FriendFeed. The FriendFeed application is
> > > > > >> written using a web framework that looks a bit like web.py or Google's
> > > > > >> webapp, but with additional tools and optimizations to take advantage
> > > > > >> of the underlying non-blocking infrastructure.
>
> > > > > >> The framework is distinct from most mainstream web server frameworks
> > > > > >> (and certainly most Python frameworks) because it is non-blocking and
> > > > > >> reasonably fast. Because it is non-blocking and uses epoll, it can
> > > > > >> handle thousands of simultaneous standing connections, which means it
> > > > > >> is ideal for real-time web services. We built the web server
> > > > > >> specifically to handle FriendFeed's real-time features -- every active
> > > > > >> user of FriendFeed maintains an open connection to the FriendFeed
> > > > > >> servers. (For more information on scaling servers to support thousands
> > > > > >> of clients, see The C10K problem.)
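
As a concrete illustration of the "open connection per user" claim in the
quoted announcement, a long-polling handler in the Tornado API of that era
looked roughly like the sketch below. The handler and function names are
invented, and the pattern loosely mirrors Tornado's bundled chat demo; newer
Tornado versions replace the asynchronous decorator with coroutines.

# "Standing connection" (long-poll) sketch: the GET does not finish
# immediately, the connection stays open, and broadcast() later completes
# every waiting request with new data.
import tornado.web

waiters = []  # callbacks of clients currently holding a connection open

class UpdatesHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # Park a callback instead of finishing the request right away.
        waiters.append(self.deliver)

    def deliver(self, message):
        if self.request.connection.stream.closed():
            return  # client went away while waiting
        self.write(message)
        self.finish()

def broadcast(message):
    # Complete every waiting request with the new message.
    global waiters
    current, waiters = waiters, []
    for callback in current:
        callback(message)
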
>
> > > > > --
>
> > > > > Sent from Dublin, Ireland
>
> > > --
>
> > > Sent from Dublin, Ireland