I should stop feeding the troll.

It is obvious Graham has no relevant benchmarks to prove his case and
has no intention of providing them.

None of the links provided make a relevant case.

The bizarre 'Nginx + Apache/mod_wsgi' reference refers, according to a
link, to using Nginx for static content and Apache with mod_wsgi for
'dynamic content'.

How bizarre. It appears Apache is so bloated for static content that it
is unusable in tight VPS configurations.

John Heenan

On Feb 12, 6:30 pm, Graham Dumpleton <graham.dumple...@gmail.com>
wrote:
> On Feb 12, 6:16 pm, John Heenan <johnmhee...@gmail.com> wrote:
>
> > For lower memory footprint on a tight VPS I do not believe any
> > configuration of Apache with web2py (using mod_wsgi or otherwise) will
> > beat a good lighttpd configuration with a FastCGI UNIX socket
> > interface to web2py.
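
For reference, the lighttpd + FastCGI arrangement John describes would look
something like this (a minimal sketch; the socket path is an assumption, and
web2py's FastCGI handler must already be listening on that socket):

```lighttpd
# Hand all dynamic requests to web2py over a FastCGI UNIX socket.
# lighttpd does not manage the web2py process itself here; the
# FastCGI handler must be started separately and bound to the socket.
server.modules += ( "mod_fastcgi" )

fastcgi.server = (
  "/" => ((
    "socket"      => "/tmp/web2py.sock",  # hypothetical path
    "check-local" => "disable",           # don't require a matching local file
  ))
)
```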
>
> I have helped people in the past set up nginx+Apache/mod_wsgi systems
> running in 64MB and they have been more than happy. The
> mod_wsgi module even supports some special configuration parameters
> that can be used to work around strange rules for calculating memory
> limitations on certain VPS providers. See:
>
>  http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Memory_Constr...
>
> WebFaction has plans which are quite memory constrained as well, and
> people run it fine there.
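
The kind of nginx + Apache/mod_wsgi split being discussed can be sketched
roughly as follows (the port and paths are illustrative assumptions, not
details from the thread):

```nginx
# nginx serves static files directly and proxies everything else to a
# slimmed-down Apache/mod_wsgi backend, so Apache worker processes are
# never tied up delivering static content.
server {
    listen 80;

    # Static content handled by nginx's event-driven workers
    location /static/ {
        alias /var/www/app/static/;   # hypothetical path
    }

    # Dynamic requests proxied to Apache/mod_wsgi on a backend port
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```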
>
> > I also believe trying to argue otherwise without credible benchmarks
> > is liable to get anyone regarded as not worth taking seriously by
> > those who know what they are talking about.
>
> > If Graham was a professional he would present benchmarks,
>
> Other people have already done this, you just need to Google for them.
>
> If I were to present them myself, people like yourself wouldn't believe
> me anyway, and I would likely be accused of rigging them.
>
> One such comparison to other hosting mechanisms is:
>
>  http://basicverbs.com/benchmark-of-django-deployment-techniques/
>
> Unfortunately it doesn't show lighttpd, so I am sure you still will not
> be happy. Their observation about lighttpd is going to be right
> though, which is:
>
>   """When I have the time I may update this post with some
> configurations based on lighty, although I wouldn’t expect the results
> to be much different than the ones for the Nginx or Cherokee
> configurations."""
>
> That there isn't much difference between the different solutions is
> shown by those graphs and, as I said, this is because the web server
> and network performance are nearly never the bottleneck once you load
> on the typical Python web frameworks.
>
> Some analysis of these specific benchmarks, as well as others, has
> been posted on the mod_wsgi mailing list in the past and is available
> through Google Groups.
>
> > provide or
> > facilitate a balanced discussion,
>
> And you are?
>
> You just want to say that Apache is bloated and nothing else. I at
> least acknowledge that different servers and implementation mechanisms
> have their strengths and weaknesses. I even go to the extent of
> suggesting architectures which make use of Apache as well as distinct
> servers such as nginx, or even lighttpd or Cherokee if you really want,
> where the latter is used for static file serving and proxying. So, I
> look at the bigger picture and use whichever tool is right for each
> role, but do so with the knowledge and understanding of why using
> that tool is sensible in that situation, rather than just doing it
> because someone else said to. For example, I have already described in
> part why use of nginx as a front end to Apache/mod_wsgi gives benefits.
>
> > not jump into a thread with ignorant
> > and insulting remarks and not expect to be a lauded by those he
> > regards as 'inferiors' or have others email me privately lauding him.
>
> You waved the red flag to begin with. What do you expect when you come
> out with an unjustified claim for which you have shown no genuinely
> valid reason, and when you haven't even tried to show why my counter
> examples are incorrect?
>
> > Graham can continue to bore his 'inferior underlings' to tears with
> > his theories. Results are all that count in the end, not theory.
>
> Yes, results are what count, and compared to other WSGI hosting
> mechanisms you will find that Apache/mod_wsgi is generally
> acknowledged as the best platform around at the current time. And that
> isn't just me saying that.
>
> As far as professionalism goes, go look up how much documentation
> there is about mod_wsgi, then point me to any reputable documentation,
> beyond a minor blog post, about hosting WSGI applications via
> lighttpd/FASTCGI/flup. You'll be lucky to find a page or two; after
> that you are on your own. At least with mod_wsgi you can get help from
> a community of people who are knowledgeable about it. Often posts I
> make like this, which you would likely just label as a rant, contain
> more useful information than what you can find on using WSGI with
> FASTCGI.
>
> Graham
>
> > John Heenan
>
> > On Feb 12, 3:14 pm, Graham Dumpleton <graham.dumple...@gmail.com>
> > wrote:
>
> > > On Feb 12, 2:00 pm, John Heenan <johnmhee...@gmail.com> wrote:
>
> > > > From the confused manner in which Graham is conducting himself, he appears to
> > > > think web2py runs as a process of a web server with threads that
> > > > belong to the web server. This is not correct. web2py always runs as
> > > > an independent process, unless web2py uses its internal web server.
>
> > > No it doesn't and this is where you are confused. The web2py
> > > application can be hosted on mod_python and mod_wsgi when using
> > > Apache. The mod_python module and mod_wsgi when used in embedded mode
> > > both run the web2py application embedded within the existing Apache
> > > server child processes. Even if you use mod_wsgi in daemon mode
> > > whereby web2py is run in a separate process, that model is still
> > > somewhat different to FASTCGI because the process is still only a fork
> > > from the Apache server and not a separate invocation of a program like
> > > with FASTCGI. Yes, mod_wsgi in daemon mode may still use a socket
> > > connection to talk with the process, as in FASTCGI, but mod_wsgi
> > > controls both ends of that socket connection and so the protocol over
> > > the connection is completely irrelevant. In FASTCGI the socket
> > > protocol is the interface point. In mod_wsgi the socket isn't the
> > > interface point, it is still the Python WSGI API interface. In
> > > mod_wsgi you don't need a separate bridge to WSGI like with FASTCGI
> > > and you don't need a separate infrastructure to startup the
> > > application process as that is all handled by Apache and/or mod_wsgi
> > > depending on the mode it is used.
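
The daemon-mode arrangement described above boils down to a few directives
(a minimal sketch; the paths and process/thread counts are illustrative
assumptions, with wsgihandler.py standing in for web2py's WSGI entry script):

```apache
# mod_wsgi daemon mode: Apache itself forks and manages a dedicated
# web2py process, so no external process manager or separate WSGI
# bridge (such as flup for FastCGI) is required.
WSGIDaemonProcess web2py processes=1 threads=15
WSGIProcessGroup web2py
WSGIScriptAlias / /var/www/web2py/wsgihandler.py
```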
>
> > > > An external web server needs a pipe to the web2py process.
>
> > > No it doesn't.
>
> > > > That pipe
> > > > needs to be efficient.
>
> > > No argument that if a pipe is used that it would need to be efficient,
> > > but with mod_python or mod_wsgi in embedded mode it is irrelevant
> > > because there is no pipe. In both cases the adaptation to Python WSGI
> > > layer sits directly on top of the internal C API of Apache itself
> > > given that the Apache code that accepts the request is in the same
> > > process as the web2py application itself. In other words, no pipe,
> > > just C API translation.
>
> > > > They don't come any more efficient than using
> > > > a UNIX socket.
>
> > > They do when there is no socket at all, as is the case with mod_python
> > > and mod_wsgi in embedded mode. With mod_wsgi in daemon mode there is
> > > still a UNIX socket, but given that the internal protocol it uses is
> > > simpler than that of FASTCGI, and that on the application side the
> > > final bridge to WSGI is also implemented in C code (unlike flup for
> > > FASTCGI), it has less overhead than other FASTCGI/WSGI hosting
> > > mechanisms in that part of the pipeline.
>
> > > > Also using an event model for the other end of the pipe
> > > > to service requests is far more efficient than using threads to
> > > > service requests, something Apache does not provide.
>
> > > Except that, as I pointed out in a prior post, that is irrelevant
> > > considering that WSGI is synchronous and must use threads for
> > > concurrency. So the limitations in the WSGI application as far as
> > > thread performance and the Python GIL predominate over any benefits
> > > that may come from a front end being event driven.
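
To make the point concrete, here is a minimal sketch of the synchronous WSGI
calling convention (a trivial stand-in app, not web2py itself):

```python
# A minimal synchronous WSGI application, illustrating why the point
# above holds: the WSGI contract is plain call-and-return, so each
# in-flight request occupies a worker thread (or process) until the
# full response has been produced. There is no way for the application
# to yield control back to an event loop mid-request.
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # web2py exposes an entry point with this same (environ,
    # start_response) signature; this trivial app stands in for it.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from a synchronous WSGI app\n"]

# Exercise the app directly, without a real server in front of it.
environ = {}
setup_testing_defaults(environ)  # fills in the required CGI-style keys

captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(application(environ, start_response))
print(captured["status"])           # 200 OK
print(body.decode().strip())
```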
>
> > > Graham
>
> > > > John Heenan
>
> > > > On Feb 12, 12:44 pm, John Heenan <johnmhee...@gmail.com> wrote:
>
> > > > > Where is the perspective?
>
> > > > > 1) Even with an 'ideal' configuration that does not use MPM pre-
> > > > > forking, Apache still uses a thread to service each request (for static
> > > > > files). This is still more inefficient than lighttpd and nginx, which
> > > > > use an event model.
>
> > > > > 2) No one is going to take anyone seriously when they imply Apache
> > > > > bloatware can be configured to take a lower memory footprint than
> > > > > Lighttpd for the same job.
>
> > > > > 3) How Python and web2py use threads to process information
> > > > > transferred through a socket has nothing to do with the web server.
> > > > > There is just a single socket or 'pipe'. Essentially the web server
> > > > > acts as a pretty dumb pipe. The web server should not be a big issue.
> > > > > It needs to just do its job quickly and efficiently and then get out
> > > > > of the way.
>
> > > > > 4) FastCGI is not WSGI. In web2py the internal FastCGI server upgrades
> > > > > the FastCGI socket information to WSGI to use the existing WSGI
> > > > > infrastructure, but this is irrelevant. The code is short and simple.
> > > > > This is all irrelevant to the web server.
>
> > > > > 5) Using the internal web server with web2py is not recommended. The
> > > > > question remains what is the best choice for an external web server.
> > > > > The answer is certainly not bloatware like Apache.
>
> > > > > John Heenan
>
> > > > > On Feb 12, 12:16 pm, Graham Dumpleton <graham.dumple...@gmail.com>
> > > > > wrote:
>
> > > > > > On Feb 12, 1:04 pm, John Heenan <johnmhee...@gmail.com> wrote:
>
> > > > > > > Hello Graham, whoever you are.
>
> > > > > > > You sound highly confused, clueless about how to present
> > > > > > > objective data, and a fairly typical bombastic nerd of the type
> > > > > > > that clogs up and plagues forums.
>
> > > > > > > Get a life
>
> > > > > > I think you will find that I have a lot more credibility over this
> > > > > > issue than you might, because of the work I have done in the past
> > > > > > which relates specifically to Python and WSGI hosting mechanisms,
> > > > > > including the many posts in various forums explaining where people
> > > > > > get it wrong in setting up Apache.
>
> > > > > > In future you might want to do your homework and perhaps look into
> > > > > > why I might say what I have before you dismiss it off hand.
>
> > > > > > Graham
>
> > > > > > > John Heenan
>
> > > > > > > On Feb 12, 11:32 am, Graham Dumpleton <graham.dumple...@gmail.com>
> > > > > > > wrote:
>
> > > > > > > > On Feb 12, 9:59 am, John Heenan <johnmhee...@gmail.com> wrote:
>
> > > > > > > > > How about web2py in a VPS using less than 40MB RAM?
>
> > > > > > > > > You can reduce web2py memory usage by using a newer generation
> > > > > > > > > web server with web2py instead of web2py's internal web server.
>
> > > > > > > > Not really.
>
> > > > > > > > > Apache gets trashed in tests by newer generation web servers
> > > > > > > > > such as lighttpd and nginx.
>
> > > > > > > > Only for static file serving.
>
> > > > > > > > > Apache also uses far more memory.
>
> > > > > > > > For hosting a dynamic Python web application it doesn't have to.
> > > > > > > > The problem is that the majority of people have no clue about how
> > > > > > > > to configure Apache properly and will leave it at the default
> > > > > > > > settings. Worse, they load up PHP as well, which forces use of the
> > > > > > > > prefork MPM, which compounds the problems.
>
> > > > > > > > > The reason is simple. Apache services each request with a thread.
> > > > > > > > > Nginx and lighttpd service each request with an event model.
>
> > > > > > > > A WSGI application like web2py however isn't event based and
> > > > > > > > requires the threaded model. You are therefore still required to
> > > > > > > > run web2py in a threaded system, or at least a system which uses a
> > > > > > > > thread pool on top of an underlying thread system. Your arguments
> > > > > > > > are thus moot: as soon as you have to do that, you end up with the
> > > > > > > > same memory usage profile issues as with Apache's threaded model.
>
> > > > > > > > > I only use lightttpd for static pages and to remap URLs.
>
> > > > > > > > > This is my memory usage with lighttpd and web2py from
> > > > > > > > > command 'ps aux'.
>
> > > > > > > > > resident and virtual memory are both in 1024-byte (KB) units
>
> > > > > > > > > lighttpd: resident memory 3660, virtual memory 59568
> > > > > > > > > python for web2py: resident memory 32816, virtual memory 225824
>
> > > > > > > > So, 32MB for web2py.
>
> > > > > > > > Now configure Apache with a comparable configuration, presumably a
> > > > > > > > single process which is multithreaded, and guess what: it will
> > > > > > > > still be pretty close to 32MB.
>
> > > > > > > > If you are stupid enough to leave Apache with the prefork MPM
> > > > > > > > because of PHP and use embedded mode with mod_python or mod_wsgi,
> > > > > > > > then of course you will get up to 100 processes each of 32MB,
> > > > > > > > because that is what the PHP-biased configuration will give.
>
> > > > > > > > Even in that situation you could use mod_wsgi daemon mode and
> > > > > > > > shift web2py to its own process, which means again that all it
> > > > > > > > takes is 32MB. The memory of the Apache server child processes
> > > > > > > > handling static files and proxying will still be an issue if using
> > > > > > > > prefork, but if you ditch PHP and change to the worker MPM you can
> > > > > > > > get away with a single such process, or maybe two, and drastically
> > > > > > > > cut back memory usage.
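
A memory-conscious worker MPM setup of the kind described might look like
this (the numbers are illustrative assumptions; MaxClients must equal
ServerLimit multiplied by ThreadsPerChild):

```apache
# Worker MPM: one multithreaded child process instead of prefork's
# many single-threaded ones, trading per-process memory for threads.
<IfModule mpm_worker_module>
    StartServers         1
    ServerLimit          1
    ThreadsPerChild     25
    MinSpareThreads      5
    MaxSpareThreads     25
    MaxClients          25
    MaxRequestsPerChild  0
</IfModule>
```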
>
> > > > > > > > For some background on these issues read:
>
> > > > > > > >  http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usa...
>
> > > > > > > > Anyway, if you aren't up to configuring Apache properly, by all 
> > > > > > > > means
> > > > > > > > use lighttpd or nginx.
>
> > > > > > > > Graham
>
> > > > > > > > > This is the memory usage of a python console WITHOUT any 
> > > > > > > > > imports:
> > > > > > > > > resident memory 3580, virtual memory 24316
>
> > > > > > > > > John Heenan
>
> > > > > > > > > On Feb 11, 10:30 pm, raven
>
> > ...
>

-- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To post to this group, send email to web...@googlegroups.com.
To unsubscribe from this group, send email to 
web2py+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/web2py?hl=en.
