From the confused manner in which Graham is conducting himself, he appears to think web2py runs as a process of a web server, with threads that belong to the web server. This is not correct. web2py always runs as an independent process, unless web2py uses its internal web server.

An external web server needs a pipe to the web2py process. That pipe needs to be efficient, and they don't come any more efficient than a UNIX socket. Using an event model at the web server's end of the pipe to service requests is also far more efficient than using threads to service requests, something Apache does not provide.

John Heenan
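For concreteness, here is a minimal sketch of the web2py end of such a pipe: an independent process exposing the WSGI application over FastCGI on a UNIX socket, much as the fcgihandler.py shipped with web2py does. The socket path and the use of flup here are illustrative assumptions, not a prescribed setup.

    # Sketch: run web2py as an independent FastCGI process behind a UNIX socket.
    # Assumes flup is installed and that this file sits in the web2py directory
    # and is started from there; the socket path is illustrative.
    from flup.server.fcgi import WSGIServer   # FastCGI-to-WSGI bridge
    from gluon.main import wsgibase           # the web2py WSGI application

    if __name__ == '__main__':
        # The external web server (lighttpd, nginx, ...) talks FastCGI over
        # this UNIX socket; it never runs web2py code in its own threads.
        WSGIServer(wsgibase, bindAddress='/tmp/fcgi.sock').run()

On the web server side, the FastCGI backend is then pointed at the same socket path (lighttpd's fastcgi.server "socket" option, for example), which is all the "pipe" amounts to.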
On Feb 12, 12:44 pm, John Heenan <johnmhee...@gmail.com> wrote:

> Where is the perspective?

> 1) Even with an 'ideal' configuration that does not use MPM pre-forking, Apache still uses threads to service each request (for static files). This is still less efficient than lighttpd and nginx, which use an event model.

> 2) No one is going to take anyone seriously when they imply Apache bloatware can be configured to take a lower memory footprint than lighttpd for the same job.

> 3) How Python and web2py use threads to process information transferred through a socket has nothing to do with the web server. There is just a single socket or 'pipe'. Essentially the web server acts as a pretty dumb pipe. The web server should not be a big issue. It needs to just do its job quickly and efficiently and then get out of the way.

> 4) FastCGI is not WSGI. In web2py the internal FastCGI server upgrades the FastCGI socket information to WSGI to use the existing WSGI infrastructure, but this is irrelevant. The code is short and simple. This is all irrelevant to the web server.

> 5) Using the internal web server with web2py is not recommended. The question remains what is the best choice for an external web server. The answer is certainly not bloatware like Apache.

> John Heenan

> On Feb 12, 12:16 pm, Graham Dumpleton <graham.dumple...@gmail.com> wrote:

> > On Feb 12, 1:04 pm, John Heenan <johnmhee...@gmail.com> wrote:

> > > Hello Graham, whoever you are.

> > > You sound highly confused, clueless about how to present objective data, and a fairly typical bombastic nerd of the type that clogs up and plagues forums.

> > > Get a life

> > I think you will find that I have a lot more credibility over this issue than you might, because of the work I have done in the past which relates specifically to Python and WSGI hosting mechanisms, including the many posts in various forums explaining where people get it wrong in setting up Apache.

> > In future you might want to do your homework and perhaps look into why I might say what I have before you dismiss it offhand.

> > Graham

> > > John Heenan

> > > On Feb 12, 11:32 am, Graham Dumpleton <graham.dumple...@gmail.com> wrote:

> > > > On Feb 12, 9:59 am, John Heenan <johnmhee...@gmail.com> wrote:

> > > > > How about web2py in a VPS using less than 40MB RAM?

> > > > > You can reduce web2py memory usage by using a newer generation web server with web2py instead of the internal web server with web2py.

> > > > Not really.

> > > > > Apache gets trashed in tests by newer generation web servers such as lighttpd and nginx.

> > > > Only for static file serving.

> > > > > Apache also uses far more memory.

> > > > For hosting a dynamic Python web application it doesn't have to. The problem is that the majority of people have no clue about how to configure Apache properly and will leave it at the default settings. Worse, they load up PHP as well, which forces use of the prefork MPM and compounds the problems.

> > > > > The reason is simple. Apache services each request with a thread. Nginx and lighttpd service each request with an event model.

> > > > A WSGI application like web2py however isn't event based and requires the threaded model. You are therefore still required to run web2py in a threaded system, or at least a system which uses a thread pool on top of an underlying thread system. Your arguments are thus moot: as soon as you have to do that, you end up with the same memory usage profile issues as with Apache's threaded model.

> > > > > I only use lighttpd for static pages and to remap URLs.

> > > > > This is my memory usage with lighttpd and web2py from the command 'ps aux'.

> > > > > resident memory units are in KB
> > > > > virtual memory units are 1024 byte units

> > > > > lighttpd: resident memory 3660, virtual memory 59568
> > > > > python for web2py: resident memory 32816, virtual memory 225824

> > > > So, 32MB for web2py.

> > > > Now configure Apache with a comparable configuration, presumably a single process which is multithreaded, and guess what, it will be pretty close to 32MB still.

> > > > If you are stupid enough to leave Apache with the prefork MPM because of PHP and use embedded mode with mod_python or mod_wsgi, then of course you will get up to 100 processes each of 32MB, because that is what the PHP-biased configuration will give.

> > > > Even in that situation you could use mod_wsgi daemon mode and shift web2py to its own process, which means again that all it takes is 32MB. The memory of the Apache child processes handling static files and proxying will still be an issue if using prefork, but if you ditch PHP and change to the worker MPM you can get away with a single process, or maybe two, and drastically cut back memory usage.

> > > > For some background on these issues read:
> > > > http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usa...

> > > > Anyway, if you aren't up to configuring Apache properly, by all means use lighttpd or nginx.

> > > > Graham
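To make the daemon-mode arrangement Graham describes concrete, here is a hedged sketch of the WSGI script that Apache's mod_wsgi would be pointed at via WSGIScriptAlias, with WSGIDaemonProcess and WSGIProcessGroup putting web2py in a separate process. web2py bundles a similar wsgihandler.py; the path below is an assumption for illustration only.

    # Sketch of a mod_wsgi script for web2py (web2py bundles a similar
    # wsgihandler.py); the install path below is an assumption.
    import os
    import sys

    WEB2PY_PATH = '/home/www-data/web2py'   # illustrative location

    # web2py expects to be imported from, and run in, its own directory.
    sys.path.insert(0, WEB2PY_PATH)
    os.chdir(WEB2PY_PATH)

    # mod_wsgi looks for a module-level callable named 'application'.
    from gluon.main import wsgibase as application

Run under WSGIDaemonProcess, this application lives in its own process, separate from the Apache children serving static files, which is what lets a worker MPM configuration stay small.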
> > > > > This is the memory usage of a python console WITHOUT any imports:
> > > > > resident memory 3580, virtual memory 24316

> > > > > John Heenan

> > > > > On Feb 11, 10:30 pm, raven <ravenspo...@yahoo.com> wrote:

> > > > > > It seems that everyone is running with Apache and gobs of memory available.
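For anyone who wants to reproduce per-process figures like those quoted above, here is a small sketch that sums resident and virtual memory the same way 'ps aux' reports them. The process name patterns are illustrative assumptions; adjust them for your own setup.

    # Sum resident (RSS) and virtual (VSZ) memory, in KB, for every process
    # whose command line matches a pattern, using the same ps columns as above.
    import subprocess

    def memory_kb(pattern):
        out = subprocess.check_output(['ps', '-eo', 'rss=,vsz=,args='])
        rss = vsz = 0
        for line in out.decode().splitlines():
            fields = line.split(None, 2)
            if len(fields) == 3 and pattern in fields[2]:
                rss += int(fields[0])
                vsz += int(fields[1])
        return rss, vsz

    if __name__ == '__main__':
        for name in ('lighttpd', 'web2py'):   # illustrative match patterns
            rss, vsz = memory_kb(name)
            print('%s: resident %d KB, virtual %d KB' % (name, rss, vsz))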