On Thu, May 14, 2015 at 4:36 AM, reduxionist <jonathan.barr...@gmail.com> wrote:
> The question you asked Tom was "Doesn't Apache create new process for each
> request [thus eating memory when serving large amounts of static files
> during traffic peaks]?", and the reason that Tom correctly answers "No" is
> that as far as "serving large amounts of static files" goes you should be
> using mpm-worker (multi-threaded Apache) which most definitely does not
> spawn a new process for each request.
>
> The reason for those search results is that mpm-prefork does, however, spawn
> a process per request,

No, really, it does not. It only spawns a new process when there are
no available workers to process an incoming request, and you have not
reached the maximum number of workers that you have configured it to
start. You can configure it to start all the worker processes you want
when it starts up, and never to kill them off, and it will never spawn
a new process.
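
For example, a prefork config along these lines (numbers purely
illustrative - size the pool to your own RAM) starts a fixed set of
workers at boot and never reaps or respawns them:

    # prefork MPM: a fixed pool of 50 worker processes, created at
    # startup, never killed off and never recycled per request
    <IfModule mpm_prefork_module>
        StartServers            50
        MinSpareServers         50
        MaxSpareServers         50
        MaxRequestWorkers       50
        MaxConnectionsPerChild   0
    </IfModule>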

Apache processes are small, unless you do daft things like embed your
web application in each worker process (mod_php style). This is the
source of the main complaint, "Apache is eating all my memory" - it
isn't; the web application you've embedded into Apache is eating all
your memory.

All of this is irrelevant for Django, because with Apache you should
be using mod_wsgi in daemon mode, which separates your web application
processes from the web server itself.
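
A minimal sketch of what I mean (the process group name, the counts
and the path are placeholders, not a recommendation):

    # mod_wsgi daemon mode: Django runs in its own pool of processes;
    # the Apache workers just hand requests across to it
    WSGIDaemonProcess myapp processes=4 threads=15 display-name=%{GROUP}
    WSGIProcessGroup myapp
    WSGIScriptAlias / /srv/myapp/myapp/wsgi.py

With that in place, the size of the Apache children stops mattering to
your application's memory footprint, and vice versa.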

> but it is only needed for non-thread-safe
> environments (most notoriously mod_php) and you shouldn't have to use it as
> long as you've been a good coder and avoided global state in your Django app
> (e.g. keep request-specific shared-state thread-local).
>
> I think the reason a lot of people seem to run mpm-prefork is just that it's
> the default multi-processing module for Apache on most (all?) *nix platforms
> and they don't know any better.

Quite. We run a pair of Apache 2.4 reverse proxies in front of all of
our (400+) domains, serving around 40 million requests per day,
providing SSL termination and static file serving. We use event MPM
and we have it scaled to support a peak of 2048 simultaneous
connections. Load on the server never goes above 0.2, memory usage
never goes above 1GB for the entire OS + applications, and the rest of
the RAM is used by the OS to cache the aforementioned static files.
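
Getting event MPM to that sort of ceiling is just arithmetic on a
couple of directives; roughly (not our literal config, and the split
between processes and threads is up to you):

    # event MPM: 32 child processes x 64 threads each = 2048 workers
    <IfModule mpm_event_module>
        ServerLimit             32
        ThreadsPerChild         64
        MaxRequestWorkers     2048
        MaxConnectionsPerChild   0
    </IfModule>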

On our app servers we typically use Apache with worker MPM and
mod_wsgi, although we have a few nginx+uwsgi sites, and I would dearly
love some time to play around with a circusd + chaussette + celery
setup.

The choice of web server is, these days, irrelevant. If it uses too
much memory or can't handle enough users, the fault is never with the
web server, but with your application and/or configuration.
Which is why I return to my original advice:

> I am new to Django. I am building an app which will have to handle several
> concurrent requests. Which web server is suitable for this?

Any and all.

Leave the fanboyism to the phone guys.

Cheers

Tom
