On May 12, 8:04 am, Pigletto <[EMAIL PROTECTED]> wrote:
> > Obviously be aware that in daemon mode you will still have multithread
> > issues to contend with if they run with multiple threads. To avoid it
> > you would need a config such as:
>
> >   ... processes=5 threads=1
>
> > Ie., use prefork model with mod_wsgi daemon mode. This isn't going to
> > get you much if anything over running prefork MPM and using embedded
> > mode though as you still need to have sufficient processes to handle
> > request. At least with embedded mode Apache will create additional
> > child processes to meet demand, whereas with mod_wsgi daemon mode
> > that number of processes is fixed.
>
> > Anyway, if you see the same sorts of issues as you did with
> > mod_python, perhaps bring them up again as maybe can work out where
> > the multithread issues may be.
>
> Thanks for all the help and for the wonderful wiki at the mod_wsgi site :)
>
> Today I've finished my production setup and my application(s) are now
> running under Apache in MPM Worker mode and mod_wsgi with a daemon process.
> Daemon is set up as:
>
> WSGIDaemonProcess procgroup user=usr group=usr processes=2 threads=1
> maximum-requests=500 inactivity-timeout=300 stack-size=524288
> display-name=%{GROUP}
> WSGIRestrictStdout Off  # some print statements in my app and modules...
>
> Memory usage is in general lower than with mod_python :), but it is
> still rather big in terms of limits on my shared account. Top memory
> usage I've seen so far was that every process consumed about 55 MB of
> memory,

Such is life when using Python web applications; they can be quite
fat, and thus each process can consume a lot of memory. If though
this is from 5 virtual hosts as you suggest below, that is actually
pretty good. Many people have single Django instances getting up to
40MB or more.

The only solution, besides trimming your application's memory usage
somehow, is to validate that it is multithread safe and then use:

  WSGIDaemonProcess procgroup user=usr group=usr threads=15

That is, default of one process with 15 threads.

This will halve your memory usage, as there is only 1 process instead
of 2. With 15 threads you also improve your ability to handle
concurrent requests.

The downside of using multithreading on a memory constrained VPS is
that, with concurrent requests, there is a risk of memory usage
spiking if two requests which each require a large amount of
transient memory arrive at the same time.

> but
> there are 5 django applications (5 virtual hosts defined) running
> under this configuration. I've set it up so that every virtual
> host uses the same process group:
> WSGIProcessGroup procgroup
>
> I've also tried to set the same application group for the virtual hosts
> but this caused my apps to share settings and was unusable.
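For reference, keeping the shared daemon process group while letting each
virtual host fall back to its own application group (the default, which is
derived from the server name and mount point) might look like the
following sketch. The host names and paths are hypothetical:

```apache
WSGIDaemonProcess procgroup user=usr group=usr processes=2 threads=1 \
    maximum-requests=500 inactivity-timeout=300

<VirtualHost *:80>
    ServerName site1.example.com
    WSGIProcessGroup procgroup
    # No WSGIApplicationGroup directive: each virtual host defaults to
    # its own Python sub interpreter, so Django settings stay separate.
    WSGIScriptAlias / /home/usr/site1/django.wsgi
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    WSGIProcessGroup procgroup
    WSGIScriptAlias / /home/usr/site2/django.wsgi
</VirtualHost>
```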

That is a limitation of Django, in as much as it relies on global
variables, starting with DJANGO_SETTINGS_MODULE in os.environ.

Good WSGI applications have configuration data come in through the
WSGI request environment. This means that it can be set on a per
request basis, making it somewhat practical to host multiple
instances of an application within the same Python interpreter
context. The best example of an application able to do this is Trac.
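To illustrate the idea, here is a minimal sketch of a WSGI application
which takes its configuration from the per request environment rather
than from global state. The 'myapp.config' key is a made up name for
illustration (Trac uses 'trac.env_path' for the same purpose); under
Apache such keys can be set per virtual host with SetEnv.

```python
def application(environ, start_response):
    # Configuration arrives with each request via the WSGI environ,
    # so one interpreter can serve many instances of the application.
    # 'myapp.config' is a hypothetical key used only for this sketch.
    config = environ.get('myapp.config', '/default/instance')
    body = ('Serving instance: %s' % config).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

Because the instance path rides in the environ, two virtual hosts can
point at the same application object yet serve different data.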

The benefit of being able to do this is that the different instances
can share the same common Python modules, and it is not necessary to
load multiple copies in different Python interpreter instances
within the same process, as you have happening now.

> Thanks to inactivity-timeout and maximum-requests settings memory
> consumptions usually remains at acceptable level, so I consider wsgi
> setup much better than one with mod_python :)

At the moment, because you are running multiple Django instances
within the same process, but in different Python sub interpreters
(application groups), the benefit of inactivity-timeout may not be
getting realised. This is because a request against any instance will
keep all others in memory.

If instead you give each virtual host its own process(es), then the
inactivity-timeout will more readily be tripped and the process for an
instance recycled and brought back to base memory level.

The overhead of running each in its own process shouldn't be too much
more than current levels. And if some are infrequently used, average
memory usage across all of them could be somewhat less, with dormant
instances being recycled back to their base memory level.
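If you did split them out, each virtual host would get its own daemon
process group, something like the sketch below (names and paths
hypothetical). Each site's process can then hit the inactivity-timeout
and be recycled independently of the others:

```apache
<VirtualHost *:80>
    ServerName site1.example.com
    # A dedicated daemon process for this site only; when the site is
    # idle past inactivity-timeout, just this process is restarted.
    WSGIDaemonProcess site1 user=usr group=usr processes=1 threads=1 \
        maximum-requests=500 inactivity-timeout=300
    WSGIProcessGroup site1
    WSGIScriptAlias / /home/usr/site1/django.wsgi
</VirtualHost>
```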

Graham
You received this message because you are subscribed to the Google Groups
"Django users" group.