Basically, you just need to run multiple instances of your Django web
application. I've never used Apache for this (I'm an Nginx + uWSGI fan
myself), but you would just run multiple worker processes and let your
WSGI server balance requests between them.

As the others mentioned, make sure to use a common cache like Memcached.
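
In settings.py that means something like this (this is the 1.x memcached
backend; the exact BACKEND class depends on your Django version):

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            # every app instance must point at the same memcached server(s)
            'LOCATION': '127.0.0.1:11211',
        }
    }

    # optional: store sessions in the shared cache as well
    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'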

On Fri, Jun 22, 2012 at 10:43 AM, Javier Guerra Giraldez <jav...@guerrag.com
> wrote:

> On Fri, Jun 22, 2012 at 9:35 AM, Oleg Korsak
> <kamikaze.is.waiting....@gmail.com> wrote:
> >
> > That's what mod_wsgi does!
>
> exactly.
>
> Specifically, Django is, by itself, a shared-nothing library.  That
> means you can run many instances of it, not only on several cores but
> also on many different hosts.
>
> Just be sure that anything that works 'outside' the request/response
> cycle is managed in properly shared storage: mostly the database, the
> cache (Memcached/Redis), the queue manager...
>
> Stay clear of in-process memory storage (global variables, thread-local
> storage, etc.) and your app will be trivially scalable.  Your bottleneck
> will most likely be the shared data, but any good RDBMS goes a long way
> on big machines, especially if you get the cache and queue layers right.
>
>
> --
> Javier
>
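
To make the point about in-process state concrete, here is a rough sketch
(the view name and cache key are made up) of the difference between a
per-process global and a counter kept in the shared cache:

    from django.core.cache import cache
    from django.http import HttpResponse

    HITS = 0  # BAD: every worker process gets its own copy of this global

    def hit_counter(request):
        # GOOD: the cache is shared by all workers on all hosts
        cache.add('hit_counter', 0)        # no-op if the key already exists
        count = cache.incr('hit_counter')  # atomic on the memcached/Redis backends
        return HttpResponse('hits: %d' % count)

With the global, each uWSGI worker (and each host) would report a
different number; with the shared cache, they all see the same value.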
