Hi guys,

It does seem that I designed it poorly for Apache deployment.
Memcached seems like a good option: after trying it out, it's decently
faster than disk, but not as fast as keeping the data in memory in the
same Python process, mostly because of the serialization overhead. The
approaches rank roughly like this (a sketch of the lookup order follows
the list):

- Build from original data sources (slowest)
- Rebuild from database with indexed tables
- Cached on disk
- Cached in memcache
- Cached in memory (fastest)
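
To make the fallback order concrete, here is a minimal sketch of the
lookup, assuming Django's cache framework is backed by memcached. The
key name, disk path, timeout and rebuild step are placeholders, not the
real project code:

# Hypothetical sketch of the cache fallback order -- key, path and the
# rebuild step are placeholders.
import os
import pickle

from django.core.cache import cache  # assumed to be backed by memcached

CACHE_KEY = 'expensive-structure'              # illustrative key
DISK_PATH = '/tmp/expensive-structure.pickle'  # illustrative path
TIMEOUT = 60 * 60                              # keep in memcached for an hour


def rebuild_from_database():
    """Placeholder for the slow rebuild from the indexed tables."""
    return {'built': True}


def get_structure():
    # Fastest shared layer: memcached (still pays serialization overhead).
    data = cache.get(CACHE_KEY)
    if data is not None:
        return data

    # Next-fastest layer: the pickled copy on disk.
    if os.path.exists(DISK_PATH):
        with open(DISK_PATH, 'rb') as f:
            data = pickle.load(f)
    else:
        # Slowest path: rebuild, then cache the result on disk.
        data = rebuild_from_database()
        with open(DISK_PATH, 'wb') as f:
            pickle.dump(data, f)

    # Repopulate memcached so the next Apache worker skips the rebuild.
    cache.set(CACHE_KEY, data, TIMEOUT)
    return data

The point is just that every layer repopulates the one above it, so a
freshly recycled Apache process only pays the full rebuild cost if both
memcached and the disk copy are cold.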

It's been working well so far, and only needs some tweaking on my
production server. Thanks for your suggestions!

Lars

On Aug 15, 12:24 pm, "James Bennett" <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Lars <[EMAIL PROTECTED]> wrote:
>
> > My first thought was: I've missed a debugging flag somewhere that
> > needs to be off. Here's what I roughly have:
>
> Have you checked the Apache directives which control how many requests
> a process may serve before it gets recycled?
>
> Remember that Apache processes do not live forever -- they serve a
> certain maximum number of requests, then are killed and replaced by
> new ones (which will then need to perform the same intensive
> up-front calculation). If you need to permanently store something (or
> at least, store it more permanently than what you've got now), try
> memcached.
>
> --
> "Bureaucrat Conrad, you are technically correct -- the best kind of correct."

