On 5/19/07, Will Fould <[EMAIL PROTECTED]> wrote:
I'm afraid that:
1. the hashes get really big (greater than a few MBs each)
2. re-caching the entire hash just because one key updated is wasteful (see the sketch after this list)
3. latency when pulling cache data from a remote DB
4. doing this for all children
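For concern 2, a per-key store lets one changed row be one set() call instead of a re-cache of the whole hash. A minimal sketch using Cache::Memcached; the server address, key scheme, expiry, and example values are assumptions for illustration, not anything from the thread:

    use Cache::Memcached;

    # per-key cache: when one row changes, set() just that key
    # instead of re-serializing the entire hash
    my $memd = Cache::Memcached->new({
        servers => ['127.0.0.1:11211'],   # assumed local memcached instance
    });

    my $id  = 42;                          # example key
    my $row = { name => 'foo' };           # example value (refs go through Storable)

    $memd->set("lookup:$id", $row, 600);   # expire after 10 minutes
    my $cached = $memd->get("lookup:$id");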
The most common way
my .02¢
• ldap would be silly unless you're clustering -- most implementations use bdb as their backend
• bdb and cache::fastmmap would make more sense if you're on 1 machine
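A minimal Cache::FastMmap sketch for the 1-machine case; the share file path, sizes, and key scheme are assumptions. All children mmap the same file, so each key is stored once rather than once per child:

    use Cache::FastMmap;

    # one mmap'ed file shared by every child on this machine
    my $cache = Cache::FastMmap->new(
        share_file  => '/tmp/app-cache',   # assumed path
        cache_size  => '16m',
        expire_time => 600,                # seconds
    );

    my $id = 42;                           # example key
    $cache->set("lookup:$id", { name => 'foo' });   # refs serialized via Storable
    my $row = $cache->get("lookup:$id");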
also
i think your hash system might be worth rethinking
you have:
$CACHE_1{id} = 'foo';
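The message cuts off here, but one common rethink of per-purpose globals like %CACHE_1, %CACHE_2 is a single structure keyed by namespace; this layout is an assumption for illustration, not necessarily what was being suggested:

    # instead of one global hash per lookup ...
    #   $CACHE_1{$id} = 'foo';
    #   $CACHE_2{$id} = 'bar';
    # ... one structure keyed by table name, so sizing, syncing,
    # and expiry logic all have a single place to look:
    my %CACHE;
    $CACHE{users}{42}  = 'foo';   # assumed table names and keys
    $CACHE{groups}{42} = 'bar';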
Thanks a lot Perrin -
I really like the current method (if it were to stay on 1 machine and not
grow). Caching per child has not really been a problem once I got beyond the
emotional hangup of what seemed to be a duplicative waste of memory. I am
totally amazed at how fast and efficient using mod_perl is.
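For context, the per-child method is just a package-level hash that persists across requests inside each mod_perl child; a minimal sketch, with the module name and loader as hypothetical stand-ins:

    package My::Lookup;   # hypothetical module name
    use strict;
    use warnings;

    # lives for the life of the child; each child keeps its own copy
    my %CACHE;

    sub get {
        my ($class, $id) = @_;
        $CACHE{$id} = _load_from_db($id) unless exists $CACHE{$id};
        return $CACHE{$id};
    }

    sub _load_from_db {
        my ($id) = @_;
        # stand-in for the real DBI query
        return "row-$id";
    }

    1;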
On 5/19/07, Will Fould <[EMAIL PROTECTED]> wrote:
Here's the situation: We have a fully normalized relational database
(mysql) now being accessed by a web application, and to save a lot of complex
joins each time we grab rows from the database, I currently load and cache a
few simple hashes (1-10 MB each).
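A sketch of that pattern -- loading one joined lookup into a plain hash at child startup so requests read memory instead of repeating the join. The DSN, credentials, table, and columns are assumptions:

    use DBI;

    my $dbh = DBI->connect(
        'dbi:mysql:database=app', 'user', 'pass',   # assumed DSN
        { RaiseError => 1 },
    );

    # flatten one joined lookup into an in-memory hash;
    # later requests read %LOOKUP instead of re-running the join
    my %LOOKUP = map { $_->[0] => $_->[1] } @{
        $dbh->selectall_arrayref(
            'SELECT u.id, g.name FROM users u JOIN groups g ON g.id = u.group_id'
        )
    };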
Maybe I should restate this question -- I'm wondering if BerkeleyDB, LDAP, or
something like IPC::MM will help me with this, but I have little experience
with these in heavy practice.
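Of the three, BerkeleyDB is the most drop-in for a hash workload: the tie interface keeps plain %hash syntax over one on-disk file that all children can open. A minimal sketch; the file path and key are assumptions, and proper multi-process locking (a BerkeleyDB environment) is omitted:

    use BerkeleyDB;

    # tie keeps the hash syntax but backs it with an on-disk file
    tie my %lookup, 'BerkeleyDB::Hash',
        -Filename => '/tmp/lookup.db',   # assumed path
        -Flags    => DB_CREATE
        or die "tie failed: $BerkeleyDB::Error";

    my $id = 42;                         # example key
    $lookup{$id} = 'foo';                # one key written, nothing else re-cached
    my $value = $lookup{$id};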