Maybe I should restate this question -- I'm wondering whether BerkeleyDB, LDAP, or something like IPC::MM would help me here, but I have little experience using any of these under heavy load.
Here's the situation: we have a fully normalized relational database (MySQL) now being accessed by a web application. To save a lot of complex joins each time we grab rows from the database, I currently load and cache a few simple hashes (1-10MB) in each Apache process with the corresponding lookup data:

    $CACHE_1{id} = 'foo'
    $CACHE_2{ida}{idb} = 'bar'

Basically, this lets me just grab and loop through the normalized (non-joined) DB rows and print something like:

    "This row belongs to $CACHE_1{$a} and is about $CACHE_2{$y}{$z}, please call $CACHE_1{$b}"
    "This row belongs to $CACHE_1{$a} and is about $CACHE_2{$y}{$z}, please call $CACHE_1{$b}"
    "This row belongs to $CACHE_1{$a} and is about $CACHE_2{$y}{$z}, please call $CACHE_1{$b}"

More importantly, if the value behind $a, $b, $y, or $z ever changes, none of the rows in the other tables need to be updated. For large result sets (100-1000 rows) this is working great, but querying each value from the database separately would be prohibitively expensive and would force me back to a more complex data-joining strategy.

The lookup hashes are very simple name=value pairs and rarely (if ever) change during the lifetime of any child process, but they will continue to grow and change over time. For now, when they do change, each child process knows to reload them from the database.

Is anyone doing something similar? I'm wondering whether implementing a BerkeleyDB or another slave store on each web node with a tied hash (or something similar) is feasible, and if not, what a better solution might be.
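For concreteness, here is roughly how each child builds its copy of the lookup hashes today (the table and column names below are just placeholders, not our real schema):

    use strict;
    use warnings;
    use DBI;

    our (%CACHE_1, %CACHE_2);

    # Called once per Apache child (e.g. from a child-init handler) and
    # again whenever the lookup tables are flagged as changed.
    sub load_lookup_caches {
        my $dbh = DBI->connect('dbi:mysql:database=app', 'user', 'pass',
                               { RaiseError => 1 });

        %CACHE_1 = ();
        my $sth = $dbh->prepare('SELECT id, name FROM owners');
        $sth->execute;
        while (my ($id, $name) = $sth->fetchrow_array) {
            $CACHE_1{$id} = $name;
        }

        %CACHE_2 = ();
        $sth = $dbh->prepare('SELECT ida, idb, label FROM topics');
        $sth->execute;
        while (my ($ida, $idb, $label) = $sth->fetchrow_array) {
            $CACHE_2{$ida}{$idb} = $label;
        }

        $dbh->disconnect;
    }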
On 5/7/07, Perrin Harkins <[EMAIL PROTECTED]> wrote:

On 5/7/07, Will Fould <[EMAIL PROTECTED]> wrote:
> Would anyone recommend any of the IPC::*** shared memory packages for
> what I'm doing?

No, they have terrible performance for any significant amount of data.
Much worse than a simple shared file approach.

If you can break up your data into a hash-like form, you might be able to
use Cache::FastMmap. It's a cache though, and will drop data when it gets
full, so you have to keep the database as the master source and fall back
to it for data not found in the cache.

- Perrin
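Just so I understand the Cache::FastMmap suggestion, here is roughly the cache-with-database-fallback pattern I'm picturing (the share_file path, cache size, table, and query below are only illustrative):

    use strict;
    use warnings;
    use Cache::FastMmap;
    use DBI;

    # One mmap'ed file shared by all Apache children on this node.
    my $cache = Cache::FastMmap->new(
        share_file  => '/tmp/lookup_cache',   # illustrative path
        cache_size  => '10m',
        expire_time => 0,                     # no age-based expiry; entries can
                                              # still be evicted when the cache fills
    );

    my $dbh = DBI->connect('dbi:mysql:database=app', 'user', 'pass',
                           { RaiseError => 1 });

    # Check the shared cache first; on a miss, fall back to the database
    # (the master source) and repopulate the cache.
    sub lookup_name {
        my ($id) = @_;
        my $name = $cache->get("name:$id");
        return $name if defined $name;

        ($name) = $dbh->selectrow_array(
            'SELECT name FROM owners WHERE id = ?', undef, $id);
        $cache->set("name:$id", $name) if defined $name;
        return $name;
    }

Compared to the per-process hashes, every lookup becomes a get() call instead of a plain hash access, but the data is shared across all children on the node, and anything missing or evicted simply falls back to MySQL.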