On Tue, 2006-03-07 at 21:05 -0800, Will Fould wrote:
> we have a tool that loads a huge store of data (25-50Mb+) from a
> database into many perl hashes at start up: each session needs
> access to all these data but it would be prohibitive to use mysql or
> another database for multiple, large lookups.
how big are these data structures?
200k? 2mb? 20mb?
if they're not too big, you could just use memcached.
http://danga.com:80/memcached/
http://search.cpan.org/~bradfitz/Cache-Memcached-1.15/Memcached.pm
it's ridiculously painless to implement. i found it easier than a lot of the alternatives.
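roughly, the client side looks like this (just a sketch: the server address, key names and the %lookup data are placeholders, and note that memcached's default 1MB-per-item limit means a 25-50MB store would have to be split into per-record keys rather than stored as one blob):

    use strict;
    use warnings;
    use Cache::Memcached;

    # one client object per process is enough
    my $memd = Cache::Memcached->new({
        servers            => [ '127.0.0.1:11211' ],   # placeholder address
        compress_threshold => 10_000,
    });

    # placeholder data standing in for one of the big lookup hashes
    my %lookup = ( 42 => 'row 42 payload' );

    # store values; Cache::Memcached serializes references via Storable,
    # but per-record keys scale better than one giant blob
    $memd->set( "lookup:$_" => $lookup{$_} ) for keys %lookup;

    # any child on any web server pointed at the same memcached sees it
    my $row = $memd->get('lookup:42');

    # and unlike a per-process perl hash, an update here is global
    $memd->set( 'lookup:42' => 'updated payload' );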
at this point, the application is on a single machine, but I'm being tasked with moving our database onto another machine and implementing load balancing between 2 webservers.
william
On 3/7/06, Will Fould <[EMAIL PROTECTED]> wrote:
an old issue:
"a dream solution would be if all child processes could *update* a large global structure."
we have a tool that loads a huge store of data (25-50Mb+) from a database into many perl hashes at start up: each session needs access to all these data, but it would be prohibitive to use mysql or another database for multiple, large lookups.
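a minimal sketch of that kind of start-up preload, assuming DBI under mod_perl with placeholder DSN, table and column names: each Apache child then inherits the hash copy-on-write, so reads are cheap, but an update made in one child is invisible to the others, which is exactly the limitation the "dream solution" above is about:

    # startup.pl, pulled in with PerlRequire before Apache forks its children
    use strict;
    use warnings;
    use DBI;

    our %LOOKUP;    # global; effectively read-only once children are forked

    my $dbh = DBI->connect(
        'dbi:mysql:database=app;host=dbhost',    # placeholder DSN
        'user', 'password',
        { RaiseError => 1 },
    );

    my $sth = $dbh->prepare('SELECT id, payload FROM lookup_table');
    $sth->execute;
    while ( my ( $id, $payload ) = $sth->fetchrow_array ) {
        $LOOKUP{$id} = $payload;
    }
    $dbh->disconnect;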