On Fri, 24 Jun 2005 18:09:16 -0400
Arshavir Grigorian <[EMAIL PROTECTED]> wrote:

> Hello list,
> 
> I coded a caching system using BerkeleyDB::Hash as the backend. It was
> working fine until the database file became fairly large (850M).
> At some point the performance degraded and the web server process
> accessing the database started hanging. Someone suggested locking
> issues being the cause for the hangups, but trying to access the db
> from a single script, even when there were no other processes
> accessing it, still hung.
> 
> I am sure someone has done a similar thing before and would be very
> interested to hear any success/failure stories. I'm starting to wonder
> whether I would be better off just using an RDBMS table (2 columns -
> key, value) as the cache backend to avoid these types of issues.
> 
> Thanks for any ideas, pointers.

  I've never used BerkeleyDB with that much data, but personally,
  if I were getting over a few hundred MBs of data I would put it
  into an RDBMS, split the cache up into multiple db files (if
  appropriate), or use something like Cache::Memcached.
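
  The "split the cache into multiple db files" idea can be sketched
  roughly as follows: hash each key to one of N smaller BerkeleyDB
  files so that no single file grows to the 850M range where the
  original poster saw trouble. The bucket count and file-naming
  scheme below are illustrative assumptions, not anything from the
  posts above.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Number of cache shards -- an arbitrary choice for illustration.
my $buckets = 16;

# Map a cache key to one of $buckets db file names.
sub db_file_for {
    my ($key) = @_;
    # Take the first 4 bytes of the MD5 digest as an unsigned
    # 32-bit integer, then mod by the bucket count.
    my $n = unpack('N', md5($key)) % $buckets;
    return sprintf('cache_%02d.db', $n);
}

print db_file_for('some/cache/key'), "\n";
```

  Each shard is then opened as its own (much smaller) BerkeleyDB::Hash
  file; the same function always routes a given key to the same file.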

 ---------------------------------
   Frank Wiles <[EMAIL PROTECTED]>
   http://www.wiles.org
 ---------------------------------
