I have recently run into "Cannot allocate memory" problems with elephant on
a production server.  Unfortunately, when one transaction is too large, it
seems to take down the database until a manual recovery is done.

The occasional failure is slightly worrisome, but the whole database
requiring a manual recovery because of one extra-large transaction is a scary
thought for a live application with thousands of users.

Why does the memory allocation failure take down the whole database instead
of aborting only the offending transaction?  I think that elephant should
encapsulate this failure, recovering the database or doing whatever else is
necessary to make the store usable for the next transaction.
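For reference, the workaround Ian suggested amounts to wrapping each record in its own transaction rather than the whole import, so Berkeley DB's dirty-page cache never holds more than one record's worth of changes.  A rough sketch (import-records, record-key, and the records list are hypothetical names for illustration, not from the thread; ele:with-transaction and ele:add-to-root are Elephant's documented API):

```lisp
;; Sketch only: assumes Elephant is loaded and a store controller is open.
;; One large transaction around the whole loop can exhaust BDB's shared
;; memory region; a transaction per record keeps each commit small.
(defun import-records (records)
  (dolist (record records)
    ;; Each iteration commits (or aborts) independently, so a failure
    ;; affects only the current record.
    (ele:with-transaction ()
      (ele:add-to-root (record-key record) record))))
```

The trade-off is that a per-record failure leaves earlier records committed, so the import is no longer atomic as a whole.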

Best,
Red Daly


On Sat, Jan 5, 2008 at 5:02 PM, Victor Kryukov <[EMAIL PROTECTED]>wrote:

> On Jan 4, 2008 2:54 AM, Ian Eslick <[EMAIL PROTECTED]> wrote:
> > Hi Victor,
> >
> > Sounds like your transaction is blowing out the shared memory
> > allocated by Berkeley DB to store dirty pages.  This is caused by
> > transactions that are too large; putting an entire file of data could
> > well accomplish this.  (We really should change the error message to
> > be more informative in these cases).
> >
> > Try pushing with-transaction into the loop in import-movie as follows:
>
> Thanks for your suggestion, Ian - the problem was solved once I
> moved with-transaction inside collect-rating-info.
>
_______________________________________________
elephant-devel site list
elephant-devel@common-lisp.net
http://common-lisp.net/mailman/listinfo/elephant-devel
