Well, based on the stat output it appears that you don't have any
resource issues, so something else must be going on.

Can you break this down into a test case? Also, have you tried running
this with elephant-unstable?
Thanks,
Ian
On Oct 2, 2008, at 2:57 AM, Red Daly wrote:
Unfortunately I am still getting allocation errors even after increasing the
cache size to 1.2 gigabytes and eliminating all transactions from my
application. I have included db_stat output--the only alarming aspect is
the section about mutexes. Do I need more memory dedicated to storing
mutexes?
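For what it's worth, one way to raise these ceilings is the environment's
DB_CONFIG file. A sketch with illustrative, untuned values: the first line
sets the 1.2GB cache (gbytes, bytes, number of regions), the others raise
the mutex and lock ceilings.

  set_cachesize 1 200000000 1
  mutex_set_max 100000
  set_lk_max_locks 50000
  set_lk_max_objects 50000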
M> that sounds like the key to this specific mismatch between the BDB locking
M> philosophy -- which for the scenarios you describe is a perfect fit -- and
M> our quite different needs.

There is another sort of locking philosophy in BDB too: so-called "snapshot
isolation" (also known as multi-version concurrency control, or MVCC).
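If I read the Berkeley DB docs right (4.5 or later), MVCC can be enabled
environment-wide from the same DB_CONFIG file -- an untested sketch, and
Elephant's BDB backend would still need to cope with it:

  set_flags DB_MULTIVERSION
  set_flags DB_TXN_SNAPSHOT

The first opens databases with multiversion support; the second makes
transactions snapshot transactions by default.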
Ian Eslick wrote:
>
>> I'm a bit surprised about this. Most relational databases use
>> journalling and/or rollback files to ensure that they can commit or
>> roll back large transactions as one atomic unit. The procedure is quite
>> neatly described in http://www.sqlite.org/atomiccommit.html.
The main problem for us is that we automatically import data files and
data fragments, where a single one can easily result in the creation of
thousands or even tens of thousands of heavily cross-referencing objects,
each with a number of index entries associated with it. The complexity of
an import therefore cannot be bounded in advance.
Ian Eslick wrote:
>> but not for larger logical transactions where they'd
>> really be called for. Raising the limits at best slightly lessens the
>> pain, but is not really a solution.
>
>
> If raising the limits isn't a solution, then your application is
> probably mismatched with Elephant/BDB.
Comments on the original "out of memory problems" report:
- Running out of disk space for the logs can also be a problem, in
addition to transaction size.
- The max # of locks needed will grow with the log of the DB size and
linearly with the number of concurrent transactions. You can run
db_stat to see what your current usage is.
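For example (the environment path is a placeholder for your store's
directory):

  db_stat -h /path/to/env -c   # lock region: locks, lockers, lock objects
  db_stat -h /path/to/env -m   # memory pool (cache) statistics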
You could be running out of cache or locks. I believe there are now
parameters in config.sexp you can set to raise the default limits.
The lack of robustness to allocation failures is a problem with
Berkeley DB.
Unfortunately, running recovery isn't a trivial process. You have to
guarantee that no other thread or process is using the environment while
recovery runs.
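From memory, the relevant config.sexp entries look something like the
following -- treat the parameter names as illustrative and check them
against the sample config.sexp in your Elephant checkout:

  ((:berkeley-db-cachesize   . 1073741824)  ; data cache, in bytes
   (:berkeley-db-max-locks   . 100000)
   (:berkeley-db-max-objects . 100000))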
On Sun, Sep 21, 2008 at 07:25:10PM -0700, Red Daly wrote:
>
> Why does the memory allocation failure sour the whole database instead of
> aborting a single transaction? I think that elephant should try to
> encapsulate this failure, recovering the database or doing whatever is
> necessary to make the store usable again.
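At the application level the obvious encapsulation is a sketch like the
one below (DO-IMPORT is a placeholder for your own code), but it doesn't
actually help here: once BDB panics the environment, every subsequent
operation fails until recovery is run.

  (handler-case
      (elephant:with-transaction ()
        (do-import))
    (error (e)
      ;; the non-local exit aborts the transaction; log and move on
      (warn "Import failed: ~A" e)))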
I have recently run into "Cannot allocate memory" problems with elephant on
a production server. Unfortunately, when one transaction is too large, it
seems to blow up the database until a manual recovery is done.

The occasional failure is slightly worrisome, but the whole database
requiring a manual recovery is a real problem.
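By "manual recover" I mean shutting the application down and running
Berkeley DB's recovery utility against the environment directory, roughly:

  db_recover -h /path/to/env       # normal recovery
  db_recover -c -h /path/to/env    # catastrophic recovery, if normal fails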
Hi Victor,
Sounds like your transaction is blowing out the shared memory
allocated by Berkeley DB to store dirty pages. This is caused by
transactions that are too large; putting an entire file of data could
well accomplish this. (We really should change the error message to
be more informative.)
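The usual workaround is to split the import into many bounded
transactions. A sketch -- READ-RECORD and STORE-RECORD stand in for your
own import code; only WITH-TRANSACTION is Elephant's API:

  (defun import-in-batches (stream &key (batch-size 500))
    "Commit every BATCH-SIZE records so that no single transaction
  accumulates an unbounded set of dirty pages."
    (loop
       (let ((batch (loop repeat batch-size
                          for record = (read-record stream)
                          while record collect record)))
         (when (null batch) (return))
         (elephant:with-transaction ()
           (mapc #'store-record batch)))))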
Hello list,
I've decided to put elephant 0.9.1 under some heavy load testing and
play with the Netflix data set a little bit. The attached program, which
tries to import everything into BerkeleyDB, fails when trying to import
movie file number 8 with the following traceback. Do you have any idea
what could be going wrong?