Hi Germain,
It's difficult to say whether 3 GB of RAM is enough. Bitcask maintains an
in-memory structure called a keydir. The Bitcask intro describes the keydir
as follows:
"A keydir is simply a hash table that maps every key in a Bitcask to a
fixed-size structure giving the file, offset, and size [...]"
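Since every key (plus that fixed-size record) must fit in RAM, you can get a rough upper bound on keydir memory before loading data. Here is a minimal sketch; the per-entry overhead figure is an assumption for illustration, not an official Bitcask constant (the real cost depends on the Erlang VM and architecture), so measure on your own nodes:

```python
def keydir_ram_bytes(num_keys, avg_key_bytes, per_entry_overhead=40):
    """Rough keydir RAM estimate.

    Every key lives in memory alongside a small fixed-size record
    (file id, offset, size, timestamp). per_entry_overhead is an
    assumed figure, not measured from Bitcask itself.
    """
    return num_keys * (avg_key_bytes + per_entry_overhead)

# 10 million 20-byte keys at the assumed 40 bytes of overhead:
print(keydir_ram_bytes(10_000_000, 20))  # -> 600000000 bytes (~0.6 GB)
```

If an estimate like this approaches a node's physical RAM, the OOM killer output further down this thread is the expected failure mode.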
Dan,
I switched back to Bitcask. I have an n_value = 2 bucket over two nodes
(each node has 3 GB of RAM).
I requested:
curl "http://10.0.0.40:8098/riak/test?keys=stream";
and I got this error:
[...]"81902","50339","592","56815","55290","8857","251
curl: (18) transfer closed with outstanding read data
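The ?keys=stream response arrives as a series of chunked JSON objects, each carrying a batch of keys, and curl error 18 means the connection died before the final chunk arrived. A minimal sketch of merging such chunks client-side (the sample payloads below are made up, not actual output from this cluster):

```python
import json

def merge_key_chunks(chunks):
    """Collect keys from a stream of JSON chunk strings, each shaped
    like {"keys": [...]} as Riak's ?keys=stream emits them."""
    keys = []
    for chunk in chunks:
        keys.extend(json.loads(chunk).get("keys", []))
    return keys

# Hypothetical chunks as they might arrive over the wire:
print(merge_key_chunks(['{"keys":["81902","50339"]}', '{"keys":["592"]}']))
# -> ['81902', '50339', '592']
```

A client structured this way at least knows how many keys it received before the stream was cut off, which helps distinguish a truncated listing from a complete one.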
Dan,
I don't know exactly when the errors occurred because there are no
timestamps :(
I'm sure that's during a basho_bench test (no map-reduce was launched).
I see these errors on all the nodes of my cluster.
However, the Riak nodes kept running. Now I'm using Riak 0.11.0 with
Innostore; previously [...]
Hi Germain,
I'm sorry if I've missed any background information that may have been
provided earlier but can you tell me the scenario that led up to these
errors? Are you running a load test against a cluster? Are you seeing these
out of memory errors in one of your nodes? How much memory does the [...]
[601340.870895] Free swap = 0kB
[601340.870897] Total swap = 979956kB
[601340.883162] 786344 pages RAM
[601340.883165] 13514 pages reserved
[601340.883167] 1383 pages shared
[601340.883169] 766384 pages non-shared
[601340.883173] Out of memory: kill process 8713 (beam.smp) score 1279595 or a child
[601