Hi,
We're hitting the exact same problem. We're trying to insert 20M documents
into a 4-node cluster, and eventually the Erlang VM crashes (not enough heap
space). Tracking the Riak process, we can see its memory usage steadily
rising, well above 80% of available RAM.
Our configuration:
4 Ubuntu 10 nodes
I've only just now noticed that I had missed the "Planning Parameters"
section in http://wiki.basho.com/LevelDB.html#Configuring-eLevelDB.
According to the information there, and using the Excel calculator, it seems
that I'm 12% short on memory. I'll try to play around with the parameters in
order…
Another update: I was mistaken in thinking I was 12% short. The default value
for max_open_files in the calculator was 150, while the default value in Riak
is actually 20.
So it seems that, at least according to the calculator, we're OK. Back to
square one...
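For reference, the per-node estimate I'm computing looks roughly like the sketch below. It mirrors my reading of the planning spreadsheet; the 4 MB-per-open-file figure and the default sizes are assumptions on my part, not something I've verified against the leveldb source, so please correct me if the formula is off.

```python
# Rough per-node memory estimate for eLevelDB, modeled on the Basho
# planning calculator. All defaults here are assumptions for
# illustration -- substitute your own app.config values.

MB = 1024 * 1024

def estimate_node_memory(ring_size=64, n_nodes=4,
                         cache_size=8 * MB,
                         write_buffer_size=30 * MB,
                         max_open_files=20,
                         bytes_per_open_file=4 * MB):
    # Each node hosts ring_size / n_nodes vnodes, and each vnode
    # is a separate leveldb instance with its own cache, write
    # buffer, and table of open files.
    vnodes_per_node = ring_size // n_nodes
    per_vnode = (cache_size
                 + write_buffer_size
                 + max_open_files * bytes_per_open_file)
    return vnodes_per_node * per_vnode

print(estimate_node_memory() // MB, "MB")  # estimate with the defaults above
```

With a 64-partition ring on 4 nodes and max_open_files=20, that works out to well under 2 GB per node, which is why the calculator now says we're OK.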
Any ideas?
Thanks,
Shahar
Hello,
I'm a complete newbie in the Riak world, and I'd like to understand a
basic thing.
I'm testing and trying to query Riak with a basic map function, which is *very*
similar to the first example in the Basho fast track
(http://wiki.basho.com/attachments/simple-map.json).
The thing is that I can't seem…
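For concreteness, the request body I'm POSTing to /mapred is built roughly like this (bucket, key, and the map function body are placeholders modeled on simple-map.json, not my real data):

```python
# Minimal sketch of a Riak map/reduce request body in the style of
# simple-map.json. The bucket/key pair and the JavaScript source
# are illustrative placeholders.
import json

def build_map_query(bucket, key):
    return json.dumps({
        # inputs: a list of [bucket, key] pairs to feed the map phase
        "inputs": [[bucket, key]],
        "query": [{
            "map": {
                "language": "javascript",
                # Riak.mapValuesJson is a helper shipped with Riak's
                # built-in JavaScript environment
                "source": ("function(value, keyData, arg) {"
                           "  return [Riak.mapValuesJson(value)[0]];"
                           "}")
            }
        }]
    })
```

The resulting JSON goes to the /mapred endpoint with Content-Type: application/json.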
Hi Brian,
Any progress in terms of pagination and row-limit support for the Java
client?
We are moving into the performance-testing phase. I've loaded 30 million
objects into Riak so far, and Riak Search is returning a lot of objects for
wildcard searches, which causes the JVM to throw out-of-memory errors.
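In the meantime, the workaround I'm considering is to hit Riak Search's Solr-compatible HTTP interface directly and page with Solr-style start/rows parameters. The endpoint path, index name, and parameter names below are my assumptions from the Solr interface docs, so treat this as a sketch rather than a confirmed API:

```python
# Sketch: build paged query URLs against Riak Search's
# Solr-compatible endpoint. Host, index, and query string are
# placeholders; start/rows are the standard Solr paging parameters.
from urllib.parse import urlencode

def paged_search_urls(base, index, query, page_size=100, max_pages=3):
    urls = []
    for page in range(max_pages):
        params = urlencode({
            "q": query,
            "start": page * page_size,  # offset of the first result
            "rows": page_size,          # cap on results per request
            "wt": "json",
        })
        urls.append(f"{base}/solr/{index}/select?{params}")
    return urls
```

Capping rows per request should at least keep any single response small enough for the JVM heap, even for a wildcard query that matches millions of objects.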