Re: LevelDB memory problem crashes Erlang
Hi,

We're hitting exactly the same problem. We're trying to insert 20M documents into a 4-node cluster, and eventually the Erlang VM crashes (not enough heap space). Tracking the Riak process, we can see its memory use rising steadily, well above 80% of available RAM.

Hardware: 4 Ubuntu 10.04 servers with 8GB RAM each.

Riak configuration:
- LevelDB backend
- n_val=2
- w=quorum
- pw=one
- Riak Search enabled

We also tweaked the JS VM parameters, but that doesn't seem to be relevant; let me know if I'm wrong.

What's even more alarming is that after we restart the nodes, we can no longer run MR jobs that previously ran successfully on a small scale. We get a very vague error message almost immediately after executing the MR request:

{"phase":0,"error":"[timeout]","input":"{<<\"users_info\">>,<<\"103631217508103\">>}","type":"forward_preflist","stack":"[]"}

Deleting the LevelDB directory and re-inserting 10K documents resolves the MR problem, as if the crashes left some corruption in the data. In any case, solving the crashes is the higher priority at the moment, and I'm guessing that fixing them will also fix the MR problem.

I read the Basho wiki page on LevelDB tuning, but nothing there seems relevant to the problem we're seeing (except for cache_size, which, if I understand correctly, defaults to 8MB and should be fine).

Thanks,
Shahar
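A minimal sketch for watching the growth from outside the VM while the load runs, assuming the default HTTP listener on port 8098 and the memory_total / memory_processes_used keys Riak exposes under /stats (the node addresses are placeholders):

#!/usr/bin/env python3
"""Poll each node's HTTP /stats endpoint and print Erlang VM memory figures.

Note that LevelDB's own allocations (block cache, open-file metadata) are
made by the NIF outside the Erlang allocators, so also watch the OS-level
RSS of the beam.smp process."""
import json
import time
import urllib.request

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]  # placeholder addresses

def fetch_stats(host):
    with urllib.request.urlopen("http://%s:8098/stats" % host, timeout=5) as resp:
        return json.loads(resp.read())

while True:
    for host in NODES:
        try:
            stats = fetch_stats(host)
            total = stats.get("memory_total", 0) / 2**20
            procs = stats.get("memory_processes_used", 0) / 2**20
            print("%s  erlang total: %.0f MB  processes: %.0f MB" % (host, total, procs))
        except OSError as err:
            print("%s  unreachable: %s" % (host, err))
    time.sleep(60)

If the Erlang-reported totals stay flat while the process RSS keeps climbing, the growth is coming from the LevelDB side rather than the Erlang heap.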
Re: LevelDB memory problem crashes Erlang
I've only just now noticed that I missed the "Planning Parameters" section in http://wiki.basho.com/LevelDB.html#Configuring-eLevelDB. According to the information there, and using the Excel calculator, it seems I'm 12% short on memory. I'll play around with the parameters to get a better memory allocation, but I would appreciate your advice on which parameters affect which aspect of performance (write throughput, read throughput, MR latency).

Thanks,
Shahar
Re: LevelDB memory problem crashes Erlang
Another update: I was mistaken to think I was 12% short. The calculator's default value for max_open_files was 150, while the default value in Riak is actually 20. So it seems that, at least according to the calculator, we're OK. Back to square one... Any ideas?

Thanks,
Shahar
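For anyone following along, a back-of-the-envelope version of what the planning spreadsheet computes. The per-vnode cost here assumes cache_size plus an average write buffer plus roughly 4 MB of metadata per open file; the write-buffer average and the 4 MB figure are assumptions rather than values from this thread, so defer to the calculator for authoritative numbers:

#!/usr/bin/env python3
"""Rough per-node memory estimate for eLevelDB, mirroring the spirit of the
Basho planning calculator. AVG_WRITE_BUFFER and PER_OPEN_FILE are assumptions."""

RING_SIZE = 64                    # ring_creation_size (Riak default)
NODE_COUNT = 4                    # this cluster
CACHE_SIZE = 8 * 2**20            # eleveldb cache_size default (8 MB)
AVG_WRITE_BUFFER = 45 * 2**20     # assumed average write_buffer_size
MAX_OPEN_FILES = 20               # Riak's eleveldb default (not the calculator's 150)
PER_OPEN_FILE = 4 * 2**20         # assumed metadata overhead per open file

vnodes_per_node = RING_SIZE // NODE_COUNT
per_vnode = CACHE_SIZE + AVG_WRITE_BUFFER + MAX_OPEN_FILES * PER_OPEN_FILE
per_node = vnodes_per_node * per_vnode

print("vnodes per node:       %d" % vnodes_per_node)
print("per-vnode estimate:    %.0f MB" % (per_vnode / 2**20))
print("per-node leveldb:      %.1f GB" % (per_node / 2**30))
print("(add the Erlang VM, Riak Search/merge_index, and OS page cache on top)")

Running it once with MAX_OPEN_FILES = 150 shows why the calculator's default looked so much tighter against 8GB of RAM.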
M/R query with riak-defined keys
Hello,

I'm a complete newbie in the Riak world, and I'd like to understand a basic thing. I'm testing queries against Riak with a basic map phase that is *very* similar to the first example in the Basho fast track (http://wiki.basho.com/attachments/simple-map.json). The thing is that I can't seem to make it work with Riak-defined keys. Here's a gist illustrating my problem: https://gist.github.com/3636321

1. I POST data with cURL (the contents of the two objects are identical except for the keys).
2. I run a MapReduce job (currently without any filters).
3. In the output I get, the first result has an error that I cannot (yet) understand. :)

Could anyone help me take my first steps in MapReducing? :-)

Thanks!
Antoine
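For reference, a minimal sketch of a bucket-wide job against the standard /mapred HTTP endpoint, using the built-in Riak.mapValuesJson JavaScript map function; the host and bucket name below are placeholders rather than the ones in the gist:

#!/usr/bin/env python3
"""Minimal bucket-wide MapReduce request against Riak's /mapred endpoint."""
import json
import urllib.request

RIAK = "http://127.0.0.1:8098"   # placeholder node address
job = {
    "inputs": "goog",            # whole bucket as input (placeholder bucket name)
    "query": [
        {"map": {"language": "javascript",
                 "name": "Riak.mapValuesJson",  # built-in: JSON-decode each stored value
                 "keep": True}}
    ]
}

req = urllib.request.Request(
    RIAK + "/mapred",
    data=json.dumps(job).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.loads(resp.read()), indent=2))

One general gotcha with this kind of job: Riak.mapValuesJson JSON-parses every stored value, so any object whose body is not valid JSON makes the map phase error on that object even though the others map fine.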
Riak Java Client KV Search pagination and row limit support?
Hi Brian,

Any progress on supporting pagination and row limits in the Java client? We are moving into the performance-testing phase; I've loaded 30 million objects into Riak so far, and Riak Search returns so many objects for wildcard searches that the JVM throws an out-of-memory error with no way to recover gracefully. Any chance of addressing this in the future? The Solr API does have this support, so I don't know how easy or difficult it would be to add to the HTTP layer. Your input is highly appreciated.

Thanks.

--
Lei
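Until the Java client grows native support, one workaround is to page through the Solr-compatible HTTP interface directly. A sketch, assuming /solr/<index>/select honours the Solr-style q, start, rows, and wt parameters and returns a Solr-shaped JSON response (the index name, host, and query are placeholders):

#!/usr/bin/env python3
"""Page through Riak Search results via the Solr-compatible HTTP interface."""
import json
import urllib.parse
import urllib.request

RIAK = "http://127.0.0.1:8098"   # placeholder node address
INDEX = "users_info"             # placeholder search index / bucket
PAGE_SIZE = 100

def search_page(query, start, rows=PAGE_SIZE):
    params = urllib.parse.urlencode({
        "q": query,        # placeholder query; keep wildcards as narrow as possible
        "start": start,    # offset of the first result to return
        "rows": rows,      # maximum results per page
        "wt": "json",      # ask for JSON (drop for the default XML response)
    })
    url = "%s/solr/%s/select?%s" % (RIAK, INDEX, params)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

start = 0
while True:
    page = search_page("name:smith*", start)
    docs = page["response"]["docs"]
    if not docs:
        break
    print("fetched %d results starting at %d (of %s total)"
          % (len(docs), start, page["response"].get("numFound", "?")))
    start += len(docs)

Paging this way keeps each response bounded by rows, so the client never has to materialize the full wildcard result set in memory.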