Hi Patrick,

You may be running out of ports, which Erlang uses for TCP sockets - try increasing ERL_MAX_PORTS in etc/vm.args.
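For reference, the relevant vm.args line looks something like the sketch below. The 64000 value is only an illustrative figure, not a tuned recommendation, and the node has to be restarted for it to take effect:

```
## etc/vm.args - raise the Erlang port limit
## (64000 below is only an example value)
-env ERL_MAX_PORTS 64000
```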
Cheers,
Jon
Basho Technologies

On Mon, Sep 26, 2011 at 12:17 PM, Patrick Van Stee <vans...@highgroove.com> wrote:

> We're running a small, 2-node Riak cluster (on 2 m1.large boxes) using the
> LevelDB backend and have been trying to write ~250 keys a second to it. With
> a small dataset everything was running smoothly. However, after storing
> several hundred thousand keys, some performance issues started to show up.
>
> * We're running out of file descriptors, which is causing nodes to crash
> with the following error:
>
> 2011-09-24 00:23:52.097 [error] <0.110.0> CRASH REPORT Process [] with 0
> neighbours crashed with reason: {error,accept_failed}
> 2011-09-24 00:23:52.098 [error] <0.121.0> application: mochiweb, "Accept
> failed error", "{error,emfile}"
>
> Setting the max_open_files limit in the app.config doesn't seem to help.
>
> * Writes have slowed down by an order of magnitude. I even set the n_val,
> w, and dw bucket properties to 1 without any noticeable difference. We also
> switched to using protocol buffers to make sure there wasn't any extra
> overhead from using HTTP.
>
> * Running MapReduce jobs that use a range query on a secondary index
> started returning an error, {"error":"map_reduce_error"}, once our dataset
> increased in size.
> Feeding a list of keys works fine, but querying the index for keys seems
> to be timing out:
>
> 2011-09-26 16:37:57.192 [error] <0.136.0> Supervisor riak_pipe_fitting_sup
> had child undefined started with {riak_pipe_fitting,start_link,undefined} at
> <0.3497.0> exit with reason
> {timeout,{gen_server,call,[{riak_pipe_vnode_master,'riak@10.206.105.52'},{return_vnode,{'riak_vnode_req_v1',502391187832497878132516661246222288006726811648,{raw,#Ref<0.0.1.88700>,<0.3500.0>},{cmd_enqueue,{fitting,<0.3499.0>,#Ref<0.0.1.88700>,#Fun<riak_kv_mrc_pipe.0.133305895>,#Fun<riak_kv_mrc_pipe.1.125635227>},{<<"ip_queries">>,<<"uaukXZn5rZQ0LrSED3pi-fE-JjU">>},infinity,[{502391187832497878132516661246222288006726811648,'riak@10.206.105.52'}]}}}]}} in context child_terminated
>
> Is anyone familiar with these problems, or is there anything else I can try
> to increase the performance when using LevelDB?
>
> _______________________________________________
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
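On the MapReduce side, the {timeout,{gen_server,call,...}} exit suggests the job is running past the default MapReduce timeout rather than failing outright. The job spec posted to /mapred accepts a top-level "timeout" field in milliseconds, so raising it may be worth a try. A sketch of such a job against the ip_queries bucket from the log above (the index name, range values, and map phase are placeholders, not Patrick's actual query):

```
{
  "inputs": {
    "bucket": "ip_queries",
    "index": "created_at_int",
    "start": 1316800000,
    "end": 1316900000
  },
  "query": [
    {"map": {"language": "javascript", "name": "Riak.mapValuesJson"}}
  ],
  "timeout": 300000
}
```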
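One note on the crashes: {error,emfile} means the OS-level file-descriptor limit was hit, which is separate from LevelDB's max_open_files setting in app.config - raising the latter cannot help if the process ulimit is the bottleneck. A rough sketch of checking and raising the limit in the shell that starts Riak (on Linux a persistent change usually goes through /etc/security/limits.conf instead):

```shell
# Show the soft open-file limit the Riak process would inherit
ulimit -Sn

# Raise the soft limit up to the hard limit before starting Riak
ulimit -Sn "$(ulimit -Hn)"
ulimit -Sn
```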