I added a 6th node to a 5-node cluster, hoping to rebalance the cluster
since I was approaching maximum disk usage on the original 5 nodes. It
looks like the rebalancing is not taking place, and I see a whole bunch
of these in the console logs:
688728495783936 was terminated for reason: {shutdown,ma
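
For reference, handoff progress and ring membership can be checked directly
on any node; this is a minimal sketch assuming a standard riak-admin
installation (exact output varies by Riak version):

riak-admin member-status   # ring ownership per node; the new node should be claiming partitions
riak-admin ring-status     # whether the ring has converged and which ownership changes are pending
riak-admin transfers       # active and pending handoffs; stalled transfers show up here
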
I have a 5-node Riak cluster running on AWS. I noticed that disk usage is
now at 80% on all nodes. I started deleting content, but it doesn't appear
to be making much of a dent. If I add a 6th node, will it start receiving
all new content until it has approximately the same usage as the original
5 nodes?
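
Riak spreads data by consistent hashing over the ring, so a properly joined
node takes ownership of a share of the existing partitions and receives
their data via handoff, rather than only absorbing new writes. A hedged
sketch of the usual join sequence, assuming Riak 1.2+ style clustering and
a placeholder node name:

riak-admin cluster join riak@node1.internal   # run on the new node; riak@node1.internal is a placeholder for an existing member
riak-admin cluster plan                       # review the proposed partition ownership changes
riak-admin cluster commit                     # commit the plan; handoff then moves existing data to the new node
riak-admin member-status                      # ring ownership should converge toward roughly 1/6 per node

Note also that if the backend is Bitcask or LevelDB, deleted data is only
reclaimed after a merge or compaction, which may be why the deletions have
not made a visible dent yet.
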
My Riak installation has been running successfully for about a year. This
week nodes suddenly started crashing at random. The machines have plenty of
memory and free disk space, and looking in the ring directory nothing
appears to be amiss:
[ec2-user@ip-10-196-72-247 ~]$ ls -l /vol/lib/riak/ring
tot
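
When nodes die without obvious memory or disk pressure, the actual exit
reason is usually in the node's logs or the Erlang crash dump rather than
in the ring directory. A hedged checklist, assuming default package log
locations (paths may differ on an install rooted at /vol):

riak ping                              # is the node responding at all?
tail -n 200 /var/log/riak/console.log  # console and error logs usually name the process that died
tail -n 200 /var/log/riak/error.log
tail -n 200 /var/log/riak/crash.log
ls -l /var/log/riak/erl_crash.dump     # written if the whole VM went down; location is set by ERL_CRASH_DUMP in vm.args
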
I am continuously getting the following types of errors in my Riak logs:
2014-01-27 00:06:39.735 [error] <0.220.0> Supervisor riak_pipe_builder_sup
had child undefined started with {riak_pipe_builder,start_link,undefined}
at <0.18590.125> exit with reason
{{modfun,riak_search,mapred_search,[<<"Med
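
The {modfun,riak_search,mapred_search,...} tuple in that exit reason is the
input specification of a MapReduce job fed by Riak Search results, so each
of these errors corresponds to a search-driven MapReduce query whose pipe
failed to build. A hedged reconstruction of the kind of request that
produces such an input spec (the bucket, query string, and map phase below
are placeholders, not recovered from the truncated log):

curl -XPOST http://127.0.0.1:8098/mapred \
  -H 'Content-Type: application/json' \
  -d '{"inputs":{"module":"riak_search","function":"mapred_search","arg":["mybucket","field:value"]},
       "query":[{"map":{"language":"javascript","name":"Riak.mapValuesJson","keep":true}}]}'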