Hi Riak users,

Before adding new nodes, the cluster had only five nodes. The member list is 
as below:
10.21.136.66, 10.21.136.71, 10.21.136.76, 10.21.136.81, 10.21.136.86.
We did not set up an HTTP proxy for the cluster, so only one node of the 
cluster provides the HTTP service, and the CPU load is always high on that node.

After that, I added four nodes (10.21.136.[91-94]) to the cluster. During the 
ring/data rebalance, each new node failed (riak stopped) because its disks 
were 100% full.
I had set multiple disk paths in the "data_root" parameter in 
'/etc/riak/app.config'. Each disk is only 580MB in size.
As you know, the Bitcask storage engine does not support multiple disk paths: 
once one of the disks is 100% full, it cannot switch to the next idle disk, so 
the "riak" service goes down.
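
For comparison, the stock single-directory Bitcask configuration looks like 
this (a sketch; the path shown is the Riak default, adjust to your layout):

```erlang
%% /etc/riak/app.config — Bitcask expects one data_root directory,
%% not a list of disk paths.
{bitcask, [
    {data_root, "/var/lib/riak/bitcask"}
]},
```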

After that, I removed the four newly added nodes from an active node with 
"riak-admin cluster leave riak@10.21.136.91" (and likewise for the other three),
then stopped the "riak" service on the new nodes and reformatted them with LVM 
disk management (binding the six disks into one volume group).
I replaced the "data_root" parameter with a single folder and started the 
"riak" service again. After that, the cluster began the data rebalance again.
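
The LVM rebuild on each new node was roughly the following (device names 
/dev/sdb-/dev/sdg and the mount point here are placeholders, not my exact 
setup):

```shell
# Initialize the six disks and bind them into one volume group
pvcreate /dev/sd{b,c,d,e,f,g}
vgcreate riak_vg /dev/sd{b,c,d,e,f,g}

# One logical volume spanning all free space, formatted and mounted
lvcreate -n riak_lv -l 100%FREE riak_vg
mkfs.ext4 /dev/riak_vg/riak_lv
mount /dev/riak_vg/riak_lv /var/lib/riak
```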
That's the whole story.

My questions are as follows:
1. What is the current status of the whole cluster? Is it still rebalancing 
the data?
2. There are many errors like the following in one node's error log. How 
should I handle them?
2015-08-05 01:38:59.717 [error] 
<0.23000.298>@riak_core_handoff_sender:start_fold:262 ownership_transfer 
transfer of riak_kv_vnode from 'riak@10.21.136.81' 
525227150915793236229449236757414210188850757632 to 'riak@10.21.136.94' 
525227150915793236229449236757414210188850757632 failed because of enotconn
2015-08-05 01:38:59.718 [error] 
<0.195.0>@riak_core_handoff_manager:handle_info:289 An outbound handoff of 
partition riak_kv_vnode 525227150915793236229449236757414210188850757632 was 
terminated for reason: {shutdown,{error,enotconn}}

During the last 5 days, there have been no changes in the "riak-admin 
member-status" output.
3. How can I accelerate the data rebalance?
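
For reference, these are the riak-admin commands I know of for checking and 
tuning the rebalance (assuming a Riak 1.3+/1.4-era riak-admin; please correct 
me if any of this is wrong):

```shell
# Run on any cluster node
riak-admin member-status      # ring ownership percentage per node
riak-admin ring-status        # pending ownership changes, unreachable nodes
riak-admin transfers          # active and pending handoffs

# Staged cluster changes (e.g. "cluster leave") only take effect
# after they are planned and committed
riak-admin cluster plan
riak-admin cluster commit

# Concurrent-handoff limit; raising it can speed up the rebalance
riak-admin transfer-limit     # show current per-node limit
riak-admin transfer-limit 4   # set the limit cluster-wide
```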

Amao

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
