Here's the answer I received via Riak Support:

According to the stats, nodes 1 and 2 are members of a cluster and share a ring, 
while node 3 has connected to the same cluster but is part of a different ring.
We have seen this issue when there are configuration errors while setting up 
nodes in a cluster, or when trying to start a node that has already joined a 
cluster.
Unfortunately, there is no easy fix, but deleting the ring data on node 3 
(rm -rf /var/lib/riak/ring/*), restarting the node, and then joining it to the 
cluster again should resolve this issue.

Please let us know if you have further questions.

Thanks,
Sowjanya 



----- Original Message -----
> From: "Ray Cote" <rgac...@appropriatesolutions.com>
> To: "riak-users" <riak-users@lists.basho.com>
> Sent: Friday, June 29, 2012 3:57:43 PM
> Subject: Cannot get third node joined to cluster. Says it is already in a 
> cluster of its own.
> 
> Hello all:
> 
> I've managed to get my new deployment into an odd state.
> 
> I have a three-node cluster.
> After installation, I was running the riak-admin join commands.
> Node #3 happened to be down because of a configuration error -- but
> something seems to have been configured.
> 
> Now, when I run stats on my first node, I see
>     "nodename": "riak@192.168.231.231",
>     "connected_nodes": [
>         "riak@192.168.231.232",
>         "riak@192.168.231.233"
>     ],
>     "ring_members": [
>         "riak@192.168.231.231",
>         "riak@192.168.231.232"
>     ],
>     "ring_ownership":
>         "[{'riak@192.168.231.231',32},{'riak@192.168.231.232',32}]",
>   
> On the problematic node (#3), I see:
>     "connected_nodes": [
>         "riak@192.168.231.231",
>         "riak@192.168.231.232"
>     ],
>     "ring_members": [
>         "riak@192.168.231.233"
>     ],
>     "ring_ownership": "[{'riak@192.168.231.233',64}]",
> 
> My understanding is that all three should show in the ring_ownership.
> 
> Now, when I try to add node #3, I'm told it is already a member of a
> cluster.
> When I try to force-remove node #3, I'm told it is not a member of
> the cluster.
> When I try to use leave on node #3 I'm told it is the only member.
> 
> Any recommendations/thoughts on how to correct this?
> (short of re-installing node #3)
> 
> Thanks
> --Ray
> 
> --
> Ray Cote, President Appropriate Solutions, Inc.
> We Build Software
> www.AppropriateSolutions.com 603.924.6079
> 

-- 
Ray Cote, President Appropriate Solutions, Inc. 
We Build Software 
www.AppropriateSolutions.com 603.924.6079 

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
