Hello all:

I've managed to get my new deployment into an odd state.

I have a three-node cluster.
After installation, I was running the riak-admin join commands.
Node #3 happened to be down because of a configuration error -- but something
seems to have been partially configured anyway.
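
For reference, the join I was running was the usual form -- run on the node
that is joining, pointed at an existing member (IPs as in the stats below):

    riak-admin join riak@192.168.231.231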

Now, when I run stats on my first node, I see:

    "nodename": "riak@192.168.231.231",
    "connected_nodes": [
        "riak@192.168.231.232",
        "riak@192.168.231.233"
    ],
    "ring_members": [
        "riak@192.168.231.231",
        "riak@192.168.231.232"
    ],
    "ring_ownership":
        "[{'riak@192.168.231.231',32},{'riak@192.168.231.232',32}]",

On the problematic node (node #3), I see:
    "connected_nodes": [
        "riak@192.168.231.231",
        "riak@192.168.231.232"
    ],
    "ring_members": [
        "riak@192.168.231.233"
    ],
    "ring_ownership": "[{'riak@192.168.231.233',64}]",

My understanding is that all three nodes should show up in ring_ownership on every node.

Now, when I try to add node #3, I'm told it is already a member of a cluster. 
When I try to force-remove node #3, I'm told it is not a member of the cluster. 
When I try to use leave on node #3, I'm told it is the only member.
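
For concreteness, these are roughly the commands I'm running (error messages
paraphrased from memory; join is run on the joining node, if I have that right):

    # on node #3, trying to join it to the cluster
    # ("already a member of a cluster"):
    riak-admin join riak@192.168.231.231

    # on node #1, trying to force node #3 out
    # ("not a member of the cluster"):
    riak-admin force-remove riak@192.168.231.233

    # on node #3 ("it is the only member"):
    riak-admin leave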

Any recommendations/thoughts on how to correct this? 
(short of re-installing node #3)

Thanks
--Ray

-- 
Ray Cote, President Appropriate Solutions, Inc. 
We Build Software 
www.AppropriateSolutions.com 603.924.6079 
