Hi Sebastian,

Wow, that’s a really old version. With modern versions the ring file can be
nuked, at the cost of a lot of transfer activity when you join the nodes back
into a single cluster; you shouldn’t lose data, though. Anyone want to chime
in with an opinion on this version?
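
If you do go the ring-nuke route on a modern version, the rough procedure on
each node is something like the following. Treat it as a sketch only: the
paths assume a default package install (check yours), and back up the ring
directory before deleting anything. On newer releases the re-join step is the
staged 'riak-admin cluster join' / 'plan' / 'commit' sequence instead.

    riak stop
    cp -r /var/lib/riak/ring /var/lib/riak/ring.bak   # keep a backup first
    rm /var/lib/riak/ring/riak_core_ring.*            # drop the ring metadata
    riak start
    riak-admin join riak@192.168.0.19                 # re-join to one chosen node

Then watch handoff finish with 'riak-admin transfers' before pointing clients
back at the cluster.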

Bryan

On 20 Aug 2014, at 20:28, Sebastian Wittenkamp <swittenk...@vmware.com> wrote:

> Hi Bryan, thanks for getting back to me.
> 
> We are running riak as a service on CentOS.
> 
> We are running individual VMs with a riak node on each VM. 
> 
> We are running version 1.0.0 (yeah, I know...).
> 
> Regarding the -name parameter, it may be helpful to understand what happened 
> here:
> 
> One of our customers cloned 3 VMs running a riak cluster. He accidentally 
> booted up the three new VMs with the network interfaces hot before he had a 
> chance to re-IP them. The nodes joined themselves to the existing cluster and 
> things went south. He changed the -name parameter to be something distinct 
> for each node, but that may have been done after the nodes were already 
> joined to the cluster.
> 
> When we do a 'riak-admin member_status' we see output that looks like this:
> 
> ================================= Membership ==================================
> Status     Ring    Pending    Node
> -------------------------------------------------------------------------------
> valid      25.0%     25.0%    'riak@192.168.0.19'
> valid      25.0%     25.0%    'riak@192.168.0.20'
> valid      25.0%     25.0%    'riak@192.168.0.21'
> valid      25.0%     25.0%    'riak@192.168.0.22'
> valid      25.0%     25.0%    'riak@192.168.0.22'
> valid      25.0%     25.0%    'riak@192.168.0.22'
> 
> I looked at the docs you sent me and I think the version we are running is 
> too old to have the 'riak-admin cluster replace' command. 
> 
> I'm wondering - if we nuke the ring metadata on all the nodes, that shouldn't 
> cause any data loss, correct?
> 
> Thanks so much! Let me know if there is any other information I can provide.
> 
> From: Bryan Hunt <bh...@basho.com>
> Sent: Wednesday, August 20, 2014 1:08 AM
> To: Sebastian Wittenkamp
> Subject: Re: How to call riak_core_ring:remove_member/3 from erlang shell?
>  
> How are you running Riak?
> Are you running individual VMs, using 'make devrel', or something else?
> What version are you running?
> Have you set the -name parameter in vm.args to a different value for each 
> node in your cluster?
> This page gives information about ring manipulation and should provide you 
> with what you need to get back up and running:
> http://docs.basho.com/riak/latest/ops/building/basic-cluster-setup/
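> For reference, the -name line in vm.args (usually /etc/riak/vm.args on a 
> package install - an assumption, check your layout) looks like this, and it 
> must be unique per node:
> 
>     -name riak@192.168.0.19
> 
> Cloned VMs that come up with identical -name values and IPs will collide 
> with the existing cluster.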
> On 19 Aug 2014 22:54, "Sebastian Wittenkamp" <swittenk...@vmware.com> wrote:
> Hello all, riak shell newbie here. I have a cluster running Riak 1.0.0 which 
> is showing duplicate entries in its ring_member list. 
> 
> E.g. I have 'riak@192.168.10.22' listed multiple times when I do 'riak-admin 
> member_status'. If I tell the node to leave or force-remove it, only one 
> entry is removed from the list.
> 
> From looking at the docs it appears that there's a function, 
> http://basho.github.io/riak_core/riak_core_ring.html#remove_member-3, which 
> can be used to remove a member from the ring. I'm wondering how to call that 
> from an Erlang shell, and also whether that's the best/only option?
> 
> Basically, I just want to forcibly remove the node through any means 
> necessary. Please let me know if more information is needed. Thanks in 
> advance.
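> 
> My best guess at the call, from 'riak attach' on one of the nodes, is 
> something like the lines below - totally untested on my end, and my 
> understanding is that set_my_ring/1 only changes the local node's copy of 
> the ring, so gossip from the other nodes may undo it:
> 
>     {ok, Ring} = riak_core_ring_manager:get_my_ring(),
>     NewRing = riak_core_ring:remove_member(node(), Ring, 'riak@192.168.0.22'),
>     riak_core_ring_manager:set_my_ring(NewRing).
> 
> Does that look right, or is there a safer way?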
> 
> _______________________________________________
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
