>>> NOTE: Applying these changes will result in 1 cluster transition
…not do it, I'll try switching to claim v3.
If my logic is correct, then you are also correct: it appears this problem
has been present since a previous cluster transition. I reviewed the logs
from the previous transition and I did not get "WARNING: Not all replicas
will be on distinct nodes".
On …:49, Daniel Miller wrote:
> I have a 6 node cluster (now 7) with ring size 128. On adding the most
> recent node I got the WARNING: Not all replicas will be on distinct nodes.
> After the initial plan I ran the following sequence many times, but always
> got the same plan output:
I have a 6 node cluster (now 7) with ring size 128. On adding the most
recent node I got the WARNING: Not all replicas will be on distinct nodes.
After the initial plan I ran the following sequence many times, but always
got the same plan output:
sudo riak-admin cluster clear && \
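The tail of that sequence is truncated above; a plausible shape for such a
clear/join/plan loop, with a hypothetical node name, would be:

    # Run on the joining node: clear staged changes, re-stage the join to an
    # existing member (riak@node01 is a hypothetical name), and re-plan
    sudo riak-admin cluster clear && \
    sudo riak-admin cluster join riak@node01 && \
    sudo riak-admin cluster plan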
…will use the version 3 claim functions.
Joe
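For reference, a sketch of how the version 3 claim functions are typically
selected in the riak_core section of app.config (verify the setting names
against your Riak release before relying on them):

    %% app.config -- riak_core section
    %% Use the v3 claim algorithm when planning partition ownership
    {riak_core, [
        {wants_claim_fun,  {riak_core_claim, wants_claim_v3}},
        {choose_claim_fun, {riak_core_claim, choose_claim_v3}}
    ]}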
From: Guillermo
Date: Wednesday, August 7, 2013 5:17 AM
To:
Subject: WARNING: Not all replicas will be on distinct nodes (with 5 nodes)
Hi. I have seen this warning before with clusters that did not have enough nodes.
In this case, the environment is Amazon EC2:
"roles:riak" "riak-admin cluster join riak@node01"
ssh node01 riak-admin cluster plan
ssh node01 riak-admin cluster commit
In the plan phase, it already says:
WARNING: Not all replicas will be on distinct nodes
And after commit, riak-admin diag confirms it:
[warning] The follow…
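For anyone reproducing this, these standard riak-admin subcommands can be
used to inspect the placement the plan warned about (subcommand names are
standard; exact output varies by Riak version):

    riak-admin member-status   # per-node ring ownership percentages
    riak-admin ring-status     # ring membership and ownership handoff state
    riak-admin diag            # riaknostic checks, including replica placement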
On …:06 PM, Drew Broadley wrote:
> Hi there,
>
> Could someone please explain in more detail what the following warning
> means:
>
> WARNING: Not all replicas will be on distinct nodes
>
> My assumption is that data is not cleanly replicated across nodes; if one
> goes down, data could be lost.
Hi there,
Could someone please explain in more detail what the following warning
means:
WARNING: Not all replicas will be on distinct nodes
My assumption is that data is not cleanly replicated across nodes; if one
goes down, data could be lost.
Cheers,
Drew Broadley
PAPERKUT