Hello Jonathan,

No, the new node is not a seed in my cluster.

When I ran nodetool bootstrap resume, it reported:
Node is already bootstrapped.
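
In case it helps, here is a quick way to double-check the join state on the new node (just a sketch, run locally on that node):

nodetool netstats | grep -i mode

A node that has joined normally shows "Mode: NORMAL".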

Cheers,

Bertrand

On Sun, Nov 20, 2016 at 1:43 PM, Jonathan Haddad <j...@jonhaddad.com> wrote:

> Did you add the new node as a seed? If you did, it wouldn't bootstrap, and
> you should run repair.
> On Sun, Nov 20, 2016 at 10:36 AM Bertrand Brelier <
> bertrand.brel...@gmail.com> wrote:
>
>> Hello everybody,
>>
>> I am running a Cassandra 3.0.10 cluster.
>>
>> I recently added a new node, bringing the cluster to 3 nodes.
>>
>> I am using a replication factor of 3, so I expected each node to hold a
>> copy of all the data:
>>
>> CREATE KEYSPACE mydata WITH replication = {'class': 'SimpleStrategy',
>> 'replication_factor': '3'}  AND durable_writes = true;
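>>
>> For reference, that definition can be re-checked from any node with a
>> one-liner (a sketch; cqlsh defaults to the local node):
>>
>> cqlsh -e "DESCRIBE KEYSPACE mydata;"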
>>
>> But the new node holds far less data than the other 2 (with RF=3 on 3
>> nodes, each node should carry roughly the full data set):
>>
>> Datacenter: datacenter1
>> =======================
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address          Load      Tokens  Owns (effective)  Host ID  Rack
>> UN  XXX.XXX.XXX.XXX  53.28 GB  256     100.0%            xxxxxx   rack1
>> UN  XXX.XXX.XXX.XXX  64.7 GB   256     100.0%            xxxxxx   rack1
>> UN  XXX.XXX.XXX.XXX  1.28 GB   256     100.0%            xxxxxx   rack1
>>
>>
>> On the new node:
>>
>> /XXXXXX/data-6d674a40efab11e5b67e6d75503d5d02/:
>> total 1.2G
>>
>> On one of the old nodes:
>>
>> /XXXXXX/data-6d674a40efab11e5b67e6d75503d5d02/:
>> total 52G
>>
>>
>> I am monitoring the amount of data on each node, and all three grow at
>> the same rate. So I suspect that new writes are replicated to all 3
>> nodes, but the old data stored on the first 2 nodes was never streamed
>> to the new node.
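>>
>> One way I can spot-check that from cqlsh (just a sketch; the table and
>> key below are placeholders for one of my old rows) is to read at
>> CONSISTENCY ALL with tracing on, and check in the trace whether the new
>> node served the row:
>>
>> TRACING ON;
>> CONSISTENCY ALL;
>> SELECT * FROM mydata.old_table WHERE id = <old_key>;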
>>
>> I ran nodetool repair (on each node, one at a time), but the new node
>> still does not have a copy of the old data.
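>>
>> For reference, the repair invocation on each node was along these lines
>> (exact options may have differed; keyspace name as above):
>>
>> nodetool repair mydata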
>>
>> Could you please help me understand why the old data is not replicated
>> to the new node? Please let me know if you need further information.
>>
>> Thank you,
>>
>> Cheers,
>>
>> Bertrand
>>
>>
