AFAIK it should be ok after the repair completed (it was missing all writes
while it was decommissioning and while it was offline, and nobody would
have been keeping hinted handoffs for it, so repair was the right thing to
do).  Unless RF=N you're now due for a cleanup on the other nodes.
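
If it helps, on each of the other nodes something along these lines should do
it (one node at a time; the keyspace argument is optional and "my_keyspace"
below is just a placeholder):

    nodetool cleanup                 # drops data for ranges the node no longer owns
    nodetool cleanup my_keyspace     # or limit it to a single keyspace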

Generally speaking, though, this was probably not a good idea.  When the node
came back online, it rejoined the cluster immediately and would have been
serving client requests without having a consistent view of the data.  A
safer approach would be to wipe the data directory and bootstrap it as a
clean new member.
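
Roughly, and assuming a package install with the default directory layout
(check data_file_directories, commitlog_directory and saved_caches_directory
in cassandra.yaml if yours differs), that would look like:

    sudo service cassandra stop
    # wipe the node's local state so it comes back with no stale data
    sudo rm -rf /var/lib/cassandra/data/* \
                /var/lib/cassandra/commitlog/* \
                /var/lib/cassandra/saved_caches/*
    # auto_bootstrap defaults to true, so as long as the node is not in its
    # own seed list it will stream the data it should own from the other
    # replicas before it starts serving reads
    sudo service cassandra start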

I'm curious what prompted that cycle of decommission then recommission.

On Tue, Feb 10, 2015 at 10:13 PM, Stefano Ortolani <ostef...@gmail.com>
wrote:

> Hi,
>
> I recommissioned a node after decommissioning it.
> That happened (1) after a successful decommission (checked), (2) without
> wiping the data directory on the node, (3) simply by restarting the
> cassandra service. The node now reports itself as healthy and up and
> running.
>
> Given that I issued the "repair" command and patiently waited for its
> completion, can I assume that the cluster and its internals (replicas,
> balance between them) are healthy and "as new"?
>
> Regards,
> Stefano
>
