Yes, including the system keyspace and the commitlog directory.  Then when it
starts, it behaves like a brand-new node and will bootstrap to join the
cluster.
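
For example (a sketch; paths assume a default package install, so adjust
them for your layout):

    sudo service cassandra stop
    # wipe all state, including the system keyspace and the commitlog
    sudo rm -rf /var/lib/cassandra/data/*
    sudo rm -rf /var/lib/cassandra/commitlog/*
    sudo rm -rf /var/lib/cassandra/saved_caches/*
    # with auto_bootstrap at its default of true in cassandra.yaml, the
    # node streams its ranges from the cluster when it comes back up
    sudo service cassandra start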

On Wed, Feb 11, 2015 at 8:56 AM, Stefano Ortolani <ostef...@gmail.com>
wrote:

> Hi Eric,
>
> thanks for your answer. The reason why it got recommissioned was simply
> because the machine got restarted (with auto_bootstrap set to true). A
> cleaner (and correct) recommission would have just required wiping the data
> folder, am I correct? Or would I have needed to change something else in
> the node configuration?
>
> Cheers,
> Stefano
>
> On Wed, Feb 11, 2015 at 6:47 AM, Eric Stevens <migh...@gmail.com> wrote:
>
>> AFAIK it should be ok after the repair completed (it was missing all
>> writes while it was decommissioning and while it was offline, and nobody
>> would have been keeping hinted handoffs for it, so repair was the right
>> thing to do).  Unless RF=N you're now due for a cleanup on the other nodes.
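>>
>> For example, once the repair finishes, on each of the other nodes (this
>> drops replicas of ranges those nodes no longer own):
>>
>>     nodetool cleanup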
>>
>> Generally speaking, though, this was probably not a good idea.  When the
>> node came back online, it rejoined the cluster immediately and would have
>> been serving client requests without having a consistent view of the data.
>> A safer approach would be to wipe the data directory and bootstrap it as a
>> clean new member.
>>
>> I'm curious what prompted that cycle of decommission then recommission.
>>
>> On Tue, Feb 10, 2015 at 10:13 PM, Stefano Ortolani <ostef...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I recommissioned a node after decommissioning it.
>>> That happened (1) after a successful decommission (checked), (2) without
>>> wiping the data directory on the node, and (3) simply by restarting the
>>> cassandra service. The node now reports itself as healthy and up and
>>> running.
>>>
>>> Knowing that I issued the "repair" command and patiently waited for its
>>> completion, can I assume that the cluster and its internals (the replicas,
>>> and the balance between them) are healthy and "as new"?
>>>
>>> Regards,
>>> Stefano
>>>
>>
>>
>
