A-yup. Got burned by this myself some time ago. If you do accidentally try to 
bootstrap a seed node, the solution is to run repair after adding the new node 
but before removing the old one. However, during this time the node will 
advertise itself as owning a range, but when queried, it'll return no data 
until the repair has completed :-(.
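For the archives, the recovery sequence above would look roughly like this — a sketch only, assuming standard nodetool usage (hostnames and keyspace name are made up for illustration; the key point is that repair on the new node must finish before the old node is decommissioned):

```shell
# On the new node that skipped bootstrap because it listed itself as a seed:
# pull its share of the data from the replicas before taking any node out.
nodetool repair my_keyspace

# Only after repair completes on the new node is it safe to retire the old one.
# Run this on the old node being replaced:
nodetool decommission
```

Until that repair finishes, reads routed to the new node for its token range can come back empty, which is exactly the window described above.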

Honestly, with reference to the JIRA ticket, I just don't see a situation where 
the current behaviour would really be useful. It's a nasty thing that you "just 
have to know" when upgrading your cluster - there's no warning, no logging, no 
documentation; just something that you might accidentally do and which will 
manifest itself as random data loss.

/Janne

On 26 Nov 2013, at 21:20, Robert Coli <rc...@eventbrite.com> wrote:

> On Tue, Nov 26, 2013 at 9:48 AM, Christopher J. Bottaro 
> <cjbott...@academicworks.com> wrote:
> One thing that I didn't mention, and I think may be the culprit after doing a 
> lot of mailing list reading, is that when we brought the 4 new nodes into the 
> cluster, they had themselves listed in the seeds list.  I read yesterday that 
> if a node has itself in the seeds list, then it won't bootstrap properly.
> 
> https://issues.apache.org/jira/browse/CASSANDRA-5836
> 
> =Rob 