Load and ownership didn’t correlate nearly as well as I expected. I have lots
and lots of very small records, so I would have expected load to track
effective ownership very closely.
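
For anyone comparing the same way: I was reading the Load and Owns columns
from nodetool status, passing the keyspace name so that Owns reflects
effective ownership under that keyspace's replication settings (my_keyspace
below is just a stand-in for whichever keyspace you care about):

    nodetool status my_keyspace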

I think the moral of the story is that I shouldn’t delete the system directory. 
If I have issues with a node, I should recommission it properly.
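
For the record, the proper flow as I understand it would be something like
this: run

    nodetool decommission

on the node being retired while it is still up, or, if the node is already
dead, start its replacement with the JVM option
-Dcassandra.replace_address=<dead node's IP>. Either way the data gets
streamed to or from the correct replicas instead of leaving stale ranges
behind.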

Robert

On Dec 3, 2014, at 10:23 AM, Eric Stevens <migh...@gmail.com> wrote:

How does the difference in load compare to the effective ownership?  If you 
deleted the system directory as well, you should end up with new ranges, so I'm 
wondering if perhaps you just ended up with a really bad shuffle. Did you run 
removenode on the old host after you took it down (I assume so since all nodes 
are in UN status)?  Is the test node in its own seeds list?
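
The seeds question matters because a node that lists itself as a seed skips
bootstrap entirely: it joins and claims tokens without streaming any data in.
Roughly what I'd check on the wiped node (the config path varies by install,
and the host ID and IPs below are placeholders):

    nodetool removenode <host-id>   # only needed if an old host ID still shows up dead in nodetool status
    grep -A 3 seed_provider /etc/cassandra/cassandra.yaml
    #    - seeds: "10.0.0.1,10.0.0.2"   <- should not include this node's own address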

On Tue Dec 02 2014 at 4:10:10 PM Robert Wille <rwi...@fold3.com> wrote:
I didn’t do anything except kill the server process, delete /var/lib/cassandra, 
and start it back up again. nodetool status shows all nodes as UN, and doesn’t 
display any unexpected nodes.

I don’t know if this sheds any light on the issue, but I’ve added a 
considerable amount of data to the cluster since I did the aforementioned test. 
The difference in size between the nodes is shrinking. The other nodes are 
growing more slowly than the one I recommissioned. That definitely wasn’t
something I expected, and I don’t have an explanation for it either.

Robert

On Dec 2, 2014, at 3:38 PM, Tyler Hobbs <ty...@datastax.com> wrote:


On Tue, Dec 2, 2014 at 2:21 PM, Robert Wille <rwi...@fold3.com> wrote:
As a test, I took down a node, deleted /var/lib/cassandra and restarted it.

Did you decommission or removenode it when you took it down?  If you didn't,
the "old" node is still in the ring and still affects replication.
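
Concretely, something along these lines:

    nodetool decommission      # run on the node itself while it's still up;
                               # streams its data to the new replica owners
    nodetool removenode <ID>   # run from any live node once the host is gone;
                               # <ID> is the Host ID from nodetool status

Either one removes the old node's tokens from the ring so replication can
settle correctly.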


--
Tyler Hobbs
DataStax <http://datastax.com/>

