Robert, is it possible you've changed the partitioner during the upgrade?
(e.g. from RandomPartitioner to Murmur3Partitioner?)
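
For reference, a quick way to confirm which partitioner each node is
running (a sketch; the config path below assumes a default package
install, so adjust to your layout):

    # prints the cluster name, snitch, and partitioner
    nodetool describecluster

    # or check the setting directly on each node
    grep '^partitioner' /etc/cassandra/cassandra.yaml

Changing the partitioner remaps every token, so nodes would disagree
about which data they own until that is sorted out.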


On Sat, Jan 4, 2014 at 9:32 PM, Mullen, Robert <robert.mul...@pearson.com> wrote:

> The nodetool repair command (which took about 8 hours) seems to have
> synced the data in us-east; all 3 nodes are returning 59 for the count
> now.  I'm wondering if this has more to do with changing the replication
> factor from 2 to 3, and with how 2.0.2 reports the % owned, than with the
> upgrade itself.  I still don't understand why it's reporting 16% for each
> node when 100% seems to reflect the state of the cluster better.  I
> didn't find any info in those issues you posted that would explain the %
> changing from 100% -> 16%.
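>
> One possible explanation worth checking (a sketch; "app" is a
> placeholder for the actual keyspace name): without a keyspace argument,
> nodetool status reports raw token ownership, which across 6 nodes in two
> 3-node rings comes out to roughly 16.7% per node.  With a keyspace it
> reports effective ownership, which at RF=3 per ring should read 100%:
>
>     nodetool status app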
>
>
> On Sat, Jan 4, 2014 at 12:26 PM, Mullen, Robert <robert.mul...@pearson.com> wrote:
>
>> From cqlsh:
>> cqlsh> SELECT COUNT(*) FROM topics;
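>>
>> A variant that may make the per-node comparison more telling (assuming
>> this cqlsh build supports the CONSISTENCY command): by default cqlsh
>> reads at CONSISTENCY ONE, so the count can come from whichever single
>> replica responds, and replicas can disagree until repair runs.
>>
>> cqlsh> CONSISTENCY ALL;
>> cqlsh> SELECT COUNT(*) FROM topics;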
>>
>>
>>
>>> On Sat, Jan 4, 2014 at 12:18 PM, Robert Coli <rc...@eventbrite.com> wrote:
>>
>>> On Sat, Jan 4, 2014 at 11:10 AM, Mullen, Robert <robert.mul...@pearson.com> wrote:
>>>
>>>> I have a column family called "topics" which has a count of 47 on one
>>>> node, 59 on another, and 49 on the third.  It was my understanding that
>>>> with a replication factor of 3 and 3 nodes in each ring, every node
>>>> should hold a full copy of the data, so I could lose a node in the ring
>>>> with no loss of data.  Based on that, I would expect the counts across
>>>> the nodes to all be 59 in this case.
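>>>>
>>>> For context, raising the replication factor doesn't move any data by
>>>> itself; a sketch of the usual sequence (the keyspace name "app" and
>>>> NetworkTopologyStrategy with a DC named "us-east" are assumptions):
>>>>
>>>> cqlsh> ALTER KEYSPACE app WITH replication =
>>>>    ...   {'class': 'NetworkTopologyStrategy', 'us-east': 3};
>>>>
>>>> followed by "nodetool repair app" on each node to stream the data the
>>>> new replicas are missing.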
>>>>
>>>
>>> In what specific way are you counting rows?
>>>
>>> =Rob
>>>
>>
>>
>


-- 
Or Sher
