little more?
As somewhat of a conclusion to this thread: we have resolved the major issue
having to do with the hotspots. We were balanced across availability zones in
AWS/EC2 us-east (a, b, c) in terms of the number of nodes in our cluster, but we
didn't alternate by rack in token order. We are using t
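
To make that concrete, here is a minimal sketch (Python, with assumed zone names;
not the actual layout of the cluster in question) of what alternating zones/racks
in token order looks like when the tokens themselves are evenly spaced:

ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]   # assumed AZ names
NUM_NODES = 20

for i in range(NUM_NODES):
    token = i * (2 ** 127) // NUM_NODES   # evenly spaced RandomPartitioner token
    zone = ZONES[i % len(ZONES)]          # alternate a, b, c, a, b, c, ... in token order
    print("node%02d  %-10s  initial_token=%d" % (i + 1, zone, token))

With NetworkTopologyStrategy placing replicas on distinct racks as it walks the
ring, a layout that does not alternate like this can pile extra replicas onto a
handful of nodes, which matches the hotspot symptom described in this thread.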
On Aug 23, 2011, at 3:43 AM, aaron morton wrote:
Dropped messages on the ReadRepair stage are odd. Are you also dropping mutations?
There are two tasks performed on the ReadRepair stage: the digests are compared
on this stage, and the repair itself also happens on this stage. Comparing digests
is quick. Doing the repair could take a bit longer, all the cf'
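
Not Cassandra's internals, just a rough sketch (Python, with a made-up response
shape) of the two steps described above: the cheap digest comparison, and the
slower reconciliation that only runs when the digests disagree:

import hashlib

def digest(value):
    # Cheap step: compare hashes of the replica responses rather than the data itself.
    return hashlib.md5(repr(value).encode()).digest()

def read_repair(responses):
    # responses: {replica: (value, timestamp)} -- hypothetical shape, not Cassandra's API
    if len({digest(v) for v, _ in responses.values()}) == 1:
        return []   # all digests match; nothing more to do on this stage

    # Slower step: reconcile (newest timestamp wins) and queue a repair write
    # for every replica whose value differs from the winner.
    winner, _ = max(responses.values(), key=lambda vt: vt[1])
    return [(replica, winner) for replica, (v, _) in responses.items() if v != winner]

responses = {"node1": ("a", 10), "node2": ("b", 12), "node3": ("a", 10)}
print(read_repair(responses))   # -> [('node1', 'b'), ('node3', 'b')]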
We've been having issues recently where, as soon as we start doing heavy writes
(via Hadoop), it really hammers 4 nodes out of 20. We're using the random
partitioner and we've set the initial tokens for our 20 nodes according to the
general spacing formula, except for a few token offsets as we've re
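
For reference, the "general spacing formula" for RandomPartitioner is usually
taken to mean dividing the 0..2**127 token space evenly across the nodes, e.g. (Python):

def initial_tokens(num_nodes):
    # Evenly spaced initial tokens across RandomPartitioner's 0 .. 2**127 range.
    return [i * (2 ** 127) // num_nodes for i in range(num_nodes)]

for node, token in enumerate(initial_tokens(20), start=1):
    print("node%02d  initial_token=%d" % (node, token))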