Thanks for the insight.
On Mon, Aug 2, 2010 at 4:48 PM, Benjamin Black wrote:
> you have insufficient i/o bandwidth and are seeing reads suffer due to
> competition from memtable flushes and compaction. adding additional
> nodes will help some, but i recommend increasing the disk i/o
> bandwidth, regardless.
sent last msg before reading this. it confirms what i said, i/o is
your problem:
On Mon, Aug 2, 2010 at 4:05 PM, Artie Copeland wrote:
> sdb 335.50 0.00 70.50 0.00 3248.00 0.00 46.07 0.78 11.01 7.40 52.20
> sdc 330.00 0.00 70.00 0.50 3180.00
you have insufficient i/o bandwidth and are seeing reads suffer due to
competition from memtable flushes and compaction. adding additional
nodes will help some, but i recommend increasing the disk i/o
bandwidth, regardless.
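For anyone reading along: here is a minimal sketch of how those iostat numbers get interpreted. It assumes the classic 11-column Linux `iostat -x` row layout (rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util); the 50%-util and 10 ms await thresholds are illustrative, not hard rules, and field positions may differ on other iostat versions.

```python
# Sketch: pull await and %util out of an `iostat -x` device row and flag
# disks that look busy enough to starve reads, the way the sdb/sdc rows
# above were read. Assumes the 11-column extended-stats layout.

def parse_iostat_line(line):
    """Return (device, await_ms, pct_util) from one `iostat -x` row."""
    fields = line.split()
    device = fields[0]
    await_ms = float(fields[9])   # average wait per request, in ms
    pct_util = float(fields[11])  # % of time the device was busy
    return device, await_ms, pct_util

def looks_saturated(line, util_threshold=50.0, await_threshold=10.0):
    """Illustrative thresholds only; tune for your hardware."""
    _, await_ms, pct_util = parse_iostat_line(line)
    return pct_util >= util_threshold or await_ms >= await_threshold

# The sdb row from earlier in the thread:
sdb = "sdb 335.50 0.00 70.50 0.00 3248.00 0.00 46.07 0.78 11.01 7.40 52.20"
print(parse_iostat_line(sdb))  # ('sdb', 11.01, 52.2)
print(looks_saturated(sdb))    # True
```

With 52% utilization and 11 ms average waits while memtable flushes and compaction are also writing, reads queue behind that traffic, which matches the symptom described.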
b
On Mon, Aug 2, 2010 at 11:47 AM, Artie Copeland wrote:
> i have a question on what are the signs from cassandra that new nodes
> should be added to the cluster. ...
On Mon, Aug 2, 2010 at 2:39 PM, Aaron Morton wrote:
> You may need to provide some more information on how many reads you're
> sending to the cluster. Also...
>
> How many nodes do you have in the cluster?
>
We have a cluster of 4 nodes.
> When you are seeing high response times on one node, what's the load
> like on the others?
On Mon, Aug 2, 2010 at 2:39 PM, Aaron Morton wrote:
You may need to provide some more information on how many reads you're sending to the cluster. Also...
How many nodes do you have in the cluster?
When you are seeing high response times on one node, what's the load like on the others?
Is the data load evenly distributed around the cluster?
Are your c
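Aaron's "is the data evenly distributed?" question can be checked against the per-node Load figures (e.g. from `nodetool ring`). A minimal sketch: compute the ratio of the heaviest to the lightest node. The 70 GB figure is from this thread; the other node loads below are hypothetical, purely for illustration.

```python
# Sketch: quantify ring imbalance from per-node data loads.
# Only the 70 GB node is from the thread; the rest are made-up examples.

def load_imbalance(loads_gb):
    """Ratio of heaviest to lightest node; ~1.0 means an even ring."""
    loads = list(loads_gb)
    return max(loads) / min(loads)

loads = {"node1": 70.0, "node2": 35.0, "node3": 33.0, "node4": 34.0}
ratio = load_imbalance(loads.values())
print(round(ratio, 2))  # 2.12 -> one node carries roughly twice the data
```

A ratio well above 1 would point at uneven token assignment (or one hot column family) rather than cluster-wide capacity, which is why the question matters before adding nodes.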
On Mon, Aug 2, 2010 at 11:47 AM, Artie Copeland wrote:
i have a question on what are the signs from cassandra that new nodes should be added to the cluster. we are currently seeing long read times from the one node that has about 70GB of data, with 60GB in one column family. we are using a replication factor of 3. i have tracked down the slow to occu