I am running tests again across different numbers of client threads and
nodes, but this time I tweaked some of the timeouts configured for the
nodes in the cluster.  I was able to get better performance on the
nodes at 10 client threads by upping 4 timeout values in cassandra.yaml
to 240000:


   - read_request_timeout_in_ms
   - range_request_timeout_in_ms
   - write_request_timeout_in_ms
   - request_timeout_in_ms

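For reference, this is the fragment of cassandra.yaml after the change
(all four values are in milliseconds, so 240000 ms = 4 minutes; the
stock defaults ship much lower than this):

```yaml
# cassandra.yaml -- the four request timeouts I raised, all in ms.
read_request_timeout_in_ms: 240000
range_request_timeout_in_ms: 240000
write_request_timeout_in_ms: 240000
request_timeout_in_ms: 240000
```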

I did this because of my interpretation of the cfhistograms output on one
of the nodes.

So 3 questions that come to mind:


   1. Did I interpret the histogram information in the Cassandra 2.0.6
   nodetool output correctly?  That is, in the two-column read latency
   output, the left (offset) column is the time in milliseconds and the
   right column is the number of requests that fell into that bucket
   range?
   2. Was it reasonable for me to boost those 4 timeouts, and just those?
   3. What are reasonable timeout values for smaller VM sizes (i.e. 8GB
   RAM, 4 CPUs)?

If anyone has any insight, it would be appreciated.

Thanks,
Diane


On Fri, Jul 18, 2014 at 2:23 PM, Tyler Hobbs <ty...@datastax.com> wrote:

>
> On Fri, Jul 18, 2014 at 8:01 AM, Diane Griffith <dfgriff...@gmail.com>
> wrote:
>
>>
>> Partition Size (bytes)
>> 1109 bytes: 18000000
>>
>> Cell Count per Partition
>> 8 cells: 18000000
>>
>> meaning I can't glean anything about how it partitioned or if it broke a
>> key across partitions from this right?  Does it mean for 18000000 (the
>> number of unique keys) that each has 8 cells?
>>
>
> Yes, your interpretation is correct.  Each of your 18000000 partitions has
> 8 cells (taking up 1109 bytes).
>
>
> --
> Tyler Hobbs
> DataStax <http://datastax.com/>
>