On Mon, Feb 20, 2012 at 12:00 PM, aaron morton <aa...@thelastpickle.com> wrote:

> Aside from iostats..
>
> nodetool cfstats will give you the read and write latency for each CF. This is
> the latency for operations on that node. Check it to see whether latency is
> increasing.
>
> Take a look at nodetool compactionstats to see if compactions are running
> at the same time. The IO is throttled, but if you are on AWS it may not be
> throttled enough.
>
>
Compaction had finished.
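
For anyone following along, the checks boil down to running these on each node (host is a placeholder; the exact cfstats output format varies a bit between Cassandra versions):

    # per-CF read/write latency as measured on this node
    nodetool -h <node-ip> cfstats

    # pending and active compactions
    nodetool -h <node-ip> compactionstats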


> The sweet spot for non-Netflix deployments seems to be an m1.xlarge with
> 16GB. The JVM can have 8GB and the rest can be used for memory-mapping files.
> Here is a good post about choosing EC2 sizes…
> http://perfcap.blogspot.co.nz/2011/03/understanding-and-using-amazon-ebs.html
>

Thanks - good article. I'll go up to m1.xlarge and explore that behaviour.
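
For reference, capping the heap at 8GB is just the usual pair of overrides in conf/cassandra-env.sh (values illustrative - by default the script auto-sizes the heap from system memory), leaving the rest of the RAM to the OS page cache for the mmapped SSTables:

    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="400M"   # rough rule of thumb is ~100MB per core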

cheers



>
> Cheers
>
>   -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/02/2012, at 9:31 AM, Franc Carter wrote:
>
> On Mon, Feb 20, 2012 at 4:10 AM, Philippe <watche...@gmail.com> wrote:
>
>> Perhaps your dataset can no longer be held in memory. Check iostats
>>
>
> I have been flushing the key cache and dropping the Linux disk caches
> before each run to avoid just measuring reads served from memory.
>
> One possibility I thought of is that successive keys are now 'far
> enough away' from each other that they aren't covered by the previous read,
> and hence the seek penalty has to be paid a lot more often - viable?
>
> cheers
>
>>
>> On 19 Feb 2012 at 11:24, "Franc Carter" <franc.car...@sirca.org.au>
>> wrote:
>>
>>
>>> I've been testing Cassandra - primarily looking at reads/second for our
>>> fairly simple data model - one unique key with a row of columns that we
>>> always request. I've now set up the cluster with m1.large instances (2 CPUs, 8GB).
>>>
>>> I had loaded a month's worth of data and was doing random requests as
>>> a torture test - and getting very nice results. I then loaded another day's
>>> worth of data and repeated the tests while the load was running - still good.
>>>
>>> I then started loading more days and at some point the performance
>>> dropped by close to an order of magnitude ;-(
>>>
>>> Any ideas on what to look for?
>>>
>>> thanks
>>>
>>> --
>>> *Franc Carter* | Systems architect | Sirca Ltd
>>> franc.car...@sirca.org.au | www.sirca.org.au
>>> Tel: +61 2 9236 9118
>>>  Level 9, 80 Clarence St, Sydney NSW 2000
>>> PO Box H58, Australia Square, Sydney NSW 1215
>>>
>>>
>
>
> --
> *Franc Carter* | Systems architect | Sirca Ltd
> franc.car...@sirca.org.au | www.sirca.org.au
> Tel: +61 2 9236 9118
>  Level 9, 80 Clarence St, Sydney NSW 2000
> PO Box H58, Australia Square, Sydney NSW 1215
>
>
>
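
PS: for anyone reproducing the cache clearing between test runs mentioned in the quoted thread - the OS side is the standard drop_caches trick, run as root on each node; how the key cache gets cleared depends on the Cassandra version (newer nodetool builds have an invalidatekeycache command, otherwise restarting the node with cache saving disabled achieves the same):

    sync && echo 3 > /proc/sys/vm/drop_caches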


-- 

*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
