With
> heap size = 4 gigs
I would check for GC activity in the logs and consider raising the heap to 8GB, given you have 16GB of RAM. You can also check whether the IO system is saturated (see http://spyced.blogspot.co.nz/2010/01/linux-performance-basics.html), and take a look at nodetool cfhistograms to see how many sstables are involved in each read.
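Roughly, those checks look like this (a sketch only -- adjust paths and names for your install; the host, keyspace and column family names are placeholders):

    # in conf/cassandra-env.sh (requires a restart): raise the heap
    #   MAX_HEAP_SIZE="8G"
    #   HEAP_NEWSIZE="800M"     # rule of thumb is roughly 100MB per physical core

    # is the IO system saturated? watch %util and await
    iostat -x 5

    # how many sstables does a read touch?
    nodetool -h <host> cfhistograms <keyspace> <column_family>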
I would start by looking at the latency reported on the server, then work back to the client. I may have missed it in the email, but what recent read latency does nodetool cfstats report for the CF?

That's the latency for a single request on a single read thread. The default settings give you 32 read threads, so if you know the latency for a single request and you know you have 32 concurrent read threads, you can get an idea of the maximum read throughput for a single node (a rough worked example with made-up numbers is at the bottom of this message). Once you push above that throughput, the latency for a request starts to include wait time.

It's a bit more complicated than that, because a request for 40 rows turns into 40 read tasks. So if two clients each ask for 40 rows at the same time, there are 80 read tasks to be processed by 32 threads.

Hope that helps.

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 20/05/2012, at 4:10 PM, Radim Kolar wrote:

> On 19.5.2012 at 0:09, Gurpreet Singh wrote:
>> Thanks Radim.
>> Radim, actually 100 reads per second is achievable even with 2 disks.
> it will become worse as rows get fragmented.
>> But achieving them with a really low avg latency per key is the issue.
>>
>> I am wondering if anyone has played with index_interval, and how much of a
>> difference it would make to reads on reducing the index_interval.
> Close to zero, but try it yourself too and post your findings.
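PS. The throughput maths above, as a rough sketch with made-up numbers (plug in whatever cfstats actually reports for your CF):

    # assume cfstats reports ~5ms average read latency (made-up number)
    LATENCY_MS=5
    READ_THREADS=32                                  # concurrent_reads default
    echo $(( READ_THREADS * 1000 / LATENCY_MS ))     # ~6400 reads/sec ceiling for the node
    # two clients each asking for 40 rows = 80 read tasks queued on 32 threads,
    # so some of those tasks wait for two earlier "waves" before they even start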