This may be due to https://issues.apache.org/jira/browse/CASSANDRA-10249 / 
https://issues.apache.org/jira/browse/CASSANDRA-8894 - whether this is really 
the case depends on how much of your data is in page cache, and whether you're 
using mmap. Since the original question was asked by someone using small-RAM 
instances, it's possible. 

We mitigate this by dropping compression_chunk_size to force a smaller buffer 
on reads, so we don't over-read for very small blocks. This has other side 
effects (lower compression ratio, more garbage during streaming), but it 
significantly speeds up read workloads for us.
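
For illustration, on 2.1.x this maps to the per-table chunk_length_kb compression option in CQL; a sketch with hypothetical keyspace/table names and an example value:

    -- default chunk_length_kb is 64; a smaller chunk means less data is
    -- decompressed per read, at the cost of compression ratio
    ALTER TABLE my_ks.my_table
      WITH compression = {'sstable_compression': 'LZ4Compressor',
                          'chunk_length_kb': 4};
    -- existing SSTables keep their old chunk size until rewritten, e.g.
    --   nodetool upgradesstables -a my_ks my_table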


From:  Zhiyan Shao
Date:  Thursday, January 14, 2016 at 9:49 AM
To:  "user@cassandra.apache.org"
Cc:  Jeff Jirsa, "Agrawal, Pratik"
Subject:  Re: Slow performance after upgrading from 2.0.9 to 2.1.11

Praveen, if you search "Read is slower in 2.1.6 than 2.0.14" in this forum, you 
can find another thread I sent a while ago. The perf test I did indicated that 
reads are slower in 2.1.6 than in 2.0.14, so we stayed with 2.0.14.

On Tue, Jan 12, 2016 at 9:35 AM, Peddi, Praveen <pe...@amazon.com> wrote:
Thanks Jeff for your reply. Sorry for the delayed response. We were running 
some more tests and wanted to wait for the results.

So basically we saw higher CPU on 2.1.11 compared to 2.0.9 (see below) for the 
exact same load test. Memory spikes were also more aggressive on 2.1.11.

So we wanted to rule out any of our custom settings, so we ended up doing some 
testing with the Cassandra stress tool against default Cassandra installations. 
Here are the results we saw between 2.0.9 and 2.1.11. Both are default 
installations and both use the Cassandra stress test with the same params, so 
this is the closest apples-to-apples comparison we can get, and our custom 
settings are out of the picture. As you can see, both read and write latencies 
are 30 to 50% worse in 2.1.11 than in 2.0.9.
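
The exact stress invocation isn't quoted here; on 2.1, a mixed run along these lines would produce the 2:1 read/write split and 100000-op totals shown below (host and thread counts are assumptions):

    # assumes the default stress schema was already populated by a prior
    # "cassandra-stress write" run; the node IP is a placeholder
    cassandra-stress mixed ratio\(read=2,write=1\) n=100000 -node 10.0.0.1
    # thread counts (the 16/24/54 threadCount rows below) can either be
    # ramped automatically by stress or pinned with -rate threads=N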

Highlights of the test:
Load: 2x reads and 1x writes
CPU:  2.0.9 (goes up to 25%) compared to 2.1.11 (goes up to 60%)
Local read latency: 0.039 ms for 2.0.9 and 0.066 ms for 2.1.11

Local write latency: 0.033 ms for 2.0.9 vs 0.030 ms for 2.1.11

One observation: as the number of threads increases, 2.1.11 read latencies get 
worse compared to 2.0.9 (see the table below for 24 threads vs 54 threads).

Not sure if anyone has done this kind of comparison before and what their 
thoughts are. I am thinking for this same reason 

 type                   total ops   op/s   pk/s  row/s  mean  med  0.95  0.99  0.999    max  time
2.0.9 Plain
 16 threadCount READ        66854   7205   7205   7205   1.6  1.3   2.8   3.5    9.6   85.3   9.3
 16 threadCount WRITE       33146   3572   3572   3572   1.3    1   2.6   3.3      7  206.5   9.3
 16 threadCount total      100000  10777  10777  10777   1.5  1.3   2.7   3.4    7.9  206.5   9.3
2.1.11 Plain
 16 threadCount READ        67096   6818   6818   6818   1.6  1.5   2.6   3.5    7.9   61.7   9.8
 16 threadCount WRITE       32904   3344   3344   3344   1.4  1.3   2.3     3    6.5   56.7   9.8
 16 threadCount total      100000  10162  10162  10162   1.6  1.4   2.5   3.2      6   61.7   9.8
2.0.9 Plain
 24 threadCount READ        66414   8167   8167   8167     2  1.6   3.7   7.5   16.7    208   8.1
 24 threadCount WRITE       33586   4130   4130   4130   1.7  1.3   3.4   5.4   25.6   45.4   8.1
 24 threadCount total      100000  12297  12297  12297   1.9  1.5   3.5   6.2   15.2    208   8.1
2.1.11 Plain
 24 threadCount READ        66628   7433   7433   7433   2.2  2.1   3.4   4.3    8.4   38.3     9
 24 threadCount WRITE       33372   3723   3723   3723     2  1.9   3.1   3.8   21.9   37.2     9
 24 threadCount total      100000  11155  11155  11155   2.1    2   3.3   4.1    8.8   38.3     9
2.0.9 Plain
 54 threadCount READ        67115  13419  13419  13419   2.8  2.6   4.2   6.4   36.9   82.4     5
 54 threadCount WRITE       32885   6575   6575   6575   2.5  2.3   3.9   5.6   15.9   81.5     5
 54 threadCount total      100000  19993  19993  19993   2.7  2.5   4.1   5.7   13.9   82.4     5
2.1.11 Plain
 54 threadCount READ        66780   8951   8951   8951   4.3  3.9   6.8   9.7   49.4   69.9   7.5
 54 threadCount WRITE       33220   4453   4453   4453   3.5  3.2   5.7   8.2   36.8     68   7.5
 54 threadCount total      100000  13404  13404  13404     4  3.7   6.6   9.2     48   69.9   7.5

From: Jeff Jirsa <jeff.ji...@crowdstrike.com>
Date: Thursday, January 7, 2016 at 1:01 AM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>, Peddi Praveen 
<pe...@amazon.com>
Subject: Re: Slow performance after upgrading from 2.0.9 to 2.1.11

Anecdotal evidence typically agrees that 2.1 is faster than 2.0 (our experience 
was anywhere from 20-60%, depending on workload).

However, it’s not necessarily true that everything behaves exactly the same – 
in particular, memtables are different, commitlog segment handling is 
different, and GC params may need to be tuned differently for 2.1 than 2.0.

When the system is busy, what’s it actually DOING? Cassandra exposes a TON of 
metrics – have you plugged any into a reporting system to see what’s going on? 
Is your latency due to pegged cpu, iowait/disk queues or gc pauses? 
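
A few standard checks that help answer those questions (keyspace/table names are placeholders; the log path is the default install location):

    nodetool tpstats                          # pending/blocked thread-pool stages
    nodetool cfhistograms <keyspace> <table>  # per-table read/write latency distribution
    grep GCInspector /var/log/cassandra/system.log   # long GC pauses
    iostat -x 5                               # disk queue depth, utilization, iowait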

My colleagues spent a lot of time validating different AWS EBS configs (video 
from reinvent at https://www.youtube.com/watch?v=1R-mgOcOSd4), 2.1 was faster 
in almost every case, but you’re using an instance size I don’t believe we 
tried (too little RAM to be viable in production).  c3.2xl only gives you 15G 
of ram – most “performance” based systems want 2-4x that (people running G1 
heaps usually start at 16G heaps and leave another 16-30G for page cache), 
you’re running fairly small hardware – it’s possible that 2.1 isn’t “as good” 
on smaller hardware. 

(I do see your domain, presumably you know all of this, but just to be sure):

You’re using c3, so presumably you’re using EBS – are you using GP2? Which 
volume sizes? Are they the same between versions? Are you hitting your iops 
limits? Running out of burst tokens? Do you have enhanced networking enabled? 
At load, what part of your system is stressed? Are you cpu bound? Are you 
seeing GC pauses hurt latency? Have you tried changing memtable_allocation_type 
-> offheap_objects (available in 2.1, not in 2.0)? 
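
For reference, that is a cassandra.yaml setting (2.1+ only); a minimal sketch:

    # cassandra.yaml (2.1+): store memtable cell data off the Java heap
    memtable_allocation_type: offheap_objects
    # the companion sizing settings (memtable_heap_space_in_mb /
    # memtable_offheap_space_in_mb) default to a fraction of the heap if unset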

Tuning gc_grace is weird – do you understand what it does? Are you overwriting 
or deleting a lot of data in your test (that’d be unusual)? Are you doing a lot 
of compaction?
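
For concreteness, gc_grace_seconds is a per-table CQL setting that controls how long tombstones are kept before compaction may purge them; the 15-minute value mentioned further down the thread would look roughly like this (table name is hypothetical):

    -- 15 minutes = 900 seconds; only safe if repairs run (or deletes are
    -- absent) within that window, otherwise deleted data can reappear
    ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 900;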


From: "Peddi, Praveen"
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, January 6, 2016 at 11:41 AM
To: "user@cassandra.apache.org"
Subject: Slow performance after upgrading from 2.0.9 to 2.1.11

Hi,
We have upgraded Cassandra from 2.0.9 to 2.1.11 in our load test environment 
with pretty much the same yaml settings in both (we removed unused yaml 
settings and renamed a few others), and we have noticed that performance on 
2.1.11 is worse compared to 2.0.9. After more investigation we found that 
performance gets worse as we increase the replication factor on 2.1.11, whereas 
on 2.0.9 performance is more or less the same. Has anything changed 
architecturally as far as replication is concerned in 2.1.11?

All googling only suggested 2.1.11 should be FASTER than 2.0.9, so we are 
obviously doing something different. However, the client code and load test are 
identical in both cases.

Details:
Nodes: 3 EC2 c3.2xlarge
R/W Consistency: QUORUM
Renamed memtable_total_space_in_mb to memtable_heap_space_in_mb and removed 
unused properties from yaml file.
We run aggressive compaction with a low gc_grace (15 mins), but this is true 
for both 2.0.9 and 2.1.11.
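
For reference, the rename above as it would appear in cassandra.yaml (the value shown is just a placeholder):

    # 2.0.x name
    # memtable_total_space_in_mb: 2048
    # 2.1.x replacement for the on-heap portion of memtables
    memtable_heap_space_in_mb: 2048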

As you can see, all p50, p90 and p99 latencies stayed within a 10% difference 
on 2.0.9 when we increased RF from 1 to 3, whereas on 2.1.11 latencies almost 
doubled (reads especially are much slower than writes).

READ                      2.0.9               2.1.11
 # Nodes  RF  # of rows   P50   P90   P99     P50   P90    P99
 3        1   450         306   594   747     425   849   1085
 3        3   450         358   634   877     708  1274   2642

WRITE                     2.0.9               2.1.11
 # Nodes  RF  # of rows   P50   P90   P99     P50   P90    P99
 3        1   10           26    80   179      37   131    196
 3        3   10           31    96   184      46   166    468
Any pointers on how to debug performance issues will be appreciated.

Praveen

