I am benchmarking with the YCSB tool, doing 1k writes.
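
For reference, the runs are driven roughly like this (the workload file, host,
and thread count below are illustrative placeholders, not the exact command):

    bin/ycsb run cassandra-cql -P workloads/workloada \
        -p hosts=<node-ip> -threads 64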

Below are my server specs:
2 sockets
12-core hyper-threaded processors
64GB memory

Cassandra settings:
32GB heap
concurrent_reads: 128
concurrent_writes: 256
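
These correspond to cassandra.yaml entries plus the heap flags in jvm.options;
a quick way to confirm what the running node actually picked up (paths assume
a package install under /etc/cassandra):

    grep -E '^concurrent_(reads|writes):' /etc/cassandra/cassandra.yaml
    ps -ef | grep -oE -- '-Xm[sx][0-9]+[GgMm]' | sort -u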

From what we are seeing, it looks like the kernel flushing writes to disk is
what degrades performance.
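
Here is a sketch of how we can watch the writeback behavior, plus the kind of
vm.dirty_* tuning we have been considering (the values below are illustrative,
not tested recommendations):

    # watch dirty pages accumulate and flush
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
    # start background writeback earlier, in smaller bursts
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10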

[inline image attachment: image001.png]

Please let me know what you think.


From: Jeff Jirsa <jji...@gmail.com>
Sent: Friday, January 5, 2018 5:50 PM
To: user@cassandra.apache.org
Subject: Re: NVMe SSD benchmarking with Cassandra

Second the note about compression chunk size in particular.
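
For example, on a 3.x table that would look like the following (keyspace and
table name are hypothetical; existing SSTables keep the old chunk size until
rewritten, e.g. with nodetool upgradesstables -a):

    cqlsh -e "ALTER TABLE ks.tbl WITH compression = \
        {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};"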
--
Jeff Jirsa


On Jan 5, 2018, at 5:48 PM, Jon Haddad <j...@jonhaddad.com> wrote:
Generally speaking, disable readahead (see the commands below). After that,
it's very likely the issue isn't in your disk settings at all, but in your
Cassandra config or your data model. How are you measuring things? Are you
saturating your disks? What resource is your bottleneck?
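
A minimal sketch of checking and lowering readahead (device name assumed;
values are in 512-byte sectors, so 256 = 128KB):

    blockdev --getra /dev/nvme0n1     # current readahead
    blockdev --setra 8 /dev/nvme0n1   # drop to 4KB for random reads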

*Every* single time I've handled a question like this, without exception, it
has ended up being a mix of incorrect compression settings (use 4K at most),
some crazy readahead setting like 1MB, and terrible JVM settings; together
those are the bulk of the problem.
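
A quick, non-invasive way to see whether GC is where the time is going
(assumes nodetool is available on the node; <pid> is the Cassandra JVM's
process id):

    nodetool gcstats
    jstat -gcutil <pid> 1000    # GC utilization sampled every second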

Without knowing how you're testing things, or seeing *any* metrics whatsoever,
whether C* or OS, it's going to be hard to help you out.
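
Even a minimal capture like this, from both sides, would go a long way
(standard tools; nothing here is specific to any one setup):

    iostat -x 1          # per-device utilization, queue depth, await
    nodetool tpstats     # thread pool backpressure, dropped mutations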

Jon



On Jan 5, 2018, at 5:41 PM, Justin Sanciangco <jsancian...@blizzard.com> wrote:

Hello,

I am currently benchmarking NVMe SSDs with Cassandra and am getting very bad
performance when my workload exceeds the memory size. What mount settings
should be used for NVMe? Right now the SSD is formatted as XFS and uses the
noop scheduler. Are there any additional mount options that should be used?
Any specific kernel parameters that should be set in order to make the best
use of a PCIe NVMe SSD? Your insight would be much appreciated.
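
For reference, this is roughly how the current setup can be inspected, plus
the one mount option we are wondering about (device and mount point below are
examples, not necessarily ours):

    cat /sys/block/nvme0n1/queue/scheduler   # blk-mq NVMe often shows [none]
    mount | grep /var/lib/cassandra          # current XFS mount options
    # candidate: noatime to avoid access-time writes
    mount -o remount,noatime /var/lib/cassandra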

Thank you,
Justin Sanciangco
