Hi All,

Does anyone have any suggestions on how to improve performance for the use
case below?

I have a very simple table with a single partition key and one clustering
key. My app periodically writes new entries to the table and deletes old
ones.
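
Roughly, the schema looks like the sketch below (the column names here are
just placeholders, not the real ones; the structure is equivalent):

CREATE TABLE inputs (
    input_id   text,        -- partition key
    created_at timeuuid,    -- clustering key
    payload    text,
    PRIMARY KEY (input_id, created_at)
);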

There are far more reads than writes on this particular table. All of the
queries are by partition key only, and most of them, more than 99%, are for
partition keys that don't exist.
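
In other words, every read is of this form (using the placeholder names
from the sketch above, with a prepared-statement bind marker):

-- Lookup on the partition key alone; no clustering column restriction
-- and no secondary index involved.
SELECT payload FROM inputs WHERE input_id = ?;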

I have been looking at ways to tune the performance. At present there
aren't that many records, so the whole table would fit in memory nicely.
However, presumably a key cache miss would still require a read anyway?
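
Would tuning along these lines be the right direction? This is just a
sketch assuming 2.0-style table options; I haven't confirmed that either
change actually helps when most lookups are for keys that don't exist:

-- Cache the whole (small) table. Note: 'all' also needs the global row
-- cache enabled via row_cache_size_in_mb in cassandra.yaml.
ALTER TABLE inputs WITH caching = 'all';

-- Tighten the bloom filter so lookups for keys that don't exist are
-- (almost) never forwarded to the SSTable on disk. Takes effect as new
-- SSTables are written.
ALTER TABLE inputs WITH bloom_filter_fp_chance = 0.001;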


Below is a copy of cfstats:

Read Count: 21795477
Read Latency: 0.009573296147636502 ms.
Write Count: 11673
Write Latency: 0.03859205002998373 ms.
Pending Tasks: 0
Table: inputs
SSTable count: 1
Space used (live), bytes: 1655036
Space used (total), bytes: 1662004
SSTable Compression Ratio: 0.6104794381809562
Number of keys (estimate): 10240
Memtable cell count: 3058
Memtable data size, bytes: 1969808
Memtable switch count: 4
Local read count: 21795477
Local read latency: NaN ms
Local write count: 11673
Local write latency: 0.040 ms
Pending tasks: 0
Bloom filter false positives: 183
Bloom filter false ratio: 0.00000
Bloom filter space used, bytes: 13136
Compacted partition minimum bytes: 104
Compacted partition maximum bytes: 149
Compacted partition mean bytes: 149
Average live cells per slice (last five minutes): 0.0
Average tombstones per slice (last five minutes): 0.0


Thanks,

Charlie M
