Still no go. Oddly, I can run a count through Trino fine, but with Spark I
get the timeouts. I don't believe tombstones are the issue:
nodetool cfstats doc.doc
Total number of tables: 82
Keyspace : doc
Read Count: 1514288521
Read Latency: 0.5080819034089475 ms
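(For what it's worth, the tombstone counters aren't in the fields pasted above; the ones I'd check in the same tablestats output are the per-slice numbers. A hedged sketch, assuming a recent nodetool where cfstats is aliased to tablestats:)

```shell
# Sketch only: pull the tombstone-related lines for the doc.doc table.
# Field names below are as reported by recent nodetool versions.
nodetool tablestats doc.doc | grep -i tombstone
# Lines of interest:
#   Average tombstones per slice (last five minutes)
#   Maximum tombstones per slice (last five minutes)
```

If the maximum per-slice figure is small, that supports ruling tombstones out.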
I've tried several different GC settings but am still getting timeouts.
Using OpenJDK 11 with:
-XX:+UseG1GC
-XX:+ParallelRefProcEnabled
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
-XX:InitiatingHeapOccupancyPercent=70
-XX:ParallelGCThreads=24
-XX:ConcGCThreads=24
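(One thing that stands out to me in that list: the JVM's G1 default for ConcGCThreads is roughly ParallelGCThreads/4, so pinning both to 24 is unusually aggressive and can steal cycles from application threads during concurrent marking. A hypothetical jvm11-server.options fragment with a more conventional split, purely as a sketch and not a known fix:)

```
### Hypothetical jvm11-server.options fragment (Cassandra 4.x layout assumed).
### Same flags as above, but with ConcGCThreads near the JVM default ratio.
-XX:+UseG1GC
-XX:+ParallelRefProcEnabled
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
-XX:InitiatingHeapOccupancyPercent=70
-XX:ParallelGCThreads=24
# Default would be ~ParallelGCThreads/4; 24 concurrent GC threads on a
# 40-core box leaves little headroom for request handling during marking.
-XX:ConcGCThreads=6
```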
Machine has 40 cores.