I wanted to share this with the community in the hopes that it might help
someone with their schema design.
I didn't get any red flags early on telling me to limit the number of
columns we use. If anything, the community pushes toward dynamic schemas,
because Cassandra has such a nice online ALTER TABLE.
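To make that concrete, this is the kind of online schema change I mean; the
keyspace, table, and column names below are made up for illustration. Each
ADD is a metadata-only change that doesn't rewrite existing data, which is
exactly why it's so tempting to keep adding columns:

```
-- Hypothetical keyspace/table/columns, just to show the pattern:
-- each new attribute lands as a new column via an online schema change.
ALTER TABLE my_keyspace.events ADD user_agent text;
ALTER TABLE my_keyspace.events ADD referrer text;
```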
Hi,
We're running a C* 2.1.8 cluster in two data centers with 6 nodes each. I've
started running repair sequentially on each node (`nodetool repair
--parallel --in-local-dc`).
While repair is running, the number of SSTables grows dramatically, as do
the pending compaction tasks. It's fine, as the node usually recovers with time.
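For anyone wanting to watch the same symptoms, these are the standard
nodetool subcommands I'd check; the table name is just an example:

```
# Pending and active compaction tasks, with bytes remaining
nodetool compactionstats

# Thread pools: look for pending/blocked CompactionExecutor
# and MemtableFlushWriter
nodetool tpstats

# Per-table SSTable count; watch whether it keeps climbing during repair
nodetool cfstats my_keyspace.events
```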
Maybe compaction is not keeping up, since you are hitting so many SSTables?
Read heavy... are you using LCS?
Plenty of resources... maybe tune to increase the memtable size?
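For the archives, here's roughly what the LCS suggestion looks like as a
schema change; the keyspace/table are placeholders and the size shown is
just the 2.1 default made explicit. Note that switching strategies makes
nodes recompact existing data, so expect a temporary burst of compaction.
Memtable size in 2.1 is tuned via memtable_heap_space_in_mb in
cassandra.yaml.

```
-- Move a read-heavy table from STCS to leveled compaction.
-- LCS bounds the number of SSTables a read can touch, at the cost
-- of more compaction I/O.
ALTER TABLE my_keyspace.events
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 160};
```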
On Sat, Sep 26, 2015 at 9:19 AM, Eric Stevens wrote:
Since you have most of your reads hitting 5-8 SSTables, it's probably
related to that increasing your latency. That makes this look like your
write workload is either overwrite-heavy or append-heavy: data for a
single partition key is being written to repeatedly over long time periods,
and this spreads each partition across many SSTables, so reads have to
merge data from all of them.
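A quick way to confirm the SSTables-per-read numbers on your own cluster
(keyspace and table are placeholders):

```
# Prints per-table histograms, including the "SSTables" column
# (SSTables touched per read) and read/write latency percentiles.
nodetool cfhistograms my_keyspace events
```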