The way compaction works, "x" same-sized SSTables are merged into one new SSTable. This repeats itself, and the SSTables get bigger and bigger.
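To make the growth concrete, here's a rough back-of-envelope sketch (the 4-way merge factor and ~100 MB flush size are just illustrative numbers, not settings from our cluster):

```python
threshold = 4    # hypothetical merge factor (e.g. min_compaction_threshold)
size_gb = 0.1    # hypothetical size of a freshly flushed SSTable (~100 MB)

# Each merge combines `threshold` same-sized SSTables into one file that is
# roughly `threshold` times bigger, so tier sizes grow geometrically.
for tier in range(7):
    print(f"tier {tier}: ~{size_gb:g} GB")
    size_gb *= threshold
```

Under those assumptions you hit the ~100 GB range after only five merge rounds, which is why I'm asking.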
So what is the upper limit? If you aren't deleting data fast enough, won't the SSTable sizes grow indefinitely? I ask because we have some rather large SSTable files (80-100 GB) and I'm starting to worry about future compactions. Second, compacting such large files is an I/O killer. What can be tuned, other than compaction_threshold, to optimize this and keep the files from getting too big? Thanks!