Hi,
Forcing a major compaction (using nodetool compact <http://datastax.com/documentation/cassandra/2.1/cassandra/tools/toolsCompact.html>) with STCS will result in a single sstable (ignoring repair data). However, this seems like it could be a problem for large JBOD setups. For example, if I have 12 disks of 1 TB each, then it seems like this node cannot store more than roughly 1 TB for a single column family, because all of its data will end up in one sstable that can only live on one disk. Is this accurate? The compaction write path docs <http://datastax.com/documentation/cassandra/2.1/cassandra/dml/dml_write_path_c.html> give a bit of hope that Cassandra could split that final sstable across the disks, but I doubt it is able to and want to confirm.
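For what it's worth, this is the kind of rough check I have in mind before forcing the compaction (just a sketch; the data directory paths and keyspace/table names below are made-up examples, not anything official):

#!/usr/bin/env python3
"""Rough sketch: estimate whether the single sstable produced by a major
compaction could fit on any one JBOD data directory.

The directories and table path here are hypothetical; adjust to match
data_file_directories in cassandra.yaml and your keyspace/table layout."""
import os
import shutil

# Hypothetical JBOD data_file_directories (12 x 1 TB disks)
DATA_DIRS = ["/data%d/cassandra/data" % i for i in range(1, 13)]
TABLE_SUBDIR = "mykeyspace/mytable"  # hypothetical keyspace/table directory

def table_size_bytes(data_dir):
    """Sum the sizes of the table's sstable Data.db files under one data dir."""
    total = 0
    table_path = os.path.join(data_dir, TABLE_SUBDIR)
    for root, _dirs, files in os.walk(table_path):
        for name in files:
            if name.endswith("-Data.db"):
                total += os.path.getsize(os.path.join(root, name))
    return total

total_table_bytes = sum(table_size_bytes(d) for d in DATA_DIRS if os.path.isdir(d))

# A post-major-compaction sstable would be roughly the sum of its inputs,
# so check whether that much free space exists on any single disk.
fits_somewhere = False
for d in DATA_DIRS:
    if not os.path.isdir(d):
        continue
    free = shutil.disk_usage(d).free
    print("%s: %.1f GB free" % (d, free / 1e9))
    if free > total_table_bytes:
        fits_somewhere = True

print("table is ~%.1f GB total; the compacted sstable %s fit on a single disk"
      % (total_table_bytes / 1e9, "could" if fits_somewhere else "could NOT"))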
I imagine that RAID/LVM, using LCS, or running multiple Cassandra instances instead of JBOD could be solutions to this (each with their own problems), but I want to verify that this actually is a problem first.
-dan