I think this is a great deal more generally useful than the two scenarios you've outlined. It could / should be possible to use an object store as the primary storage for sstables and rely on local disk as a cache for reads.
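
To make that concrete, here's a rough sketch of the read path I have in mind. Everything below is illustrative only: ObjectStoreClient is a made-up stand-in for whatever S3/GCS client would actually be used, and none of these names correspond to real Cassandra or CEP-36 classes.

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Hypothetical stand-in for an S3/GCS/... client; not a real API.
    interface ObjectStoreClient
    {
        // Downloads the object identified by `key` to `destination`.
        void download(String key, Path destination) throws IOException;
    }

    // Read-through cache: sstable components live in the object store as the
    // source of truth; local disk only holds copies that have already been read.
    class CachingSSTableReader
    {
        private final ObjectStoreClient objectStore;
        private final Path cacheDir;

        CachingSSTableReader(ObjectStoreClient objectStore, Path cacheDir)
        {
            this.objectStore = objectStore;
            this.cacheDir = cacheDir;
        }

        // Returns a FileChannel for the requested sstable component,
        // pulling it down from the object store on a cache miss.
        FileChannel open(String sstableComponent) throws IOException
        {
            Path cached = cacheDir.resolve(sstableComponent);
            if (!Files.exists(cached))
            {
                Files.createDirectories(cached.getParent());
                objectStore.download(sstableComponent, cached);  // cache miss: fetch once
            }
            return FileChannel.open(cached, StandardOpenOption.READ);  // all reads hit local disk
        }
    }

The point being that local disk becomes disposable: losing it only costs you re-downloads, not data.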
I don't know the roadmap for TCM, but imo if it allowed for more stable, pre-allocated ranges that compaction will always be aware of (plus a bunch of plumbing I'm deliberately avoiding the details on), then you could bootstrap a new node by copying S3 directories around rather than streaming data between nodes. That's how we get to 20 TB / node, easy scale up / down, etc., and always-ZCS for non-object-store deployments.

Jon

On 2023/09/25 06:48:06 "Claude Warren, Jr via dev" wrote:
> I have just filed CEP-36 [1] to allow for keyspace/table storage outside of
> the standard storage space.
>
> There are two desires driving this change:
>
> 1. The ability to temporarily move some keyspaces/tables to storage
> outside the normal directory tree to another disk so that compaction can
> occur in situations where there is not enough disk space for compaction and
> the processing of the moved data cannot be suspended.
> 2. The ability to store infrequently used data on slower, cheaper storage
> layers.
>
> I have a working POC implementation [2], though there are some issues still
> to be solved and much logging to be reduced.
>
> I look forward to productive discussions,
> Claude
>
> [1]
> https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-36%3A+A+Configurable+ChannelProxy+to+alias+external+storage+locations
> [2] https://github.com/Claudenw/cassandra/tree/channel_proxy_factory
>
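
P.S. For anyone who hasn't clicked through to the CEP, this is my mental model of the aliasing idea in its simplest form: certain keyspaces/tables get their files opened from an alternate root instead of the normal data directory. The names below are invented for illustration and do not reflect the actual POC, which hooks into Cassandra's ChannelProxy rather than a standalone factory.

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.Map;

    // Hypothetical factory illustrating per-table storage aliasing.
    class AliasingChannelFactory
    {
        // e.g. "archive_ks.big_table" -> /mnt/slow_disk/cassandra
        private final Map<String, Path> aliases;
        private final Path defaultDataDir;

        AliasingChannelFactory(Map<String, Path> aliases, Path defaultDataDir)
        {
            this.aliases = aliases;
            this.defaultDataDir = defaultDataDir;
        }

        // Opens a read channel for a file, rooted either at the aliased
        // location for this keyspace.table or at the default data directory.
        FileChannel openForRead(String keyspace, String table, String fileName) throws IOException
        {
            Path root = aliases.getOrDefault(keyspace + '.' + table, defaultDataDir);
            Path file = root.resolve(Paths.get(keyspace, table, fileName));
            return FileChannel.open(file, StandardOpenOption.READ);
        }
    }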