Re: Re: streaming stuck on joining a node with TBs of data

2020-08-05 Thread onmstester onmstester
OK. Thanks. I'm using STCS. Anyway, IMHO, this is one of the main bottlenecks for using big/dense nodes in Cassandra (which reduce cluster size and data center costs), and it could be almost solved (at least for me) if we could reduce the number of sstables at the receiver side (either by sending

Re: Re: streaming stuck on joining a node with TBs of data

2020-08-04 Thread onmstester onmstester
OK. Thanks. I'm using STCS. Anyway, IMHO, this is one of the main bottlenecks for using big/dense nodes in Cassandra (which reduce cluster size and data center costs), and it could be almost solved (at least for me) if we could reduce the number of sstables at the receiver side (either by sending big
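For readers unfamiliar with the knobs involved: under STCS, per-table compaction thresholds control how many similarly sized sstables must accumulate before a compaction merges them. A minimal config sketch in CQL, assuming a hypothetical keyspace and table `my_ks.my_table` (the threshold values shown are illustrative defaults, not recommendations from this thread):

```sql
-- Hedged sketch: tune STCS so compaction kicks in once 'min_threshold'
-- similarly sized sstables exist, merging at most 'max_threshold' at a time.
-- Keyspace/table names are placeholders.
ALTER TABLE my_ks.my_table
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'min_threshold': '4',
    'max_threshold': '32'
  };
```

Lowering `min_threshold` makes compaction merge small sstables sooner, at the cost of more compaction I/O.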

Re: Re: streaming stuck on joining a node with TBs of data

2020-08-03 Thread Jeff Jirsa
Memtable really isn't involved here: each data file is copied over as-is and turned into a new data file on the receiver; it doesn't pass through the memtable (though it does get deserialized and re-serialized, so it is temporarily in memory, just not in the memtable itself). You can cut down on the number of data files
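For context on why a freshly streamed node can end up with many small data files under STCS: the strategy only merges sstables whose sizes fall into the same size bucket, so many differently sized files sit uncompacted. A simplified Python sketch of size-tiered bucketing (this is not Cassandra's actual implementation; the function name and the 0.5/1.5 bucket bounds are assumptions for illustration):

```python
# Hedged sketch: group sstable sizes into buckets the way a size-tiered
# strategy conceptually does. A size joins an existing bucket when it is
# within [low * avg, high * avg] of that bucket's average size; otherwise
# it starts a new bucket. Only buckets with several members would be
# candidates for compaction.
def stcs_buckets(sizes, low=0.5, high=1.5):
    buckets = []
    for size in sorted(sizes):
        for bucket in buckets:
            avg = sum(bucket) / len(bucket)
            if low * avg <= size <= high * avg:
                bucket.append(size)
                break
        else:
            buckets.append([size])
    return buckets

# Three similarly sized files land in one bucket; the outlier stands alone.
print(stcs_buckets([100, 110, 105, 1000]))  # → [[100, 105, 110], [1000]]
```

Singleton buckets never reach the compaction threshold, which is why streamed nodes can accumulate sstables until sizes converge.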