It doesn’t. However, I like to know things. Thus, I wanted to know what
determines which nodes send their data in the order they do.
Similarly, when the cluster was created, I added the seed nodes in numerically
ascending order and then the other nodes in a similar fashion. So why doesn’t
node
I see. I personally don't know the order; I would suggest you check the
source code and try to understand it from that.
Regarding seed order, I don't know of any significance to the order of the
seeds in the yaml; I don't think you should expect to see them appear
elsewhere in that order.
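For reference, the seeds are just a comma-separated list in cassandra.yaml,
roughly like the sketch below (the addresses are made-up placeholders); as far
as I know the order of entries in that list carries no documented significance:

    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          # Placeholder addresses; the order here is not documented to matter.
          - seeds: "10.0.0.1,10.0.0.2,10.0.0.3"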
As for nodet
A new node joining will receive (replication factor) streams for each token it
has. If you use a single token and RF=3, three hosts will send data at the same
time (the data sent is the “losing” replica of the data, based on the next/new
topology that will exist after the node finishes bootstrapping).
Sorry, I think the comment below is right, but there's some ambiguity, so
adding more words.
Each sending host will send each set of tables/keyspaces serially, so the
number of concurrent streams is capped by the number of hosts in the
cluster (not hosts * RF or hosts * tokens * RF; it's just one stream per
sending host).
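If you want to see this while a node is joining, nodetool netstats on the
bootstrapping node lists the active streaming sessions, including which peers
are sending and the files/bytes in flight. A minimal sketch:

    # On the bootstrapping node, while it is joining:
    nodetool netstats

    # Same information with human-readable byte counts:
    nodetool netstats -H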
Hi all,
Zstd compression is available in Cassandra 4.0, since it is a very good
compression algorithm and the overall performance and compression ratio are
excellent.
There is an open-source implementation available for Cassandra 3.x:
https://github.com/MatejTymes/cassandra-zstd
Do you have any experience applying this?
Is there something preventing you from upgrading to 4.0? It is backward
compatible with 3.0, so clients don’t need to change.
If you absolutely don’t want to upgrade, you can extract the implementation
from 4.0 and use it. I would advise against this path, though, as the zstd
implementation is nuanced.
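To illustrate what this looks like on 4.0, enabling zstd is just a schema
change. This is a sketch using a made-up keyspace/table name, and the
compression_level shown is the default:

    ALTER TABLE my_ks.my_table
      WITH compression = {'class': 'ZstdCompressor', 'compression_level': 3};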
Thank you for your response.
I'll consider upgrading to 4.x.
> On Sep 13, 2022, at 2:41 PM, Dinesh Joshi wrote:
>
> Is there something preventing you from upgrading to 4.0? It is backward
> compatible with 3.0 so clients don’t need to change.
>
> If you don’t want to absolutely upgrade you can extract
I patched this on 3.11.2 easily:
1. Build the jar file from source and put it in the cassandra/lib directory
2. Restart the Cassandra service
3. Alter the table to use zstd compression and rebuild the sstables (sketched
   below)
But that was at a time when 4.0 was not yet available, and after that I
upgraded to 4.0 immediately.
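For what it's worth, the "rebuild sstables" part of step 3 can be done with
nodetool. A sketch, reusing the same made-up keyspace/table names as earlier;
-a forces rewriting even sstables that are already on the current format, so
they get re-compressed with the new settings:

    nodetool upgradesstables -a my_ks my_table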