If I use column a as the partition key and column b as the clustering key, where a is 16 bytes and b is 8 bytes, and the other data in the row is 32 bytes, then one row is a + b + data = 16 + 8 + 32 = 56 bytes.
If I have 100,000 rows to store in Cassandra, will it occupy 56 x 100,000 bytes on my disk? Or will the data be compressed?
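A back-of-envelope sketch of the raw payload math in the question. Note this is only the user-visible column data; the actual on-disk footprint differs in both directions, since Cassandra adds per-cell overhead (timestamps, clustering metadata) but also compresses SSTables with LZ4 by default, so the real number depends on how compressible the values are.

```python
# Raw payload estimate for the sizes given in the question.
# Actual SSTable size will differ: Cassandra adds per-cell overhead
# and compresses data on disk (LZ4 by default).
PARTITION_KEY = 16   # bytes, column a
CLUSTERING_KEY = 8   # bytes, column b
OTHER_DATA = 32      # bytes, remaining columns
ROWS = 100_000

row_bytes = PARTITION_KEY + CLUSTERING_KEY + OTHER_DATA   # 56 bytes per row
raw_total = row_bytes * ROWS                              # 5,600,000 bytes

print(f"raw payload: {raw_total} bytes (~{raw_total / 2**20:.2f} MiB)")
```

So the uncompressed payload is about 5.34 MiB; whether the disk usage lands above or below that depends on the overhead-versus-compression trade-off for the actual data.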
On the systems side of things, I've found that using the new BBR TCP congestion-control algorithm results in far better behavior under low to moderate packet loss than any of the older strategies. It can't fix a totally broken link, but it takes good advantage of a "usable but lossy" one. 0.5-2% loss wou…
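For reference, switching a Linux host to BBR is a sysctl change; a minimal sketch, assuming a kernel of 4.9 or later where `tcp_bbr` is available as a module (paths and kernel availability may vary by distribution):

```shell
# List the congestion-control algorithms the running kernel offers
sysctl net.ipv4.tcp_available_congestion_control

# Load the BBR module if it is not built in
modprobe tcp_bbr

# Switch the default congestion control to BBR at runtime
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Persist the setting across reboots
echo "net.ipv4.tcp_congestion_control = bbr" > /etc/sysctl.d/99-bbr.conf
```

On older kernels BBR was typically paired with the `fq` qdisc (`net.core.default_qdisc=fq`) for correct pacing; newer kernels pace in the TCP layer itself.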
Hello,
We have our Cassandra cluster running on AWS across 2 DCs, with 6 nodes in each region, RF=3, and Cassandra version 3.11.3. One of the EC2 instances had network issues for 9 minutes. Since it was a network issue, neither the EC2 instance nor Cassandra went down. But this caused…