I've noticed this new feature of 4.0: Streaming optimizations
(https://cassandra.apache.org/blog/2018/08/07/faster_streaming_in_cassandra.html)
Does this mean that we could have much higher data density with Cassandra 4.0
(fewer problems than 3.x)? I mean > 10 TB of data on each node without worrying
Messenger can tolerate some losses in degenerate infra cases for a given
infra footprint, and it gains some ability to scale up faster as demand
increases, peak loads, etc. It therefore becomes a use-case-specific
optimization. Also, HBase can run in Hadoop more easily, leveraging blobs
(HDFS), e
Hi Vitaliy,
That method
(https://docs.datastax.com/en/latest-java-driver-api/com/datastax/driver/core/ExecutionInfo.html#getAchievedConsistencyLevel--)
is a bit confusing, as it returns null when your requested
consistency level is achieved:
> If the query returned without achieving the request
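For what it's worth, a minimal sketch of how I read that contract (assuming driver 3.x, a local contact point, and a QUORUM request; the class and contact-point values here are just for illustration):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class AchievedConsistencyExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            Statement stmt = new SimpleStatement("SELECT release_version FROM system.local")
                    .setConsistencyLevel(ConsistencyLevel.QUORUM);
            ResultSet rs = session.execute(stmt);
            ConsistencyLevel achieved = rs.getExecutionInfo().getAchievedConsistencyLevel();
            if (achieved == null) {
                // null means the requested level (QUORUM here) was met as-is.
                System.out.println("Requested consistency level achieved");
            } else {
                // Non-null only when a retry policy downgraded the consistency level.
                System.out.println("Downgraded, actually achieved: " + achieved);
            }
        }
    }
}

In other words, treat null as the normal, successful case and a non-null value as the signal that a downgrading retry policy kicked in.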