Hi all,
We keep having a problem with hint files on one of our Cassandra nodes (v3.11.6); the following message keeps repeating for the same file:
INFO [HintsDispatcher:25] 2021-11-02 08:55:29,830
HintsDispatchExecutor.java:289 - Finished hinted handoff of file
72a18469-b7d2-499
For memory's sake, you do not want "too many" tables in a single cluster (~200 is a reasonable rule of thumb). But I don't see a major concern with a few very large tables in the same cluster. The client side, at least in Java, could get large (memory-wise) holding a Cluster object for multiple clusters.
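A rough way to sanity-check the "~200 tables" rule of thumb is to cost out the fixed server-side heap overhead per table. The ~1 MB/table figure below is an assumption often quoted for Cassandra's per-table bookkeeping, not a measured number; treat this as a sketch, not a sizing tool.

```python
# Back-of-envelope estimate of heap consumed just by having tables defined.
# PER_TABLE_HEAP_MB is an assumed figure (~1 MB/table is commonly cited),
# not something measured on a real cluster.

PER_TABLE_HEAP_MB = 1.0


def table_overhead_mb(num_tables: int, per_table_mb: float = PER_TABLE_HEAP_MB) -> float:
    """Approximate fixed heap overhead (MB) for num_tables tables."""
    return num_tables * per_table_mb


print(table_overhead_mb(200))    # 200.0 MB -- tolerable on an 8 GB heap
print(table_overhead_mb(5000))   # 5000.0 MB -- most of a typical heap gone
```

The point of the arithmetic: the overhead scales with table count regardless of how much data each table holds, which is why a few very large tables are fine but thousands of small ones are not.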
We have apps like this, too. For straight Cassandra, I think it is just the nature of how it works. DataStax provides some interesting solutions in different directions: BigNode (for handling 10-20 TB nodes) or Astra (a cloud-based/container-driven solution that DOES separate read, write, and storage).
It sounds like you can downsize your cluster but increase your drive capacity.
Depending on how your cluster is deployed, it’s very possible that disks larger
than 5TB per node are available. Could you reduce the number of nodes and
increase your disk sizes?
—
Abe
I can, but I thought 5 TB per node already violated best practices (1-2 TB per node), so wouldn't it be a bad idea to 2X or 3X that?
Sent using https://www.zoho.com/mail/
On Mon, 15 Nov 2021 20:55:53 +0330 wrote
> It sounds like you can downsize your cluster but increase your drive capacity.
> I can, but i thought with 5TB per node already violated best practices (1-2
> TB per node) and won't be a good idea to 2X or 3X that?
The main downside of larger disks is that it takes longer to replace a host that goes down, since there's less aggregate network capacity to move data from the surviving instances.
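The replacement-time tradeoff is easy to put numbers on. The sketch below assumes an illustrative sustained streaming rate (~2.5 Gbit/s effective, after compaction and throttling); both the rate and the node densities are hypothetical, but the ratio is what matters: triple the data per node and you triple the rebuild window.

```python
# Rough estimate of how long re-streaming a replaced node takes.
# The bandwidth figure is an illustrative assumption, not a measurement.

def replace_time_hours(node_tb: float, effective_gbit_per_s: float) -> float:
    """Hours to stream node_tb terabytes at a sustained effective rate."""
    bits = node_tb * 8e12                      # TB -> bits (1 TB = 10^12 bytes)
    seconds = bits / (effective_gbit_per_s * 1e9)
    return seconds / 3600


# Same total cluster data, fewer/bigger nodes: each replacement moves more.
print(round(replace_time_hours(5, 2.5), 1))    # 5 TB node  -> 4.4 hours
print(round(replace_time_hours(15, 2.5), 1))   # 15 TB node -> 13.3 hours
```

During that whole window the cluster is running with reduced redundancy for the replaced node's token ranges, which is the real operational cost of high-density nodes.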
Thank You
On Tue, 16 Nov 2021 10:00:19 +0330 wrote
> The main downside of larger disks is that it takes longer to replace a host that goes down.