(Yes, just somewhat less likely to be the same order of speed-up with STCS,
where sstables are more likely to cross token boundaries, modulo sstable
splitting at token range boundaries a la CASSANDRA-6696.)
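(For anyone who wants to check their own cluster: a minimal sketch, assuming
Cassandra 4.0+ and the stock cassandra.yaml - zero copy streaming is the
entire-sstable streaming path, and if memory serves the relevant knob and the
command to watch it are:

    # cassandra.yaml (4.0+) - entire-sstable, a.k.a. zero copy, streaming; defaults to true
    stream_entire_sstables: true

    # watch streaming progress during bootstrap, decommission, or repair
    nodetool netstats

The LCS vs STCS difference above is only about how often a whole sstable falls
inside the ranges being streamed, not about any extra configuration.)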
On Mon, Aug 21, 2023 at 11:35 AM Dinesh Joshi wrote:
> Minor correction, zero copy streaming aka faster streaming also works for STCS.
>
> Dinesh
- k8s
1. Depending on the version and networking, the number of containers per
node, node pooling, etc., you can expect to see 1-2% additional storage IO
latency (it depends on whether everything shares one network vs. a separate
TCP network for storage IO) - see the fio sketch after this list for one way
to measure it yourself
2. System overhead may be 3-15% depending
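If you want to put a number on that storage IO delta for your own environment,
one rough approach (my sketch, not a benchmark from this thread) is to run the
same fio job on the bare host and then inside a pod scheduled on the same
node, and compare latency percentiles:

    # 4k random reads against the data volume; run once on the host, once inside a pod
    fio --name=randread --ioengine=libaio --direct=1 --rw=randread \
        --bs=4k --size=4g --numjobs=4 --runtime=60 --time_based --group_reporting

Comparing clat p99 between the two runs gives you the containerization
overhead for your particular CNI/storage setup rather than a generic 1-2%
figure.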
...and a shameless plug for the Cassandra Summit in December. We have a
talk from somebody who is doing 70TB per node and will be digging into all
the aspects that make that work for them. I hope everyone in this thread is
at that talk! I can't wait to hear all the questions.
Patrick
On Mon, Aug 21, 2023 at 8:01 AM Jeff Jirsa wrote:
There's a lot of questionable advice scattered in this thread. Set aside
most of the guidance like 2TB/node; it's old and super nuanced.
If you're bare metal, do what your organization is good at. If you have
millions of dollars in SAN equipment and you know how SANs work and fail
and get backed up
For our scenario, the goal is to minimize downtime for a single (at
least initially) data center system. Data loss is basically
unacceptable. I wouldn't say we have a "rusty slow data center" - we
can certainly use SSDs and have servers connected via 10G copper to a
fast backplane. For our