On 11/29/25 13:36, Ilan Ginzburg wrote:

Fat data nodes with fast disks and current SolrCloud code would not support
thousands or tens of thousands of cores per node (unless they also have very
fat memory).


The wisdom of putting tens of thousands of cores on a single node aside, I strongly suspect that putting the data on ZFS with a good PCIe x.N SSD for L2ARC will give you better bang for the buck than any storage that has to fetch stuff over the network, and than the "fast disks" too. And with ZFS's incremental snapshot transfers you could probably keep your data replicas in sync no worse than most distributed FSes do. Yes, you will want more RAM. You always want more RAM, what else is new.
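
For the snapshot piece, something along these lines is what I have in mind (pool, dataset, device, and host names below are made up for illustration; assumes stock OpenZFS tooling and ssh access to the peer):

    # attach the PCIe SSD as L2ARC for the pool holding the index
    zpool add tank cache /dev/nvme0n1

    # take a snapshot and ship only the delta since the previous one
    zfs snapshot tank/solr@t1
    zfs send -i @t0 tank/solr@t1 | ssh replica-host zfs receive -F tank/solr

Run that on a schedule and the replica trails the primary by at most one snapshot interval.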

Of course that only works if you're running on bare metal and are in charge of your purchasing decisions. If you're running on virtual/cloud infra, you get to live with the latencies your provider provides.

Dima
