I am wondering what others are doing in terms of cluster separation, if
anything. For example, say I need 24 nodes to support a given workload.
What are the tradeoffs between a single 24-node cluster and 2 x 12-node
clusters? The application I support can separate its data fairly easily:
the data is all processed in the same way but can be sharded and isolated
by customer. I understand the standard tradeoffs, such as putting all your
eggs in one basket, but I am curious whether there are any details specific
to Kafka when it comes to cluster scale-out.



Somewhat related is the use of RAID vs JBOD. I have reviewed the
documentation on the Kafka site and understand the tradeoffs: usable space,
sequential vs random IO, and the fact that a RAID rebuild might kill the
system. I am specifically asking how this applies to larger clusters and
the impact on the number of partitions a topic might need.



Take the example of a 24-node cluster with 12 drives per node; the cluster
would have 288 drives. To ensure a topic is distributed across all drives,
the topic would require 288 partitions. I am planning to test some of this
but wanted to know if there is a rule of thumb. The following link
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowdoIchoosethenumberofpartitionsforatopic?
talks about supporting up to 10K partitions, but it is not clear whether
that limit applies to the cluster as a whole or to a single topic.
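To make the arithmetic concrete, here is a minimal sketch of what I have in
mind for my tests, using the Java AdminClient. The broker address and topic
name are just placeholders, and it assumes replication factor 1 so that
partition count maps one-to-one onto log directories (a higher replication
factor would multiply the replicas spread across the drives):

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateWideTopic {
        public static void main(String[] args) throws Exception {
            int brokers = 24;          // nodes in the cluster
            int drivesPerBroker = 12;  // JBOD log directories per broker
            // One partition per drive so every spindle holds a piece of the
            // topic (at replication factor 1).
            int partitions = brokers * drivesPerBroker;  // 288

            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                      "broker1:9092");  // placeholder address

            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic topic = new NewTopic("customer-events",  // placeholder name
                                              partitions, (short) 1);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }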


Those of you running larger clusters, what are you doing?


Bert
