Hi,

I apologize if this question has been addressed before.  We are
currently evaluating Kafka for our high-volume data ingestion
infrastructure.
I would like to understand why consistent hashing was not implemented,
given its inherent ability to dynamically balance load across brokers
as membership changes.
The current scheme, if I understand correctly, is to compute a hash of
the message key (default or user-supplied) modulo the number of brokers.
This is bound to yield poor (broker) load distribution in the face of
failure, since whenever the broker count changes, most keys get remapped
to a different broker.
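To make the concern concrete, here is a rough sketch, not Kafka's actual
partitioner; the broker names, key format, and virtual-node count are
invented for the example. It compares plain hash-mod-N assignment with a
toy consistent-hash ring and counts how many keys move when one of five
brokers disappears:

import hashlib
from bisect import bisect

def h(s):
    # stable hash for illustration (md5 truncated to an int)
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def mod_assign(key, brokers):
    # "hash mod N" style assignment
    return brokers[h(key) % len(brokers)]

class Ring:
    # toy consistent-hash ring with virtual nodes
    def __init__(self, brokers, vnodes=100):
        self.points = sorted((h(f"{b}#{i}"), b)
                             for b in brokers for i in range(vnodes))
        self.hashes = [p for p, _ in self.points]
    def assign(self, key):
        # first ring point at or after the key's hash, wrapping around
        idx = bisect(self.hashes, h(key)) % len(self.points)
        return self.points[idx][1]

keys = [f"key-{i}" for i in range(10000)]
brokers = [f"broker-{i}" for i in range(5)]
fewer = brokers[:-1]  # simulate losing one broker

moved_mod = sum(mod_assign(k, brokers) != mod_assign(k, fewer) for k in keys)
ring_a, ring_b = Ring(brokers), Ring(fewer)
moved_ring = sum(ring_a.assign(k) != ring_b.assign(k) for k in keys)

print(f"hash-mod-N: {moved_mod / len(keys):.0%} of keys moved")
print(f"consistent: {moved_ring / len(keys):.0%} of keys moved")

With hash-mod-N, dropping one of five brokers remaps roughly 80% of the
keys, while the ring only moves the roughly 20% that the failed broker
owned.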

Thanks in advance,

stan
