Given a business application that relies on a message queue solution like 
Kafka, what is the best number of partitions to select for a given topic? What 
influences such a decision?
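
For reference, the rough sizing heuristic I had in mind is to take the larger 
of (target throughput / measured per-partition produce throughput) and 
(target throughput / measured per-consumer consume throughput). A sketch, 
where all figures and names are hypothetical placeholders rather than 
measurements:

    // Partition-count heuristic (sketch). The throughput figures below are
    // hypothetical placeholders; they would have to be measured for the
    // actual producers and consumers.
    public class PartitionSizing {

        static int suggestedPartitions(double targetMbPerSec,
                                       double produceMbPerSecPerPartition,
                                       double consumeMbPerSecPerConsumer) {
            double neededForProducers = targetMbPerSec / produceMbPerSecPerPartition;
            double neededForConsumers = targetMbPerSec / consumeMbPerSecPerConsumer;
            return (int) Math.ceil(Math.max(neededForProducers, neededForConsumers));
        }

        public static void main(String[] args) {
            // e.g. 100 MB/s target, 10 MB/s produced per partition,
            // 20 MB/s consumed per consumer
            System.out.println(suggestedPartitions(100, 10, 20)); // prints 10
        }
    }

Is that roughly the right way to think about it, or do other factors dominate 
in practice?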


On the other hand, say we want to achieve maximal message-consumption 
throughput at minimal resource consumption. What is the best number of 
topic consumers to configure statically?
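
My understanding is that within a single consumer group a partition is 
assigned to at most one consumer, so the number of partitions caps the useful 
group size and any consumer beyond that sits idle. A minimal sketch of the 
setup I mean (bootstrap address, group id and topic name are placeholders):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class GroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder
            props.put("group.id", "orders-processor");          // placeholder
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            // Starting N copies of this process joins N consumers to the group;
            // at most <number of partitions> of them will actually receive records.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));           // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r ->
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                          r.partition(), r.offset(), r.value()));
                }
            }
        }
    }

So is the answer simply "one consumer per partition", or is it usually cheaper 
to run fewer consumers that each handle several partitions?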



If dynamic scaling of topic consumers up and down is enabled, which would be 
better: (1) start with one consumer and scale up until a desired metric is 
achieved, or (2) start with as many consumers as there are partitions and 
scale down until the desired metric is achieved?
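
In either case I assume the "desired metric" would be something like 
consumer-group lag. This is how I would sample it with the AdminClient to 
drive the scale-up/scale-down decision (bootstrap address and group id are 
placeholders):

    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.ListOffsetsResult;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class GroupLag {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                // Offsets the group has committed, per partition.
                Map<TopicPartition, OffsetAndMetadata> committed = admin
                        .listConsumerGroupOffsets("orders-processor")    // placeholder group id
                        .partitionsToOffsetAndMetadata().get();

                // Log-end (latest) offsets for the same partitions.
                Map<TopicPartition, OffsetSpec> request = committed.keySet().stream()
                        .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
                Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                        admin.listOffsets(request).all().get();

                long totalLag = committed.entrySet().stream()
                        .mapToLong(e -> latest.get(e.getKey()).offset() - e.getValue().offset())
                        .sum();

                // A scaler would compare this lag against a threshold before
                // adding or removing consumers from the group.
                System.out.println("total lag = " + totalLag);
            }
        }
    }

Does one of the two starting points converge faster in practice?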


Are you aware of any cloud provider that offers a message broker service 
(namely, Kafka) that supports automatic scaling of consumers?



Thank you.
