Hi Ramz,

A good rule of thumb has been no more than 4,000 partitions per broker and no
more than 100,000 in a cluster.
This count includes all replicas, and the limit comes more from Kafka internals
than from resource usage, so I strongly advise against pushing these limits.
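To make the counting concrete (assuming replication factor 3, which you did not
state): on a 3-node cluster each partition then places one replica on every
broker, so roughly 4,000 partitions across all topics would already put each
broker at the per-broker guideline.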

Otherwise, the usual reasons for scaling are:

* Disk space;
* CPU usage;
* Disk I/O; and
* Network bandwidth.

You can tune your way out of most other bottlenecks by adjusting thread counts
and other configuration, but if you hit one of the above you need to either
scale up or scale out.
You can also add RAM to reduce I/O and CPU usage, as Kafka makes very effective
use of the extra memory through the OS page cache.
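
For reference, the thread-related broker settings I usually look at first are
below. The values shown are just the documented defaults, not recommendations;
the right numbers depend entirely on your workload:

# server.properties (illustrative; defaults shown)
num.network.threads=3    # threads receiving requests and sending responses
num.io.threads=8         # threads processing requests, including disk I/O
num.replica.fetchers=1   # fetcher threads per source broker for replication
background.threads=10    # threads for background tasks such as log cleanup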

If you think you are hitting a bottleneck and it's not one of the above, a good
place to start looking is thread utilisation.
The Apache Kafka documentation lists the MBeans for monitoring these:
https://kafka.apache.org/documentation/

In particular, look at:

* kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent; 
and
* kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent.
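
If it helps, here is a minimal sketch of reading those two metrics over JMX
from plain Java. It assumes the broker was started with JMX enabled on port
9999 (e.g. JMX_PORT=9999), and the attribute names are what recent broker
versions expose (the handler pool metric is a meter, the network processor one
a gauge); double-check them with jconsole on your version:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerIdleCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint; set JMX_PORT=9999 when starting the broker.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            ObjectName handlerPool = new ObjectName(
                    "kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent");
            ObjectName networkProcessors = new ObjectName(
                    "kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent");

            // Both metrics are fractions between 0 and 1; values sitting near 0
            // suggest the corresponding thread pool is saturated (many people
            // alert somewhere around 0.3).
            System.out.println("Request handler avg idle: "
                    + mbs.getAttribute(handlerPool, "OneMinuteRate"));
            System.out.println("Network processor avg idle: "
                    + mbs.getAttribute(networkProcessors, "Value"));
        }
    }
}

As a rough guide, if the request handler pool looks starved you would bump
num.io.threads, and if the network processors do, num.network.threads.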

Regards,
Evelyn.
> On 4 Apr 2019, at 1:59 pm, Rammohan Vanteru <ramz.moha...@gmail.com> wrote:
> 
> Hi users,
> 
> On what basis should we scale kafka cluster what would be symptoms for
> scaling kafka.
> 
> I have a 3 node kafka cluster upto how many max partitions a single broker
> or kafka cluster can support?
> 
> If any article or knowledge share would be help on scaling kafka.
> 
> Thanks,
> Ramz.
