I don't think this would be the right approach. On the broker side, this
would mean creating 1M/10M/100M/1B directories (one per partition), which
would be too much for the file system itself.
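To make that concrete: each partition is a directory on the broker's disk, and each log segment inside it is backed by several files (.log, .index, .timeindex in recent Kafka versions). A rough back-of-the-envelope sketch (the per-segment file count and segment count are illustrative assumptions, not measurements):

```python
# Rough estimate of filesystem objects a single broker would have to
# manage for a given partition count. Assumptions (illustrative only):
# one directory per partition, one active segment per partition, and
# three files per segment (.log, .index, .timeindex).
def fs_objects(partitions, segments_per_partition=1, files_per_segment=3):
    dirs = partitions
    files = partitions * segments_per_partition * files_per_segment
    return dirs, files

for p in (1_000_000, 1_000_000_000):
    dirs, files = fs_objects(p)
    print(f"{p:>13,} partitions -> {dirs:,} dirs, {files:,} files")
```

Even at the low end of that range, ext4 would be tracking millions of inodes just for Kafka's log layout, before counting open file handles.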

For most use cases, even a few thousand partitions per node should be
sufficient.

For more details, please refer to
https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/

-Sameer.

On Tue, May 16, 2017 at 2:40 PM, kant kodali <kanth...@gmail.com> wrote:

> Forgot to mention: the question in this thread is for one node with 8
> CPUs, 16GB RAM & 500GB of hard disk space.
>
> On Tue, May 16, 2017 at 2:06 AM, kant kodali <kanth...@gmail.com> wrote:
>
> > Hi All,
> >
> > 1. I was wondering if anyone has seen, heard of, or been able to create
> > 1M or 10M or 100M or 1B partitions in a topic? I understand a lot of this
> > depends on filesystem limitations (we are using ext4) and OS limitations,
> > but I would just like to know what scale people have seen in production.
> > 2. Is it advisable?
> >
> > Thanks!
> >
>
