Each partition is a directory whose name is topicName-partitionNumber.
In each directory there will likely be many files, depending on how you
configure Kafka's log rolling and retention (check the log.roll.*
parameters, for starters), plus some internal files (indexes, etc.).
You can easily try it out yourself - create a topic with some partitions,
send some data, and observe the log dir(s) (set by log.dir or log.dirs in
the broker config).
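
If you want to script that last step, here is a minimal Python sketch -
assuming log.dirs points at /tmp/kafka-logs (the quickstart default) and
that you have already created a topic and produced some data to it - that
lists each partition directory and the files inside it:

#!/usr/bin/env python3
# Inspect a Kafka broker's log directory and show, for each
# topic-partition directory, the segment (.log), index (.index,
# .timeindex) and other internal files it contains.
# Assumes log.dirs points at /tmp/kafka-logs (the quickstart default);
# adjust LOG_DIR to match your broker config.
import os
from collections import defaultdict

LOG_DIR = "/tmp/kafka-logs"  # value of log.dir / log.dirs in server.properties

partitions = defaultdict(list)
for entry in sorted(os.listdir(LOG_DIR)):
    path = os.path.join(LOG_DIR, entry)
    # Partition directories are named <topicName>-<partitionNumber>;
    # skip broker-internal files like meta.properties or checkpoint files.
    if os.path.isdir(path) and "-" in entry:
        topic, _, partition = entry.rpartition("-")
        partitions[topic].append((int(partition), sorted(os.listdir(path))))

for topic, parts in sorted(partitions.items()):
    print(topic)
    for number, files in sorted(parts):
        print(f"  partition {number}: {', '.join(files) or '(empty)'}")

With default settings you will usually see one active .log segment per
partition plus its .index / .timeindex files; as segments roll
(log.roll.* / log.segment.bytes) more files accumulate until retention
cleans them up.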


Ofir Manor

Co-Founder & CTO | Equalum

Mobile: +972-54-7801286 | Email: ofir.ma...@equalum.io

On Tue, May 16, 2017 at 12:44 PM, kant kodali <kanth...@gmail.com> wrote:

> Got it! But do you mean directories or files? I thought partition = file
> and topic = directory.
>
> If there are 10000 partitions in a topic, does that mean there are 10000
> files in one directory?
>
> On Tue, May 16, 2017 at 2:29 AM, Sameer Kumar <sam.kum.w...@gmail.com>
> wrote:
>
> > I don't think this would be the right approach. From the broker's side,
> > this would mean creating 1M/10M/100M/1B directories, which would be too
> > much for the file system itself.
> >
> > In most cases, even a few thousand partitions per node should be
> > sufficient.
> >
> > For more details, please refer to
> > https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
> >
> > -Sameer.
> >
> > On Tue, May 16, 2017 at 2:40 PM, kant kodali <kanth...@gmail.com> wrote:
> >
> > > Forgot to mention: the question in this thread is for one node, which
> > > has 8 CPUs, 16GB RAM & 500GB of hard disk space.
> > >
> > > On Tue, May 16, 2017 at 2:06 AM, kant kodali <kanth...@gmail.com>
> wrote:
> > >
> > > > Hi All,
> > > >
> > > > 1. I was wondering if anyone has seen, heard of, or been able to
> > > > create 1M or 10M or 100M or 1B partitions in a topic? I understand a
> > > > lot of this depends on filesystem limitations (we are using ext4) and
> > > > OS limitations, but I would just like to know what scale people have
> > > > seen in production.
> > > > 2. Is it advisable?
> > > >
> > > > Thanks!
> > > >
> > >
> >
>
