Hi Vitalii,
What are the timestamps in your messages? I have seen this before: messages
with timestamps well into the future cause a log roll every few messages,
so you end up with a very large number of log files.
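For what it's worth, here is a rough sketch of the time-based roll check (my own simplification, not the actual broker code) that shows why a single future-stamped record can force a roll on almost every append:

```python
# Simplified sketch of Kafka's time-based log-roll decision.
# A segment is rolled when an incoming record's timestamp is more than
# log.roll.ms ahead of the first timestamp in the segment, so a record
# stamped far in the future triggers a roll almost immediately.

LOG_ROLL_MS = 168 * 60 * 60 * 1000  # default log.roll.hours = 168

def should_roll(first_ts_in_segment_ms: int, record_ts_ms: int) -> bool:
    """True if appending this record would trigger a time-based roll."""
    return record_ts_ms - first_ts_in_segment_ms > LOG_ROLL_MS

now = 1_595_000_000_000           # some "current" wall-clock time in ms
future = now + 10 * LOG_ROLL_MS   # a record stamped far in the future

print(should_roll(now, now + 60_000))  # normal record -> False (no roll)
print(should_roll(now, future))        # future-stamped record -> True
```

You can spot such records by dumping a segment with the kafka-dump-log tool and looking at the per-record timestamps.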

*William*

On Thu, 23 Jul 2020 at 16:22, Vitalii Stoianov <
vitalii.stoianov...@gmail.com> wrote:

> Hi All,
>
> I have also noticed that the number of log/index files is too high and log
> rolls are happening more frequently than expected.
> log.roll.hours is at its default (168) and log.segment.bytes is 1 GB, yet
> the log files in the topic partition folders are usually smaller than 1 GB.
>
> Regards,
> Vitalii.
>
> On Wed, Jul 22, 2020 at 8:15 PM Vitalii Stoianov <
> vitalii.stoianov...@gmail.com> wrote:
>
> > Hi All,
> >
> > According to this:
> > https://docs.confluent.io/current/kafka/deployment.html
> > vm.max_map_count depends on the number of index files:
> > *find /tmp/kafka_logs -name '*index' | wc -l*
> >
> > In our test lab we have the following setup:
> >
> > *Topic:test      PartitionCount:256      ReplicationFactor:2
> > Configs:segment.bytes=1073741824,retention.ms=86400000,message.format.version=2.3-IV1,max.message.bytes=4194304,unclean.leader.election.enable=true*
> >
> > No cleanup.policy is set explicitly for the topic or in server.properties,
> > so I assume the default (delete), according to
> > https://kafka.apache.org/23/documentation.html#brokerconfigs
> >
> > I wrote a small script that counts the number of index files; for this
> > topic it is ~638000.
> > Also, the Kafka log/data dir contains some old log/index files whose
> > creation date is more than 10 days old (retention for the topic is one
> > day).
> > Note: when I checked log-cleaner.log, it contained info only about cleanup
> > of compacted logs.
> >
> > In order to set vm.max_map_count correctly, I need to understand the
> > following:
> > Why do such old index/log files still exist and why are they not cleaned up?
> > How do I properly set vm.max_map_count if index/log files are not freed on
> > time?
> >
> > Regards,
> > Vitalii.
> >
>
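On the vm.max_map_count question: since every index file is memory-mapped, one starting point is to size the limit from the index-file count with some headroom. A hedged sketch (the 2x headroom factor and the 65536 floor are my own assumptions, not part of the Confluent guidance):

```python
# Sketch: derive a vm.max_map_count candidate from the number of index
# files under the Kafka log dir. The headroom factor and minimum floor
# are illustrative assumptions; size them for your own workload.
import glob

def suggested_max_map_count(log_dir: str, headroom: float = 2.0) -> int:
    # Matches both *.index and *.timeindex, like the find command above.
    index_files = glob.glob(f"{log_dir}/**/*index", recursive=True)
    return max(65536, int(len(index_files) * headroom))
```

Note that this only sizes the limit around the current file count; if old segments are never deleted (your first question), the count keeps growing and no static setting will hold.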
