Hello!
  Please check whether the segment.ms configuration on the topic will help
you solve your problem.

https://kafka.apache.org/documentation/

https://stackoverflow.com/questions/41048041/kafka-deletes-segments-even-before-segment-size-is-reached

Regards,
 Florin

segment.ms: This configuration controls the period of time after which
Kafka will force the log to roll even if the segment file isn't full, to
ensure that retention can delete or compact old data.
  Type: long; Default: 604800000 (7 days); Valid values: [1,...];
  Server default property: log.roll.ms; Importance: medium
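For the 4-hour requirement, a minimal sketch using the kafka-configs tool
(the topic name and broker address are placeholders; min.cleanable.dirty.ratio
is an optional extra to make the cleaner pick up segments more eagerly, and on
older Kafka versions you may need --zookeeper instead of --bootstrap-server):

```shell
# Roll segments every 4 hours (14400000 ms) so the log cleaner can
# compact them; compaction only runs on closed (rolled) segments.
# "my-app-store-changelog" and "localhost:9092" are placeholders.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-app-store-changelog \
  --alter --add-config segment.ms=14400000,min.cleanable.dirty.ratio=0.1
```

You can verify the result afterwards with --describe on the same topic.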

On Mon, Dec 17, 2018 at 12:28 PM Claudia Wegmann <c.wegm...@kasasi.de>
wrote:

> Dear kafka users,
>
> I've got a problem on one of my kafka clusters. I use this cluster with
> kafka streams applications. Some of these stream apps use a kafka state
> store. Therefore a changelog topic is created for those stores with cleanup
> policy "compact". One of these topics has been running wild for some time
> now and seems to grow indefinitely. When I check the log file of the first
> segment, there is a lot of data in it that should have been compacted
> already.
>
> So I guess I did not configure everything correctly for log compaction to
> work as expected. Which config parameters influence log compaction? And
> how should I set them if I want data older than 4 hours to be compacted?
>
> Thanks in advance.
>
> Best,
> Claudia
>