Hi, can you please remove me from this mailing list.
Regards,
Komal Babu
Sourcing Administrator
ko...@qbs.co.uk
020 8733 7139
http://www.qbsd.co.uk
QBS Distribution, 7 Wharfside, Rosemont Road, Wembley, HA0 4QB, United Kingdom

-----Original Message-----
From: Claudia Wegmann <c.wegm...@kasasi.de>
Sent: 17 December 2018 14:18
To: users@kafka.apache.org
Subject: Re: Configuration of log compaction

Hi,

thanks for the quick response. My problem is not that no new segments are created, but that segments with old data do not get compacted. I had to restart one broker because there was no disk space left. After recreating all indexes etc., the broker recognized the old data and compacted it correctly. I had to restart all the other brokers in the cluster, too, for them to also recognize the old data and start compacting. So I guess that before the restart the brokers were too busy to compact/delete old data? Is there a configuration to ensure compaction after a certain amount of time, or something similar?

Best,
Claudia

-----Original Message-----
From: Spico Florin <spicoflo...@gmail.com>
Sent: Monday, 17 December 2018 14:28
To: users@kafka.apache.org
Subject: Re: Configuration of log compaction

Hello!

Please check whether the segment.ms configuration on the topic will help you solve your problem.

https://kafka.apache.org/documentation/
https://stackoverflow.com/questions/41048041/kafka-deletes-segments-even-before-segment-size-is-reached

Regards,
Florin

segment.ms: This configuration controls the period of time after which Kafka will force the log to roll, even if the segment file isn't full, to ensure that retention can delete or compact old data.
(Type: long; Default: 604800000; Valid values: [1,...]; Server default property: log.roll.ms; Importance: medium)

On Mon, Dec 17, 2018 at 12:28 PM Claudia Wegmann <c.wegm...@kasasi.de> wrote:
> Dear Kafka users,
>
> I've got a problem on one of my Kafka clusters. I use this cluster
> with Kafka Streams applications. Some of these stream apps use a Kafka
> state store. Therefore a changelog topic is created for those stores
> with cleanup policy "compact".
> One of these topics has been running wild for some time now and seems
> to grow indefinitely. When I check the log file of the first segment,
> there is a lot of data in it that should have been compacted already.
>
> So I guess I did not configure everything correctly for log compaction
> to work as expected. Which config parameters influence log compaction?
> And how should I set them if I want data older than 4 hours to be
> compacted?
>
> Thanks in advance.
>
> Best,
> Claudia
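[Editor's note: for readers landing on this thread, the topic-level settings discussed above can be adjusted with the kafka-configs tool that ships with Kafka. This is a sketch, not a recommendation from the thread participants: the topic name is a placeholder, and the values shown (a 4-hour segment roll and an aggressive dirty ratio) are one possible way to approximate "compact data older than 4 hours". Note that compaction only runs on closed segments, and max.compaction.lag.ms, which bounds how long a record can stay uncompacted, is only available in Kafka 2.3 and later.]

```shell
# Inspect the current overrides on the changelog topic
# ("my-app-store-changelog" is a placeholder name).
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-app-store-changelog \
  --describe

# Roll segments at least every 4 hours (14400000 ms) so old data
# becomes eligible for compaction, and lower the dirty ratio so the
# log cleaner picks the topic up sooner (broker default is 0.5).
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-app-store-changelog \
  --alter --add-config segment.ms=14400000,min.cleanable.dirty.ratio=0.1
```

On clusters older than 2.1, the `--zookeeper` connection option would be used instead of `--bootstrap-server`.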