Is there any work-around for this? How can we leverage the auto-cleanup without taking the server down?
KR,

On 1 June 2017 at 15:46, Mohammed Manna <manme...@gmail.com> wrote:
> Sorry for bugging everyone, but does anyone have a workaround that has
> been implemented successfully? I am assuming it's a simple issue with
> the write output stream not being closed properly.
>
> The issue was reported here: https://issues.apache.org/jira/browse/KAFKA-1194
>
> On 30 May 2017 at 16:01, Mohammed Manna <manme...@gmail.com> wrote:
>
>> Hi,
>>
>> I can see that this is an existing issue. The latest comment says
>> "Everything works fine after manual clean-up", but this information is
>> vague and doesn't really say what or when to delete. Has anyone got any
>> idea whether this has been addressed already? I am using the latest
>> release.
>>
>> FYI - the issue details are here: https://issues.apache.org/jira/browse/KAFKA-1194
>>
>> I have set my brokers to have the following settings:
>>
>>> log.retention.hours=2
>>> log.retention.bytes=1073741824
>>> log.segment.bytes=1073741824
>>> log.retention.check.interval.ms=120000
>>> offsets.retention.minutes=60
>>> offsets.retention.check.interval.ms=300000
>>
>> KR,
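One partial answer to the "without taking the server down" question: the broker properties quoted above are cluster-wide and need a restart to change, but retention can also be overridden per topic at runtime with Kafka's kafka-configs.sh tool, which the log cleaner picks up on its next check interval. A sketch only; the topic name `my-topic` and the ZooKeeper address are assumptions for illustration, not from the thread:

```
# Hypothetical example: tighten retention for one topic at runtime
# (2 hours in ms, 1 GiB), with no broker restart required.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=7200000,retention.bytes=1073741824

# Verify the override took effect:
bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
  --entity-type topics --entity-name my-topic
```

Note this does not fix KAFKA-1194 itself (the unreleased file handle blocking segment deletion, seen mainly on Windows); it only lets you adjust what the cleaner tries to delete without a restart.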