Thanks Joel. The JIRA issue mentions a workaround to "remove the cleaner offset
checkpoint". What exactly does that entail?

Thanks again!
Thunder


-----Original Message-----
From: Joel Koshy [mailto:jjkosh...@gmail.com] 
Sent: Tuesday, February 03, 2015 3:57 PM
To: users@kafka.apache.org
Subject: Re: Turning on cleanup.policy=compact for a topic => not starting 
cleanup ?

Oh yes, I forgot about that possibility. I think this should be fixed by
KAFKA-1641, which will be in the next release.

On Tue, Feb 03, 2015 at 11:50:54PM +0000, Thunder Stumpges wrote:
> Ah hah! The log-cleaner.log file did NOT go into the logs directory for some
> reason; I found it in the kafka root directory. In there I found the
> following output:
> 
> [2015-02-03 15:43:08,128] INFO Cleaner 0: Beginning cleaning of log dev_testcompact-9. (kafka.log.LogCleaner)
> [2015-02-03 15:43:08,129] INFO Cleaner 0: Building offset map for dev_testcompact-9... (kafka.log.LogCleaner)
> [2015-02-03 15:43:08,222] INFO Cleaner 0: Building offset map for log dev_testcompact-9 for 15 segments in offset range [0, 230). (kafka.log.LogCleaner)
> [2015-02-03 15:43:08,227] ERROR [kafka-log-cleaner-thread-0], Error due to  (kafka.log.LogCleaner)
> java.lang.IllegalArgumentException: requirement failed: Last clean offset is 0 but segment base offset is 152 for log dev_testcompact-9.
>         at scala.Predef$.require(Predef.scala:214)
>         at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:491)
>         at kafka.log.Cleaner.clean(LogCleaner.scala:288)
>         at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:202)
>         at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:187)
>         at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
> [2015-02-03 15:43:08,232] INFO [kafka-log-cleaner-thread-0], Stopped  (kafka.log.LogCleaner)
> 
> 
> It appears it doesn't like the fact that some of the log segments have
> already been deleted... from when I was using cleanup.policy=delete?
> 
> "Last clean offset is 0 but segment base offset is 152 for log
> dev_vsw.avrodto.addelivery.machinelearning.adbanditmodel-9."
> 
> Thanks,
> Thunder
> 
> -----Original Message-----
> From: Thunder Stumpges [mailto:tstump...@ntent.com]
> Sent: Tuesday, February 03, 2015 3:37 PM
> To: users@kafka.apache.org
> Subject: RE: Turning on cleanup.policy=compact for a topic => not starting 
> cleanup ?
> 
> Thanks Joel for the quick turnaround!
> 
> 1st, I took a look at "max-dirty-percent" and noticed it is "100", so that's 
> a good sign that it *should* have something to clean.
> 
> 2nd, I didn't realize the cleaner messages went to a different log! However,
> I have NO log-cleaner.log file in the normal logs directory. It does seem to
> be configured in log4j (we haven't changed any of the defaults here):
> 
> log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
> log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
> log4j.appender.cleanerAppender.File=log-cleaner.log
> log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
> 
> log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
> log4j.additivity.kafka.log.LogCleaner=false
> 
> 3rd, no messages using compression, and all messages have a key.
> 
> So, lastly, the thread dump. I'm not sure exactly how to do that or what
> thread I'd be looking for specifically... I found a suggestion to run
> 
> jstack -l <JAVA_PID> > jstack.out
> 
> So I did that and looked for anything containing "Clean" or "clean": no
> matches.
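> 
> (Concretely, the check was roughly:
> 
> # search the thread dump for anything cleaner-related; nothing matched
> grep -i "clean" jstack.out
> )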
> 
> I will try setting the log level for LogCleaner to DEBUG to see if that
> helps at all.
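> 
> That is, changing the LogCleaner logger line in the log4j.properties shown
> above to something like:
> 
> log4j.logger.kafka.log.LogCleaner=DEBUG, cleanerAppender
> log4j.additivity.kafka.log.LogCleaner=false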
> 
> Thanks in advance,
> Thunder
> 
> 
> -----Original Message-----
> From: Joel Koshy [mailto:jjkosh...@gmail.com]
> Sent: Tuesday, February 03, 2015 3:07 PM
> To: users@kafka.apache.org
> Subject: Re: Turning on cleanup.policy=compact for a topic => not starting 
> cleanup ?
> 
> - Can you check the log cleaner logs?
> - Do you have any compressed messages in your log? Or messages without
>   a key?
> - By default it is in a log-cleaner.log file unless you modified that.
> - Can you take a thread-dump to see if the log cleaner is still alive?
> - Also, there is an mbean that you can poke: "max-dirty-percent" under
>   log cleaner. Can you check on its value?
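> 
>   For example, with JMX enabled on the broker, something roughly like the
>   following should print it (the object name here is approximate and may
>   differ by version):
> 
>   # read the log cleaner's max-dirty-percent gauge over JMX (object name approximate)
>   bin/kafka-run-class.sh kafka.tools.JmxTool \
>     --jmx-url 'service:jmx:rmi:///jndi/rmi://localhost:<JMX_PORT>/jmxrmi' \
>     --object-name 'kafka.log:type=LogCleaner,name=max-dirty-percent'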
> 
> Thanks,
> 
> Joel
> 
> On Tue, Feb 03, 2015 at 10:59:27PM +0000, Thunder Stumpges wrote:
> > Hi guys, I am having a difficult time getting log compaction to run on a
> > topic I initially created with cleanup.policy=delete. Here are the details:
> > 
> > 3 brokers running 0.8.1.1; all have log.cleaner.enable=true (other than
> > this, there are no non-defaults set in our server.properties).
> > 
> > I created the topic first with cleanup.policy=delete and a short window,
> > wanting to see that segments were being created and deleted by the "normal"
> > delete retention. This seemed to work fine: new segments were created every
> > 6 seconds while I added some test data, then the cleaner came along and
> > deleted log segments after 1 min.
> > 
> > sudo bin/kafka-topics.sh --zookeeper zk1:2181/kafka --create \
> >   --topic dev_testcompact --partitions 10 --replication-factor 3 \
> >   --config cleanup.policy=delete --config segment.ms=6000 \
> >   --config retention.ms=60000
> > 
> > 
> > Then I tried to update the policy to compact:
> > 
> > sudo bin/kafka-topics.sh --zookeeper zk1:2181/kafka --alter --topic dev_testcompact \
> >   --config cleanup.policy=compact --config min.cleanable.dirty.ratio=0.3 \
> >   --config delete.retention.ms=60000
> > 
> > From this point, the delete retention stopped cleaning up, but the compact
> > cleaner doesn't seem to be running at all. I was expecting that after 15
> > seconds (the default for log.cleaner.backoff.ms) the cleaner would come in
> > and compact the old segments. I have now generated 400 rows across 40 keys
> > (so there is plenty to compact). Segments are being created every 6 seconds
> > per the config.
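> > 
> > For what it's worth, listing one of the partition directories shows the
> > segment files piling up as expected (path is just an example; ours is
> > wherever log.dirs points):
> > 
> > # each roll adds a new .log/.index pair under the partition's directory
> > ls -l /var/kafka-logs/dev_testcompact-0/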
> > 
> > I have also tried removing the deletion-related config:
> > 
> > sudo bin/kafka-topics.sh --zookeeper zk1:2181/kafka --alter \
> >   --topic dev_testcompact --deleteConfig retention.ms
> > 
> > I then restarted one of the three brokers, wondering if it needed a restart
> > to pick up the new configs; still no cleanup. What am I doing wrong?!
> 
