Could you run our DumpLogSegments tool on the relevant log segment and check whether the log is corrupted? Also, are you using the 0.8.0 release?
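For reference, an invocation would look roughly like the following (a sketch only; the segment path is just an example, so substitute the .log file under your data directory for nf_errors_log-0 that covers offset 76736251):

    bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
      --files /tmp/kafka-logs/nf_errors_log-0/00000000000076000000.log \
      --print-data-log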
Thanks,

Jun


On Sun, Jan 19, 2014 at 10:09 PM, Bae, Jae Hyeon <metac...@gmail.com> wrote:

> Hello
>
> I finally upgraded Kafka 0.7 to Kafka 0.8, and a few Kafka 0.8 clusters
> are being tested now.
>
> Today, I was alerted with the following message:
>
> "data": {
>   "exceptionMessage": "Found a message larger than the maximum fetch size
> of this consumer on topic nf_errors_log partition 0 at fetch offset
> 76736251. Increase the fetch size, or decrease the maximum message size
> the broker will allow.",
>   "exceptionStackTrace": "kafka.common.MessageSizeTooLargeException:
> Found a message larger than the maximum fetch size of this consumer on
> topic nf_errors_log partition 0 at fetch offset 76736251. Increase the
> fetch size, or decrease the maximum message size the broker will allow.",
>   "exceptionType": "kafka.common.MessageSizeTooLargeException"
> },
> "description": "RuntimeException aborted realtime
> processing[nf_errors_log]"
>
> What I don't understand is that I am using all default properties, which
> means:
>
> the broker's message.max.bytes is 1000000
> the consumer's fetch.message.max.bytes is 1024 * 1024, which is greater
> than the broker's message.max.bytes
>
> How could this happen? I am using snappy compression.
>
> Thank you
> Best, Jae
>
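For anyone following along, the two settings being compared are set per broker and per consumer; a minimal sketch of the relevant entries is below. The values are only illustrative (1000000 and 1024 * 1024 are the defaults mentioned above), and one thing worth checking with snappy is whether a whole compressed batch, rather than an individual record, is what has to fit within these limits.

    # broker server.properties: largest message the broker will accept
    message.max.bytes=1000000

    # consumer properties: bytes to attempt to fetch per partition; must be
    # at least as large as the largest message the broker may return
    fetch.message.max.bytes=2097152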