[ https://issues.apache.org/jira/browse/KAFKA-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527098#comment-15527098 ]
Michael Coon commented on KAFKA-4224:
-------------------------------------

Gotcha. Unfortunately, the system throwing the exception is an isolated system and I cannot copy/paste the stack trace. I would need to test 0.10.0.1 to see whether the new record-parsing code throws a different exception. I still don't believe that code would include the detail I would need (i.e. offset/partition).

> IndexOutOfBounds in RecordsIterator causes infinite loop in NetworkClient
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-4224
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4224
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.10.0.1
>            Reporter: Michael Coon
>
> For whatever reason, I seem to have a corrupted message returned from a
> broker that puts the consumer into an infinite loop.
> org.apache.kafka.clients.consumer.internals.Fetcher (line 590) gets the next
> record from the RecordsIterator of MemoryRecords, but when it attempts to
> decode the record, it throws an "IndexOutOfBounds" exception. Unfortunately,
> that exception is merely logged and the Fetcher goes on to get the next
> message. The exception apparently does not move the underlying buffer's read
> position forward, so the iterator never actually advances to the next record.
> The result: it keeps trying to read the corrupted record and can't make
> progress.
> I offer two potential solutions:
> 1) Throw the exception up to me and let me decide whether I want to skip
> forward in offsets.
> 2) Make sure the underlying RecordsIterator actually moves forward on
> exceptions so that progress can be made when corrupted messages are found.
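A minimal sketch of what proposed solution 1 could look like from the application side, assuming the client propagated the record-parsing error out of poll() as a KafkaException rather than swallowing it (which, per the report, it does not do today). The broker address, group id, and topic name are hypothetical. Because the exception carries no partition/offset detail, the application can only blindly seek every assigned partition forward by one, which illustrates why that detail matters:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptRecordSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker
        props.put("group.id", "corrupt-record-demo");       // hypothetical group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // hypothetical topic

            while (true) {
                try {
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                    for (ConsumerRecord<byte[], byte[]> record : records) {
                        // normal processing
                        System.out.printf("partition=%d offset=%d%n",
                                          record.partition(), record.offset());
                    }
                } catch (KafkaException e) {
                    // If the parsing error were rethrown instead of only logged,
                    // the application could choose to skip past the bad record.
                    // Without partition/offset detail in the exception, the best
                    // it can do is bump the position of every assigned partition.
                    for (TopicPartition tp : consumer.assignment()) {
                        long pos = consumer.position(tp);
                        consumer.seek(tp, pos + 1); // skip the (possibly corrupt) record
                    }
                }
            }
        }
    }
}
{code}

If the exception included the offending TopicPartition and offset, the catch block could seek only that partition to offset + 1 instead of advancing all of them, which is the detail the comment above says is still missing.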