I am observing the following exception with the Kafka client:

2015-02-04 00:17:27,345 (LAX1-GRIFFIN-r8-1423037468055-pf13797-lax1-GriffinDownloader-1423037818264_c7b1e843ff51-1423037822122-eb7afca7-leader-finder-thread) ClientUtils$ WARN: Fetching topic metadata with correlation id 112 for topics [Set(LAX1-GRIFFIN-r8-1423037468055)] from broker [id:49649,host:172.16.204.44,port:49649] failed
java.lang.ArrayIndexOutOfBoundsException: 7
    at kafka.api.TopicMetadata$$anonfun$readFrom$1.apply$mcVI$sp(TopicMetadata.scala:38)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:78)
    at kafka.api.TopicMetadata$.readFrom(TopicMetadata.scala:36)
    at kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
    at kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
    at scala.collection.immutable.Range.foreach(Range.scala:81)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
    at scala.collection.immutable.Range.map(Range.scala:46)
    at kafka.api.TopicMetadataResponse$.readFrom(TopicMetadataResponse.scala:31)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:115)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)



# Background on what my code is doing:

In my setup the Kafka brokers are configured for automatic topic creation. In the scenario above, a node informs the other nodes (currently 5 in total) about ~50 new (non-existent) topics, and all the nodes open a consumer for each of these topics almost simultaneously. This triggers topic creation for all the topics on the Kafka brokers. Most of the topics are created fine, but there are almost always a few topics that throw the above exception, and a Kafka producer is then unable to send any data to such a topic (LAX1-GRIFFIN-r8-1423037468055 in the above case).
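For context, the per-node fan-out looks roughly like the sketch below. This is a minimal illustration, not our actual code: openConsumer is a hypothetical stand-in for creating a 0.8.2 high-level consumer (the real code would call kafka.consumer.Consumer.createJavaConsumerConnector and then createMessageStreams for the topic), and the topic names are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConsumerFanOut {
    // Hypothetical stand-in for opening a 0.8.2 high-level consumer for one
    // topic; the real code would build a ConsumerConfig, call
    // Consumer.createJavaConsumerConnector, and createMessageStreams.
    static void openConsumer(String topic, AtomicInteger opened) {
        opened.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        int topicCount = 50; // ~50 new (non-existent) topics per batch
        AtomicInteger opened = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < topicCount; i++) {
            // Illustrative topic name; real names look like
            // LAX1-GRIFFIN-r8-<timestamp>
            final String topic = "LAX1-GRIFFIN-r8-" + i;
            pool.submit(() -> openConsumer(topic, opened));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("opened=" + opened.get());
    }
}
```

Because every node runs this fan-out at roughly the same time, the first metadata fetch for each topic races with the broker-side auto-creation.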


# Logs
All Kafka broker logs (3 brokers) are available at http://d.pr/f/1eOGM/5UPMPfg5. For these logs, only LAX1-GRIFFIN-r8-1423037468055 had an issue; all other topics were fine.


# Setup
Zookeeper: 3.4.6
Kafka broker: 0.8.2-beta
Kafka clients: 0.8.2-beta

# Kafka broker settings (all other settings are the 0.8.2-beta defaults)
kafka.controlled.shutdown.enable: 'FALSE'
kafka.auto.create.topics.enable: 'TRUE'
kafka.num.partitions: 8
kafka.default.replication.factor: 1
kafka.rebalance.backoff.ms: 3000
kafka.rebalance.max.retries: 10
kafka.log.retention.minutes: 1200
kafka.delete.topic.enable: 'TRUE'
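For reference, assuming the `kafka.` prefix above is added by our deployment tooling rather than being part of the setting names, the raw properties would be roughly:

```properties
# Sketch of the equivalent raw property names (the "kafka." prefix is
# assumed to come from deployment tooling).
controlled.shutdown.enable=false
auto.create.topics.enable=true
num.partitions=8
default.replication.factor=1
log.retention.minutes=1200
delete.topic.enable=true
# Note: in 0.8.x these two are consumer-side settings, not broker settings:
rebalance.backoff.ms=3000
rebalance.max.retries=10
```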



Sumit
