[ https://issues.apache.org/jira/browse/KAFKA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385156#comment-15385156 ]
Sriharsha Chintalapani commented on KAFKA-3375:
-----------------------------------------------

Hi [~ijuma], as part of this patch there is a new scalaCompileOptions setting, and it is causing interesting behavior in a Spark Streaming environment. Spark Streaming uses the old consumer libs. Kafka and Spark are both built with Scala 2.10.5, and it fails at https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/client/ClientUtils.scala#L62 . If I remove the catch from that block it works fine. I later realized the new additionalCompileOptions are causing this weird behavior; after removing the compileOptions it works fine.

{code}
java.lang.VerifyError: Stack map does not match the one at exception handler 198
Exception Details:
  Location:
    kafka/client/ClientUtils$.fetchTopicMetadata(Lscala/collection/Set;Lscala/collection/Seq;Lkafka/producer/ProducerConfig;I)Lkafka/api/TopicMetadataResponse; @198: astore
  Reason:
    Type top (current frame, locals[12]) is not assignable to 'kafka/producer/SyncProducer' (stack map, locals[12])
  Current Frame:
    bci: @71
    flags: { }
    locals: { 'kafka/client/ClientUtils$', 'scala/collection/Set', 'scala/collection/Seq', 'kafka/producer/ProducerConfig', integer, integer, 'scala/runtime/IntRef', 'kafka/api/TopicMetadataRequest', 'kafka/api/TopicMetadataResponse', 'java/lang/Throwable', 'scala/collection/Seq', 'java/lang/Throwable' }
    stack: { 'java/lang/Throwable' }
  Stackmap Frame:
    bci: @198
    flags: { }
    locals: { 'kafka/client/ClientUtils$', 'scala/collection/Set', 'scala/collection/Seq', 'kafka/producer/ProducerConfig', integer, integer, 'scala/runtime/IntRef', 'kafka/api/TopicMetadataRequest', 'kafka/api/TopicMetadataResponse', 'java/lang/Throwable', 'scala/collection/Seq', top, 'kafka/producer/SyncProducer' }
    stack: { 'java/lang/Throwable' }
  Bytecode:
    0000000: 0336 05bb 00a1 5903 b700 a43a 06bb 00a6
    0000010: 59b2 00ab b600 af15 042d b600 b42b b900
    0000020: ba01 00b7 00bd 3a07 0157 013a 0801 5701
    0000030: 3a09 b200 c22c b200 c7b6 00cb b600 cfc0
    0000040: 00d1 3a0a a700 353a 0b2a bb00 0b59 2b15
    0000050: 0419 0619 0ab7 00d8 bb00 0d59 190b b700
    0000060: dbb6 00dd 190b 3a09 1906 1906 b400 e104
    0000070: 60b5 00e1 190c b600 e419 06b4 00e1 190a
    0000080: b900 e801 00a2 006b 1505 9a00 66b2 00ed
    0000090: 2d19 0a19 06b4 00e1 b900 f102 00c0 00f3
    00000a0: b600 f73a 0c2a bb00 0f59 2b15 0419 0619
    00000b0: 0ab7 00f8 b600 fa19 0c19 07b6 00fe 3a08
    00000c0: 0436 05a7 0019 3a0d 1906 1906 b400 e104
    00000d0: 60b5 00e1 190c b600 e419 0dbf 1906 1906
    00000e0: b400 e104 60b5 00e1 190c b600 e4a7 ff8c
    00000f0: 1505 9900 122a bb00 1159 2bb7 0101 b601
    0000100: 0319 08b0 bb01 0559 bb01 0759 b201 0c13
    0000110: 010e b601 12b7 0114 b201 0c05 bd00 0459
    0000120: 032b 5359 0419 0a53 b601 18b6 011c 1909
    0000130: b701 1fbf
  Exception Handler Table:
    bci [183, 198] => handler: 71
    bci [183, 198] => handler: 198
    bci [71, 104] => handler: 198
  Stackmap Table:
    full_frame(@71,{Object[#2],Object[#182],Object[#209],Object[#177],Integer,Integer,Object[#161],Object[#166],Object[#211],Object[#72],Object[#209],Object[#213]},{Object[#72]})
    chop_frame(@121,1)
    full_frame(@198,{Object[#2],Object[#182],Object[#209],Object[#177],Integer,Integer,Object[#161],Object[#166],Object[#211],Object[#72],Object[#209],Top,Object[#213]},{Object[#72]})
    same_frame(@220)
    chop_frame(@240,2)
    same_frame(@260)
        at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:67)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
16/07/17 00:56:17 INFO ConsumerFetcherManager: [ConsumerFetcherManager-1468716976795] Added fetcher for partitions ArrayBuffer()
16/07/17 00:56:17 WARN ConsumerFetcherManager$LeaderFinderThread: [spark_9_r7-kfzu-spark-bikas-2-1468716976671-97da4a8d-leader-finder-thread], Failed to find leader for Set([test_spark,0])
{code}

The code I am running is https://github.com/apache/spark/blob/master/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaInputDStream.scala and here is the streaming job in Spark:
https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java .

> Suppress and fix compiler warnings where reasonable and tweak compiler settings
> -------------------------------------------------------------------------------
>
>                 Key: KAFKA-3375
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3375
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Ismael Juma
>            Assignee: Ismael Juma
>             Fix For: 0.10.0.0
>
>
> This will make it easier to do KAFKA-2982.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
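For anyone who wants to try the same workaround, the setting in question lives in Kafka's build.gradle. The snippet below is only a sketch of the before/after shape: the specific flags shown ("-deprecation", "-unchecked", "-feature") are illustrative assumptions, not the exact list this patch added.

{code}
// build.gradle -- hedged sketch, not the actual KAFKA-3375 diff.
// Before (roughly what the patch adds; flag list here is illustrative only):
tasks.withType(ScalaCompile) {
    scalaCompileOptions.additionalParameters = ["-deprecation", "-unchecked", "-feature"]
}

// After (the workaround: clear the extra flags and rebuild the 2.10.5 jars):
tasks.withType(ScalaCompile) {
    scalaCompileOptions.additionalParameters = []
}
{code}

scalaCompileOptions.additionalParameters is the standard Gradle ScalaCompile property for passing raw scalac flags, so clearing it should restore the pre-patch behavior if those flags are indeed what trips the verifier on the 2.10-built classes.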