[jira] [Resolved] (KAFKA-6734) TopicMetadataTest is flaky
[ https://issues.apache.org/jira/browse/KAFKA-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved KAFKA-6734.
---------------------------
    Resolution: Cannot Reproduce

> TopicMetadataTest is flaky
> --------------------------
>
>                 Key: KAFKA-6734
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6734
>             Project: Kafka
>          Issue Type: Test
>            Reporter: Ted Yu
>            Priority: Minor
>
> I got two different test failures in two runs of the test suite:
> {code}
> kafka.integration.TopicMetadataTest > testAutoCreateTopic FAILED
>     kafka.common.KafkaException: fetching topic metadata for topics [Set(testAutoCreateTopic)] from broker [List(BrokerEndPoint(0,,41557))] failed
>         at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:77)
>         at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:98)
>         at kafka.integration.TopicMetadataTest.testAutoCreateTopic(TopicMetadataTest.scala:105)
>
>         Caused by:
>         java.net.SocketTimeoutException
>             at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:211)
>             at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
>             at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
>             at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:122)
>             at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:131)
>             at kafka.network.BlockingChannel.receive(BlockingChannel.scala:122)
>             at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82)
>             at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
>             at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
>             at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:63)
>             ... 2 more
> {code}
> {code}
> kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack FAILED
>     java.lang.AssertionError: Topic metadata is not correctly updated for broker kafka.server.KafkaServer@4c45dc9f.
>     Expected ISR: List(BrokerEndPoint(0,localhost,40822), BrokerEndPoint(1,localhost,39030))
>     Actual ISR  : Vector(BrokerEndPoint(0,localhost,40822))
>         at kafka.utils.TestUtils$.fail(TestUtils.scala:355)
>         at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865)
>         at kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:191)
>         at kafka.integration.TopicMetadataTest$$anonfun$checkIsr$1.apply(TopicMetadataTest.scala:189)
>         at scala.collection.immutable.List.foreach(List.scala:392)
>         at kafka.integration.TopicMetadataTest.checkIsr(TopicMetadataTest.scala:189)
>         at kafka.integration.TopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack(TopicMetadataTest.scala:231)
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (KAFKA-6869) Distributed herder synchronization issue
Oleg Kuznetsov created KAFKA-6869:
-------------------------------------

             Summary: Distributed herder synchronization issue
                 Key: KAFKA-6869
                 URL: https://issues.apache.org/jira/browse/KAFKA-6869
             Project: Kafka
          Issue Type: Bug
          Components: KafkaConnect
    Affects Versions: 1.1.0
            Reporter: Oleg Kuznetsov

The field {{org.apache.kafka.connect.runtime.distributed.DistributedHerder#needsReconfigRebalance}} is read and written both with and without *synchronized* in multiple places, which is incorrect under the Java Memory Model. I propose either adding *synchronized* to all accesses of the field, or using a read-write lock that varies the type of locking for reads and writes.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
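To make the visibility hazard concrete, here is a minimal sketch (illustrative only: the class and method names are simplified stand-ins, not the actual DistributedHerder code) of a flag written under *synchronized* but read without it, followed by one possible fix along the lines the report proposes:

```java
// Sketch of the hazard (NOT the actual DistributedHerder code): a field
// written inside synchronized but read outside it has no happens-before
// relationship for the reader, so under the JMM the reading thread may
// never observe the write.
class UnsafeFlag {
    private boolean needsReconfigRebalance = false;

    synchronized void request() { needsReconfigRebalance = true; }

    // BUG: unsynchronized read; stale value may be observed forever.
    boolean check() { return needsReconfigRebalance; }
}

// One possible fix: guard every access of the field with the same monitor,
// so each read synchronizes-with the preceding write.
class SafeFlag {
    private boolean needsReconfigRebalance = false;

    synchronized void request() { needsReconfigRebalance = true; }

    synchronized boolean check() { return needsReconfigRebalance; }
}
```

A `volatile` field or a `ReadWriteLock` (the report's other suggestion) would give the same visibility guarantee; which one is appropriate depends on whether the flag is updated together with other herder state.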
Re: Java 10 replacing Java 9 in Jenkins for trunk
In PR build, I noticed the following
( https://builds.apache.org/job/kafka-pr-jdk10-scala2.12/622/console ):

02:32:11 :clients:compileJava
/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk10-scala2.12/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java:1262: error: lambda expressions are not supported in -source 7
02:32:12             client.poll(pollTimeout, nowMs, () -> {
02:32:12                                             ^
02:32:12   (use -source 8 or higher to enable lambda expressions)
02:32:12 1 error

Could the above be due to the following in build.gradle:

  if (JavaVersion.current().isJava9Compatible())
    options.compilerArgs << "--release" << "7"

If so, we need to adjust it.

Cheers

On Mon, Apr 9, 2018 at 9:46 AM, Ismael Juma wrote:

> Hi all,
>
> Java 10 was recently released and support for Java 9 has ended since it's
> not a LTS release. I've added a kafka-trunk Jenkins job for Java 10 and
> disabled the Java 9 job. I also added a PR Jenkins job for Java 10 and will
> soon disable the Java 9 PR job.
>
> The general idea is to have a separate Jenkins job for the latest non LTS
> release (Java 10) and all supported LTS releases (Java 8 and Java 7
> currently, soon to become Java 8 only).
>
> Let me know if you have any questions or concerns.
>
> Ismael
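[For context: with JDK 9+, javac's `--release N` option enforces the source rules and platform APIs of Java N, so `--release 7` rejects lambdas regardless of which JDK runs the build. If that check is indeed the cause, one possible adjustment is sketched below; this is a guess at the shape of the fix, not necessarily what was committed to build.gradle.]

```groovy
// Sketch: when compiling with JDK 9+, target the Java 8 platform
// instead of Java 7 so that lambda expressions are accepted.
if (JavaVersion.current().isJava9Compatible())
  options.compilerArgs << "--release" << "8"
```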
Re: Java 10 replacing Java 9 in Jenkins for trunk
Hi Ted,

We are in the process of updating the system tests infrastructure so that
it works with Java 8. Once that happens, we will switch the build to use
Java 8.

Ismael

On Sat, 5 May 2018, 19:57 Ted Yu, wrote:

> In PR build, I noticed the following
> ( https://builds.apache.org/job/kafka-pr-jdk10-scala2.12/622/console ):
>
> 02:32:11 :clients:compileJava
> /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk10-scala2.12/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java:1262: error: lambda expressions are not supported in -source 7
> 02:32:12             client.poll(pollTimeout, nowMs, () -> {
> 02:32:12                                             ^
> 02:32:12   (use -source 8 or higher to enable lambda expressions)
> 02:32:12 1 error
>
> Could the above be due to the following in build.gradle:
>
>   if (JavaVersion.current().isJava9Compatible())
>     options.compilerArgs << "--release" << "7"
>
> If so, we need to adjust it.
>
> Cheers
>
> On Mon, Apr 9, 2018 at 9:46 AM, Ismael Juma wrote:
>
> > Hi all,
> >
> > Java 10 was recently released and support for Java 9 has ended since it's
> > not a LTS release. I've added a kafka-trunk Jenkins job for Java 10 and
> > disabled the Java 9 job. I also added a PR Jenkins job for Java 10 and
> > will soon disable the Java 9 PR job.
> >
> > The general idea is to have a separate Jenkins job for the latest non LTS
> > release (Java 10) and all supported LTS releases (Java 8 and Java 7
> > currently, soon to become Java 8 only).
> >
> > Let me know if you have any questions or concerns.
> >
> > Ismael
> >
[jira] [Resolved] (KAFKA-5677) Remove deprecated punctuate method
[ https://issues.apache.org/jira/browse/KAFKA-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang resolved KAFKA-5677.
----------------------------------
       Resolution: Fixed
         Assignee: (was: Jimin Hsieh)
    Fix Version/s: 2.0.0

> Remove deprecated punctuate method
> ----------------------------------
>
>                 Key: KAFKA-5677
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5677
>             Project: Kafka
>          Issue Type: Task
>            Reporter: Michal Borowiecki
>            Priority: Major
>             Fix For: 2.0.0
>
>
> Task to track the removal of the punctuate method that was deprecated in
> KAFKA-5233, along with the associated unit tests.
> (not sure of the fix version number at this point)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (KAFKA-6870) Concurrency conflicts in SampledStat
Chia-Ping Tsai created KAFKA-6870:
-------------------------------------

             Summary: Concurrency conflicts in SampledStat
                 Key: KAFKA-6870
                 URL: https://issues.apache.org/jira/browse/KAFKA-6870
             Project: Kafka
          Issue Type: Bug
            Reporter: Chia-Ping Tsai

The samples list stored in SampledStat is not thread-safe. However, the ReplicaFetcherThreads used to replicate from specified brokers may update the samples (when the list is empty, a new sample is added to it) and iterate over it concurrently, causing a ConcurrentModificationException.

{code:java}
[2018-05-03 13:50:56,087] ERROR [ReplicaFetcher replicaId=106, leaderId=100, fetcherId=0] Error due to (kafka.server.ReplicaFetcherThread:76)
java.util.ConcurrentModificationException
	at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
	at java.util.ArrayList$Itr.next(ArrayList.java:859)
	at org.apache.kafka.common.metrics.stats.Rate$SampledTotal.combine(Rate.java:132)
	at org.apache.kafka.common.metrics.stats.SampledStat.measure(SampledStat.java:78)
	at org.apache.kafka.common.metrics.stats.Rate.measure(Rate.java:66)
	at org.apache.kafka.common.metrics.KafkaMetric.measurableValue(KafkaMetric.java:85)
	at org.apache.kafka.common.metrics.Sensor.checkQuotas(Sensor.java:201)
	at org.apache.kafka.common.metrics.Sensor.checkQuotas(Sensor.java:192)
	at kafka.server.ReplicationQuotaManager.isQuotaExceeded(ReplicationQuotaManager.scala:104)
	at kafka.server.ReplicaFetcherThread.kafka$server$ReplicaFetcherThread$$shouldFollowerThrottle(ReplicaFetcherThread.scala:384)
	at kafka.server.ReplicaFetcherThread$$anonfun$buildFetchRequest$1.apply(ReplicaFetcherThread.scala:263)
	at kafka.server.ReplicaFetcherThread$$anonfun$buildFetchRequest$1.apply(ReplicaFetcherThread.scala:261)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at kafka.server.ReplicaFetcherThread.buildFetchRequest(ReplicaFetcherThread.scala:261)
	at kafka.server.AbstractFetcherThread$$anonfun$2.apply(AbstractFetcherThread.scala:102)
	at kafka.server.AbstractFetcherThread$$anonfun$2.apply(AbstractFetcherThread.scala:101)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
	at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:101)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
{code}

Before https://github.com/apache/kafka/commit/d734f4e56d276f84b8c52b602edd67d41cbb6c35, the ConcurrentModificationException did not occur: since the only change currently made to the samples list is an "add", iterating with get(index) avoids the ConcurrentModificationException.

In short, we can simply make the samples list thread-safe, or replace the foreach loop with get(index) if there are concerns about the performance of a thread-safe list.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
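The get(index) mitigation can be illustrated with a small single-threaded sketch (illustrative only, not the actual SampledStat code; a single-threaded mid-iteration "add" triggers the same fail-fast check as the concurrent one in the report):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

// Sketch (NOT the actual SampledStat code): ArrayList's Iterator is
// fail-fast, so any structural modification during iteration throws
// ConcurrentModificationException; a get(index) loop performs no such
// check and therefore tolerates an "add" mid-loop.
class CmeDemo {
    static boolean iteratorThrows() {
        List<Integer> samples = new ArrayList<>(List.of(1, 2, 3));
        try {
            for (Integer s : samples) {      // for-each uses an Iterator
                if (s == 1) samples.add(4);  // structural change mid-iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;                     // fail-fast check fired
        }
    }

    static int indexLoopSum() {
        List<Integer> samples = new ArrayList<>(List.of(1, 2, 3));
        int n = samples.size();              // snapshot the size up front
        int sum = 0;
        for (int i = 0; i < n; i++) {        // get(index): no modCount check
            sum += samples.get(i);
            if (i == 0) samples.add(4);      // an "add" here does not throw
        }
        return sum;
    }
}
```

As the report notes, get(index) is only a mitigation for the add-only case; a genuinely thread-safe list (for example a `CopyOnWriteArrayList`, at some write cost) would fix the race itself rather than just the exception.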