[jira] [Created] (KAFKA-7756) Leader: -1 after topic delete
zhws created KAFKA-7756: --- Summary: Leader: -1 after topic delete Key: KAFKA-7756 URL: https://issues.apache.org/jira/browse/KAFKA-7756 Project: Kafka Issue Type: Bug Reporter: zhws Attachments: image-2018-12-19-17-03-42-912.png, image-2018-12-19-17-07-27-850.png, image-2018-12-19-17-10-25-784.png 1. The first time I deleted the topic "deleteTestTwo", it succeeded. I can see the deletion log, and the ZooKeeper node was removed as well. !image-2018-12-19-17-03-42-912.png! 2. But when I recreate this topic and delete it again, !image-2018-12-19-17-07-27-850.png! I only see the file-deletion log. ZooKeeper still has the node, and running the describe command shows the following: !image-2018-12-19-17-10-25-784.png! If anyone knows the reason, please let me know. Thanks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7756) Leader: -1 after topic delete
[ https://issues.apache.org/jira/browse/KAFKA-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhws updated KAFKA-7756: Priority: Blocker (was: Major) > Leader: -1 after topic delete > - > > Key: KAFKA-7756 > URL: https://issues.apache.org/jira/browse/KAFKA-7756 > Project: Kafka > Issue Type: Bug >Reporter: zhws >Priority: Blocker > Attachments: image-2018-12-19-17-03-42-912.png, > image-2018-12-19-17-07-27-850.png, image-2018-12-19-17-10-25-784.png > > > 1、when i first delete topic "deleteTestTwo",it's successed. I can see the > delete log and zookeeper delete node too. > !image-2018-12-19-17-03-42-912.png! > > 2、But when i create this topic and delete again. > !image-2018-12-19-17-07-27-850.png! > I just see the file delete log. > Zookeeper still have this node, and i execute describe shell as follows > !image-2018-12-19-17-10-25-784.png! > > if some people know the reason, please tell me.thanks > kafka version : 2.0 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7756) Leader: -1 after topic delete
[ https://issues.apache.org/jira/browse/KAFKA-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhws updated KAFKA-7756: Description: 1、when i first delete topic "deleteTestTwo",it's successed. I can see the delete log and zookeeper delete node too. !image-2018-12-19-17-03-42-912.png! 2、But when i create this topic and delete again. !image-2018-12-19-17-07-27-850.png! I just see the file delete log. Zookeeper still have this node, and i execute describe shell as follows !image-2018-12-19-17-10-25-784.png! if some people know the reason, please tell me.thanks kafka version : 2.0 was: 1、when i first delete topic "deleteTestTwo",it's successed. I can see the delete log and zookeeper delete node too. !image-2018-12-19-17-03-42-912.png! 2、But when i create this topic and delete again. !image-2018-12-19-17-07-27-850.png! I just see the file delete log. Zookeeper still have this node, and i execute describe shell as follows !image-2018-12-19-17-10-25-784.png! if some people know the reason, please tell me.thanks > Leader: -1 after topic delete > - > > Key: KAFKA-7756 > URL: https://issues.apache.org/jira/browse/KAFKA-7756 > Project: Kafka > Issue Type: Bug >Reporter: zhws >Priority: Major > Attachments: image-2018-12-19-17-03-42-912.png, > image-2018-12-19-17-07-27-850.png, image-2018-12-19-17-10-25-784.png > > > 1、when i first delete topic "deleteTestTwo",it's successed. I can see the > delete log and zookeeper delete node too. > !image-2018-12-19-17-03-42-912.png! > > 2、But when i create this topic and delete again. > !image-2018-12-19-17-07-27-850.png! > I just see the file delete log. > Zookeeper still have this node, and i execute describe shell as follows > !image-2018-12-19-17-10-25-784.png! > > if some people know the reason, please tell me.thanks > kafka version : 2.0 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6359) Work for KIP-236
[ https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724818#comment-16724818 ] Tom Bentley commented on KAFKA-6359: [~satish.duggana], [~sriharsha] asked me here and also out of band a couple of months ago about working on it. I said then that while it's something I intend to come back to, it's not something I have time for right now, so he was welcome to work on it. I don't know if he's made any progress. So while it's fine with me, it would be best to check with him too. > Work for KIP-236 > > > Key: KAFKA-6359 > URL: https://issues.apache.org/jira/browse/KAFKA-6359 > Project: Kafka > Issue Type: Improvement >Reporter: Tom Bentley >Assignee: Tom Bentley >Priority: Minor > > This issue is for the work described in KIP-236. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7741) Bad dependency via SBT
[ https://issues.apache.org/jira/browse/KAFKA-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724824#comment-16724824 ] sacha barber commented on KAFKA-7741: - John / randall no problem You guys have done awesome job on answering this, well done > Bad dependency via SBT > -- > > Key: KAFKA-7741 > URL: https://issues.apache.org/jira/browse/KAFKA-7741 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 2.0.0, 2.0.1, 2.1.0 > Environment: Windows 10 professional, IntelliJ IDEA 2017.1 >Reporter: sacha barber >Assignee: John Roesler >Priority: Major > > I am using the Kafka-Streams-Scala 2.1.0 JAR. > And if I create a new Scala project using SBT with these dependencies > {code} > name := "ScalaKafkaStreamsDemo" > version := "1.0" > scalaVersion := "2.12.1" > libraryDependencies += "org.apache.kafka" %% "kafka" % "2.0.0" > libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.0.0" > libraryDependencies += "org.apache.kafka" % "kafka-streams" % "2.0.0" > libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "2.0.0" > //TEST > libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % Test > libraryDependencies += "org.apache.kafka" % "kafka-streams-test-utils" % > "2.0.0" % Test > {code} > I get this error > > {code} > SBT 'ScalaKafkaStreamsDemo' project refresh failed > Error:Error while importing SBT project:...[info] Resolving > jline#jline;2.14.1 ... > [warn] [FAILED ] > javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}: (0ms) > [warn] local: tried > [warn] > C:\Users\sacha\.ivy2\local\javax.ws.rs\javax.ws.rs-api\2.1.1\${packaging.type}s\javax.ws.rs-api.${packaging.type} > [warn] public: tried > [warn] > https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1.1/javax.ws.rs-api-2.1.1.${packaging.type} > [info] downloading > https://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-test-utils/2.1.0/kafka-streams-test-utils-2.1.0.jar > ... > [info] [SUCCESSFUL ] > org.apache.kafka#kafka-streams-test-utils;2.1.0!kafka-streams-test-utils.jar > (344ms) > [warn] :: > [warn] :: FAILED DOWNLOADS :: > [warn] :: ^ see resolution messages for details ^ :: > [warn] :: > [warn] :: javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type} > [warn] :: > [trace] Stack trace suppressed: run 'last *:ssExtractDependencies' for the > full output. > [trace] Stack trace suppressed: run 'last *:update' for the full output. > [error] (*:ssExtractDependencies) sbt.ResolveException: download failed: > javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type} > [error] (*:update) sbt.ResolveException: download failed: > javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type} > [error] Total time: 8 s, completed 16-Dec-2018 19:27:21 > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=384M; > support was removed in 8.0See complete log in href="file:/C:/Users/sacha/.IdeaIC2017.1/system/log/sbt.last.log">file:/C:/Users/sacha/.IdeaIC2017.1/system/log/sbt.last.log > {code} > This seems to be a common issue with bad dependency from Kafka to > javax.ws.rs-api. > if I drop the Kafka version down to 2.0.0 and add this line to my SBT file > this error goes away > {code} > libraryDependencies += "javax.ws.rs" % "javax.ws.rs-api" % "2.1" > artifacts(Artifact("javax.ws.rs-api", "jar", "jar"))` > {code} > > However I would like to work with 2.1.0 version. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7282) Failed to read `log header` from file channel
[ https://issues.apache.org/jira/browse/KAFKA-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724852#comment-16724852 ] Jody commented on KAFKA-7282: - [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? > Failed to read `log header` from file channel > - > > Key: KAFKA-7282 > URL: https://issues.apache.org/jira/browse/KAFKA-7282 > Project: Kafka > Issue Type: Bug > Components: log >Affects Versions: 0.11.0.2, 1.1.1, 2.0.0 > Environment: Linux >Reporter: Alastair Munro >Priority: Major > > Full stack trace: > {code:java} > [2018-08-13 11:22:01,635] ERROR [ReplicaManager broker=2] Error processing > fetch operation on partition segmenter-evt-v1-14, offset 96745 > (kafka.server.ReplicaManager) > org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read > `log header` from file channel `sun.nio.ch.FileChannelImpl@6e6d8ddd`. > Expected to read 17 bytes, but reached end of file after reading 0 bytes. > Started read from position 25935. > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40) > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24) > at > org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79) > at > org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45) > at > org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:286) > at kafka.log.LogSegment.translateOffset(LogSegment.scala:254) > at kafka.log.LogSegment.read(LogSegment.scala:277) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114) > at kafka.log.Log.maybeHandleIOException(Log.scala:1837) > at kafka.log.Log.read(Log.scala:1114) > at > kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973) > at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802) > at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815) > at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:678) > at kafka.server.KafkaApis.handle(KafkaApis.scala:107) > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-7282) Failed to read `log header` from file channel
[ https://issues.apache.org/jira/browse/KAFKA-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724852#comment-16724852 ] Jody edited comment on KAFKA-7282 at 12/19/18 9:59 AM: --- [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? By the way, we are also using Kafka 2.0.0, OpenShift (version 3.10) with GlusterFS as storage backend. was (Author: j9dy): [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? By the way, we are also using OpenShift (version 3.10) with GlusterFS as storage backend. > Failed to read `log header` from file channel > - > > Key: KAFKA-7282 > URL: https://issues.apache.org/jira/browse/KAFKA-7282 > Project: Kafka > Issue Type: Bug > Components: log >Affects Versions: 0.11.0.2, 1.1.1, 2.0.0 > Environment: Linux >Reporter: Alastair Munro >Priority: Major > > Full stack trace: > {code:java} > [2018-08-13 11:22:01,635] ERROR [ReplicaManager broker=2] Error processing > fetch operation on partition segmenter-evt-v1-14, offset 96745 > (kafka.server.ReplicaManager) > org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read > `log header` from file channel `sun.nio.ch.FileChannelImpl@6e6d8ddd`. > Expected to read 17 bytes, but reached end of file after reading 0 bytes. > Started read from position 25935. > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40) > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24) > at > org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79) > at > org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45) > at > org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:286) > at kafka.log.LogSegment.translateOffset(LogSegment.scala:254) > at kafka.log.LogSegment.read(LogSegment.scala:277) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114) > at kafka.log.Log.maybeHandleIOException(Log.scala:1837) > at kafka.log.Log.read(Log.scala:1114) > at > kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973) > at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802) > at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815) > at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:678) > at kafka.server.KafkaApis.handle(KafkaApis.scala:107) > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-7282) Failed to read `log header` from file channel
[ https://issues.apache.org/jira/browse/KAFKA-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724852#comment-16724852 ] Jody edited comment on KAFKA-7282 at 12/19/18 9:59 AM: --- [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? By the way, we are also using OpenShift (version 3.10) with GlusterFS as storage backend. was (Author: j9dy): [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? > Failed to read `log header` from file channel > - > > Key: KAFKA-7282 > URL: https://issues.apache.org/jira/browse/KAFKA-7282 > Project: Kafka > Issue Type: Bug > Components: log >Affects Versions: 0.11.0.2, 1.1.1, 2.0.0 > Environment: Linux >Reporter: Alastair Munro >Priority: Major > > Full stack trace: > {code:java} > [2018-08-13 11:22:01,635] ERROR [ReplicaManager broker=2] Error processing > fetch operation on partition segmenter-evt-v1-14, offset 96745 > (kafka.server.ReplicaManager) > org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read > `log header` from file channel `sun.nio.ch.FileChannelImpl@6e6d8ddd`. > Expected to read 17 bytes, but reached end of file after reading 0 bytes. > Started read from position 25935. > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40) > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24) > at > org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79) > at > org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45) > at > org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:286) > at kafka.log.LogSegment.translateOffset(LogSegment.scala:254) > at kafka.log.LogSegment.read(LogSegment.scala:277) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114) > at kafka.log.Log.maybeHandleIOException(Log.scala:1837) > at kafka.log.Log.read(Log.scala:1114) > at > kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973) > at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802) > at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815) > at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:678) > at kafka.server.KafkaApis.handle(KafkaApis.scala:107) > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-7282) Failed to read `log header` from file channel
[ https://issues.apache.org/jira/browse/KAFKA-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724852#comment-16724852 ] Jody edited comment on KAFKA-7282 at 12/19/18 10:03 AM: [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? By the way, we are also using Kafka 2.0.0, OpenShift (version 3.10) with GlusterFS as storage backend. In the mail you linked, there is an update which says that {code:java} write-behind {code} may be the critical option to turn off: [https://lists.gluster.org/pipermail/gluster-users/2017-May/031208.html] was (Author: j9dy): [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? By the way, we are also using Kafka 2.0.0, OpenShift (version 3.10) with GlusterFS as storage backend. > Failed to read `log header` from file channel > - > > Key: KAFKA-7282 > URL: https://issues.apache.org/jira/browse/KAFKA-7282 > Project: Kafka > Issue Type: Bug > Components: log >Affects Versions: 0.11.0.2, 1.1.1, 2.0.0 > Environment: Linux >Reporter: Alastair Munro >Priority: Major > > Full stack trace: > {code:java} > [2018-08-13 11:22:01,635] ERROR [ReplicaManager broker=2] Error processing > fetch operation on partition segmenter-evt-v1-14, offset 96745 > (kafka.server.ReplicaManager) > org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read > `log header` from file channel `sun.nio.ch.FileChannelImpl@6e6d8ddd`. > Expected to read 17 bytes, but reached end of file after reading 0 bytes. > Started read from position 25935. 
> at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40) > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24) > at > org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79) > at > org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45) > at > org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:286) > at kafka.log.LogSegment.translateOffset(LogSegment.scala:254) > at kafka.log.LogSegment.read(LogSegment.scala:277) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114) > at kafka.log.Log.maybeHandleIOException(Log.scala:1837) > at kafka.log.Log.read(Log.scala:1114) > at > kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973) > at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802) > at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815) > at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:678) > at kafka.server.KafkaApis.handle(KafkaApis.scala:107) > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-7282) Failed to read `log header` from file channel
[ https://issues.apache.org/jira/browse/KAFKA-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724852#comment-16724852 ] Jody edited comment on KAFKA-7282 at 12/19/18 10:04 AM: [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? By the way, we are also using Kafka 2.0.0, OpenShift (version 3.10) with GlusterFS as storage backend. Edit: In the mail you linked, there is an update which says that {code:java} write-behind {code} may be the critical option to turn off: [https://lists.gluster.org/pipermail/gluster-users/2017-May/031208.html] was (Author: j9dy): [~amunro] did you end up with a better configuration? We are running into the same issue, the log files of our Kafka brokers are being spammed by the error you reported above. Does this also imply we have data issues (e.g. we are losing data because of this)? By the way, we are also using Kafka 2.0.0, OpenShift (version 3.10) with GlusterFS as storage backend. In the mail you linked, there is an update which says that {code:java} write-behind {code} may be the critical option to turn off: [https://lists.gluster.org/pipermail/gluster-users/2017-May/031208.html] > Failed to read `log header` from file channel > - > > Key: KAFKA-7282 > URL: https://issues.apache.org/jira/browse/KAFKA-7282 > Project: Kafka > Issue Type: Bug > Components: log >Affects Versions: 0.11.0.2, 1.1.1, 2.0.0 > Environment: Linux >Reporter: Alastair Munro >Priority: Major > > Full stack trace: > {code:java} > [2018-08-13 11:22:01,635] ERROR [ReplicaManager broker=2] Error processing > fetch operation on partition segmenter-evt-v1-14, offset 96745 > (kafka.server.ReplicaManager) > org.apache.kafka.common.KafkaException: java.io.EOFException: Failed to read > `log header` from file channel `sun.nio.ch.FileChannelImpl@6e6d8ddd`. > Expected to read 17 bytes, but reached end of file after reading 0 bytes. > Started read from position 25935. 
> at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:40) > at > org.apache.kafka.common.record.RecordBatchIterator.makeNext(RecordBatchIterator.java:24) > at > org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79) > at > org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45) > at > org.apache.kafka.common.record.FileRecords.searchForOffsetWithSize(FileRecords.java:286) > at kafka.log.LogSegment.translateOffset(LogSegment.scala:254) > at kafka.log.LogSegment.read(LogSegment.scala:277) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159) > at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114) > at kafka.log.Log.maybeHandleIOException(Log.scala:1837) > at kafka.log.Log.read(Log.scala:1114) > at > kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974) > at > kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973) > at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802) > at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815) > at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:678) > at kafka.server.KafkaApis.handle(KafkaApis.scala:107) > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7755) Kubernetes - Kafka clients are resolving DNS entries only one time
[ https://issues.apache.org/jira/browse/KAFKA-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724864#comment-16724864 ] ASF GitHub Bot commented on KAFKA-7755: --- hackerwin7 opened a new pull request #6049: KAFKA-7755 urn update inet addresses URL: https://github.com/apache/kafka/pull/6049 *More detailed description of your change, if necessary. The PR title and PR message become the squashed commit message, so use a separate comment to ping reviewers.* *Summary of testing strategy (including rationale) for the feature or bug fix. Unit and/or integration tests are expected for any behaviour change and system tests should be considered for larger changes.* ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Kubernetes - Kafka clients are resolving DNS entries only one time > -- > > Key: KAFKA-7755 > URL: https://issues.apache.org/jira/browse/KAFKA-7755 > Project: Kafka > Issue Type: Bug > Components: clients >Affects Versions: 2.1.0, 2.2.0, 2.1.1 > Environment: Kubernetes >Reporter: Loïc Monney >Priority: Blocker > > *Introduction* > Since 2.1.0 Kafka clients are supporting multiple DNS resolved IP addresses > if the first one fails. This change has been introduced by > https://issues.apache.org/jira/browse/KAFKA-6863. However this DNS resolution > is now performed only one time by the clients. This is not a problem if all > brokers have fixed IP addresses, however this is definitely an issue when > Kafka brokers are run on top of Kubernetes. Indeed, new Kubernetes pods will > receive another IP address, so as soon as all brokers will have been > restarted clients won't be able to reconnect to any broker. > *Impact* > Everyone running Kafka 2.1 or later on top of Kubernetes is impacted when a > rolling restart is performed. > *Root cause* > Since https://issues.apache.org/jira/browse/KAFKA-6863 Kafka clients are > resolving DNS entries only once. > *Proposed solution* > In > [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java#L368] > Kafka clients should perform the DNS resolution again when all IP addresses > have been "used" (when _index_ is back to 0) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
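To illustrate the proposed solution, here is a minimal sketch of the idea (a hypothetical helper, not the actual ClusterConnectionStates code or the linked PR): once the round-robin index wraps back to 0, the hostname is resolved again via InetAddress.getAllByName, so that brokers rescheduled onto new pod IPs are picked up.

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.List;

// Illustration of the proposed behaviour: re-resolve the DNS name whenever
// all previously resolved addresses have been tried, instead of resolving once.
public class ReresolvingAddressIterator {
    private final String host;
    private List<InetAddress> addresses;
    private int index = 0;

    public ReresolvingAddressIterator(String host) throws UnknownHostException {
        this.host = host;
        this.addresses = resolve();
    }

    private List<InetAddress> resolve() throws UnknownHostException {
        // In Kubernetes a restarted pod can sit behind the same DNS name with a new IP,
        // so the lookup must be repeated rather than cached forever.
        return Arrays.asList(InetAddress.getAllByName(host));
    }

    public synchronized InetAddress next() throws UnknownHostException {
        if (index >= addresses.size()) {
            index = 0;
            addresses = resolve(); // every known address was "used"; refresh the list
        }
        return addresses.get(index++);
    }
}
{code}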
[jira] [Commented] (KAFKA-7750) Hjson support in kafka connect
[ https://issues.apache.org/jira/browse/KAFKA-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724893#comment-16724893 ] Manjeet Duhan commented on KAFKA-7750: -- We use the connector JSON as a design document that gets checked into our system, so it helps to be able to make ad-hoc changes and add comments, and multi-line strings make it easy to test things out. Plain JSON is unusable as a connector-config design artifact. What is Confluent's recommendation for managing and designing connector JSONs, given that they can be large? We thought HJSON looked better than YAML, mainly for managing indentation. > Hjson support in kafka connect > -- > > Key: KAFKA-7750 > URL: https://issues.apache.org/jira/browse/KAFKA-7750 > Project: Kafka > Issue Type: Improvement >Reporter: Manjeet Duhan >Priority: Major > Attachments: image-2018-12-18-10-07-22-944.png > > > I agree that json format is most accepted format among applications to > communicate but this json is programme friendly , We needed something user > friendly where we can pass comments comments as part of connector > configuration. > Features of Hjson :- > # We are allowed to use comments > # We are allowed to pass json as part of connector configuration key without > escaping it which is very user friendly. (We have modified version of > kafka-connect-elasticsearch where user can pass index mapping part of > connector properties). > Please find attached connector configuration in Json and Hjson. We are > already running this in production. I have introduced HJSON filter on POST > and PUT apis of kafka connect > !image-2018-12-18-10-07-22-944.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
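A possible client-side workaround that avoids changing Kafka Connect at all: keep the commented HJSON file as the design artifact and convert it to plain JSON just before submitting it to the Connect REST API. The sketch below assumes the third-party hjson-java library (org.hjson.JsonValue), which is not part of Kafka.

{code:java}
import org.hjson.JsonValue;

import java.nio.file.Files;
import java.nio.file.Paths;

// Illustration only: read an HJSON connector config (comments, multi-line strings),
// convert it to strict JSON, and print it for submission to the Connect REST API.
public class HjsonToConnectJson {
    public static void main(String[] args) throws Exception {
        String hjson = new String(Files.readAllBytes(Paths.get(args[0])));
        // readHjson() accepts the relaxed HJSON syntax; toString() emits plain JSON.
        String json = JsonValue.readHjson(hjson).toString();
        System.out.println(json);
    }
}
{code}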
[jira] [Commented] (KAFKA-6654) Customize SSLContext creation
[ https://issues.apache.org/jira/browse/KAFKA-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724988#comment-16724988 ] Clement Pellerin commented on KAFKA-6654: - KIP-383 proposes a solution for this Jira but it needs more votes. > Customize SSLContext creation > - > > Key: KAFKA-6654 > URL: https://issues.apache.org/jira/browse/KAFKA-6654 > Project: Kafka > Issue Type: Improvement > Components: config >Affects Versions: 1.0.0 >Reporter: Robert Wruck >Priority: Major > > Currently, loading of SSL keystore and truststore always uses a > FileInputStream (SslFactory.SecurityStore) and cannot be changed to load > keystores from other locations such as the classpath, raw byte arrays etc. > Furthermore, passwords for the key stores have to be provided as plaintext > configuration properties. > Delegating the creation of an SSLContext to a customizable implementation > might solve some more issues such as KAFKA-5519, KAFKA-4933, KAFKA-4294, > KAFKA-2629 by enabling Kafka users to implement their own. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
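To illustrate the kind of logic a pluggable hook such as KIP-383 would let users supply, here is a minimal, self-contained sketch using plain JSSE (not Kafka's SslFactory): building an SSLContext from a keystore loaded off the classpath instead of a file path. The resource name and passwords are placeholders.

{code:java}
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.InputStream;
import java.security.KeyStore;

// Sketch only: load a keystore from the classpath rather than a FileInputStream
// and build an SSLContext from it.
public class ClasspathSslContextFactory {
    public static SSLContext create() throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (InputStream in = ClasspathSslContextFactory.class
                .getResourceAsStream("/client-keystore.jks")) {
            ks.load(in, "keystore-password".toCharArray());
        }

        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, "key-password".toCharArray());

        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks); // the same store doubles as the truststore purely for brevity

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}
{code}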
[jira] [Commented] (KAFKA-5519) Support for multiple certificates in a single keystore
[ https://issues.apache.org/jira/browse/KAFKA-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724991#comment-16724991 ] Clement Pellerin commented on KAFKA-5519: - KIP-383 proposes a solution for KAFKA-6654 which would be a work-around for this Jira. > Support for multiple certificates in a single keystore > -- > > Key: KAFKA-5519 > URL: https://issues.apache.org/jira/browse/KAFKA-5519 > Project: Kafka > Issue Type: New Feature > Components: security >Affects Versions: 0.10.2.1 >Reporter: Alla Tumarkin >Priority: Major > Labels: upstream-issue > > Background > Currently, we need to have a keystore exclusive to the component with exactly > one key in it. Looking at the JSSE Reference guide, it seems like we would > need to introduce our own KeyManager into the SSLContext which selects a > configurable key alias name. > https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/X509KeyManager.html > has methods for dealing with aliases. > The goal here to use a specific certificate (with proper ACLs set for this > client), and not just the first one that matches. > Looks like it requires a code change to the SSLChannelBuilder -- This message was sent by Atlassian JIRA (v7.6.3#76005)
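For context, the alias-selecting key manager the description alludes to is straightforward with plain JSSE. The sketch below (not Kafka code, and exactly the sort of thing a custom SSLContext hook from KAFKA-6654/KIP-383 would allow) delegates to an existing X509KeyManager but always returns one configured alias, so a keystore holding several keys still presents a specific certificate.

{code:java}
import javax.net.ssl.X509KeyManager;
import java.net.Socket;
import java.security.Principal;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;

// Sketch only: pin the key alias instead of taking the first matching key.
public class FixedAliasKeyManager implements X509KeyManager {
    private final X509KeyManager delegate;
    private final String alias;

    public FixedAliasKeyManager(X509KeyManager delegate, String alias) {
        this.delegate = delegate;
        this.alias = alias;
    }

    @Override
    public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket socket) {
        return alias; // ignore the default selection and use the configured alias
    }

    @Override
    public String chooseServerAlias(String keyType, Principal[] issuers, Socket socket) {
        return alias;
    }

    @Override
    public String[] getClientAliases(String keyType, Principal[] issuers) {
        return delegate.getClientAliases(keyType, issuers);
    }

    @Override
    public String[] getServerAliases(String keyType, Principal[] issuers) {
        return delegate.getServerAliases(keyType, issuers);
    }

    @Override
    public X509Certificate[] getCertificateChain(String alias) {
        return delegate.getCertificateChain(alias);
    }

    @Override
    public PrivateKey getPrivateKey(String alias) {
        return delegate.getPrivateKey(alias);
    }
}
{code}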
[jira] [Commented] (KAFKA-3410) Unclean leader election and "Halting because log truncation is not allowed"
[ https://issues.apache.org/jira/browse/KAFKA-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725039#comment-16725039 ] Nico Meyer commented on KAFKA-3410: --- [~tcrawfo3], which Kafka version did you use? I believe before 2.0.0 the leader election is not immediately triggered if the config is changed for a topic. Only if the cluster membership changes, i.e. any broker is restarted. > Unclean leader election and "Halting because log truncation is not allowed" > --- > > Key: KAFKA-3410 > URL: https://issues.apache.org/jira/browse/KAFKA-3410 > Project: Kafka > Issue Type: Bug > Components: replication >Reporter: James Cheng >Priority: Major > Labels: reliability > > I ran into a scenario where one of my brokers would continually shutdown, > with the error message: > [2016-02-25 00:29:39,236] FATAL [ReplicaFetcherThread-0-1], Halting because > log truncation is not allowed for topic test, Current leader 1's latest > offset 0 is less than replica 2's latest offset 151 > (kafka.server.ReplicaFetcherThread) > I managed to reproduce it with the following scenario: > 1. Start broker1, with unclean.leader.election.enable=false > 2. Start broker2, with unclean.leader.election.enable=false > 3. Create topic, single partition, with replication-factor 2. > 4. Write data to the topic. > 5. At this point, both brokers are in the ISR. Broker1 is the partition > leader. > 6. Ctrl-Z on broker2. (Simulates a GC pause or a slow network) Broker2 gets > dropped out of ISR. Broker1 is still the leader. I can still write data to > the partition. > 7. Shutdown Broker1. Hard or controlled, doesn't matter. > 8. rm -rf the log directory of broker1. (This simulates a disk replacement or > full hardware replacement) > 9. Resume broker2. It attempts to connect to broker1, but doesn't succeed > because broker1 is down. At this point, the partition is offline. Can't write > to it. > 10. Resume broker1. Broker1 resumes leadership of the topic. Broker2 attempts > to join ISR, and immediately halts with the error message: > [2016-02-25 00:29:39,236] FATAL [ReplicaFetcherThread-0-1], Halting because > log truncation is not allowed for topic test, Current leader 1's latest > offset 0 is less than replica 2's latest offset 151 > (kafka.server.ReplicaFetcherThread) > I am able to recover by setting unclean.leader.election.enable=true on my > brokers. > I'm trying to understand a couple things: > * In step 10, why is broker1 allowed to resume leadership even though it has > no data? > * In step 10, why is it necessary to stop the entire broker due to one > partition that is in this state? Wouldn't it be possible for the broker to > continue to serve traffic for all the other topics, and just mark this one as > unavailable? > * Would it make sense to allow an operator to manually specify which broker > they want to become the new master? This would give me more control over how > much data loss I am willing to handle. In this case, I would want broker2 to > become the new master. Or, is that possible and I just don't know how to do > it? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7683) Support ConfigDef.Type.MAP
[ https://issues.apache.org/jira/browse/KAFKA-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725077#comment-16725077 ] Paul Czajka commented on KAFKA-7683: Indeed it is! Agreed - closing this issue. > Support ConfigDef.Type.MAP > -- > > Key: KAFKA-7683 > URL: https://issues.apache.org/jira/browse/KAFKA-7683 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Paul Czajka >Priority: Minor > > Support ConfigDef.Type.MAP which will parse a string value (e.g. > "a=1;b=2;c=3") into a HashMap. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-7683) Support ConfigDef.Type.MAP
[ https://issues.apache.org/jira/browse/KAFKA-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Czajka resolved KAFKA-7683. Resolution: Information Provided Alternate solution already exists, this is a non-issue. > Support ConfigDef.Type.MAP > -- > > Key: KAFKA-7683 > URL: https://issues.apache.org/jira/browse/KAFKA-7683 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Paul Czajka >Priority: Minor > > Support ConfigDef.Type.MAP which will parse a string value (e.g. > "a=1;b=2;c=3") into a HashMap. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
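For reference, the behaviour requested above can be achieved today with a few lines on top of an ordinary ConfigDef.Type.STRING property. A sketch, assuming the "a=1;b=2;c=3" format from the description:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Sketch: parse a semicolon-separated "key=value" list, e.g. "a=1;b=2;c=3",
// as read from a plain STRING config value.
public class SemicolonMapParser {
    public static Map<String, String> parse(String value) {
        Map<String, String> result = new HashMap<>();
        if (value == null || value.trim().isEmpty()) {
            return result;
        }
        for (String entry : value.split(";")) {
            String[] kv = entry.split("=", 2);
            if (kv.length != 2) {
                throw new IllegalArgumentException("Malformed map entry: " + entry);
            }
            result.put(kv[0].trim(), kv[1].trim());
        }
        return result;
    }
}
{code}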
[jira] [Comment Edited] (KAFKA-6647) KafkaStreams.cleanUp creates .lock file in directory its trying to clean (Windows OS)
[ https://issues.apache.org/jira/browse/KAFKA-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722566#comment-16722566 ] sacha barber edited comment on KAFKA-6647 at 12/19/18 3:22 PM: --- I would also like to add this seems to be caused by the TopologyTestDriver.close if I add a method like this (scala sorry) {code:java} def cleanup(props:Properties, testDriver: TopologyTestDriver) = { try { testDriver.close } catch { case e: Exception => { delete(new File("C:\\data\\kafka-streams")) } } } def delete(file: File) { if (file.isDirectory) Option(file.listFiles).map(_.toList).getOrElse(Nil).foreach(delete(_)) file.delete } {code} I see the Exception others are talking about above getting caught for the TopologyTestDriver close() call, But then I just resort to using regular java.io to do the actual delete for my tests. This does get my tests to pass ok, but why cant the Kafka code do this on windows, if my simple tests code works. I read the part about how windows will only delete file on next file assignment, but to my eyes my simple tests using delete worked here, whilst Kafka TopologyTestDriver close() did not I am using Windows 10.0, and am using Kafka 2.1.0 And have changed my state directory to this one {code:java} props.put(StreamsConfig.STATE_DIR_CONFIG, s"C:\\data\\kafka-streams".asInstanceOf[Object]) {code} Any ideas when this will get fixed properly? was (Author: sachabarber): I would also like to add this seems to be caused by the TopologyTestDriver.close if I add a method like this (scala sorry) def cleanup(props:Properties, testDriver: TopologyTestDriver) = { {code} def cleanup(props:Properties, testDriver: TopologyTestDriver) = { try { testDriver.close } catch { case e: Exception => { delete(new File("C:\\data\\kafka-streams")) } } } def delete(file: File) { if (file.isDirectory) Option(file.listFiles).map(_.toList).getOrElse(Nil).foreach(delete(_)) file.delete } {code} I see the Exception others are talking about above getting caught for the TopologyTestDriver close() call, But then I just resort to using regular java.io to do the actual delete for my tests. This does get my tests to pass ok, but why cant the Kafka code do this on windows, if my simple tests code works. I read the part about how windows will only delete file on next file assignment, but to my eyes my simple tests using delete worked here, whilst Kafka TopologyTestDriver close() did not I am using Windows 10.0, and am using Kafka 2.1.0 And have changed my state directory to this one {code} props.put(StreamsConfig.STATE_DIR_CONFIG, s"C:\\data\\kafka-streams".asInstanceOf[Object]) {code} Any ideas when this will get fixed properly? > KafkaStreams.cleanUp creates .lock file in directory its trying to clean > (Windows OS) > - > > Key: KAFKA-6647 > URL: https://issues.apache.org/jira/browse/KAFKA-6647 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 1.0.1 > Environment: windows 10. > java version "1.8.0_162" > Java(TM) SE Runtime Environment (build 1.8.0_162-b12) > Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode) > org.apache.kafka:kafka-streams:1.0.1 > Kafka commitId : c0518aa65f25317e >Reporter: George Bloggs >Priority: Minor > > When calling kafkaStreams.cleanUp() before starting a stream the > StateDirectory.cleanRemovedTasks() method contains this check: > {code:java} > ... 
Line 240 > if (lock(id, 0)) { > long now = time.milliseconds(); > long lastModifiedMs = taskDir.lastModified(); > if (now > lastModifiedMs + cleanupDelayMs) { > log.info("{} Deleting obsolete state directory {} > for task {} as {}ms has elapsed (cleanup delay is {}ms)", logPrefix(), > dirName, id, now - lastModifiedMs, cleanupDelayMs); > Utils.delete(taskDir); > } > } > {code} > The check for lock(id,0) will create a .lock file in the directory that > subsequently is going to be deleted. If the .lock file already exists from a > previous run the attempt to delete the .lock file fails with > AccessDeniedException. > This leaves the .lock file in the taskDir. Calling Utils.delete(taskDir) will > then attempt to remove the taskDir path calling Files.delete(path). > The call to files.delete(path) in postVisitDirectory will then fail > java.nio.file.DirectoryNotEmptyException as the failed attempt to delete the > .lock file left the directory not empty. (o.a.k.s.p.internals.StateDire
[jira] [Commented] (KAFKA-6359) Work for KIP-236
[ https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725137#comment-16725137 ] Satish Duggana commented on KAFKA-6359: --- [~sriharsha] Sorry for not noticing that you offered to finish this PR. Let me know if I can take this up. > Work for KIP-236 > > > Key: KAFKA-6359 > URL: https://issues.apache.org/jira/browse/KAFKA-6359 > Project: Kafka > Issue Type: Improvement >Reporter: Tom Bentley >Assignee: Tom Bentley >Priority: Minor > > This issue is for the work described in KIP-236. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-3410) Unclean leader election and "Halting because log truncation is not allowed"
[ https://issues.apache.org/jira/browse/KAFKA-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725161#comment-16725161 ] Timothy Crawford commented on KAFKA-3410: - Ah, I am using 1.1.1. I will see about taking the actions you mention. Thanks > Unclean leader election and "Halting because log truncation is not allowed" > --- > > Key: KAFKA-3410 > URL: https://issues.apache.org/jira/browse/KAFKA-3410 > Project: Kafka > Issue Type: Bug > Components: replication >Reporter: James Cheng >Priority: Major > Labels: reliability > > I ran into a scenario where one of my brokers would continually shutdown, > with the error message: > [2016-02-25 00:29:39,236] FATAL [ReplicaFetcherThread-0-1], Halting because > log truncation is not allowed for topic test, Current leader 1's latest > offset 0 is less than replica 2's latest offset 151 > (kafka.server.ReplicaFetcherThread) > I managed to reproduce it with the following scenario: > 1. Start broker1, with unclean.leader.election.enable=false > 2. Start broker2, with unclean.leader.election.enable=false > 3. Create topic, single partition, with replication-factor 2. > 4. Write data to the topic. > 5. At this point, both brokers are in the ISR. Broker1 is the partition > leader. > 6. Ctrl-Z on broker2. (Simulates a GC pause or a slow network) Broker2 gets > dropped out of ISR. Broker1 is still the leader. I can still write data to > the partition. > 7. Shutdown Broker1. Hard or controlled, doesn't matter. > 8. rm -rf the log directory of broker1. (This simulates a disk replacement or > full hardware replacement) > 9. Resume broker2. It attempts to connect to broker1, but doesn't succeed > because broker1 is down. At this point, the partition is offline. Can't write > to it. > 10. Resume broker1. Broker1 resumes leadership of the topic. Broker2 attempts > to join ISR, and immediately halts with the error message: > [2016-02-25 00:29:39,236] FATAL [ReplicaFetcherThread-0-1], Halting because > log truncation is not allowed for topic test, Current leader 1's latest > offset 0 is less than replica 2's latest offset 151 > (kafka.server.ReplicaFetcherThread) > I am able to recover by setting unclean.leader.election.enable=true on my > brokers. > I'm trying to understand a couple things: > * In step 10, why is broker1 allowed to resume leadership even though it has > no data? > * In step 10, why is it necessary to stop the entire broker due to one > partition that is in this state? Wouldn't it be possible for the broker to > continue to serve traffic for all the other topics, and just mark this one as > unavailable? > * Would it make sense to allow an operator to manually specify which broker > they want to become the new master? This would give me more control over how > much data loss I am willing to handle. In this case, I would want broker2 to > become the new master. Or, is that possible and I just don't know how to do > it? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
Pedro Gontijo created KAFKA-7757: Summary: Too many open files after java.io.IOException: Connection to n was disconnected before the response was read Key: KAFKA-7757 URL: https://issues.apache.org/jira/browse/KAFKA-7757 Project: Kafka Issue Type: Bug Components: core Affects Versions: 2.1.0 Reporter: Pedro Gontijo We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) After a while (hours) 2 brokers start to throw: {code:java} java.io.IOException: Connection to NN was disconnected before the response was read at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) at scala.Option.foreach(Option.scala:257) at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) {code} The problem has happened with all brokers. File descriptors start to pile up and if I do not restart it throws "Too many open files" and crashes. {code:java} ERROR Error while accepting connection (kafka.network.Acceptor) java.io.IOException: Too many open files in system at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) at kafka.network.Acceptor.accept(SocketServer.scala:460) at kafka.network.Acceptor.run(SocketServer.scala:403) at java.lang.Thread.run(Thread.java:748) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
[ https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pedro Gontijo updated KAFKA-7757: - Description: We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) After a while (hours) 2 brokers start to throw: {code:java} java.io.IOException: Connection to NN was disconnected before the response was read at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) at scala.Option.foreach(Option.scala:257) at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) {code} The problem has happened with all brokers. File descriptors start to pile up and if I do not restart it throws "Too many open files" and crashes. {code:java} ERROR Error while accepting connection (kafka.network.Acceptor) java.io.IOException: Too many open files in system at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) at kafka.network.Acceptor.accept(SocketServer.scala:460) at kafka.network.Acceptor.run(SocketServer.scala:403) at java.lang.Thread.run(Thread.java:748) {code} After some hours the issue happens again... was: We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) After a while (hours) 2 brokers start to throw: {code:java} java.io.IOException: Connection to NN was disconnected before the response was read at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) at scala.Option.foreach(Option.scala:257) at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) {code} The problem has happened with all brokers. File descriptors start to pile up and if I do not restart it throws "Too many open files" and crashes. 
{code:java} ERROR Error while accepting connection (kafka.network.Acceptor) java.io.IOException: Too many open files in system at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) at kafka.network.Acceptor.accept(SocketServer.scala:460) at kafka.network.Acceptor.run(SocketServer.scala:403) at java.lang.Thread.run(Thread.java:748) {code} > Too many open files after java.io.IOException: Connection to n was > disconnected before the response was read > > > Key: KAFKA-7757 > URL: https://issues.apache.org/jira/browse/KAFKA-7757 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0 >Reporter: Pedro Gontijo >Priority: Major > > We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) > After a while (hours) 2 brokers start to throw: > {code:java} > java.io.IOException: Connection to NN was disconnected before the response > was read > at > org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) > at > kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) > at > kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) > at > kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherTh
[jira] [Updated] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
[ https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pedro Gontijo updated KAFKA-7757: - Attachment: td2.txt td3.txt td1.txt > Too many open files after java.io.IOException: Connection to n was > disconnected before the response was read > > > Key: KAFKA-7757 > URL: https://issues.apache.org/jira/browse/KAFKA-7757 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0 >Reporter: Pedro Gontijo >Priority: Major > Attachments: server.properties, td1.txt, td2.txt, td3.txt > > > We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) > After a while (hours) 2 brokers start to throw: > {code:java} > java.io.IOException: Connection to NN was disconnected before the response > was read > at > org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) > at > kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) > at > kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) > at > kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) > at scala.Option.foreach(Option.scala:257) > at > kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) > at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) > at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) > {code} > The problem has happened with all brokers. > File descriptors start to pile up and if I do not restart it throws "Too many > open files" and crashes. > {code:java} > ERROR Error while accepting connection (kafka.network.Acceptor) > java.io.IOException: Too many open files in system > at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) > at kafka.network.Acceptor.accept(SocketServer.scala:460) > at kafka.network.Acceptor.run(SocketServer.scala:403) > at java.lang.Thread.run(Thread.java:748) > {code} > > After some hours the issue happens again... > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
[ https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pedro Gontijo updated KAFKA-7757: - Attachment: server.properties > Too many open files after java.io.IOException: Connection to n was > disconnected before the response was read > > > Key: KAFKA-7757 > URL: https://issues.apache.org/jira/browse/KAFKA-7757 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0 >Reporter: Pedro Gontijo >Priority: Major > Attachments: server.properties > > > We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) > After a while (hours) 2 brokers start to throw: > {code:java} > java.io.IOException: Connection to NN was disconnected before the response > was read > at > org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) > at > kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) > at > kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) > at > kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) > at scala.Option.foreach(Option.scala:257) > at > kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) > at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) > at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) > {code} > The problem has happened with all brokers. > File descriptors start to pile up and if I do not restart it throws "Too many > open files" and crashes. > {code:java} > ERROR Error while accepting connection (kafka.network.Acceptor) > java.io.IOException: Too many open files in system > at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) > at kafka.network.Acceptor.accept(SocketServer.scala:460) > at kafka.network.Acceptor.run(SocketServer.scala:403) > at java.lang.Thread.run(Thread.java:748) > {code} > > After some hours the issue happens again... > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-7758) When Naming a Repartition Topic with Aggregations Reuse Repartition Graph Node for Multiple Operations
Bill Bejeck created KAFKA-7758: -- Summary: When Naming a Repartition Topic with Aggregations Reuse Repartition Graph Node for Multiple Operations Key: KAFKA-7758 URL: https://issues.apache.org/jira/browse/KAFKA-7758 Project: Kafka Issue Type: Improvement Components: streams Affects Versions: 2.1.0 Reporter: Bill Bejeck Assignee: Bill Bejeck Fix For: 2.2.0 When performing aggregations that require repartitioning and the repartition topic name is specified, and the resulting {{KGroupedStream}} is used for multiple operations, e.g. {code:java} final KGroupedStream kGroupedStream = builder.stream("topic").selectKey((k, v) -> k).groupByKey(Grouped.as("grouping")); kGroupedStream.windowedBy(TimeWindows.of(Duration.ofMillis(10L))).count(); kGroupedStream.windowedBy(TimeWindows.of(Duration.ofMillis(30L))).count(); {code} If optimizations aren't enabled, Streams will attempt to build two repartition topics of the same name, resulting in a failure when creating the topology. However, we have enough information to re-use the existing repartition node via the graph nodes used for building the intermediate representation of the topology. This ticket will make the behavior of reusing a {{KGroupedStream}} consistent regardless of whether optimizations are turned on. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
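For readers hitting this before the fix lands: a minimal sketch (assuming the Kafka Streams 2.1 API; the application id, bootstrap servers, and serdes below are illustrative, not part of the ticket) of the workaround implied above, i.e. enabling topology optimization so that both windowed counts share the single named repartition topic.
{code:java}
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.TimeWindows;

public class ReuseRepartitionExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "reuse-repartition-example"); // illustrative app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");         // illustrative broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CONFIG, Serdes.String().getClass());
        // Enable the optimization pass so the named repartition topic is built only once.
        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);

        StreamsBuilder builder = new StreamsBuilder();
        KGroupedStream<String, String> grouped = builder.<String, String>stream("topic")
                .selectKey((k, v) -> k)
                .groupByKey(Grouped.as("grouping"));

        // Without optimizations, each windowed count tries to create its own "grouping" repartition topic.
        grouped.windowedBy(TimeWindows.of(Duration.ofMillis(10L))).count();
        grouped.windowedBy(TimeWindows.of(Duration.ofMillis(30L))).count();

        // Passing the config to build() is what triggers the optimization pass in 2.1.
        KafkaStreams streams = new KafkaStreams(builder.build(props), props);
        streams.start();
    }
}
{code}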
[jira] [Updated] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
[ https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pedro Gontijo updated KAFKA-7757: - Description: We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) After a while (hours) 2 brokers start to throw: {code:java} java.io.IOException: Connection to NN was disconnected before the response was read at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) at scala.Option.foreach(Option.scala:257) at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) {code} File descriptors start to pile up and if I do not restart it throws "Too many open files" and crashes. {code:java} ERROR Error while accepting connection (kafka.network.Acceptor) java.io.IOException: Too many open files in system at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) at kafka.network.Acceptor.accept(SocketServer.scala:460) at kafka.network.Acceptor.run(SocketServer.scala:403) at java.lang.Thread.run(Thread.java:748) {code} After some hours the issue happens again... It has happened with all brokers, so it is not something specific to an instance. was: We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) After a while (hours) 2 brokers start to throw: {code:java} java.io.IOException: Connection to NN was disconnected before the response was read at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) at scala.Option.foreach(Option.scala:257) at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) {code} The problem has happened with all brokers. File descriptors start to pile up and if I do not restart it throws "Too many open files" and crashes. 
{code:java} ERROR Error while accepting connection (kafka.network.Acceptor) java.io.IOException: Too many open files in system at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) at kafka.network.Acceptor.accept(SocketServer.scala:460) at kafka.network.Acceptor.run(SocketServer.scala:403) at java.lang.Thread.run(Thread.java:748) {code} After some hours the issue happens again... > Too many open files after java.io.IOException: Connection to n was > disconnected before the response was read > > > Key: KAFKA-7757 > URL: https://issues.apache.org/jira/browse/KAFKA-7757 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0 >Reporter: Pedro Gontijo >Priority: Major > Attachments: server.properties, td1.txt, td2.txt, td3.txt > > > We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) > After a while (hours) 2 brokers start to throw: > {code:java} > java.io.IOException: Connection to NN was disconnected before the response > was read > at > org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) > at > kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) > at > kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) > at > kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherTh
[jira] [Created] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST
Oleksandr Diachenko created KAFKA-7759: -- Summary: Disable WADL output on OPTIONS method in Connect REST Key: KAFKA-7759 URL: https://issues.apache.org/jira/browse/KAFKA-7759 Project: Kafka Issue Type: Bug Affects Versions: 2.1.0 Reporter: Oleksandr Diachenko Assignee: Oleksandr Diachenko Fix For: 2.2.0 Currently, Connect REST API exposes WADL output on OPTIONS method: {code} curl -i -X OPTIONS http://localhost:8083/connectors HTTP/1.1 200 OK Date: Fri, 07 Dec 2018 22:51:53 GMT Content-Type: application/vnd.sun.wadl+xml Allow: HEAD,POST,GET,OPTIONS Last-Modified: Fri, 07 Dec 2018 14:51:53 PST Content-Length: 1331 Server: Jetty(9.4.12.v20180830) http://wadl.dev.java.net/2009/02";> http://jersey.java.net/"; jersey:generatedBy="Jersey: 2.27 2018-04-10 07:34:57"/> http://localhost:8083/application.wadl/xsd0.xsd";> http://localhost:8083/";> http://www.w3.org/2001/XMLSchema"; name="forward" style="query" type="xs:boolean"/> http://www.w3.org/2001/XMLSchema"; name="forward" style="query" type="xs:boolean"/> {code} It was never documented and poses potential security vulnerability, so it should be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
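For reference, a minimal sketch of one way to suppress WADL generation in a Jersey-backed server such as the Connect REST server; this only illustrates Jersey's ServerProperties.WADL_FEATURE_DISABLE property and is not necessarily the exact change made for this ticket.
{code:java}
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.ServerProperties;

public class WadlDisableSketch {
    // Disable WADL generation on the given ResourceConfig; an OPTIONS request
    // then returns only the Allow header instead of the application.wadl document.
    public static ResourceConfig withoutWadl(ResourceConfig resourceConfig) {
        resourceConfig.property(ServerProperties.WADL_FEATURE_DISABLE, true);
        return resourceConfig;
    }
}
{code}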
[jira] [Commented] (KAFKA-7401) Broker fails to start when recovering a segment from before the log start offset
[ https://issues.apache.org/jira/browse/KAFKA-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725432#comment-16725432 ] Xin Han commented on KAFKA-7401: I've also seen this issue due to restart from unclean shutdown. Apart from the logs shown above, I also got logs saying corrupt index. > Broker fails to start when recovering a segment from before the log start > offset > > > Key: KAFKA-7401 > URL: https://issues.apache.org/jira/browse/KAFKA-7401 > Project: Kafka > Issue Type: Bug > Components: log >Affects Versions: 1.1.0, 1.1.1 >Reporter: Bob Barrett >Priority: Major > > If a segment needs to be recovered (for example, because of a missing index > file or uncompleted swap operation) and its base offset is less than the log > start offset, the broker will crash with the following error: > Fatal error during KafkaServer startup. Prepare to shutdown > (kafka.server.KafkaServer) > java.lang.IllegalArgumentException: inconsistent range > at java.util.concurrent.ConcurrentSkipListMap$SubMap.(Unknown Source) > at java.util.concurrent.ConcurrentSkipListMap.subMap(Unknown Source) > at java.util.concurrent.ConcurrentSkipListMap.subMap(Unknown Source) > at kafka.log.Log$$anonfun$12.apply(Log.scala:1579) > at kafka.log.Log$$anonfun$12.apply(Log.scala:1578) > at scala.Option.map(Option.scala:146) > at kafka.log.Log.logSegments(Log.scala:1578) > at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:358) > at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:389) > at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:380) > at scala.collection.immutable.Set$Set1.foreach(Set.scala:94) > at kafka.log.Log.completeSwapOperations(Log.scala:380) > at kafka.log.Log.loadSegments(Log.scala:408) > at kafka.log.Log.(Log.scala:216) > at kafka.log.Log$.apply(Log.scala:1765) > at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:260) > at > kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:340) > at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62) > at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) > at java.util.concurrent.FutureTask.run(Unknown Source) > at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) > at java.lang.Thread.run(Unknown Source) > Since these segments are outside the log range, we should delete them, or at > least not block broker startup because of them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-7715) Connect should have a parameter to disable WADL output for OPTIONS method
[ https://issues.apache.org/jira/browse/KAFKA-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleksandr Diachenko resolved KAFKA-7715. Resolution: Won't Fix Related KIP(KIP-404) was discarded, reported a bug - https://issues.apache.org/jira/browse/KAFKA-7759 > Connect should have a parameter to disable WADL output for OPTIONS method > - > > Key: KAFKA-7715 > URL: https://issues.apache.org/jira/browse/KAFKA-7715 > Project: Kafka > Issue Type: Improvement > Components: config, security >Affects Versions: 2.1.0 >Reporter: Oleksandr Diachenko >Assignee: Oleksandr Diachenko >Priority: Critical > Fix For: 2.1.1 > > > Currently, Connect REST API exposes WADL output on OPTIONS method: > {code:bash} > curl -i -X OPTIONS http://localhost:8083/connectors > HTTP/1.1 200 OK > Date: Fri, 07 Dec 2018 22:51:53 GMT > Content-Type: application/vnd.sun.wadl+xml > Allow: HEAD,POST,GET,OPTIONS > Last-Modified: Fri, 07 Dec 2018 14:51:53 PST > Content-Length: 1331 > Server: Jetty(9.4.12.v20180830) > > http://wadl.dev.java.net/2009/02";> > http://jersey.java.net/"; jersey:generatedBy="Jersey: 2.27 > 2018-04-10 07:34:57"/> > > http://localhost:8083/application.wadl/xsd0.xsd";> > > > > http://localhost:8083/";> > > > > http://www.w3.org/2001/XMLSchema"; name="forward" > style="query" type="xs:boolean"/> > > > > > > > > > http://www.w3.org/2001/XMLSchema"; name="forward" > style="query" type="xs:boolean"/> > > > > > > > > > {code} > This can be a potential vulnerability, so it makes sense to have a > configuration parameter, which disables WADL output. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7715) Connect should have a parameter to disable WADL output for OPTIONS method
[ https://issues.apache.org/jira/browse/KAFKA-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725436#comment-16725436 ] ASF GitHub Bot commented on KAFKA-7715: --- avocader closed pull request #6025: KAFKA-7715: Added a configuration parameter to Connect which disables WADL output for OPTIONS method. URL: https://github.com/apache/kafka/pull/6025 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/config/connect-distributed.properties b/config/connect-distributed.properties index 72db145f3f8..9d00e9d129a 100644 --- a/config/connect-distributed.properties +++ b/config/connect-distributed.properties @@ -75,6 +75,9 @@ offset.flush.interval.ms=1 #rest.advertised.host.name= #rest.advertised.port= +# Controls presence of WADL information in a response to OPTIONS request +#rest.wadl.enable=false + # Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins # (connectors, converters, transformations). The list should consist of top level directories that include # any combination of: diff --git a/config/connect-standalone.properties b/config/connect-standalone.properties index a340a3bf315..8b521598631 100644 --- a/config/connect-standalone.properties +++ b/config/connect-standalone.properties @@ -29,6 +29,9 @@ offset.storage.file.filename=/tmp/connect.offsets # Flush much faster than normal, which is useful for testing/debugging offset.flush.interval.ms=1 +# Controls presence of WADL information in a response to OPTIONS request +#rest.wadl.enable=false + # Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins # (connectors, converters, transformations). The list should consist of top level directories that include # any combination of: diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java index be3a70991f0..55f1d00030a 100644 --- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java +++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java @@ -171,6 +171,11 @@ public static final String REST_ADVERTISED_LISTENER_CONFIG = "rest.advertised.listener"; private static final String REST_ADVERTISED_LISTENER_DOC = "Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use."; +public static final String REST_ENABLE_WADL_CONFIG = "rest.wadl.enable"; +private static final String REST_ENABLE_WADL_DOC = +"If true, OPTIONS request to Connect REST replies with WADL information. 
" ++ "It's recommended to disable it, since exposing WADL information can pose a security risk."; +private static final Boolean REST_ENABLE_WADL_DEFAULT = true; public static final String ACCESS_CONTROL_ALLOW_ORIGIN_CONFIG = "access.control.allow.origin"; protected static final String ACCESS_CONTROL_ALLOW_ORIGIN_DOC = @@ -255,6 +260,7 @@ protected static ConfigDef baseConfigDef() { .define(REST_ADVERTISED_HOST_NAME_CONFIG, Type.STRING, null, Importance.LOW, REST_ADVERTISED_HOST_NAME_DOC) .define(REST_ADVERTISED_PORT_CONFIG, Type.INT, null, Importance.LOW, REST_ADVERTISED_PORT_DOC) .define(REST_ADVERTISED_LISTENER_CONFIG, Type.STRING, null, Importance.LOW, REST_ADVERTISED_LISTENER_DOC) +.define(REST_ENABLE_WADL_CONFIG, Type.BOOLEAN, REST_ENABLE_WADL_DEFAULT, Importance.LOW, REST_ENABLE_WADL_DOC) .define(ACCESS_CONTROL_ALLOW_ORIGIN_CONFIG, Type.STRING, ACCESS_CONTROL_ALLOW_ORIGIN_DEFAULT, Importance.LOW, ACCESS_CONTROL_ALLOW_ORIGIN_DOC) diff --git a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java index 15386430bc5..ecdb83e5183 100644 --- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java +++ b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java @@ -45,6 +45,7 @@ import org.eclipse.jetty.servlets.CrossOriginFilter; import org.eclipse.jetty.util.ssl.SslContextFactory; import org.glassfish.jersey.server.ResourceConfig; +import org.glassfish.jersey.server.ServerProperties; import org.glassfish.jersey.servlet.ServletContainer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -171,6 +172,7 @@ public void start(Herder herder) { resourceConfig.register(new ConnectorPluginsResource(
[jira] [Commented] (KAFKA-7759) Disable WADL output on OPTIONS method in Connect REST
[ https://issues.apache.org/jira/browse/KAFKA-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725447#comment-16725447 ] ASF GitHub Bot commented on KAFKA-7759: --- avocader opened a new pull request #6051: KAFKA-7759: Disable WADL output on OPTIONS method in Connect REST. URL: https://github.com/apache/kafka/pull/6051 Currently, Connect REST endpoint replies to OPTIONS request with verbose WADL information, which could be used for an attack. This was never documented or intended to expose. More discussion is [here] (https://lists.apache.org/thread.html/84eb4538397ae4544d20c072c936d9a31f22f429a0891cbb7d8e2296@%3Cdev.kafka.apache.org%3E) Added unit tests in RestServerTest, which asserts that calling `OPTIONS` on `/connectors` replies with a list of supported HTTP methods, with no WADL information. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes) This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Disable WADL output on OPTIONS method in Connect REST > - > > Key: KAFKA-7759 > URL: https://issues.apache.org/jira/browse/KAFKA-7759 > Project: Kafka > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Oleksandr Diachenko >Assignee: Oleksandr Diachenko >Priority: Major > Fix For: 2.2.0 > > > Currently, Connect REST API exposes WADL output on OPTIONS method: > {code} > curl -i -X OPTIONS http://localhost:8083/connectors > HTTP/1.1 200 OK > Date: Fri, 07 Dec 2018 22:51:53 GMT > Content-Type: application/vnd.sun.wadl+xml > Allow: HEAD,POST,GET,OPTIONS > Last-Modified: Fri, 07 Dec 2018 14:51:53 PST > Content-Length: 1331 > Server: Jetty(9.4.12.v20180830) > > http://wadl.dev.java.net/2009/02";> > http://jersey.java.net/"; jersey:generatedBy="Jersey: 2.27 > 2018-04-10 07:34:57"/> > > http://localhost:8083/application.wadl/xsd0.xsd";> > > > > http://localhost:8083/";> > > > > http://www.w3.org/2001/XMLSchema"; name="forward" > style="query" type="xs:boolean"/> > > > > > > > > > http://www.w3.org/2001/XMLSchema"; name="forward" > style="query" type="xs:boolean"/> > > > > > > > > > {code} > It was never documented and poses potential security vulnerability, so it > should be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-7760) Add broker configuration to set minimum value for segment.bytes and segment.ms
Badai Aqrandista created KAFKA-7760: --- Summary: Add broker configuration to set minimum value for segment.bytes and segment.ms Key: KAFKA-7760 URL: https://issues.apache.org/jira/browse/KAFKA-7760 Project: Kafka Issue Type: Improvement Reporter: Badai Aqrandista If someone sets segment.bytes or segment.ms at the topic level to a very small value (e.g. segment.bytes=1000 or segment.ms=1000), Kafka will generate a very high number of segment files. This can bring down the whole broker due to hitting the maximum number of open files (for logs) or the maximum number of mmapped files (for indexes). To prevent that from happening, I would like to suggest adding two new items to the broker configuration: * min.topic.segment.bytes, defaults to 1048576: The minimum value for segment.bytes. When someone sets topic configuration segment.bytes to a value lower than this, Kafka throws an error INVALID VALUE. * min.topic.segment.ms, defaults to 360: The minimum value for segment.ms. When someone sets topic configuration segment.ms to a value lower than this, Kafka throws an error INVALID VALUE. Thanks Badai -- This message was sent by Atlassian JIRA (v7.6.3#76005)
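A hypothetical sketch of the proposed check; the two configuration names come from the suggestion above, but the class and its wiring are purely illustrative and do not exist in the broker code.
{code:java}
import java.util.Map;

public class SegmentConfigValidator {
    private final long minSegmentBytes; // proposed min.topic.segment.bytes
    private final long minSegmentMs;    // proposed min.topic.segment.ms

    public SegmentConfigValidator(long minSegmentBytes, long minSegmentMs) {
        this.minSegmentBytes = minSegmentBytes;
        this.minSegmentMs = minSegmentMs;
    }

    // Reject topic-level segment.bytes / segment.ms values below the broker-level minimums,
    // so a topic cannot be configured to roll segments so often that the broker runs out of
    // file handles or mmapped index files.
    public void validate(Map<String, String> topicConfigs) {
        String segmentBytes = topicConfigs.get("segment.bytes");
        if (segmentBytes != null && Long.parseLong(segmentBytes) < minSegmentBytes) {
            throw new IllegalArgumentException("INVALID VALUE: segment.bytes must be >= " + minSegmentBytes);
        }
        String segmentMs = topicConfigs.get("segment.ms");
        if (segmentMs != null && Long.parseLong(segmentMs) < minSegmentMs) {
            throw new IllegalArgumentException("INVALID VALUE: segment.ms must be >= " + minSegmentMs);
        }
    }
}
{code}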
[jira] [Updated] (KAFKA-7760) Add broker configuration to set minimum value for segment.bytes and segment.ms
[ https://issues.apache.org/jira/browse/KAFKA-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gwen Shapira updated KAFKA-7760: Labels: kip newbie (was: ) > Add broker configuration to set minimum value for segment.bytes and segment.ms > -- > > Key: KAFKA-7760 > URL: https://issues.apache.org/jira/browse/KAFKA-7760 > Project: Kafka > Issue Type: Improvement >Reporter: Badai Aqrandista >Priority: Major > Labels: kip, newbie > > If someone set segment.bytes or segment.ms at topic level to a very small > value (e.g. segment.bytes=1000 or segment.ms=1000), Kafka will generate a > very high number of segment files. This can bring down the whole broker due > to hitting the maximum open file (for log) or maximum number of mmap-ed file > (for index). > To prevent that from happening, I would like to suggest adding two new items > to the broker configuration: > * min.topic.segment.bytes, defaults to 1048576: The minimum value for > segment.bytes. When someone sets topic configuration segment.bytes to a value > lower than this, Kafka throws an error INVALID VALUE. > * min.topic.segment.ms, defaults to 360: The minimum value for > segment.ms. When someone sets topic configuration segment.ms to a value lower > than this, Kafka throws an error INVALID VALUE. > Thanks > Badai -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7728) Add JoinReason to the join group request for better rebalance handling
[ https://issues.apache.org/jira/browse/KAFKA-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725512#comment-16725512 ] Mayuresh Gharat commented on KAFKA-7728: To shed more light on this : As per the current scenario, there are 2 types of metadata changes that trigger rebalance : 1) Increase in the number of partitions for any of the currently subscribed topics, causing a rebalance. 2) Newly added topics, causing a rebalance. The leader is responsible for rebalancing (by virtue of sending a JoinGroupRequest), in case of 1). The other consumers in the group will not cause the rebalance for 1). For 2) anyone in the consumer group can trigger a rebalance, when it detects that a new topic has be created (by virtue of its metadata refresh). Currently we trigger a rebalance on leader rejoin because, we don't know the reason why the leader is sending the JoinGroupRequest. With the proposal in this jira and using static membership from KIP-345 (using the "group.instance.id" as the leader, instead of "member.id", we can check the reason for the rejoin from the leader. If no reason is specified, we can assume that the JoinGroupRequest was because of leader restart. In that case, the GroupCoordinator can send the current assignment for the leader and also send the groupSubscription (all the topics subscribed by all the consumers of the group) back to the leader. This prevents the unnecessary rebalance due to leader bounce. We will have to change logic in ConsumerCoordinator to not perform assignments (if it is the leader) and just accept the assignment given by the GroupCoordinator, in this scenario. [~bchen225242], [~guozhang], [~hachikuji] thoughts?? > Add JoinReason to the join group request for better rebalance handling > -- > > Key: KAFKA-7728 > URL: https://issues.apache.org/jira/browse/KAFKA-7728 > Project: Kafka > Issue Type: Improvement >Reporter: Boyang Chen >Assignee: Mayuresh Gharat >Priority: Major > Labels: consumer, mirror-maker, needs-kip > > Recently [~mgharat] and I discussed about the current rebalance logic on > leader join group request handling. So far we blindly trigger rebalance when > the leader rejoins. The caveat is that KIP-345 is not covering this effort > and if a consumer group is not using sticky assignment but using other > strategy like round robin, the redundant rebalance could still shuffle the > topic partitions around consumers. (for example mirror maker application) > I checked on broker side and here is what we currently do: > > {code:java} > if (group.isLeader(memberId) || !member.matches(protocols)) > // force a rebalance if a member has changed metadata or if the leader sends > JoinGroup. > // The latter allows the leader to trigger rebalances for changes affecting > assignment > // which do not affect the member metadata (such as topic metadata changes > for the consumer) {code} > Based on the broker logic, we only need to trigger rebalance for leader > rejoin when the topic metadata change has happened. I also looked up the > ConsumerCoordinator code on client side, and found out the metadata > monitoring logic here: > {code:java} > public boolean rejoinNeededOrPending() { > ... > // we need to rejoin if we performed the assignment and metadata has changed > if (assignmentSnapshot != null && > !assignmentSnapshot.equals(metadataSnapshot)) > return true; > }{code} > I guess instead of just returning true, we could introduce a new enum field > called JoinReason which could indicate the purpose of the rejoin. 
Thus we > don't need to do a full rebalance when the leader is just in rolling bounce. > We could utilize this information I guess. Just add another enum field into > the join group request called JoinReason so that we know whether leader is > rejoining due to topic metadata change. If yes, we trigger rebalance > obviously; if no, we shouldn't trigger rebalance. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
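For illustration only, a hypothetical shape of the proposed field (not an existing Kafka API); the enum constants are assumptions made to keep the discussion concrete.
{code:java}
// Hypothetical JoinReason carried in the JoinGroupRequest, so the GroupCoordinator can
// distinguish a leader that merely restarted from a leader rejoining because of a
// metadata change affecting assignment.
public enum JoinReason {
    // Leader rejoined with no metadata change (e.g. rolling bounce): keep the current assignment.
    LEADER_RESTART,
    // Subscribed topic metadata changed (new partitions or topics): a full rebalance is required.
    METADATA_CHANGE,
    // Reason not provided (e.g. older client): fall back to today's behavior and rebalance.
    UNKNOWN
}
{code}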
[jira] [Created] (KAFKA-7761) CLONE - Add broker configuration to set minimum value for segment.bytes and segment.ms
Chinmay Patil created KAFKA-7761: Summary: CLONE - Add broker configuration to set minimum value for segment.bytes and segment.ms Key: KAFKA-7761 URL: https://issues.apache.org/jira/browse/KAFKA-7761 Project: Kafka Issue Type: Improvement Reporter: Chinmay Patil If someone sets segment.bytes or segment.ms at the topic level to a very small value (e.g. segment.bytes=1000 or segment.ms=1000), Kafka will generate a very high number of segment files. This can bring down the whole broker due to hitting the maximum number of open files (for logs) or the maximum number of mmapped files (for indexes). To prevent that from happening, I would like to suggest adding two new items to the broker configuration: * min.topic.segment.bytes, defaults to 1048576: The minimum value for segment.bytes. When someone sets topic configuration segment.bytes to a value lower than this, Kafka throws an error INVALID VALUE. * min.topic.segment.ms, defaults to 360: The minimum value for segment.ms. When someone sets topic configuration segment.ms to a value lower than this, Kafka throws an error INVALID VALUE. Thanks Badai -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
[ https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725566#comment-16725566 ] 点儿郎当 commented on KAFKA-7757: - I have the same problem.500 machines link the cluster of kafka's three nodes. Please help us to find out what the problem is. > Too many open files after java.io.IOException: Connection to n was > disconnected before the response was read > > > Key: KAFKA-7757 > URL: https://issues.apache.org/jira/browse/KAFKA-7757 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0 >Reporter: Pedro Gontijo >Priority: Major > Attachments: server.properties, td1.txt, td2.txt, td3.txt > > > We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) > After a while (hours) 2 brokers start to throw: > {code:java} > java.io.IOException: Connection to NN was disconnected before the response > was read > at > org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) > at > kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) > at > kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) > at > kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) > at scala.Option.foreach(Option.scala:257) > at > kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) > at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) > at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) > {code} > File descriptors start to pile up and if I do not restart it throws "Too many > open files" and crashes. > {code:java} > ERROR Error while accepting connection (kafka.network.Acceptor) > java.io.IOException: Too many open files in system > at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) > at kafka.network.Acceptor.accept(SocketServer.scala:460) > at kafka.network.Acceptor.run(SocketServer.scala:403) > at java.lang.Thread.run(Thread.java:748) > {code} > > After some hours the issue happens again... It has happened with all > brokers, so it is not something specific to an instance. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7581) Issues in building kafka using gradle on a Ubuntu based docker container
[ https://issues.apache.org/jira/browse/KAFKA-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725568#comment-16725568 ] 点儿郎当 commented on KAFKA-7581: - Five hundred machines are linked to a cluster of three Kafka nodes, and the test scenario is not easy to simulate. The problem was also addressed in KAFKA-7757. > Issues in building kafka using gradle on a Ubuntu based docker container > > > Key: KAFKA-7581 > URL: https://issues.apache.org/jira/browse/KAFKA-7581 > Project: Kafka > Issue Type: Bug > Components: build >Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2 > Environment: Ubuntu 16.04.3 LTS >Reporter: Sarvesh Tamba >Priority: Blocker > > The following issues are seen when kafka is built using gradle on a Ubuntu > based docker container:- > /kafka-gradle/kafka-2.0.0/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala:177: > File name too long > This can happen on some encrypted or legacy file systems. Please see SI-3623 > for more details. > .foreach { txnMetadataCacheEntry => > ^ > 56 warnings found > one error found > > Task :core:compileScala FAILED > FAILURE: Build failed with an exception. > * What went wrong: > Execution failed for task ':core:compileScala'. > > Compilation failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7581) Issues in building kafka using gradle on a Ubuntu based docker container
[ https://issues.apache.org/jira/browse/KAFKA-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725589#comment-16725589 ] Jordan Moore commented on KAFKA-7581: - If this problem is with *building Kafka* in a Docker container, then no external system is "linked" to the build process. That being said, I don't think this is a blocker, and I cannot reproduce. Here is even a Dockerfile that I just made {code} FROM ubuntu:16.04 ARG JAVA_VERSION=8.0.192-zulu ARG GRADLE_VERSION=4.8.1 ARG KAFKA_VERSION=2.0.0 ARG SCALA_VERSION=2.11 RUN apt-get update && apt-get install -y \ curl \ zip \ unzip \ && rm -rf /var/apt/lists/* RUN curl -s "https://get.sdkman.io"; | bash RUN ["/bin/bash", "-c", "source /root/.sdkman/bin/sdkman-init.sh; \ sdk i java $JAVA_VERSION && sdk i gradle $GRADLE_VERSION"] RUN mkdir /kafka-src \ && curl https://archive.apache.org/dist/kafka/$KAFKA_VERSION/kafka-$KAFKA_VERSION-src.tgz \ | tar -xvzC /kafka-src --strip-components=1 WORKDIR /kafka-src RUN ["/bin/bash", "-c", "source /root/.sdkman/bin/sdkman-init.sh; \ gradle && ./gradlew -PscalaVersion=$SCALA_VERSION releaseTarGz -x signArchives"] {code} And it builds the release tarball, which you could use in a multi-stage build, for example {code} docker run --rm -ti kafka-7581:latest bash -c "ls -ltr /kafka-src/core/build/distributions/" total 57888 -rw-r--r-- 1 root root 3343041 Dec 20 04:57 kafka_2.11-2.0.0-site-docs.tgz -rw-r--r-- 1 root root 55928942 Dec 20 04:57 kafka_2.11-2.0.0.tgz {code} > Issues in building kafka using gradle on a Ubuntu based docker container > > > Key: KAFKA-7581 > URL: https://issues.apache.org/jira/browse/KAFKA-7581 > Project: Kafka > Issue Type: Bug > Components: build >Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2 > Environment: Ubuntu 16.04.3 LTS >Reporter: Sarvesh Tamba >Priority: Blocker > > The following issues are seen when kafka is built using gradle on a Ubuntu > based docker container:- > /kafka-gradle/kafka-2.0.0/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala:177: > File name too long > This can happen on some encrypted or legacy file systems. Please see SI-3623 > for more details. > .foreach { txnMetadataCacheEntry => > ^ > 56 warnings found > one error found > > Task :core:compileScala FAILED > FAILURE: Build failed with an exception. > * What went wrong: > Execution failed for task ':core:compileScala'. > > Compilation failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6090) Upgrade the Scala recommendation to 2.12
[ https://issues.apache.org/jira/browse/KAFKA-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725592#comment-16725592 ] Jordan Moore commented on KAFKA-6090: - Resolved in KAFKA-7524 > Upgrade the Scala recommendation to 2.12 > > > Key: KAFKA-6090 > URL: https://issues.apache.org/jira/browse/KAFKA-6090 > Project: Kafka > Issue Type: Improvement > Components: build >Reporter: Lionel Cons >Priority: Minor > > Currently, the download page contains for the latest Kafka version (0.11.0.1): > {quote} > We build for multiple versions of Scala. This only matters if you are using > Scala and you want a version built for the same Scala version you use. > Otherwise any version should work (2.11 is recommended). > {quote} > Scala 2.11 is not supported anymore. Version 2.11.11 (released 6 months ago) > indicates: > {quote} > The 2.11.11 release concludes the 2.11.x series, with no further releases > planned. Please consider upgrading to 2.12! > {quote} > So it seems it is time to recommend 2.12 for Kafka usage and (soon) start to > build for Scala 2.13... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (KAFKA-6090) Upgrade the Scala recommendation to 2.12
[ https://issues.apache.org/jira/browse/KAFKA-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ismael Juma resolved KAFKA-6090. Resolution: Duplicate > Upgrade the Scala recommendation to 2.12 > > > Key: KAFKA-6090 > URL: https://issues.apache.org/jira/browse/KAFKA-6090 > Project: Kafka > Issue Type: Improvement > Components: build >Reporter: Lionel Cons >Priority: Minor > > Currently, the download page contains for the latest Kafka version (0.11.0.1): > {quote} > We build for multiple versions of Scala. This only matters if you are using > Scala and you want a version built for the same Scala version you use. > Otherwise any version should work (2.11 is recommended). > {quote} > Scala 2.11 is not supported anymore. Version 2.11.11 (released 6 months ago) > indicates: > {quote} > The 2.11.11 release concludes the 2.11.x series, with no further releases > planned. Please consider upgrading to 2.12! > {quote} > So it seems it is time to recommend 2.12 for Kafka usage and (soon) start to > build for Scala 2.13... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7581) Issues in building kafka using gradle on a Ubuntu based docker container
[ https://issues.apache.org/jira/browse/KAFKA-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725619#comment-16725619 ] 点儿郎当 commented on KAFKA-7581: - We don't use Docker. We use three physical servers; they are 32-core boxes with 63 GB of memory and 12 independent 8 TB hard disks each. > Issues in building kafka using gradle on a Ubuntu based docker container > > > Key: KAFKA-7581 > URL: https://issues.apache.org/jira/browse/KAFKA-7581 > Project: Kafka > Issue Type: Bug > Components: build >Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2 > Environment: Ubuntu 16.04.3 LTS >Reporter: Sarvesh Tamba >Priority: Blocker > > The following issues are seen when kafka is built using gradle on a Ubuntu > based docker container:- > /kafka-gradle/kafka-2.0.0/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala:177: > File name too long > This can happen on some encrypted or legacy file systems. Please see SI-3623 > for more details. > .foreach { txnMetadataCacheEntry => > ^ > 56 warnings found > one error found > > Task :core:compileScala FAILED > FAILURE: Build failed with an exception. > * What went wrong: > Execution failed for task ':core:compileScala'. > > Compilation failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7581) Issues in building kafka using gradle on a Ubuntu based docker container
[ https://issues.apache.org/jira/browse/KAFKA-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725620#comment-16725620 ] Jordan Moore commented on KAFKA-7581: - [~songxinlei], in that case, please don't hijack the issue titled "_Issues in building kafka using gradle on a Ubuntu based docker container_" > Issues in building kafka using gradle on a Ubuntu based docker container > > > Key: KAFKA-7581 > URL: https://issues.apache.org/jira/browse/KAFKA-7581 > Project: Kafka > Issue Type: Bug > Components: build >Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2 > Environment: Ubuntu 16.04.3 LTS >Reporter: Sarvesh Tamba >Priority: Blocker > > The following issues are seen when kafka is built using gradle on a Ubuntu > based docker container:- > /kafka-gradle/kafka-2.0.0/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala:177: > File name too long > This can happen on some encrypted or legacy file systems. Please see SI-3623 > for more details. > .foreach { txnMetadataCacheEntry => > ^ > 56 warnings found > one error found > > Task :core:compileScala FAILED > FAILURE: Build failed with an exception. > * What went wrong: > Execution failed for task ':core:compileScala'. > > Compilation failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
[ https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mathias Kub updated KAFKA-7757: --- Attachment: kafka-allocated-file-handles.png > Too many open files after java.io.IOException: Connection to n was > disconnected before the response was read > > > Key: KAFKA-7757 > URL: https://issues.apache.org/jira/browse/KAFKA-7757 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0 >Reporter: Pedro Gontijo >Priority: Major > Attachments: kafka-allocated-file-handles.png, server.properties, > td1.txt, td2.txt, td3.txt > > > We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) > After a while (hours) 2 brokers start to throw: > {code:java} > java.io.IOException: Connection to NN was disconnected before the response > was read > at > org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) > at > kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) > at > kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) > at > kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) > at scala.Option.foreach(Option.scala:257) > at > kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) > at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) > at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) > {code} > File descriptors start to pile up and if I do not restart it throws "Too many > open files" and crashes. > {code:java} > ERROR Error while accepting connection (kafka.network.Acceptor) > java.io.IOException: Too many open files in system > at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) > at kafka.network.Acceptor.accept(SocketServer.scala:460) > at kafka.network.Acceptor.run(SocketServer.scala:403) > at java.lang.Thread.run(Thread.java:748) > {code} > > After some hours the issue happens again... It has happened with all > brokers, so it is not something specific to an instance. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-7757) Too many open files after java.io.IOException: Connection to n was disconnected before the response was read
[ https://issues.apache.org/jira/browse/KAFKA-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725645#comment-16725645 ] Mathias Kub commented on KAFKA-7757: This happens to us after upgrading from 1.1.1 to 2.1.0 as well. We have limited open file handles to about 260k. When the process reaches the limit, we see the Exception as well. !kafka-allocated-file-handles.png! The open file descriptors are sockets. *netstat* shows most of the open files as TCP connections being in the *CLOSE_WAIT* state. > Too many open files after java.io.IOException: Connection to n was > disconnected before the response was read > > > Key: KAFKA-7757 > URL: https://issues.apache.org/jira/browse/KAFKA-7757 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 2.1.0 >Reporter: Pedro Gontijo >Priority: Major > Attachments: kafka-allocated-file-handles.png, server.properties, > td1.txt, td2.txt, td3.txt > > > We upgraded from 0.10.2.2 to 2.1.0 (a cluster with 3 brokers) > After a while (hours) 2 brokers start to throw: > {code:java} > java.io.IOException: Connection to NN was disconnected before the response > was read > at > org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97) > at > kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:97) > at > kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190) > at > kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130) > at > kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129) > at scala.Option.foreach(Option.scala:257) > at > kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129) > at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111) > at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82) > {code} > File descriptors start to pile up and if I do not restart it throws "Too many > open files" and crashes. > {code:java} > ERROR Error while accepting connection (kafka.network.Acceptor) > java.io.IOException: Too many open files in system > at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) > at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) > at kafka.network.Acceptor.accept(SocketServer.scala:460) > at kafka.network.Acceptor.run(SocketServer.scala:403) > at java.lang.Thread.run(Thread.java:748) > {code} > > After some hours the issue happens again... It has happened with all > brokers, so it is not something specific to an instance. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)