[jira] [Updated] (KAFKA-4001) Improve Kafka Streams Join Semantics (KIP-76)
[ https://issues.apache.org/jira/browse/KAFKA-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matthias J. Sax updated KAFKA-4001:
-----------------------------------
    Summary: Improve Kafka Streams Join Semantics (KIP-76)  (was: Improving join semantics in Kafka Stremas)

> Improve Kafka Streams Join Semantics (KIP-76)
> ---------------------------------------------
>
>                 Key: KAFKA-4001
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4001
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>            Reporter: Matthias J. Sax
>            Assignee: Matthias J. Sax
>             Fix For: 0.10.1.0
>
> Kafka Streams supports three types of joins:
> * KStream-KStream
> * KStream-KTable
> * KTable-KTable
> Furthermore, Kafka Streams supports three join variants, namely:
> * inner join
> * left join
> * outer join
> Not all combinations of "type" and "variant" are supported.
> *The problem is that the different joins use different semantics (and are thus inconsistent).*
> With this ticket, we want to
> * introduce uniform semantics across all joins
> * improve handling of "null"
> * add the missing inner KStream-KTable join
[jira] [Updated] (KAFKA-4001) Improve Kafka Streams Join Semantics (KIP-76)
[ https://issues.apache.org/jira/browse/KAFKA-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matthias J. Sax updated KAFKA-4001:
-----------------------------------
    Description:
Kafka Streams supports three types of joins:
* KStream-KStream
* KStream-KTable
* KTable-KTable
Furthermore, Kafka Streams supports three join variants, namely:
* inner join
* left join
* outer join
Not all combinations of "type" and "variant" are supported.
*The problem is that the different joins use different semantics (and are thus inconsistent).*
With this ticket, we want to
* introduce uniform semantics across all joins
* improve handling of "null"
* add the missing inner KStream-KTable join
See KIP-76 for more details:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-76%3A+Improve+Kafka+Streams+Join+Semantics

  was:
Kafka Streams supports three types of joins:
* KStream-KStream
* KStream-KTable
* KTable-KTable
Furthermore, Kafka Streams supports three join variants, namely:
* inner join
* left join
* outer join
Not all combinations of "type" and "variant" are supported.
*The problem is that the different joins use different semantics (and are thus inconsistent).*
With this ticket, we want to
* introduce uniform semantics across all joins
* improve handling of "null"
* add the missing inner KStream-KTable join
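For readers not familiar with the DSL, the join "types" and "variants" listed above map onto the Streams API roughly as sketched below. This is a minimal, hypothetical example written against the current Java DSL (class and method names such as StreamsBuilder and JoinWindows.of(Duration) come from releases newer than the 0.10.1.0 targeted here, and all topic names are made up); it is not code from the ticket or the KIP.

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class JoinVariantsSketch {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> clicks = builder.stream("clicks");   // hypothetical topics
        KStream<String, String> views = builder.stream("views");
        KTable<String, String> users = builder.table("users");

        // KStream-KStream joins are windowed; inner, left, and outer variants exist.
        KStream<String, String> clicksWithViews = clicks.join(
                views,
                (click, view) -> click + "/" + view,
                JoinWindows.of(Duration.ofMinutes(5)));
        clicksWithViews.to("clicks-with-views");

        // KStream-KTable join: the ticket notes the inner variant was missing and is being added;
        // leftJoin (table value may be null) has been there all along.
        clicks.leftJoin(users, (click, user) -> click + "/" + user)
              .to("clicks-with-users");

        // KTable-KTable joins: inner, left, and outer variants.
        KTable<String, String> accounts = builder.table("accounts");
        users.outerJoin(accounts, (user, account) -> user + "/" + account)
             .toStream()
             .to("users-with-accounts");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-variants-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}
```

The inconsistency the ticket targets is about what each of these calls emits when one side has no match or carries a null value, which is why the KIP defines one table of expected outputs for all of them.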
[jira] [Closed] (KAFKA-4053) Refactor TopicCommand to remove redundant if/else statements
[ https://issues.apache.org/jira/browse/KAFKA-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shuai Zhang closed KAFKA-4053.
------------------------------
This issue is resolved by pull request: https://github.com/apache/kafka/pull/1751

> Refactor TopicCommand to remove redundant if/else statements
> -------------------------------------------------------------
>
>                 Key: KAFKA-4053
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4053
>             Project: Kafka
>          Issue Type: Improvement
>          Components: admin
>    Affects Versions: 0.10.0.1
>            Reporter: Shuai Zhang
>            Priority: Minor
>             Fix For: 0.10.1.0
>
> In TopicCommand there are a lot of redundant if/else statements, such as
> ```val ifNotExists = if (opts.options.has(opts.ifNotExistsOpt)) true else false```
> We can refactor it into the following statement:
> ```val ifNotExists = opts.options.has(opts.ifNotExistsOpt)```
[jira] [Created] (KAFKA-4071) Corrupted replication-offset-checkpoint leads to a dysfunctional Kafka server
Zane Zhang created KAFKA-4071:
----------------------------------

             Summary: Corrupted replication-offset-checkpoint leads to a dysfunctional Kafka server
                 Key: KAFKA-4071
                 URL: https://issues.apache.org/jira/browse/KAFKA-4071
             Project: Kafka
          Issue Type: Bug
          Components: clients, offset manager
    Affects Versions: 0.9.0.1
         Environment: Red Hat Enterprise 6.7
            Reporter: Zane Zhang

For an unknown reason, [kafka data root]/replication-offset-checkpoint was corrupted. Kafka first reported a NumberFormatException in server.out, and then repeatedly reported "error when handling request Name: FetchRequest; ..." errors (details below). As a result, clients could not read from or write to several partitions until replication-offset-checkpoint was manually deleted.

ERROR [KafkaApi-7] error when handling request
java.lang.NumberFormatException: For input string: " N?-; O"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:77)
        at java.lang.Integer.parseInt(Integer.java:493)
        at java.lang.Integer.parseInt(Integer.java:539)
        at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:272)
        at scala.collection.immutable.StringOps.toInt(StringOps.scala:30)
        at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:78)
        at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:93)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
        at scala.collection.immutable.Set$Set2.foreach(Set.scala:111)
        at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:173)

ERROR [KafkaApi-7] error when handling request Name: FetchRequest; Version: 1; CorrelationId: 0; ClientId: ReplicaFetcherThread-1-7; ReplicaId: 6; MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo: [prodTopicDal09E,166] -> PartitionFetchInfo(7123666,20971520),[prodTopicDal09E,118] -> PartitionFetchInfo(7128188,20971520),[prodTopicDal09E,238] ->
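To make the failure mode easier to follow: the checkpoint is a small text file, roughly a version number, an entry count, and then one "topic partition offset" line per partition, and the broker parses those fields as numbers, which is why binary garbage surfaces as a NumberFormatException out of OffsetCheckpoint.read. The sketch below is a rough, hypothetical re-creation of that read path in Java, not the broker's actual Scala implementation; the catch clause at the end illustrates one way a corrupt checkpoint could be treated as absent instead of failing every subsequent request.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class CheckpointReadSketch {

    // Reads a checkpoint laid out as: version, entry count, then "topic partition offset" lines.
    // A corrupted line makes parseInt/parseLong throw NumberFormatException, which is exactly
    // the symptom reported in this ticket.
    static Map<String, Long> read(Path file) throws IOException {
        Map<String, Long> offsets = new HashMap<>();
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            int version = Integer.parseInt(reader.readLine().trim());  // blows up on binary garbage
            int expected = Integer.parseInt(reader.readLine().trim());
            String line;
            while ((line = reader.readLine()) != null && !line.isEmpty()) {
                String[] parts = line.split("\\s+");                   // topic, partition, offset
                offsets.put(parts[0] + "-" + parts[1], Long.parseLong(parts[2]));
            }
            if (offsets.size() != expected) {
                throw new IOException("expected " + expected + " entries (version " + version
                        + ") but read " + offsets.size());
            }
        } catch (NumberFormatException | ArrayIndexOutOfBoundsException e) {
            // A more forgiving broker could log this, discard the file, and rebuild it,
            // rather than rethrowing on every request that touches the partition.
            throw new IOException("corrupt checkpoint file " + file, e);
        }
        return offsets;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(read(Paths.get(args[0])));
    }
}
```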
[jira] [Commented] (KAFKA-4071) Corrupted replication-offset-checkpoint leads to a dysfunctional Kafka server
[ https://issues.apache.org/jira/browse/KAFKA-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430091#comment-15430091 ]

Zane Zhang commented on KAFKA-4071:
-----------------------------------

The contents of the corrupted replication-offset-checkpoint:

Nê-;O W% __hit_offsetprodTopicDal09Dä Ó-VZ5ðV"ðNê-?O*ü@% __hit_offsetprodTopicDal09D%tVZ5òV"òNê-QO%äã:% __hit_offsetprodTopicDal09D. gÝVZ6 V" Nê-\Oaê4ç% __hit_offsetprodTopicDal09D ÑVZ6V"Nê-]OR L% __hit_offsetprodTopicDal09DÒ VZ6V"Nê-_O÷¸% __hit_offsetprodTopicDal09D2 iVZ6V
[jira] [Updated] (KAFKA-4071) Corrupted replication-offset-checkpoint leads to a dysfunctional Kafka server
[ https://issues.apache.org/jira/browse/KAFKA-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zane Zhang updated KAFKA-4071:
------------------------------
    Priority: Critical  (was: Major)
[jira] [Updated] (KAFKA-4071) Corrupted replication-offset-checkpoint leads to a dysfunctional Kafka server
[ https://issues.apache.org/jira/browse/KAFKA-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zane Zhang updated KAFKA-4071:
------------------------------
    Description:
For an unknown reason, [kafka data root]/replication-offset-checkpoint was corrupted. Kafka first reported a NumberFormatException in server.out, and then repeatedly reported "error when handling request Name: FetchRequest; ..." errors (details below). As a result, clients could not read from or write to several partitions until replication-offset-checkpoint was manually deleted.

Can the Kafka broker handle this error and survive it?
And what is the reason this file was corrupted? Only one file was corrupted, and no noticeable disk failure was detected.

ERROR [KafkaApi-7] error when handling request
java.lang.NumberFormatException: For input string: " N?-; O"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:77)
        at java.lang.Integer.parseInt(Integer.java:493)
        at java.lang.Integer.parseInt(Integer.java:539)
        at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:272)
        at scala.collection.immutable.StringOps.toInt(StringOps.scala:30)
        at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:78)
        at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:93)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
        at scala.collection.immutable.Set$Set2.foreach(Set.scala:111)
        at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:173)

ERROR [KafkaApi-7] error when handling request Name: FetchRequest; Version: 1; CorrelationId: 0; ClientId: ReplicaFetcherThread-1-7; ReplicaId: 6; MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo: [prodTopicDal09E,166] -> PartitionFetchInfo(7123666,20971520),[prodTopicDal09E,118] -> PartitionFetchInfo(7128188,20971520),[prodTopicDal09E,238] ->

  was:
For an unknown reason, [kafka data root]/replication-offset-checkpoint was corrupted. Kafka first reported a NumberFormatException in server.out, and then repeatedly reported "error when handling request Name: FetchRequest; ..." errors (details below). As a result, clients could not read from or write to several partitions until replication-offset-checkpoint was manually deleted.

ERROR [KafkaApi-7] error when handling request
java.lang.NumberFormatException: For input string: " N?-; O"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:77)
        at java.lang.Integer.parseInt(Integer.java:493)
        at java.lang.Integer.parseInt(Integer.java:539)
        at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:272)
        at scala.collection.immutable.StringOps.toInt(StringOps.scala:30)
        at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:78)
        at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:93)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
        at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
        at scala.collection.immutable.Set$Set2.foreach(Set.scala:111)
        at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:173)

ERROR [KafkaApi-7] error when handling request Name: FetchRequest; Version: 1; CorrelationId: 0; ClientId: ReplicaFetcherThread-1-7; ReplicaId: 6; MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo: [prodTopicDal09E,166] -> PartitionFetchInfo(7123666,20971520),[prodTopicDal09E,118] -> PartitionFetchInfo(7128188,20971520),[prodTopicDal09E,238] ->
[jira] [Updated] (KAFKA-4065) Property missing in ProducerConfig.java - KafkaProducer API 0.9.0.0
[ https://issues.apache.org/jira/browse/KAFKA-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

manzar updated KAFKA-4065:
--------------------------
    Description:
1) The "compressed.topics" property is missing from ProducerConfig.java in the KafkaProducer API 0.9.0.0. Because of that, we cannot enable compression for specific topics only.
2) ProducerConfig.java has a "compression.type" property, whereas the official documentation leads one to expect "compression.codec".

  was:
1) The "compressed.topics" property is missing from ProducerConfig.java in the KafkaProducer API 0.9.0.0.
2) ProducerConfig.java has a "compression.type" property, whereas the official documentation leads one to expect "compression.codec".

> Property missing in ProducerConfig.java - KafkaProducer API 0.9.0.0
> --------------------------------------------------------------------
>
>                 Key: KAFKA-4065
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4065
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: manzar
>
> 1) The "compressed.topics" property is missing from ProducerConfig.java in the KafkaProducer API 0.9.0.0. Because of that, we cannot enable compression for specific topics only.
> 2) ProducerConfig.java has a "compression.type" property, whereas the official documentation leads one to expect "compression.codec".
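For context on point 2: in the new Java producer, compression is configured producer-wide through ProducerConfig.COMPRESSION_TYPE_CONFIG ("compression.type"); there is no per-topic switch comparable to the old producer's "compressed.topics". A minimal sketch follows; the broker address and topic name are placeholders.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressionConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Applies to every record this producer sends; valid values include "none", "gzip", "snappy".
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("some-topic", "key", "value"));
        }
    }
}
```

If per-topic compression is required with the new producer, the workaround is to use separate producer instances with different "compression.type" settings.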