[jira] [Created] (KAFKA-12172) Migrate streams:examples module to JUnit 5

2021-01-11 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12172:
--

 Summary: Migrate streams:examples module to JUnit 5
 Key: KAFKA-12172
 URL: https://issues.apache.org/jira/browse/KAFKA-12172
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
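
For reference, the bulk of such a migration is swapping the JUnit 4
annotations and assertions for their JUnit 5 (Jupiter) equivalents, and moving
the module's Gradle test dependencies to junit-jupiter. A minimal before/after
sketch, using a hypothetical test class for illustration (the real tests live
under streams/examples):

// JUnit 4 (before): import org.junit.Test;
//                   import static org.junit.Assert.assertEquals;
// JUnit 5 (after):
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class WordCountDemoTest {  // hypothetical example class
    @Test
    void shouldSplitLineIntoWords() {
        assertEquals(2, "kafka streams".split(" ").length);
    }
}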






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #389

2021-01-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove unnecessary semicolon in NetworkClient (#9853)

[github] KAFKA-12156: Document single threaded response handling in Admin 
client (#9842)


--
[...truncated 3.51 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

[GitHub] [kafka-site] JaviOverflow opened a new pull request #320: Update powered-by.html used by percentage

2021-01-11 Thread GitBox


JaviOverflow opened a new pull request #320:
URL: https://github.com/apache/kafka-site/pull/320


   The landing page is showing 80%, while the powered-by page is showing 60%. I 
want to link this as a source for a blog post, but I can't just link the 
homepage, and the powered-by page is showing outdated stats.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12173) Migrate streams:streams-scala module to JUnit 5

2021-01-11 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12173:
--

 Summary: Migrate streams:streams-scala module to JUnit 5
 Key: KAFKA-12173
 URL: https://issues.apache.org/jira/browse/KAFKA-12173
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai








Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #344

2021-01-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove unnecessary semicolon in NetworkClient (#9853)

[github] KAFKA-12156: Document single threaded response handling in Admin 
client (#9842)


--
[...truncated 6.96 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.

[jira] [Created] (KAFKA-12174) A program for dynamically altering log4j levels at runtime.

2021-01-11 Thread ChenLin (Jira)
ChenLin created KAFKA-12174:
---

 Summary: A program for dynamically altering log4j levels at runtime.
 Key: KAFKA-12174
 URL: https://issues.apache.org/jira/browse/KAFKA-12174
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: ChenLin


A program for dynamically altering log4j levels at runtime.
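
There is no design detail in the ticket yet; as a minimal sketch, assuming the
log4j 1.x API that Kafka ships with, such a program could look like the
following (the class name and logger target are hypothetical):

import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

// Hypothetical sketch: change a logger's level while the process is running.
public class Log4jLevelChanger {
    public static void setLevel(String loggerName, String levelName) {
        Logger logger = (loggerName == null || loggerName.isEmpty())
                ? LogManager.getRootLogger()
                : LogManager.getLogger(loggerName);
        // Level.toLevel falls back to INFO if levelName does not parse.
        logger.setLevel(Level.toLevel(levelName, Level.INFO));
    }

    public static void main(String[] args) {
        setLevel("kafka.server.KafkaApis", "DEBUG");  // illustrative target
    }
}

Note that the broker already exposes something similar through the
kafka.Log4jController JMX MBean, so part of this work may be a friendlier
front end around it.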





Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #371

2021-01-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12156: Document single threaded response handling in Admin 
client (#9842)


--
[...truncated 3.51 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1cc9e85f,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1cc9e85f,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@670d032c,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@670d032c,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5bbd3095,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5bbd3095,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1afdb3b3,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1afdb3b3,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@49a53ba3,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@49a53ba3,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@46be45d8,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@46be45d8,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7435ec02, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7435ec02, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@322b91c, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@322b91c, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@633af2f8, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@633af2f8, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@571efbc2, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache

[jira] [Created] (KAFKA-12175) Migrate generator module to JUnit 5

2021-01-11 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12175:
--

 Summary: Migrate generator module to JUnit 5
 Key: KAFKA-12175
 URL: https://issues.apache.org/jira/browse/KAFKA-12175
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai








[GitHub] [kafka-site] bbejeck commented on a change in pull request #319: Updates for 2.6.1

2021-01-11 Thread GitBox


bbejeck commented on a change in pull request #319:
URL: https://github.com/apache/kafka-site/pull/319#discussion_r555089947



##
File path: 26/quickstart-zookeeper.html
##
@@ -0,0 +1,277 @@
+
 
-
+

Review comment:
   I think you don't want the `/` character here

##
File path: 26/quickstart-docker.html
##
@@ -0,0 +1,204 @@
+

[GitHub] [kafka-site] mimaison commented on pull request #319: Updates for 2.6.1

2021-01-11 Thread GitBox


mimaison commented on pull request #319:
URL: https://github.com/apache/kafka-site/pull/319#issuecomment-758012963


   Thanks @bbejeck 
   I've pushed an update







[jira] [Created] (KAFKA-12176) Consider changing default log.message.timestamp.difference.max.ms

2021-01-11 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12176:
---

 Summary: Consider changing default 
log.message.timestamp.difference.max.ms
 Key: KAFKA-12176
 URL: https://issues.apache.org/jira/browse/KAFKA-12176
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
 Fix For: 3.0.0


The default `log.message.timestamp.difference.max.ms` is Long.MaxValue, which 
means the broker will accept arbitrary timestamps. The broker relies on 
timestamps internally for deciding when segments should be rolled and when 
they should be deleted. A buggy client that produces messages with timestamps 
too far in the future or past can cause segments to roll frequently, which can 
lead to open file exhaustion, or can cause segments to be retained 
indefinitely, which can lead to disk space exhaustion.

In https://issues.apache.org/jira/browse/KAFKA-4340, it was proposed to set the 
default equal to `log.retention.ms`, which still seems to make sense. As far as 
I can tell, this was rejected for two reasons. The first was compatibility, 
which just means we would need a major upgrade. The second was that we 
previously did not have a way to tell the user which record had violated the 
max timestamp difference, but now we have 
[KIP-467|https://cwiki.apache.org/confluence/display/KAFKA/KIP-467%3A+Augment+ProduceResponse+error+messaging+for+specific+culprit+records].
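
For illustration, the proposed coupling would look roughly like this in broker
configuration (illustrative values, not decided defaults):

# server.properties (illustrative values only)
log.retention.ms=604800000                          # 7 days
# proposed: bound the accepted timestamp drift by the retention window
log.message.timestamp.difference.max.ms=604800000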





[jira] [Created] (KAFKA-12177) Retention is not idempotent

2021-01-11 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-12177:


 Summary: Retention is not idempotent
 Key: KAFKA-12177
 URL: https://issues.apache.org/jira/browse/KAFKA-12177
 Project: Kafka
  Issue Type: Improvement
Reporter: Lucas Bradstreet


Kafka today applies retention in the following order:
 # Time
 # Size
 # Log start offset


Today it is possible for a segment with offsets less than the log start offset 
to contain data that is not deletable due to time retention. This means that 
it's possible for log start offset retention to unblock further deletions as a 
result of time-based retention. Note that this does require a case where the 
max timestamp for each segment increases, decreases, and then increases again. 
Even so, it would be nice to make retention idempotent by applying log start 
offset retention first, followed by size and time. This would also be 
potentially cheaper to perform, as neither log start offset nor size retention 
requires the maxTimestamp for a segment to be loaded from disk after a broker 
restart.
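
A sketch of the proposed pass order (the method names echo the broker's
per-rule retention passes, but this is an illustrative stub, not the actual
kafka.log.Log code):

class RetentionSketch {
    int deleteOldSegments() {
        // Log-start-offset retention first makes the pass idempotent; only
        // the time-based rule needs each segment's maxTimestamp from disk.
        int deleted = deleteLogStartOffsetBreachedSegments();
        deleted += deleteRetentionSizeBreachedSegments();
        deleted += deleteRetentionMsBreachedSegments();
        return deleted;
    }

    // Stubs standing in for the real per-rule deletion passes:
    int deleteLogStartOffsetBreachedSegments() { return 0; }
    int deleteRetentionSizeBreachedSegments()  { return 0; }
    int deleteRetentionMsBreachedSegments()    { return 0; }
}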





[jira] [Created] (KAFKA-12178) Improve guard rails for consumer commit when using EOS

2021-01-11 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-12178:


 Summary: Improve guard rails for consumer commit when using EOS
 Key: KAFKA-12178
 URL: https://issues.apache.org/jira/browse/KAFKA-12178
 Project: Kafka
  Issue Type: Improvement
Reporter: Lucas Bradstreet


When EOS is in use, offsets are committed via the producer using the 
sendOffsetsToTransaction API. This is what ensures that a transaction is 
committed atomically along with the consumer offsets. Unfortunately this does 
not prevent the consumer from committing on its own, making it easy to lose 
EOS characteristics by accident. enable.auto.commit = true is the default 
setting for consumers; if it is not set to false, or if commitSync/commitAsync 
are called manually, offsets will no longer be committed correctly for EOS 
semantics.

We need more guard rails to prevent consumers from being used incorrectly in 
this way. This is difficult to achieve because, currently, the consumer has no 
knowledge that a producer is even committing offsets on its behalf.
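
For reference, the intended pattern routes all offset commits through the
producer. A minimal sketch (topic, group, and transactional ids here are
illustrative):

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class EosCommitSketch {
    public static void main(String[] args) {
        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "eos-group");
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // required for EOS
        cp.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "eos-app-1");
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            consumer.subscribe(Collections.singletonList("input"));
            producer.initTransactions();
            while (true) {  // loop forever for the demo
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> rec : records) {
                    producer.send(new ProducerRecord<>("output", rec.key(), rec.value()));
                    offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                            new OffsetAndMetadata(rec.offset() + 1));
                }
                // Offsets commit atomically with the transaction; calling
                // consumer.commitSync() here instead would break EOS, which is
                // exactly the accident described above.
                producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                producer.commitTransaction();
            }
        }
    }
}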





[GitHub] [kafka-site] mimaison commented on pull request #319: Updates for 2.6.1

2021-01-11 Thread GitBox


mimaison commented on pull request #319:
URL: https://github.com/apache/kafka-site/pull/319#issuecomment-758182924


   @bbejeck @mjsax @vvcephei  Can you take a look? I'd like to release 2.6.1 
today







[jira] [Created] (KAFKA-12179) Install Kafka on IBM Cloud

2021-01-11 Thread Muhammad Arif (Jira)
Muhammad Arif created KAFKA-12179:
-

 Summary: Install Kafka on IBM Cloud
 Key: KAFKA-12179
 URL: https://issues.apache.org/jira/browse/KAFKA-12179
 Project: Kafka
  Issue Type: Improvement
  Components: docs, documentation
Affects Versions: 2.7.0
Reporter: Muhammad Arif
 Fix For: 2.7.0
 Attachments: Kafka1.png, Kafka1.png, Kafka2.png, KafkaVerify1.png, 
KafkaVerify2.png, KafkaVerify3.png, KafkaVerify4.png, Kubernetes1.png, 
Kubernetes2.png, Kubernetes3.png, Kubernetes4.png, Kubernetes5.png, 
Kubernetes6.png, Kubernetes7.png, Storage1.png, Storage2.png

*Installing Kafka on IBM Cloud*

 

*+Contents+*
 # Introduction
 # Provision Kubernetes Cluster
 # Deploy IBM Cloud Block-Storage Plugin
 # Deploy Kafka
 # Verifying the Kafka Installation

 

*Introduction*

To complete this tutorial, you need an IBM Cloud account. If you do not have 
one, please [register/signup|https://cloud.ibm.com/registration] first.

To install Kafka, we use a Kubernetes cluster and the IBM Cloud Block-Storage 
plugin for the persistent volume. Upon completion of this tutorial, you will 
have Kafka up and running on the Kubernetes cluster.
 # Provision the Kubernetes cluster. If you have already set one up, skip to 
step 2.
 # Deploy the IBM Cloud Block-Storage plugin to the created cluster. If you 
have already done this, skip to step 3.
 # Deploy Kafka.

*Provision Kubernetes Cluster*
 * Click on the *Catalog* button at the top center, or open the 
[Catalog|https://cloud.ibm.com/catalog] directly.
 * In the catalog search box, search for *Kubernetes Service* and click on it.
 * You are now on the Create Kubernetes Cluster page, which offers two plans 
for creating the cluster: the free plan and the standard plan.

 

*Using Free Plan:*
 * Select Pricing Plan as “*Free*”.
 * Click on *Create*.
 * Wait a few minutes, and your cluster will be ready.

 

*_>Note_*_: Be careful when choosing the free cluster: your pods could get 
stuck in the Pending state due to insufficient compute and memory resources. 
If you hit this issue, recreate the resources using the standard plan to get 
more capacity._

 

*Using Standard Plan:*
 * Select Pricing Plan as “*Standard*”
 * Select the latest available Kubernetes version, or the one your 
application requires; in our example we have set it to ‘*18.13*’.
 * Select Infrastructure as “*Classic*”
 * Leave Resource Group to “*Default*”
 * Select Geography as “*Asia*” or your desired one.
 * Select Availability as “*Single Zone*”.

> _This option lets you create the resources in either a single availability 
> zone or multiple ones. With multiple availability zones, the resources are 
> spread across more than one zone, so the cluster can survive a zone failure 
> and continue to work._
 * Select the Worker Zone as *Chennai 01*.
 * In Worker Pool, enter your desired number of nodes; we used “*3*”.
 * Leave the Encrypt Local Disk option set to “*On*”.
 * Set Master Service Endpoint to “*Both private and public endpoints*”.
 * Name your cluster “*Kafka-Cluster*”.
 * Add *tags* to your cluster and click on *Create*.
 * Wait a few minutes, and your cluster will be ready.

*Deploy IBM Cloud Block-Storage Plugin*
 * Click on the *Catalog* button at the top center.
 * In the catalog search box, search for *IBM Cloud Block Storage Plug-in* and 
click on it.
 * Select your cluster.
 * Set *Target Namespace* to “*kafka-storage*”; leave *name* and *resource 
group* at their *default* values.
 * Click on *Install*.

 

*Deploy Kafka*
 * Go back to the *Catalog* and search for Kafka.
 * Provide the details as below:
 * Target: *IBM Kubernetes Service*
 * Method: *Helm chart*
 * Kubernetes cluster: *Kafka-Cluster* (jp-tok)
 * Target namespace: *kafka*
 * Workspace: *kafka-01-07-2021*
 * Resource group: *Default*
 * Click on *Parameters with Default Values*. You can set the deployment 
values or use the defaults; we used the defaults in this example.
 * Click on *Install*.

*Verifying the Kafka Installation*
 * Go to the *Resources List* in the left navigation menu and click on 
*Kubernetes*, then *Clusters*.
 * Click on your created *Kafka-Cluster*.
 * A screen comes up for your cluster; click on *Actions*, then *Web Terminal*.
 * A warning will appear asking you to install the Web Terminal; confirm the 
installation.
 * When the terminal is installed, click on *Actions* again, open the web 
terminal, and type the following commands in the command window. They list the 
namespaces of your cluster, and you can see *kafka* active.

$ kubectl get ns

$ kubectl get pod -n kafka -o wide

$ kubectl get service -n kafka

   

The installation is done. Enjoy!




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #372

2021-01-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Factor `RaftManager` out of `TestRaftServer` (#9839)


--
[...truncated 3.51 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@78c330b4,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@78c330b4,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@446d4eb, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@446d4eb, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b19b3d4, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b19b3d4, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b0ea679, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b0ea679, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7aa9633c, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7aa9633c, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48fa010e, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48fa010e, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e7b8cd6, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e7b8cd6, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20a8c60e, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20a8c60e, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1129f1a4, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1129f1a4, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@9c370fe, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@9c370fe, 
timestamped = false, caching = false, logging = true] PASSED

org.

[GitHub] [kafka-site] mimaison commented on pull request #319: Updates for 2.6.1

2021-01-11 Thread GitBox


mimaison commented on pull request #319:
URL: https://github.com/apache/kafka-site/pull/319#issuecomment-758193636


   Thank you!







[GitHub] [kafka-site] mimaison merged pull request #319: Updates for 2.6.1

2021-01-11 Thread GitBox


mimaison merged pull request #319:
URL: https://github.com/apache/kafka-site/pull/319


   







[GitHub] [kafka-site] mimaison opened a new pull request #321: Recommend 2.6.1 binary with Scala 2.13

2021-01-11 Thread GitBox


mimaison opened a new pull request #321:
URL: https://github.com/apache/kafka-site/pull/321


   







[GitHub] [kafka-site] mimaison merged pull request #321: Recommend 2.6.1 binary with Scala 2.13

2021-01-11 Thread GitBox


mimaison merged pull request #321:
URL: https://github.com/apache/kafka-site/pull/321


   







[ANNOUNCE] Apache Kafka 2.6.1

2021-01-11 Thread Mickael Maison
The Apache Kafka community is pleased to announce the release for
Apache Kafka 2.6.1.

This is a bug fix release and it includes fixes and improvements from
41 JIRAs, including a few critical bugs.

All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/2.6.1/RELEASE_NOTES.html


You can download the source and binary release (Scala 2.12 and Scala 2.13) from:
https://kafka.apache.org/downloads#2.6.1


---


Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records
to one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming
the input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide,
including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
Zalando, among others.


A big thank you to the following 36 contributors to this release!

A. Sophie Blee-Goldman, John Roesler, Bruno Cadonna, Rajini Sivaram,
Guozhang Wang, Matthias J. Sax, Chris Egerton, Mickael Maison, Randall
Hauch, leah, Luke Chen, Jason Gustafson, Konstantine Karantasis,
Michael Bingham, Lucas Bradstreet, Andrew Egelhofer, Micah Paul Ramos,
Nikolay, Nitesh Mor, Alex Diachenko, xakassi, Shaik Zakir Hussain,
Stanislav Kozlovski, Stanislav Vodetskyi, Thorsten Hake, Tom Bentley,
Vikas Singh, feyman2016, high.lee, Dima Reznik, Colin Patrick McCabe,
Edoardo Comar, Jim Galasyn, Chia-Ping Tsai, Justine Olshan, Levani
Kokhreidze


We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!

Regards,
Mickael


Re: [ANNOUNCE] Apache Kafka 2.6.1

2021-01-11 Thread Bill Bejeck
Thanks for driving the release Mickael!

-Bill

On Mon, Jan 11, 2021 at 5:17 PM Mickael Maison  wrote:

> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 2.6.1.
>
> This is a bug fix release and it includes fixes and improvements from
> 41 JIRAs, including a few critical bugs.
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.6.1/RELEASE_NOTES.html
>
>
> You can download the source and binary release (Scala 2.12 and Scala 2.13)
> from:
> https://kafka.apache.org/downloads#2.6.1
>
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records
> to one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming
> the input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide,
> including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
> Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
> Zalando, among others.
>
>
> A big thank you to the following 36 contributors to this release!
>
> A. Sophie Blee-Goldman, John Roesler, Bruno Cadonna, Rajini Sivaram,
> Guozhang Wang, Matthias J. Sax, Chris Egerton, Mickael Maison, Randall
> Hauch, leah, Luke Chen, Jason Gustafson, Konstantine Karantasis,
> Michael Bingham, Lucas Bradstreet, Andrew Egelhofer, Micah Paul Ramos,
> Nikolay, Nitesh Mor, Alex Diachenko, xakassi, Shaik Zakir Hussain,
> Stanislav Kozlovski, Stanislav Vodetskyi, Thorsten Hake, Tom Bentley,
> Vikas Singh, feyman2016, high.lee, Dima Reznik, Colin Patrick McCabe,
> Edoardo Comar, Jim Galasyn, Chia-Ping Tsai, Justine Olshan, Levani
> Kokhreidze
>
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
> Regards,
> Mickael
>


[jira] [Created] (KAFKA-12180) Implement the KIP-631 message generator changes

2021-01-11 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-12180:


 Summary: Implement the KIP-631 message generator changes
 Key: KAFKA-12180
 URL: https://issues.apache.org/jira/browse/KAFKA-12180
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe
Assignee: Colin McCabe


Implement the KIP-631 message generator changes

* Implement the uint16 type
* Implement MetadataRecordType and MetadataJsonConverters
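
For illustration, a port field using the new type in a message-definition
JSON file might read as follows (hypothetical fragment, not the final schema):

{ "name": "Port", "type": "uint16", "versions": "0+",
  "about": "The broker's port." }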





Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #390

2021-01-11 Thread Apache Jenkins Server
See 




Re: [VOTE] voting on KIP-631: the quorum-based Kafka controller

2021-01-11 Thread Colin McCabe
After thinking about this more, I have decided to just use an unsigned 16-bit 
int for ports.  Using 4 bytes would be wasteful here.  So I set these fields to 
uint16.  This is straightforward to support in the wire protocol and will be 
more efficient going forward.  I updated the KIP.
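
For illustration, the wrap-around described in the addendum below, as a small
self-contained Java check:

public class PortWidthDemo {
    public static void main(String[] args) {
        short asInt16 = (short) 33000;               // -32536, i.e. 33000 - 65536
        int asUint16 = Short.toUnsignedInt(asInt16); // 33000 when decoded as uint16
        System.out.println(asInt16 + " -> " + asUint16);
    }
}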

best,
Colin

On Tue, Jan 5, 2021, at 17:03, Colin McCabe wrote:
> Hi all,
> 
> Addendum: some of the port types in this KIP were specified as int16 in 
> the wire protocol.  But this does not gracefully handle ports like 
> 33,000, which shows up as negative when using a signed 16 bit number.  
> I think eventually we'll want a uint16 type, but for now I just made 
> them int32.  This is consistent with what we do in MetadataResponse and 
> a few other places.
> 
> best,
> Colin
> 
> 
> On Mon, Dec 21, 2020, at 14:42, Colin McCabe wrote:
> > Hi all,
> > 
> > With non-binding +1 votes from Ron Dagostino, Tom Bentley and Unmesh 
> > Joshi, and binding +1 votes from David Arthur, Boyang Chen, Jason 
> > Gustafson, Ismael Juma, David Jacot, Jun Rao, the KIP passes.
> > 
> > thanks, all!
> > 
> > cheers,
> > Colin
> > 
> > On Fri, Dec 18, 2020, at 12:42, Colin McCabe wrote:
> > > Hi all,
> > > 
> > > I'm going to close the vote in a few hours.  Thanks to everyone who 
> > > reviewed and voted.
> > > 
> > > best,
> > > Colin
> > > 
> > > 
> > > On Fri, Dec 18, 2020, at 10:08, Jun Rao wrote:
> > > > Thanks, Colin. +1
> > > > 
> > > > Jun
> > > > 
> > > > On Thu, Dec 17, 2020 at 2:24 AM David Jacot  wrote:
> > > > 
> > > > > Thanks for driving this KIP, Colin. The KIP is really well written. 
> > > > > This is
> > > > > so exciting!
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > > Best,
> > > > > David
> > > > >
> > > > > On Wed, Dec 16, 2020 at 11:51 PM Colin McCabe  
> > > > > wrote:
> > > > >
> > > > > > On Wed, Dec 16, 2020, at 13:08, Ismael Juma wrote:
> > > > > > > Thanks for all the work on the KIP. Given the magnitude of the 
> > > > > > > KIP, I
> > > > > > > expect that some tweaks will be made as the code is implemented,
> > > > > reviewed
> > > > > > > and tested. I'm overall +1 (binding).
> > > > > > >
> > > > > >
> > > > > > Thanks, Ismael.
> > > > > >
> > > > > > > A few comments below:
> > > > > > > 1. It's a bit weird for kafka-storage to output a random uuid. 
> > > > > > > Would it
> > > > > > be
> > > > > > > better to have a dedicated command for that?
> > > > > >
> > > > > > I'm not sure.  The nice thing about putting it in kafka-storage.sh 
> > > > > > is
> > > > > that
> > > > > > it's there when you need it.  I also think that having subcommands, 
> > > > > > like
> > > > > we
> > > > > > do here, really reduces the "clutter" that we have in some other
> > > > > > command-line tools.  When you get help about the "info" subcommand, 
> > > > > > you
> > > > > > don't see flags for any other subcommand, for example.  I guess we 
> > > > > > can
> > > > > move
> > > > > > this later if it seems more intuitive though.
> > > > > >
> > > > > > > Also, since we use base64
> > > > > > > encoded uuids nearly everywhere (including cluster and topic 
> > > > > > > ids), it
> > > > > > would
> > > > > > > be good to follow that pattern instead of the less compact
> > > > > > > "51380268-1036-410d-a8fc-fb3b55f48033".
> > > > > >
> > > > > > Good idea.  I have updated this to use base64 encoded UUIDs.
> > > > > >
> > > > > > > 2. This is a nit, but I think it would be better to talk about 
> > > > > > > built-in
> > > > > > > quorum mode instead of KIP-500 mode. It's more self descriptive 
> > > > > > > than a
> > > > > > KIP
> > > > > > > reference.
> > > > > >
> > > > > > I do like the sound of "quorum mode."  I guess the main question 
> > > > > > is, if
> > > > > we
> > > > > > later implement raft quorums for regular topics, would that 
> > > > > > nomenclature
> > > > > be
> > > > > > confusing?  I guess we could talk about "metadata quorum mode" to 
> > > > > > avoid
> > > > > > confusion.  Hmm.
> > > > > >
> > > > > > > 3. Did we consider using `session` (like the group coordinator) 
> > > > > > > instead
> > > > > > of
> > > > > > > `registration` in `broker.registration.timeout.ms`?
> > > > > >
> > > > > > Hmm, broker.session.timeout.ms does sound better.  I changed it to 
> > > > > > that.
> > > > > >
> > > > > > > 4. The flat id space for the controller and broker while 
> > > > > > > requiring a
> > > > > > > different id in embedded mode seems a bit unintuitive. Are there 
> > > > > > > any
> > > > > > other
> > > > > > > systems that do this? I know we covered some of the reasons in the
> > > > > > "Shared
> > > > > > > IDs between Multiple Nodes" rejected alternatives section, but it
> > > > > didn't
> > > > > > > seem totally convincing to me.
> > > > > >
> > > > > > One of my concerns here is that using separate ID spaces for 
> > > > > > controllers
> > > > > > versus brokers would potentially lead to metrics or logging 
> > > > > > collisions.
> > > > > We
> > > > > > can take a look at that again 

[jira] [Created] (KAFKA-12181) Loosen monotonic fetch offset validation by raft leader

2021-01-11 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12181:
---

 Summary: Loosen monotonic fetch offset validation by raft leader
 Key: KAFKA-12181
 URL: https://issues.apache.org/jira/browse/KAFKA-12181
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Currently in the Raft leader implementation, we validate that follower fetch 
offsets increase monotonically. This protects the guarantees that Raft provides, 
since a non-monotonic update means that the follower has lost committed data, 
which may or may not result in data loss; it depends on whether the update also 
causes a non-monotonic update to the high watermark. If the fetch is from an 
observer, no harm is done, since observers do not affect the high watermark. If 
the fetch is from a voter and a majority of nodes (excluding the fetcher) have 
offsets larger than or equal to the high watermark, there is also no harm done. 
It's easy to check for these cases and log a warning instead of raising an error.

The question, then, is what to do if we get a voter fetch that does cause the 
high watermark to regress. The problem is that there are some scenarios where 
data loss might be unavoidable. For example, a follower's disk might become 
corrupt and ultimately get replaced. Often the damage is already done by the 
time we get the Fetch request with the non-monotonic offset, so the stricter 
validation in fact just prevents recovery.

It's worth noting also that the stricter validation by the leader cannot be 
relied on to detect data loss. It could be the case that a recovered voter 
restarts in the middle of an election. There is no general way that I'm aware 
of that lets us detect when a voter has lost previously committed data.

With all of this in mind, my conclusion is that it makes sense to loosen the 
validation in fetches. The leader can still ensure that its high watermark does 
not go backwards, and we can still log a warning, but it should not prevent 
replicas from catching up after hard failures with disk state loss.
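
A self-contained sketch of the loosened check (all names and the quorum
arithmetic are illustrative, not the actual raft module code):

import java.util.Map;

public class LooseFetchValidation {
    // Accept a non-monotonic voter fetch offset; warn instead of erroring.
    static void recordVoterFetch(Map<Integer, Long> voterEndOffsets,
                                 int voterId, long fetchOffset, long highWatermark) {
        Long previous = voterEndOffsets.get(voterId);
        if (previous != null && fetchOffset < previous) {
            // Harmless if a majority of voters, excluding the fetcher,
            // still have end offsets at or beyond the high watermark.
            long covering = voterEndOffsets.entrySet().stream()
                    .filter(e -> e.getKey() != voterId && e.getValue() >= highWatermark)
                    .count();
            boolean hwSafe = covering >= voterEndOffsets.size() / 2 + 1;
            System.out.printf("WARN: non-monotonic fetch offset %d from voter %d "
                    + "(high watermark %s)%n", fetchOffset, voterId,
                    hwSafe ? "unaffected" : "may regress");
        }
        voterEndOffsets.put(voterId, fetchOffset); // accept the fetch either way
    }
}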





Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #345

2021-01-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Factor `RaftManager` out of `TestRaftServer` (#9839)


--
[...truncated 6.96 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldForwardDeprecatedInit STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldForwardDeprecatedInit PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kaf

[jira] [Reopened] (KAFKA-6223) Please delete old releases from mirroring system

2021-01-11 Thread Sebb (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebb reopened KAFKA-6223:
-

A couple of old releases could/should now be dropped from the mirrors:

2.5.0
2.6.0

> Please delete old releases from mirroring system
> 
>
> Key: KAFKA-6223
> URL: https://issues.apache.org/jira/browse/KAFKA-6223
> Project: Kafka
>  Issue Type: Bug
> Environment: https://dist.apache.org/repos/dist/release/kafka/
>Reporter: Sebb
>Assignee: Rajini Sivaram
>Priority: Major
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> It's unfair to expect the 3rd party mirrors to carry old releases.
> Note that older releases can still be linked from the download page, but such 
> links should use the archive server at:
> https://archive.apache.org/dist/kafka/
> A suggested process is:
> + Change the download page to use archive.a.o for old releases
> + Delete the corresponding directories from 
> {{https://dist.apache.org/repos/dist/release/kafka/}}
> e.g. {{svn delete https://dist.apache.org/repos/dist/release/kafka/0.8.0}}
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive





Re: [ANNOUNCE] Apache Kafka 2.6.1

2021-01-11 Thread James Cheng
Thank you Mickael for running the release. Good job everyone!

-James

Sent from my iPhone

> On Jan 11, 2021, at 2:17 PM, Mickael Maison  wrote:
> 
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 2.6.1.
> 
> This is a bug fix release and it includes fixes and improvements from
> 41 JIRAs, including a few critical bugs.
> 
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.6.1/RELEASE_NOTES.html
> 
> 
> You can download the source and binary release (Scala 2.12 and Scala 2.13) 
> from:
> https://kafka.apache.org/downloads#2.6.1
> 
> 
> ---
> 
> 
> Apache Kafka is a distributed streaming platform with four core APIs:
> 
> ** The Producer API allows an application to publish a stream of records
> to one or more Kafka topics.
> 
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
> 
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming
> the input streams to output streams.
> 
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
> 
> 
> With these APIs, Kafka can be used for two broad classes of application:
> 
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
> 
> ** Building real-time streaming applications that transform or react
> to the streams of data.
> 
> 
> Apache Kafka is in use at large and small companies worldwide,
> including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
> Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
> Zalando, among others.
> 
> 
> A big thank you to the following 36 contributors to this release!
> 
> A. Sophie Blee-Goldman, John Roesler, Bruno Cadonna, Rajini Sivaram,
> Guozhang Wang, Matthias J. Sax, Chris Egerton, Mickael Maison, Randall
> Hauch, leah, Luke Chen, Jason Gustafson, Konstantine Karantasis,
> Michael Bingham, Lucas Bradstreet, Andrew Egelhofer, Micah Paul Ramos,
> Nikolay, Nitesh Mor, Alex Diachenko, xakassi, Shaik Zakir Hussain,
> Stanislav Kozlovski, Stanislav Vodetskyi, Thorsten Hake, Tom Bentley,
> Vikas Singh, feyman2016, high.lee, Dima Reznik, Colin Patrick McCabe,
> Edoardo Comar, Jim Galasyn, Chia-Ping Tsai, Justine Olshan, Levani
> Kokhreidze
> 
> 
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
> 
> Thank you!
> 
> Regards,
> Mickael
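
As a concrete illustration of the Streams API described in the announcement
above, here is a minimal sketch of an application that consumes records from
one topic, uppercases the values, and produces them to another. The topic
names, application id, and broker address are placeholders, not part of the
announcement:

    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class UppercaseApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder application id and broker address.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                    Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                    Serdes.String().getClass());

            // Consume "input-topic", transform each value, and produce the
            // result to "output-topic".
            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("input-topic")
                   .mapValues(v -> v.toUpperCase())
                   .to("output-topic");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            // Close the Streams instance cleanly on JVM shutdown.
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }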


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #391

2021-01-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10500: Add KafkaStreams#removeStreamThread (#9695)


--
[...truncated 7.03 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldForwardDeprecatedInit STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldForwardDeprecatedInit PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #346

2021-01-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10500: Add KafkaStreams#removeStreamThread (#9695)


--
[...truncated 6.96 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@738a284b, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@738a284b, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@63de9372, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@63de9372, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7358d119, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7358d119, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@97215ea, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@97215ea, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2b18bcff, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2b18bcff, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3ccaffed, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3ccaffed, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@9af2174, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@9af2174, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@106bb8ee, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@106bb8ee, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7e7a0158, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7e7a0158, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@15a4bced, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@15a4bced, 
timestamped = false, caching = false, logging = false] PASSED

org

[jira] [Resolved] (KAFKA-10895) Basic auth extension's JAAS config can be corrupted by other plugins

2021-01-11 Thread Konstantine Karantasis (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantine Karantasis resolved KAFKA-10895.

Resolution: Fixed

> Basic auth extension's JAAS config can be corrupted by other plugins
> 
>
> Key: KAFKA-10895
> URL: https://issues.apache.org/jira/browse/KAFKA-10895
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 
> 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1, 2.7.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.3.2, 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> The Connect 
> [BasicAuthSecurityRestExtension|https://github.com/apache/kafka/blob/trunk/connect/basic-auth-extension/src/main/java/org/apache/kafka/connect/rest/basic/auth/extension/BasicAuthSecurityRestExtension.java]'s
>  doc states that "An entry with the name {{KafkaConnect}} is expected in the 
> JAAS config file configured in the JVM."
> This is technically accurate: the 
> [JaasBasicAuthFilter|https://github.com/apache/kafka/blob/afa5423356d3d2a2135a51200573b45d097f6d60/connect/basic-auth-extension/src/main/java/org/apache/kafka/connect/rest/basic/auth/extension/JaasBasicAuthFilter.java#L61-L63]
>  that the extension installs creates a {{LoginContext}} using a 
> [constructor|https://docs.oracle.com/javase/8/docs/api/javax/security/auth/login/LoginContext.html#LoginContext-java.lang.String-javax.security.auth.callback.CallbackHandler-]
>  that does not accept a 
> [Configuration|https://docs.oracle.com/javase/8/docs/api/javax/security/auth/login/Configuration.html]
>  argument, so 
> [Configuration::getConfiguration|https://docs.oracle.com/javase/8/docs/api/javax/security/auth/login/Configuration.html#getConfiguration--]
>  is used under the hood by the {{LoginContext}} to fetch the JAAS 
> configuration to use for authentication.
> Unfortunately, other plugins (connectors, converters, even other REST 
> extensions, etc.) may invoke 
> [Configuration::setConfiguration|https://docs.oracle.com/javase/8/docs/api/javax/security/auth/login/Configuration.html#setConfiguration-javax.security.auth.login.Configuration-]
>  and install a completely different JAAS configuration onto the JVM. If the 
> user starts their JVM with a JAAS config set via the 
> {{-Djava.security.auth.login.config}} property, that JAAS config can then be 
> completely overwritten, and if the basic auth extension depends on the JAAS 
> config that's installed at startup (as opposed to at runtime by a plugin), it 
> will break.
> It's debatable whether this can or should be addressed with a code fix. One 
> possibility is to cache the current JVM's configuration as soon as the basic 
> auth extension is loaded by invoking {{Configuration::getConfiguration}} and 
> saving the resulting configuration for future {{LoginContext}} 
> instantiations. However, some users may actually rely on plugins being able 
> to install custom configurations at runtime for their basic auth extension, 
> in which case this change would be harmful.
> Regardless, it's worth noting this odd behavior here in the hopes that it can 
> save some time for others who encounter the same issue.
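
For illustration, the caching approach discussed above could look roughly like
the following sketch. This is only the idea under debate, not a committed fix;
the class and method names are made up:

    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.login.Configuration;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    public class CachedJaasConfiguration {
        // Capture the JVM-wide JAAS configuration when this class is loaded,
        // before any other plugin has a chance to call
        // Configuration.setConfiguration(...) and replace it.
        private static final Configuration INITIAL = Configuration.getConfiguration();

        public static LoginContext newLoginContext(CallbackHandler handler)
                throws LoginException {
            // Passing the cached Configuration explicitly pins the
            // "KafkaConnect" entry to the JAAS file the JVM was started with,
            // instead of whatever configuration happens to be installed at
            // call time.
            return new LoginContext("KafkaConnect", null, handler, INITIAL);
        }
    }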



--
This message was sent by Atlassian Jira
(v8.3.4#803005)