showuon commented on PR #12167:
URL: https://github.com/apache/kafka/pull/12167#issuecomment-1132408449
> So I'd like to keep the scope of this PR to unit tests. If you think that
makes sense, I'll go ahead and open up an issue for it to make sure we track
that outstanding item.
Yes,
jsancio merged PR #12160:
URL: https://github.com/apache/kafka/pull/12160
junrao commented on code in PR #12005:
URL: https://github.com/apache/kafka/pull/12005#discussion_r877600481
##
core/src/test/scala/unit/kafka/server/epoch/util/ReplicaFetcherMockBlockingSend.scala:
##
@@ -40,9 +42,15 @@ import scala.collection.Map
* setOffsetsForNextResponse
Jose Armando Garcia Sancio created KAFKA-13918:
--
Summary: Schedule or cancel NoOpRecord write on metadata version
change
Key: KAFKA-13918
URL: https://issues.apache.org/jira/browse/KAFKA-13918
[
https://issues.apache.org/jira/browse/KAFKA-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jose Armando Garcia Sancio reassigned KAFKA-1930:
-
Assignee: (was: Aditya Auradkar)
> Move server over to new m
vvcephei commented on code in PR #12186:
URL: https://github.com/apache/kafka/pull/12186#discussion_r877582344
##
streams/src/test/java/org/apache/kafka/streams/integration/OptimizedKTableIntegrationTest.java:
##
@@ -125,31 +131,37 @@ public void shouldApplyUpdatesToStandbyStore
vvcephei opened a new pull request, #12186:
URL: https://github.com/apache/kafka/pull/12186
This test has been flaky due to unexpected rebalances during the test.
This change fixes it by detecting an unexpected rebalance and retrying
the test logic (within a timeout).
### Committ
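A minimal sketch of the retry-within-a-timeout idea described above; the helper and its names are assumed for illustration and are not the actual test code:
```java
import java.time.Duration;
import java.time.Instant;

// Sketch: re-run the test body until it passes or a deadline expires,
// treating an assertion failure (e.g. state observed mid-rebalance) as retriable.
final class RetryUntilTimeout {
    static void retry(Duration timeout, Runnable testBody) throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (true) {
            try {
                testBody.run();          // hypothetical test logic, e.g. querying the standby store
                return;                  // passed
            } catch (AssertionError e) {
                if (Instant.now().isAfter(deadline)) {
                    throw e;             // give up once the timeout is exhausted
                }
                Thread.sleep(500L);      // brief backoff before retrying
            }
        }
    }
}
```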
rittikaadhikari commented on code in PR #12005:
URL: https://github.com/apache/kafka/pull/12005#discussion_r877577941
##
core/src/test/scala/unit/kafka/server/epoch/util/ReplicaFetcherMockBlockingSend.scala:
##
@@ -40,9 +42,15 @@ import scala.collection.Map
* setOffsetsForNex
junrao commented on code in PR #12005:
URL: https://github.com/apache/kafka/pull/12005#discussion_r877559859
##
core/src/main/scala/kafka/server/LocalLeaderEndPoint.scala:
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contrib
[
https://issues.apache.org/jira/browse/KAFKA-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17539801#comment-17539801
]
François Rosière commented on KAFKA-13913:
--
[~mjsax] , thanks for your comment.
hachikuji opened a new pull request, #12185:
URL: https://github.com/apache/kafka/pull/12185
The test cases we have in `RequestChannelTest` for `buildResponseSend`
construct the envelope request incorrectly. Basically they confuse the envelope
context and the reference to the wrapped envelo
cmccabe commented on PR #12182:
URL: https://github.com/apache/kafka/pull/12182#issuecomment-1132189652
Thanks for the PR, @mumrah .
1. I agree that tagged fields definitely count as new metadata.
`MetadataVersion.IBP_3_2_IV0` is currently marked with `didMetadataChange =
false`. If
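A simplified sketch of how a per-version `didMetadataChange` flag can be modeled; this is an illustration only, not the actual `MetadataVersion` source:
```java
// Sketch: each version records whether it introduced new metadata (new records,
// fields, or tagged fields), so an upgrade can decide whether a rewrite is needed.
enum VersionSketch {
    V1(true),    // introduced new metadata
    V2(false);   // wire-compatible, no metadata change

    private final boolean didMetadataChange;

    VersionSketch(boolean didMetadataChange) {
        this.didMetadataChange = didMetadataChange;
    }

    boolean didMetadataChange() {
        return didMetadataChange;
    }
}
```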
[
https://issues.apache.org/jira/browse/KAFKA-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17539764#comment-17539764
]
Ryan commented on KAFKA-13595:
--
Duplicates https://issues.apache.org/jira/browse/KAFKA-1023
vamossagar12 commented on PR #12121:
URL: https://github.com/apache/kafka/pull/12121#issuecomment-1132048815
> > From a public API point of view, Metrics is in a gray area. It is not
officially part of our public API however we have a few interfaces leaking it.
That being said, we should be
vamossagar12 commented on code in PR #12121:
URL: https://github.com/apache/kafka/pull/12121#discussion_r877394625
##
clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java:
##
@@ -563,10 +615,15 @@ public synchronized void removeReporter(MetricsReporter
reporter) {
[
https://issues.apache.org/jira/browse/KAFKA-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17539718#comment-17539718
]
Matthias J. Sax commented on KAFKA-13913:
-
There was some discussion about this
[
https://issues.apache.org/jira/browse/KAFKA-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Matthias J. Sax updated KAFKA-13913:
Labels: kip (was: )
> Provide builders for KafkaProducer/KafkaConsumer and KafkaStreams
>
[
https://issues.apache.org/jira/browse/KAFKA-13913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Matthias J. Sax updated KAFKA-13913:
Component/s: streams
> Provide builders for KafkaProducer/KafkaConsumer and KafkaStreams
>
divijvaidya opened a new pull request, #12184:
URL: https://github.com/apache/kafka/pull/12184
## Problem
Implementation of connection creation rate quotas in Kafka is dependent on
two configurations:
[quota.window.num](https://kafka.apache.org/documentation.html#brokerconfigs_quota.w
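A minimal sketch of broker properties in this area, assuming the standard `max.connection.creation.rate` limit alongside the quota window setting named above:
```java
import java.util.Properties;

public final class ConnectionQuotaConfigSketch {
    public static void main(String[] args) {
        // Sketch: settings that shape how connection creation rate quotas are sampled and enforced.
        Properties brokerProps = new Properties();
        brokerProps.put("quota.window.num", "11");             // number of samples retained per quota window
        brokerProps.put("max.connection.creation.rate", "20"); // assumed broker-wide cap on new connections per second
        System.out.println(brokerProps);
    }
}
```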
jsancio opened a new pull request, #12183:
URL: https://github.com/apache/kafka/pull/12183
### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgra
[
https://issues.apache.org/jira/browse/KAFKA-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Gustafson resolved KAFKA-13863.
-
Fix Version/s: 3.3.0
Resolution: Fixed
> Prevent null config value when create to
hachikuji merged PR #12109:
URL: https://github.com/apache/kafka/pull/12109
fvaleri commented on PR #12159:
URL: https://github.com/apache/kafka/pull/12159#issuecomment-1131936314
@divijvaidya I fixed the test you were referring to.
I also fixed the helper method `sendNoReceive`, which was directly using
`channel.mute()` instead of `selector.mute(channel.id())`.
mumrah commented on PR #12182:
URL: https://github.com/apache/kafka/pull/12182#issuecomment-1131934558
FeaturesImage has a `metadataVersion()` method which we can make accessible
through a supplier for the Image/Delta classes. For broker components like
ReplicaManager, we can add an argument
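A minimal sketch of the supplier approach described above; the types here are placeholders, not the real Image/Delta or ReplicaManager classes:
```java
import java.util.function.Supplier;

// Sketch: instead of capturing a fixed metadata version at construction time,
// a component holds a supplier and reads the current value whenever it needs it,
// so dynamic feature-flag changes are picked up.
final class VersionedComponentSketch {
    private final Supplier<Short> metadataVersionSupplier; // stand-in for a Supplier<MetadataVersion>

    VersionedComponentSketch(Supplier<Short> metadataVersionSupplier) {
        this.metadataVersionSupplier = metadataVersionSupplier;
    }

    void handleRequest() {
        short current = metadataVersionSupplier.get(); // re-evaluated on each call
        // ... branch on `current` to choose record formats, RPC versions, etc.
    }
}
```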
mumrah commented on code in PR #12182:
URL: https://github.com/apache/kafka/pull/12182#discussion_r877220561
##
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java:
##
@@ -1010,6 +1004,11 @@ private void maybeCompleteAuthorizerInitialLoad() {
}
jsancio commented on code in PR #12182:
URL: https://github.com/apache/kafka/pull/12182#discussion_r877200466
##
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java:
##
@@ -1010,6 +1004,11 @@ private void maybeCompleteAuthorizerInitialLoad() {
}
mumrah opened a new pull request, #12182:
URL: https://github.com/apache/kafka/pull/12182
Now that `metadata.version` has been integrated with the controller #12050,
we need to make use of the dynamic nature of the feature flag. This patch adds
a supplier that is passed down to ReplicationC
[
https://issues.apache.org/jira/browse/KAFKA-13807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Colin McCabe resolved KAFKA-13807.
--
Resolution: Fixed
> Ensure that we can set log.flush.interval.ms with IncrementalAlterConfigs
[
https://issues.apache.org/jira/browse/KAFKA-13807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Colin McCabe reassigned KAFKA-13807:
Assignee: Colin McCabe
> Ensure that we can set log.flush.interval.ms with IncrementalAlt
Moovlin commented on PR #12167:
URL: https://github.com/apache/kafka/pull/12167#issuecomment-1131755255
Thanks for the quick responses. To your first answer, I'm happy to do that
and will take a look at the TopicCommandTest for guidance.
To your second answer: integration tests for t
dajac opened a new pull request, #12181:
URL: https://github.com/apache/kafka/pull/12181
WIP
### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgr
viktorsomogyi opened a new pull request, #12180:
URL: https://github.com/apache/kafka/pull/12180
### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade
[
https://issues.apache.org/jira/browse/KAFKA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Viktor Somogyi-Vass updated KAFKA-13917:
Description:
Currently the heartbeat thread's lookupCoordinator() is called in a t
Viktor Somogyi-Vass created KAFKA-13917:
---
Summary: Avoid calling lookupCoordinator() in tight loop
Key: KAFKA-13917
URL: https://issues.apache.org/jira/browse/KAFKA-13917
Project: Kafka
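A minimal sketch of the kind of backoff the summary asks for; names and the interval are assumptions, not the consumer's actual heartbeat code:
```java
// Sketch: rate-limit coordinator lookups instead of retrying in a tight loop
// while the coordinator is unknown.
final class CoordinatorLookupSketch {
    private final long retryBackoffMs; // assumed backoff interval
    private long lastLookupMs = 0L;    // 0 means no lookup has happened yet

    CoordinatorLookupSketch(long retryBackoffMs) {
        this.retryBackoffMs = retryBackoffMs;
    }

    void maybeLookupCoordinator(long nowMs) {
        if (nowMs - lastLookupMs < retryBackoffMs) {
            return;                    // too soon, skip instead of spinning
        }
        lastLookupMs = nowMs;
        lookupCoordinator();           // issue the actual FindCoordinator request
    }

    private void lookupCoordinator() {
        // placeholder for the real lookup
    }
}
```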
[
https://issues.apache.org/jira/browse/KAFKA-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
David Jacot updated KAFKA-13916:
Description: KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-841%3A+Fenced+replicas+sho
David Jacot created KAFKA-13916:
---
Summary: Fenced replicas should not be allowed to join the ISR in
KRaft (KIP-841)
Key: KAFKA-13916
URL: https://issues.apache.org/jira/browse/KAFKA-13916
Project: Kafka
dajac commented on code in PR #12065:
URL: https://github.com/apache/kafka/pull/12065#discussion_r876706874
##
core/src/main/scala/kafka/admin/ConfigCommand.scala:
##
@@ -367,15 +366,12 @@ object ConfigCommand extends Logging {
if (invalidConfigs.nonEmpty)
th
[
https://issues.apache.org/jira/browse/KAFKA-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17539465#comment-17539465
]
lqjacklee commented on KAFKA-13888:
---
[~Niket Goel] for the field ‘LastCaughtUpTimestam
mimaison commented on PR #11780:
URL: https://github.com/apache/kafka/pull/11780#issuecomment-1131508676
Thanks for the updates @C0urante ! I've on PTO next week, I'll take a look
at the tests when I'm back.
mimaison commented on code in PR #11780:
URL: https://github.com/apache/kafka/pull/11780#discussion_r876878064
##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java:
##
@@ -1000,6 +1090,266 @@ WorkerMetricsGroup workerMetricsGroup() {
return work
mimaison commented on code in PR #11780:
URL: https://github.com/apache/kafka/pull/11780#discussion_r876877095
##
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java:
##
@@ -576,88 +672,42 @@ public boolean startTask(
executor.submit(workerT
[
https://issues.apache.org/jira/browse/KAFKA-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17539447#comment-17539447
]
ASF GitHub Bot commented on KAFKA-13882:
mimaison commented on code in PR #410:
[
https://issues.apache.org/jira/browse/KAFKA-13915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Peter James Pringle updated KAFKA-13915:
Description:
Add sanity validation on streams start up that *repartition* topics a
Peter James Pringle created KAFKA-13915:
---
Summary: Kafka streams should validate that the repartition topics
are not created with cleanup.policy compact
Key: KAFKA-13915
URL: https://issues.apache.org/jira/b
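A minimal sketch of the kind of startup check the ticket asks for, written here with the Admin client against a hypothetical repartition topic name:
```java
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public final class RepartitionPolicyCheckSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (Admin admin = Admin.create(props)) {
            // Hypothetical repartition topic name used only for illustration.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC,
                "my-app-KSTREAM-REPARTITION-0000000001");
            Map<ConfigResource, Config> configs = admin.describeConfigs(Set.of(topic)).all().get();
            String cleanupPolicy = configs.get(topic).get(TopicConfig.CLEANUP_POLICY_CONFIG).value();
            if (cleanupPolicy.contains(TopicConfig.CLEANUP_POLICY_COMPACT)) {
                throw new IllegalStateException(
                    "Repartition topic must not use cleanup.policy=compact: " + topic.name());
            }
        }
    }
}
```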
showuon commented on code in PR #12136:
URL: https://github.com/apache/kafka/pull/12136#discussion_r876740554
##
core/src/main/scala/kafka/server/KafkaServer.scala:
##
@@ -830,7 +830,13 @@ class KafkaServer(
private def checkpointBrokerMetadata(brokerMetadata: ZkMetaPropertie
showuon commented on PR #12136:
URL: https://github.com/apache/kafka/pull/12136#issuecomment-1131379581
@junrao, thanks for your review. I've addressed your comments. Also, I found
we should handle `IOException` when writing `meta.properties` during server
startup. Thanks.
showuon commented on code in PR #12136:
URL: https://github.com/apache/kafka/pull/12136#discussion_r876740835
##
core/src/main/scala/kafka/log/LogManager.scala:
##
@@ -376,8 +381,10 @@ class LogManager(logDirs: Seq[File],
s"($currentNumLoaded/${logsToLoad.length
acsaki opened a new pull request, #12179:
URL: https://github.com/apache/kafka/pull/12179
Clients remain connected and able to produce or consume despite an expired
OAUTHBEARER token.
The problem can be reproduced using the
https://github.com/acsaki/kafka-sasl-reauth project
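A minimal sketch of the standard broker-side `connections.max.reauth.ms` setting, which bounds SASL session lifetime and forces re-authentication before a token expires (shown only as related context, not as this PR's fix):
```java
import java.util.Properties;

public final class SaslReauthConfigSketch {
    public static void main(String[] args) {
        // Sketch: with a positive connections.max.reauth.ms, the broker terminates SASL
        // sessions that do not re-authenticate within the configured interval.
        Properties brokerProps = new Properties();
        brokerProps.put("connections.max.reauth.ms", "3600000"); // one-hour upper bound on a SASL session
        System.out.println(brokerProps);
    }
}
```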