[GitHub] [kafka] showuon commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
showuon commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647166328 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/RecordContext.java ## @@ -17,39 +17,94 @@ package org.apache.kafka.streams.processor; import org.apache.kafka.common.header.Headers; +import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier; /** * The context associated with the current record being processed by * an {@link Processor} */ public interface RecordContext { + /** - * @return The offset of the original record received from Kafka; - * could be -1 if it is not available + * Returns the topic name of the current input record; could be {@code null} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated topic. + * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid topic name, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the topic name */ -long offset(); +String topic(); /** - * @return The timestamp extracted from the record received from Kafka; - * could be -1 if it is not available + * Returns the partition id of the current input record; could be {@code -1} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated partition id. 
+ * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid partition id, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the partition id */ -long timestamp(); +int partition(); /** - * @return The topic the record was received on; - * could be null if it is not available + * Returns the offset of the current input record; could be {@code -1} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated offset. + * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid offset, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the offset */ -String topic(); +long offset(); /** - * @return The partition the record was received on; - * could be -1 if it is not available + * Returns the current timestamp. Review comment: Should we mention the timestamp could be -1 if it's not available as [previous doc](https://github.com/apache/kafka/pull/10810/files#diff-e49dc368634ce1745441b926e5327a51f5e168d6deffc8b7acc5c3483a1431f5L33): > @return The timestamp extracted from the record received from Kafka; could be -1 if it is not available ## File path: streams/src/main/java/org/apache/kafka/streams/processor/Punctuator.java ## @@ -28,6 +29,16 @@ /** * Perform the scheduled periodic operation. 
* + * If this method accesses {@link ProcessorContext} or + * {@link org.apache.kafka.streams.processor.api.ProcessorContext}, record metadata like topic, + * partition, and offset or {@link org.apache.kafka.streams.processor.api.RecordMetadata} won't + * be available. + * + * Furthermore, for any record that is sent downstream via {@link ProcessorContext#forward(Object, Object)} + * or {@link org.apache.kafka.streams.processor.api.ProcessorContext#forward(Record)}, there + * won't be any record metadata. If {@link ProcessorContext#forward(Object, Object)} is use, Review comment: typo: If {@link ProcessorContext#forward(Object, Object)} is **use** -> **used** ## File path: streams/src/main/java/org/apache/kafka/streams/processor/RecordContext.java ## @@ -17,39 +17,94 @@ package org.apache.kafka.streams.processor; impor
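The availability caveats discussed in this review can be condensed into a small model. The sketch below is hypothetical (it is not the real RecordContext interface; the class and method names are invented) and simply encodes the sentinels the new JavaDocs document: a null topic and -1 partition/offset when no input record is associated, e.g. inside a punctuation callback.

```java
import java.util.Optional;
import java.util.OptionalLong;

// Hypothetical model of the contract described in the JavaDocs above
// (NOT the real RecordContext interface): topic() may be null and
// partition()/offset() may be -1, e.g. for records forwarded from a
// punctuation callback.
public class RecordContextModel {
    private final String topic;   // null when no associated input record
    private final int partition;  // -1 when not available
    private final long offset;    // -1 when not available

    public RecordContextModel(final String topic, final int partition, final long offset) {
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
    }

    // Callers can avoid sentinel checks by going through Optional views.
    public Optional<String> topicIfAvailable() {
        return Optional.ofNullable(topic);
    }

    public OptionalLong offsetIfAvailable() {
        return offset < 0 ? OptionalLong.empty() : OptionalLong.of(offset);
    }

    public static void main(final String[] args) {
        // A record forwarded by a punctuator carries no input-record metadata.
        final RecordContextModel fromPunctuator = new RecordContextModel(null, -1, -1L);
        System.out.println(fromPunctuator.topicIfAvailable().isPresent()); // false
    }
}
```

This Optional-based shape mirrors what the newer Processor API exposes via `Optional<RecordMetadata>` rather than sentinel values.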
[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes
[ https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359151#comment-17359151 ] Abhijit Mane commented on KAFKA-12847: -- Thanks [~chia7712] for the prompt responses. UID remains 0 because it is a built-in "read-only" bash env. var. From link you posted above, I see this: - --->> {{UID }} The numeric real user id of the current user. *This variable is readonly*. <<--- ARG UID="1000" RUN useradd -u $UID ducker ==> $UID resolves to uid of the logged in user and not 1000 which is causing the issue for me. I am able to consistently recreate the issue. Not clear what is missing from my side. Ex: On a rhel 8.3 vm, I tried this: - 1.) git clone g...@github.com:apache/kafka.git && cd kafka 2.) ./gradlew clean systemTestLibs 3.) bash tests/docker/run_tests.sh // Fails with the UID error. It didn't matter which arch or Linux flavor I used as long as it was bash shell. I guess the CI is a travis job where I see the sysTests being launched in the same way as above. If you get a chance, maybe you could try the 3 steps above if you also run into the same issue (should take ~5 min). Thanks. > Dockerfile needed for kafka system tests needs changes > -- > > Key: KAFKA-12847 > URL: https://issues.apache.org/jira/browse/KAFKA-12847 > Project: Kafka > Issue Type: Bug > Components: system tests >Affects Versions: 2.8.0, 2.7.1 > Environment: Issue tested in environments below but is independent of > h/w arch. or Linux flavor: - > 1.) RHEL-8.3 on x86_64 > 2.) RHEL-8.3 on IBM Power (ppc64le) > 3.) apache/kafka branch tested: trunk (master) >Reporter: Abhijit Mane >Assignee: Abhijit Mane >Priority: Major > Labels: easyfix > Attachments: Dockerfile.upstream, 截圖 2021-06-05 上午1.53.17.png > > > Hello, > I tried apache/kafka system tests as per documentation: - > ([https://github.com/apache/kafka/tree/trunk/tests#readme|https://github.com/apache/kafka/tree/trunk/tests#readme_]) > = > PROBLEM > ~~ > 1.) 
As root user, clone kafka github repo and start "kafka system tests" > # git clone [https://github.com/apache/kafka.git] > # cd kafka > # ./gradlew clean systemTestLibs > # bash tests/docker/run_tests.sh > 2.) Dockerfile issue - > [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile] > This file has an *UID* entry as shown below: - > --- > ARG *UID*="1000" > RUN useradd -u $*UID* ducker > // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not > unique, root user id is 0 > --- > I ran everything as root which means the built-in bash environment variable > 'UID' always > resolves to 0 and can't be changed. Hence, the docker build fails. The issue > should be seen even if run as non-root. > 3.) Next, as root, as per README, I ran: - > server:/kafka> *bash tests/docker/run_tests.sh* > The ducker tool builds the container images & switches to user '*ducker*' > inside the container > & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the > container. > Ref: > [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak] > Ex: docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* > This fails as the 'ducker' user has *no write permissions* to create files > under 'kafka' root dir. Hence, it needs to be made writeable. > // *chmod -R a+w kafka* > – needed as container is run as 'ducker' and needs write access since kafka > root volume from host is mapped to container as "/opt/kafka-dev" where the > 'ducker' user writes logs > = > = > *FIXES needed* > ~ > 1.) Dockerfile - > [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile] > Change 'UID' to '*UID_DUCKER*'. > This won't conflict with built in bash env. var UID and the docker image > build should succeed. > --- > ARG *UID_DUCKER*="1000" > RUN useradd -u $*UID_DUCKER* ducker > // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID > --- > 2.) 
README needs an update where we must ensure the kafka root dir from where > the tests > are launched is writeable to allow the 'ducker' user to create results/logs. > # chmod -R a+w kafka > With this, I was able to get the docker images built and system tests started > successfully. > = > Also, I wonder whether or not upstream Dockerfile & System tests are part of > CI/CD and get tested for every PR. If so, this issue should have been caught. > > *Question to kafka
[jira] [Commented] (KAFKA-12893) MM2 fails to replicate if starting two+ nodes same time
[ https://issues.apache.org/jira/browse/KAFKA-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359160#comment-17359160 ] Josep Prat commented on KAFKA-12893: Right link to the KIP for documentation purposes: https://cwiki.apache.org/confluence/x/KJDQBQ > MM2 fails to replicate if starting two+ nodes same time > --- > > Key: KAFKA-12893 > URL: https://issues.apache.org/jira/browse/KAFKA-12893 > Project: Kafka > Issue Type: Bug > Components: mirrormaker >Affects Versions: 2.8.0 >Reporter: Tommi Vainikainen >Priority: Major > > I've observed that when starting more than one MM2 node in parallel, > MM2 fails to start replication, i.e. the replication flow seems to be stuck without > action. I used exactly the same mm2.properties file to start only one node at a time, > and the replication flow then proceeded smoothly. > In my setup dc1 has topic "mytopic1" and there is a producer with approx 1 > msg/sec, and I'm trying to replicate this to dc2. What I observed is that > dc1.mytopic1 is created when initially launching two parallel MM2 instances, > but no messages get written into the topic as I would expect. If I kill the MM2 > instances and start only one MM2 node, then MM2 starts replicating the > messages in mytopic1. 
> My mm2.properties: > clusters=dc2, dc1 > dc1->dc2.emit.heartbeats.enabled=true > dc1->dc2.enabled=true > dc1->dc2.sync.group.offsets.enabled=false > dc1->dc2.sync.group.offsets.interval.seconds=45 > dc1->dc2.topics=mytopic1 > dc1->dc2.topics.exclude= > dc1.bootstrap.servers=tvainika-dc1-dev-sandbox.aivencloud.com:12693 > dc1.security.protocol=SSL > dc1.ssl.keystore.type=PKCS12 > dc1.ssl.keystore.location=dc1/client.keystore.p12 > dc1.ssl.keystore.password=secret > dc1.ssl.key.password=secret > dc1.ssl.truststore.location=dc1/client.truststore.jks > dc1.ssl.truststore.password=secret > dc2.bootstrap.servers=tvainika-dc2-dev-sandbox.aivencloud.com:12693 > dc2.security.protocol=SSL > dc2.ssl.keystore.type=PKCS12 > dc2.ssl.keystore.location=dc2/client.keystore.p12 > dc2.ssl.keystore.password=secret > dc2.ssl.key.password=secret > dc2.ssl.truststore.location=dc2/client.truststore.jks > dc2.ssl.truststore.password=secret > tasks.max=3 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (KAFKA-12893) MM2 fails to replicate if starting two+ nodes same time
[ https://issues.apache.org/jira/browse/KAFKA-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359160#comment-17359160 ] Josep Prat edited comment on KAFKA-12893 at 6/8/21, 8:51 AM: - Right link to the KIP for documentation purposes: https://cwiki.apache.org/confluence/x/4g5RCg was (Author: josep.prat): Right link to the KIP for documentation purposes: https://cwiki.apache.org/confluence/x/KJDQBQ
[jira] [Commented] (KAFKA-12905) Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest
[ https://issues.apache.org/jira/browse/KAFKA-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359162#comment-17359162 ] Bruno Cadonna commented on KAFKA-12905: --- [~a493172422] Could you convert this ticket to a sub-task of KAFKA-7438? I think it is better to track it there. You can do this on the top under "More". Let me know if you need any help. > Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest > - > > Key: KAFKA-12905 > URL: https://issues.apache.org/jira/browse/KAFKA-12905 > Project: Kafka > Issue Type: Improvement >Reporter: YI-CHEN WANG >Assignee: YI-CHEN WANG >Priority: Major > > For > [Kafka-7438|https://issues.apache.org/jira/browse/KAFKA-7438?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20Reopened)%20AND%20assignee%20in%20(EMPTY)%20AND%20text%20~%20%22mockito%22] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes
[ https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359163#comment-17359163 ] Chia-Ping Tsai commented on KAFKA-12847: > If you get a chance, maybe you could try the 3 steps above if you also run > into the same issue (should take ~5 min). I do run the scripts (system tests) every day :) > UID The numeric real user id of the current user. This variable is readonly. The link (https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak#L231) I attached is the way we build the image with the UID of the logged-in user. For example, if the UID of the logged-in user is 3, the user created by the container (https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile#L95) will have the same UID (3) and so it can modify the mounted folder (from the host). > UID remains 0 because it is a built-in "read-only" bash env. > $UID resolves to uid of the logged in user and not 1000 which is causing the > issue for me. The variable is read-only, but I fail to see its relationship to this issue. We use the "read-only" env variable to set the UID of the user inside the container. As I mentioned above, the main purpose is to align the UID inside/outside the container.
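The clash under discussion can be observed from bash alone. A minimal sketch, assuming bash (the `UID_DUCKER` rename in the comments is the fix proposed in the report above):

```shell
#!/usr/bin/env bash
# bash pre-defines UID as a read-only variable holding the caller's real
# user id, so `docker build --build-arg UID=$UID` passes 0 when run as
# root, and `useradd -u 0 ducker` then fails because uid 0 already exists.
echo "built-in UID: $UID"
readonly -p | grep -w UID   # UID appears among the read-only variables

# The proposed fix: rename the build argument so it no longer shadows
# the built-in, e.g. in the Dockerfile:
#   ARG UID_DUCKER="1000"
#   RUN useradd -u "$UID_DUCKER" ducker
```

Note the demonstration only applies to bash; shells that do not pre-define a read-only UID would not hit the clash the same way.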
[GitHub] [kafka] cadonna commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
cadonna commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647252571 ## File path: core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala ## @@ -314,7 +314,6 @@ class GroupMetadataManager(brokerId: Int, case None => responseCallback(Errors.NOT_COORDINATOR) -None Review comment: Just for my understanding, since my Scala is a bit rusty: `None` is useless here because the return type of the method is `Unit`, right? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] cadonna commented on a change in pull request #10835: KAFKA-12905: Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest
cadonna commented on a change in pull request #10835: URL: https://github.com/apache/kafka/pull/10835#discussion_r647259690 ## File path: streams/src/test/java/org/apache/kafka/streams/state/internals/metrics/NamedCacheMetricsTest.java ## @@ -49,37 +41,28 @@ private static final String HIT_RATIO_MIN_DESCRIPTION = "The minimum cache hit ratio"; private static final String HIT_RATIO_MAX_DESCRIPTION = "The maximum cache hit ratio"; -private final StreamsMetricsImpl streamsMetrics = createMock(StreamsMetricsImpl.class); -private final Sensor expectedSensor = mock(Sensor.class); +private Sensor expectedSensor = Mockito.mock(Sensor.class); private final Map tagMap = mkMap(mkEntry("key", "value")); @Test public void shouldGetHitRatioSensorWithBuiltInMetricsVersionCurrent() { final String hitRatio = "hit-ratio"; -mockStatic(StreamsMetricsImpl.class); -expect(streamsMetrics.version()).andStubReturn(Version.LATEST); -expect(streamsMetrics.cacheLevelSensor(THREAD_ID, TASK_ID, STORE_NAME, hitRatio, RecordingLevel.DEBUG)) -.andReturn(expectedSensor); -expect(streamsMetrics.cacheLevelTagMap(THREAD_ID, TASK_ID, STORE_NAME)).andReturn(tagMap); +final StreamsMetricsImpl streamsMetrics = Mockito.mock(StreamsMetricsImpl.class); +Mockito.when(streamsMetrics.cacheLevelSensor(THREAD_ID, TASK_ID, STORE_NAME, hitRatio, RecordingLevel.DEBUG)).thenReturn(expectedSensor); +Mockito.when(streamsMetrics.cacheLevelTagMap(THREAD_ID, TASK_ID, STORE_NAME)).thenReturn(tagMap); StreamsMetricsImpl.addAvgAndMinAndMaxToSensor( -expectedSensor, -StreamsMetricsImpl.CACHE_LEVEL_GROUP, -tagMap, -hitRatio, -HIT_RATIO_AVG_DESCRIPTION, -HIT_RATIO_MIN_DESCRIPTION, -HIT_RATIO_MAX_DESCRIPTION); -replay(streamsMetrics); -replay(StreamsMetricsImpl.class); +expectedSensor, +StreamsMetricsImpl.CACHE_LEVEL_GROUP, +tagMap, +hitRatio, +HIT_RATIO_AVG_DESCRIPTION, +HIT_RATIO_MIN_DESCRIPTION, +HIT_RATIO_MAX_DESCRIPTION); Review comment: nit ```suggestion expectedSensor, StreamsMetricsImpl.CACHE_LEVEL_GROUP, tagMap, 
hitRatio, HIT_RATIO_AVG_DESCRIPTION, HIT_RATIO_MIN_DESCRIPTION, HIT_RATIO_MAX_DESCRIPTION ); ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (KAFKA-12890) Consumer group stuck in `CompletingRebalance`
[ https://issues.apache.org/jira/browse/KAFKA-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Jacot updated KAFKA-12890: Fix Version/s: 3.0.0 > Consumer group stuck in `CompletingRebalance` > - > > Key: KAFKA-12890 > URL: https://issues.apache.org/jira/browse/KAFKA-12890 > Project: Kafka > Issue Type: Bug >Affects Versions: 2.7.0, 2.6.1, 2.8.0, 2.7.1, 2.6.2 >Reporter: David Jacot >Assignee: David Jacot >Priority: Blocker > Fix For: 3.0.0 > > > We have seen recently multiple consumer groups stuck in > `CompletingRebalance`. It appears that those group never receives the > assignment from the leader of the group and remains stuck in this state > forever. > When a group transitions to the `CompletingRebalance` state, the group > coordinator sets up `DelayedHeartbeat` for each member of the group. It does > so to ensure that the member sends a sync request within the session timeout. > If it does not, the group coordinator rebalances the group. Note that here, > `DelayedHeartbeat` is used here for this purpose. `DelayedHeartbeat` are also > completed when member heartbeats. > The issue is that https://github.com/apache/kafka/pull/8834 has changed the > heartbeat logic to allow members to heartbeat while the group is in the > `CompletingRebalance` state. This was not allowed before. Now, if a member > starts to heartbeat while the group is in the `CompletingRebalance`, the > heartbeat request will basically complete the pending `DelayedHeartbeat` that > was setup previously for catching not receiving the sync request. Therefore, > if the sync request never comes, the group coordinator does not notice > anymore. > We need to bring that behavior back somehow. -- This message was sent by Atlassian Jira (v8.3.4#803005)
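The failure mode described in this ticket can be sketched as a toy state machine. This is an illustrative model only, not the coordinator's actual code: an "armed" flag stands in for the DelayedHeartbeat that watches for the SyncGroup request, and a boolean parameter stands in for the behavior change introduced by PR #8834.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the bug described above (NOT broker code). The map entry
// stands in for the DelayedHeartbeat the coordinator arms per member to
// catch a missing SyncGroup; after PR #8834 an ordinary heartbeat also
// completes that operation, so the missing sync is never noticed.
public class SyncTimeoutModel {
    private final Map<String, Boolean> pendingSyncCheck = new HashMap<>();

    public void onCompletingRebalance(final String memberId) {
        pendingSyncCheck.put(memberId, true); // arm the "expect SyncGroup" check
    }

    public void onHeartbeat(final String memberId, final boolean heartbeatCompletesCheck) {
        if (heartbeatCompletesCheck) {        // the post-#8834 regression path
            pendingSyncCheck.put(memberId, false);
        }
    }

    public void onSyncGroup(final String memberId) {
        pendingSyncCheck.put(memberId, false); // sync arrived in time
    }

    // On session-timeout expiry the coordinator rebalances only if the
    // check is still armed.
    public boolean wouldDetectMissingSync(final String memberId) {
        return pendingSyncCheck.getOrDefault(memberId, false);
    }

    public static void main(final String[] args) {
        final SyncTimeoutModel coordinator = new SyncTimeoutModel();
        coordinator.onCompletingRebalance("member-1");
        coordinator.onHeartbeat("member-1", true); // heartbeat, but never a SyncGroup
        // The armed check was completed by the heartbeat, so the stuck
        // member goes undetected:
        System.out.println(coordinator.wouldDetectMissingSync("member-1")); // false
    }
}
```

With the pre-#8834 behavior (heartbeats not completing the check, i.e. passing `false`), the coordinator would still notice the missing SyncGroup and rebalance the group.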
[jira] [Updated] (KAFKA-12896) Group rebalance loop caused by repeated group leader JoinGroups
[ https://issues.apache.org/jira/browse/KAFKA-12896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Jacot updated KAFKA-12896: Fix Version/s: 3.0.0 > Group rebalance loop caused by repeated group leader JoinGroups > --- > > Key: KAFKA-12896 > URL: https://issues.apache.org/jira/browse/KAFKA-12896 > Project: Kafka > Issue Type: Bug > Components: clients >Affects Versions: 2.6.0 >Reporter: Lucas Bradstreet >Assignee: David Jacot >Priority: Major > Fix For: 3.0.0 > > > We encountered a strange case of a rebalance loop with the > "cooperative-sticky" assignor. The logs show the following for several hours: > > {{Apr 7, 2021 @ 03:58:36.040 [GroupCoordinator 7]: Stabilized group mygroup > generation 19830137 (__consumer_offsets-7)}} > {{Apr 7, 2021 @ 03:58:35.992 [GroupCoordinator 7]: Preparing to rebalance > group mygroup in state PreparingRebalance with old generation 19830136 > (__consumer_offsets-7) (reason: Updating metadata for member > mygroup-1-7ad27e07-3784-4588-97e1-d796a74d4ecc during CompletingRebalance)}} > {{Apr 7, 2021 @ 03:58:35.988 [GroupCoordinator 7]: Stabilized group mygroup > generation 19830136 (__consumer_offsets-7)}} > {{Apr 7, 2021 @ 03:58:35.972 [GroupCoordinator 7]: Preparing to rebalance > group mygroup in state PreparingRebalance with old generation 19830135 > (__consumer_offsets-7) (reason: Updating metadata for member mygroup during > CompletingRebalance)}} > {{Apr 7, 2021 @ 03:58:35.965 [GroupCoordinator 7]: Stabilized group mygroup > generation 19830135 (__consumer_offsets-7)}} > {{Apr 7, 2021 @ 03:58:35.953 [GroupCoordinator 7]: Preparing to rebalance > group mygroup in state PreparingRebalance with old generation 19830134 > (__consumer_offsets-7) (reason: Updating metadata for member > mygroup-7ad27e07-3784-4588-97e1-d796a74d4ecc during CompletingRebalance)}} > {{Apr 7, 2021 @ 03:58:35.941 [GroupCoordinator 7]: Stabilized group mygroup > generation 19830134 (__consumer_offsets-7)}} > {{Apr 7, 2021 @ 
03:58:35.926 [GroupCoordinator 7]: Preparing to rebalance > group mygroup in state PreparingRebalance with old generation 19830133 > (__consumer_offsets-7) (reason: Updating metadata for member mygroup during > CompletingRebalance)}} > Every single time, it was the same member that triggered the JoinGroup and it > was always the leader of the group.{{}} > The leader has the privilege of being able to trigger a rebalance by sending > `JoinGroup` even if its subscription metadata has not changed. But why would > it do so? > It is possible that this is due to the same issue or a similar bug to > https://issues.apache.org/jira/browse/KAFKA-12890. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (KAFKA-12896) Group rebalance loop caused by repeated group leader JoinGroups
[ https://issues.apache.org/jira/browse/KAFKA-12896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Jacot updated KAFKA-12896: Priority: Blocker (was: Major)
[jira] [Created] (KAFKA-12913) Make Scala case classes final
Matthew de Detrich created KAFKA-12913: -- Summary: Make Scala case classes final Key: KAFKA-12913 URL: https://issues.apache.org/jira/browse/KAFKA-12913 Project: Kafka Issue Type: Improvement Components: core Reporter: Matthew de Detrich Assignee: Matthew de Detrich It's considered best practice to make case classes final, since Scala code that uses case classes relies on equals/hashCode/unapply to function correctly (which breaks if users can override this behaviour)
[GitHub] [kafka] cadonna commented on a change in pull request #10802: KAFKA-6718 / Part1: Update SubscriptionInfoData with clientTags
cadonna commented on a change in pull request #10802: URL: https://github.com/apache/kafka/pull/10802#discussion_r647295043 ## File path: streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfoTest.java ## @@ -407,32 +409,56 @@ public void shouldNotErrorAccessingFutureVars() { @Test public void shouldEncodeAndDecodeVersion9() { final SubscriptionInfo info = -new SubscriptionInfo(9, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE); +new SubscriptionInfo(9, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE, Collections.emptyMap()); assertThat(info, is(SubscriptionInfo.decode(info.encode(; } @Test public void shouldEncodeAndDecodeVersion10() { final SubscriptionInfo info = -new SubscriptionInfo(10, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE); +new SubscriptionInfo(10, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE, Collections.emptyMap()); assertThat(info, is(SubscriptionInfo.decode(info.encode(; } @Test public void shouldEncodeAndDecodeVersion10WithNamedTopologies() { final SubscriptionInfo info = -new SubscriptionInfo(10, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", NAMED_TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE); +new SubscriptionInfo(10, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", NAMED_TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE, Collections.emptyMap()); assertThat(info, is(SubscriptionInfo.decode(info.encode(; } @Test public void shouldThrowIfAttemptingToUseNamedTopologiesWithOlderVersion() { assertThrows( TaskAssignmentException.class, -() -> new SubscriptionInfo(MIN_NAMED_TOPOLOGY_VERSION - 1, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", NAMED_TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE) +() -> new 
SubscriptionInfo(MIN_NAMED_TOPOLOGY_VERSION - 1, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", NAMED_TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE, Collections.emptyMap()) ); } +@Test +public void shouldEncodeAndDecodeVersion11() { +final SubscriptionInfo info = +new SubscriptionInfo(11, LATEST_SUPPORTED_VERSION, UUID_1, "localhost:80", TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, IGNORED_ERROR_CODE, mkMap(mkEntry("t1", "v1"))); Review comment: Could you use a map with more than just one entry? If you use the same map in multiple tests, you should put it into a class field. The same applies to the tests below. ## File path: streams/src/main/resources/common/message/SubscriptionInfoData.json ## @@ -135,6 +140,22 @@ "type": "int64" } ] +}, +{ + "name": "ClientTag", + "versions": "1+", Review comment: I think this should be 11+. While technically it probably does not make any difference, it better documents when the struct was introduced. ## File path: streams/src/test/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfoTest.java ## @@ -96,8 +98,8 @@ public void shouldThrowForUnknownVersion1() { "localhost:80", TASK_OFFSET_SUMS, IGNORED_UNIQUE_FIELD, -IGNORED_ERROR_CODE -)); +IGNORED_ERROR_CODE, +Collections.emptyMap())); Review comment: Could you use a static final variable named `IGNORED_CLIENT_TAGS` to better document the code as was done for some other fields? Here and below. 
## File path: streams/src/main/java/org/apache/kafka/streams/processor/internals/assignment/SubscriptionInfo.java ## @@ -125,10 +130,33 @@ private SubscriptionInfo(final SubscriptionInfoData subscriptionInfoData) { this.data = subscriptionInfoData; } +public Map<String, String> clientTags() { +return data.clientTags() + .stream() + .collect( + Collectors.toMap( + clientTag -> new String(clientTag.key(), StandardCharsets.UTF_8), + clientTag -> new String(clientTag.value(), StandardCharsets.UTF_8) + ) + ); Review comment: nit:
```suggestion
return data.clientTags().stream()
    .collect(
        Collectors.toMap(
            clientTag -> new String(clientTag.key(), StandardCharsets.UTF_8),
            clientTag -> new String(clientTag.value(), StandardCharsets.UTF_8)
        )
    );
```
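The `clientTags()` accessor under review just maps wire-level byte-array tags to a `String -> String` map via UTF-8 decoding. A standalone Scala sketch of the same logic, using a hypothetical `ClientTag` stand-in for the generated message class (not Kafka code):

```scala
import java.nio.charset.StandardCharsets.UTF_8

// Hypothetical stand-in for the generated SubscriptionInfoData.ClientTag.
final case class ClientTag(key: Array[Byte], value: Array[Byte])

// Decode the byte-array tags into a String -> String map, mirroring the
// Collectors.toMap call being reviewed above.
def decodeClientTags(tags: Seq[ClientTag]): Map[String, String] =
  tags.map(t => new String(t.key, UTF_8) -> new String(t.value, UTF_8)).toMap

val decoded = decodeClientTags(Seq(
  ClientTag("zone".getBytes(UTF_8), "eu-west-1a".getBytes(UTF_8)),
  ClientTag("cluster".getBytes(UTF_8), "k8s-1".getBytes(UTF_8))
))
println(decoded)
```

As with `Collectors.toMap`, duplicate keys would be a problem in the Java version (it throws), while `toMap` on a Scala `Seq` silently keeps the last entry — one reason the encoded tag keys must be unique.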
[GitHub] [kafka] lkokhreidze commented on pull request #10802: KAFKA-6718 / Part1: Update SubscriptionInfoData with clientTags
lkokhreidze commented on pull request #10802: URL: https://github.com/apache/kafka/pull/10802#issuecomment-856654402 Hi @cadonna, Thanks for the feedback. I think that's a good call. I didn't know about that consequence of bumping the version. I will open a PR around the assignor changes shortly, and re-brand this PR as "Part 2" -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] tombentley commented on pull request #9878: KAFKA-6987: Add KafkaFuture.toCompletionStage()
tombentley commented on pull request #9878: URL: https://github.com/apache/kafka/pull/9878#issuecomment-856684447 Test failure is unrelated. @dajac @cmccabe @mimaison please could one of you review?
[GitHub] [kafka] mdedetrich opened a new pull request #10839: KAFKA-12913: Make case class's final
mdedetrich opened a new pull request #10839: URL: https://github.com/apache/kafka/pull/10839 This PR makes all of Kafka's Scala `case class`es final to ensure the correctness of the Scala code that uses them. In Scala it is best practice to make a `case class` final, since a `case class` automatically generates critical methods such as `hashCode`/`equals`/`unapply`, which can break code if users override them in subclasses. Please see the [KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-754%3A+Make+Scala+case+class%27s+final) for more info. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
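The hazard the PR description refers to can be shown in a few lines. A minimal standalone Scala sketch (hypothetical classes, not Kafka code): a subclass of a non-final case class silently reuses the generated `equals`, so objects that differ in the added state still compare equal.

```scala
// Hypothetical types, for illustration only -- not Kafka code.
case class Endpoint(host: String, port: Int)

// Legal while Endpoint is not final; adds state that Endpoint's
// generated equals/hashCode know nothing about.
class SecureEndpoint(host: String, port: Int, val useTls: Boolean)
  extends Endpoint(host, port)

val plain  = Endpoint("localhost", 80)
val secure = new SecureEndpoint("localhost", 80, useTls = true)

// The inherited equals compares only (host, port), so two observably
// different objects are "equal" -- the class of bug KIP-754 rules out
// by making every case class final.
println(plain == secure) // true
```

Marking `Endpoint` final turns the subclass definition into a compile error, which is exactly the protection the PR adds.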
[GitHub] [kafka] mdedetrich commented on a change in pull request #10839: KAFKA-12913: Make case class's final
mdedetrich commented on a change in pull request #10839: URL: https://github.com/apache/kafka/pull/10839#discussion_r647360753 ## File path: core/src/test/scala/unit/kafka/security/authorizer/AclAuthorizerWithZkSaslTest.scala ## @@ -178,9 +177,3 @@ class TestableDigestLoginModule extends DigestLoginModule { } } } - -class TestableJaasSection(contextName: String, modules: Seq[JaasModule]) extends JaasSection(contextName, modules) { Review comment: This was the only place in the entirety of Kafka where we subclass a `case class`. Since it is in tests and its only use was prettier `toString` output, I have removed it as superficial.
[GitHub] [kafka] mdedetrich commented on a change in pull request #10839: KAFKA-12913: Make case class's final
mdedetrich commented on a change in pull request #10839: URL: https://github.com/apache/kafka/pull/10839#discussion_r647361367 ## File path: core/src/main/scala/kafka/server/ZkAdminManager.scala ## @@ -1143,3 +1141,5 @@ class ZkAdminManager(val config: KafkaConfig, retval } } + +private[server] final case class RequestStatus(user: String, mechanism: Option[ScramMechanism], legalRequest: Boolean, iterations: Int) Review comment: Note that I have renamed `requestStatus` to `RequestStatus` (class names should start with a capital letter) and made it private, since `RequestStatus` is only ever used internally.
[GitHub] [kafka] mdedetrich commented on a change in pull request #10839: KAFKA-12913: Make case class's final
mdedetrich commented on a change in pull request #10839: URL: https://github.com/apache/kafka/pull/10839#discussion_r647362215 ## File path: core/src/main/scala/kafka/server/MetadataCache.scala ## @@ -462,10 +462,10 @@ class ZkMetadataCache(brokerId: Int) extends MetadataCache with Logging { } } - case class MetadataSnapshot(partitionStates: mutable.AnyRefMap[String, mutable.LongMap[UpdateMetadataPartitionState]], - topicIds: Map[String, Uuid], - controllerId: Option[Int], - aliveBrokers: mutable.LongMap[Broker], - aliveNodes: mutable.LongMap[collection.Map[ListenerName, Node]]) - } + +private[server] final case class MetadataSnapshot(partitionStates: mutable.AnyRefMap[String, mutable.LongMap[UpdateMetadataPartitionState]], Review comment: Note that I have made this `case class` private, since it is only ever used internally.
[GitHub] [kafka] mdedetrich commented on a change in pull request #10839: KAFKA-12913: Make case class's final
mdedetrich commented on a change in pull request #10839: URL: https://github.com/apache/kafka/pull/10839#discussion_r647362843 ## File path: core/src/test/scala/integration/kafka/server/MultipleListenersWithSameSecurityProtocolBaseTest.scala ## @@ -180,7 +180,7 @@ abstract class MultipleListenersWithSameSecurityProtocolBaseTest extends ZooKeep props.put(s"${prefix}${KafkaConfig.SaslJaasConfigProp}", jaasConfig) } - case class ClientMetadata(val listenerName: ListenerName, val saslMechanism: String, topic: String) { Review comment: `val` is redundant here; case class parameters are public `val`s by default.
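A quick standalone sketch (hypothetical class with simplified field types, not Kafka code) of why the explicit `val` keyword adds nothing: case class constructor parameters are already public `val`s.

```scala
// Hypothetical type for illustration; no `val` keyword needed --
// every case class parameter is a public val automatically.
case class ClientMetadata(listenerName: String, saslMechanism: String, topic: String)

val meta = ClientMetadata("EXTERNAL", "PLAIN", "test-topic")
println(meta.topic) // test-topic
```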
[GitHub] [kafka] mdedetrich commented on a change in pull request #10839: KAFKA-12913: Make case class's final
mdedetrich commented on a change in pull request #10839: URL: https://github.com/apache/kafka/pull/10839#discussion_r647363288 ## File path: core/src/test/scala/unit/kafka/security/authorizer/AclAuthorizerWithZkSaslTest.scala ## @@ -69,7 +68,7 @@ class AclAuthorizerWithZkSaslTest extends ZooKeeperTestHarness with SaslSetup { val jaasSections = JaasTestUtils.zkSections val serverJaas = jaasSections.filter(_.contextName == "Server") val clientJaas = jaasSections.filter(_.contextName == "Client") - .map(section => new TestableJaasSection(section.contextName, section.modules)) + .map(section => JaasSection(section.contextName, section.modules)) Review comment: `new` is unnecessary here; case classes provide an `apply` factory.
[GitHub] [kafka] jlprat opened a new pull request #10840: KAFKA-12849: KIP-744 TaskMetadata ThreadMetadata StreamsMetadata as API
jlprat opened a new pull request #10840: URL: https://github.com/apache/kafka/pull/10840 Implementation of [KIP-744](https://cwiki.apache.org/confluence/x/XIrOCg). Creates new interfaces for TaskMetadata, ThreadMetadata, and StreamsMetadata, providing internal implementations for each of them. Deprecates the current TaskMetadata and ThreadMetadata under o.a.k.s.processor, and StreamsMetadata under o.a.k.s.state. Updates references in internal classes from the deprecated classes to the new interfaces. Deprecates methods on KafkaStreams returning the deprecated ThreadMetadata and StreamsMetadata, and provides new ones returning the new interfaces. Updates Javadocs referencing deprecated classes and methods to point to the right ones. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[jira] [Updated] (KAFKA-12849) Consider migrating TaskMetadata to interface with internal implementation
[ https://issues.apache.org/jira/browse/KAFKA-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josep Prat updated KAFKA-12849: --- Fix Version/s: 3.0.0 > Consider migrating TaskMetadata to interface with internal implementation > - > > Key: KAFKA-12849 > URL: https://issues.apache.org/jira/browse/KAFKA-12849 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: A. Sophie Blee-Goldman >Assignee: Josep Prat >Priority: Major > Labels: needs-kip, newbie, newbie++ > Fix For: 3.0.0 > > > In KIP-740 we had to go through a deprecation cycle in order to change the > constructor from the original one which accepted the taskId parameter as a > string, to the new one which takes a TaskId object directly. We had > considered just changing the signature directly without deprecation as this > was never intended to be instantiated by users, rather it just acts as a > pass-through metadata class. Sort of by definition if there is no reason to > ever instantiate it, this seems to indicate it may be better suited as a > public interface with the implementation and constructor as internal APIs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [kafka] ijuma commented on pull request #10828: MINOR: Only log overridden topic configs during topic creation
ijuma commented on pull request #10828: URL: https://github.com/apache/kafka/pull/10828#issuecomment-856770388 I updated the PR description.
[GitHub] [kafka] ijuma commented on pull request #10828: MINOR: Only log overridden topic configs during topic creation
ijuma commented on pull request #10828: URL: https://github.com/apache/kafka/pull/10828#issuecomment-856771541 Unrelated failures: > Build / JDK 11 and Scala 2.13 / kafka.server.RaftClusterTest.testCreateClusterAndCreateListDeleteTopic() > Build / JDK 15 and Scala 2.13 / org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testDeleteConnector > Build / JDK 15 and Scala 2.13 / kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldSurviveFastLeaderChange()
[GitHub] [kafka] ijuma merged pull request #10828: MINOR: Only log overridden topic configs during topic creation
ijuma merged pull request #10828: URL: https://github.com/apache/kafka/pull/10828
[jira] [Updated] (KAFKA-12905) Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest
[ https://issues.apache.org/jira/browse/KAFKA-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YI-CHEN WANG updated KAFKA-12905: - Parent: KAFKA-7438 Issue Type: Sub-task (was: Improvement) > Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest > - > > Key: KAFKA-12905 > URL: https://issues.apache.org/jira/browse/KAFKA-12905 > Project: Kafka > Issue Type: Sub-task >Reporter: YI-CHEN WANG >Assignee: YI-CHEN WANG >Priority: Major > > For > [Kafka-7438|https://issues.apache.org/jira/browse/KAFKA-7438?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20Reopened)%20AND%20assignee%20in%20(EMPTY)%20AND%20text%20~%20%22mockito%22]
[jira] [Updated] (KAFKA-12905) Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest
[ https://issues.apache.org/jira/browse/KAFKA-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YI-CHEN WANG updated KAFKA-12905: - Attachment: image-2021-06-08-21-42-15-727.png > Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest > - > > Key: KAFKA-12905 > URL: https://issues.apache.org/jira/browse/KAFKA-12905 > Project: Kafka > Issue Type: Sub-task >Reporter: YI-CHEN WANG >Assignee: YI-CHEN WANG >Priority: Major > Attachments: image-2021-06-08-21-42-15-727.png > > > For > [Kafka-7438|https://issues.apache.org/jira/browse/KAFKA-7438?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20Reopened)%20AND%20assignee%20in%20(EMPTY)%20AND%20text%20~%20%22mockito%22]
[GitHub] [kafka] ijuma commented on pull request #10828: MINOR: Only log overridden topic configs during topic creation
ijuma commented on pull request #10828: URL: https://github.com/apache/kafka/pull/10828#issuecomment-856781793 Merged to trunk and cherry-picked to 2.8.
[jira] [Commented] (KAFKA-12905) Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest
[ https://issues.apache.org/jira/browse/KAFKA-12905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359368#comment-17359368 ] YI-CHEN WANG commented on KAFKA-12905: -- [~cadonna] OK, I have converted it. My future modifications will also be placed here. > Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest > - > > Key: KAFKA-12905 > URL: https://issues.apache.org/jira/browse/KAFKA-12905 > Project: Kafka > Issue Type: Sub-task >Reporter: YI-CHEN WANG >Assignee: YI-CHEN WANG >Priority: Major > Attachments: image-2021-06-08-21-42-15-727.png > > > For > [Kafka-7438|https://issues.apache.org/jira/browse/KAFKA-7438?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20Reopened)%20AND%20assignee%20in%20(EMPTY)%20AND%20text%20~%20%22mockito%22]
[jira] [Updated] (KAFKA-12890) Consumer group stuck in `CompletingRebalance`
[ https://issues.apache.org/jira/browse/KAFKA-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ismael Juma updated KAFKA-12890: Fix Version/s: 2.8.1 > Consumer group stuck in `CompletingRebalance` > - > > Key: KAFKA-12890 > URL: https://issues.apache.org/jira/browse/KAFKA-12890 > Project: Kafka > Issue Type: Bug >Affects Versions: 2.7.0, 2.6.1, 2.8.0, 2.7.1, 2.6.2 >Reporter: David Jacot >Assignee: David Jacot >Priority: Blocker > Fix For: 3.0.0, 2.8.1 > > > We have recently seen multiple consumer groups stuck in > `CompletingRebalance`. It appears that those groups never receive the > assignment from the leader of the group and remain stuck in this state > forever. > When a group transitions to the `CompletingRebalance` state, the group > coordinator sets up a `DelayedHeartbeat` for each member of the group. It does > so to ensure that the member sends a sync request within the session timeout. > If it does not, the group coordinator rebalances the group. Note that > `DelayedHeartbeat` is used here for this purpose. `DelayedHeartbeat`s are also > completed when a member heartbeats. > The issue is that https://github.com/apache/kafka/pull/8834 has changed the > heartbeat logic to allow members to heartbeat while the group is in the > `CompletingRebalance` state. This was not allowed before. Now, if a member > starts to heartbeat while the group is in the `CompletingRebalance` state, the > heartbeat request will basically complete the pending `DelayedHeartbeat` that > was set up previously to catch a missing sync request. Therefore, > if the sync request never comes, the group coordinator no longer notices. > We need to bring that behavior back somehow.
[jira] [Updated] (KAFKA-12890) Consumer group stuck in `CompletingRebalance`
[ https://issues.apache.org/jira/browse/KAFKA-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ismael Juma updated KAFKA-12890: Fix Version/s: 2.7.2 2.6.3 > Consumer group stuck in `CompletingRebalance` > - > > Key: KAFKA-12890 > URL: https://issues.apache.org/jira/browse/KAFKA-12890 > Project: Kafka > Issue Type: Bug >Affects Versions: 2.7.0, 2.6.1, 2.8.0, 2.7.1, 2.6.2 >Reporter: David Jacot >Assignee: David Jacot >Priority: Blocker > Fix For: 3.0.0, 2.6.3, 2.7.2, 2.8.1 > > > We have recently seen multiple consumer groups stuck in > `CompletingRebalance`. It appears that those groups never receive the > assignment from the leader of the group and remain stuck in this state > forever. > When a group transitions to the `CompletingRebalance` state, the group > coordinator sets up a `DelayedHeartbeat` for each member of the group. It does > so to ensure that the member sends a sync request within the session timeout. > If it does not, the group coordinator rebalances the group. Note that > `DelayedHeartbeat` is used here for this purpose. `DelayedHeartbeat`s are also > completed when a member heartbeats. > The issue is that https://github.com/apache/kafka/pull/8834 has changed the > heartbeat logic to allow members to heartbeat while the group is in the > `CompletingRebalance` state. This was not allowed before. Now, if a member > starts to heartbeat while the group is in the `CompletingRebalance` state, the > heartbeat request will basically complete the pending `DelayedHeartbeat` that > was set up previously to catch a missing sync request. Therefore, > if the sync request never comes, the group coordinator no longer notices. > We need to bring that behavior back somehow.
[GitHub] [kafka] dongjinleekr commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
dongjinleekr commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647468635 ## File path: core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala ## @@ -314,7 +314,6 @@ class GroupMetadataManager(brokerId: Int, case None => responseCallback(Errors.NOT_COORDINATOR) -None Review comment: Exactly. This is how the IDE (IntelliJ 2021.1.2 + Scala plugin 2021.1.21) shows it - dead code with a gray marking.
[GitHub] [kafka] lkokhreidze commented on a change in pull request #10802: KAFKA-6718 / Part1: Update SubscriptionInfoData with clientTags
lkokhreidze commented on a change in pull request #10802: URL: https://github.com/apache/kafka/pull/10802#discussion_r647470689 ## File path: streams/src/test/java/org/apache/kafka/streams/tests/StreamsUpgradeTest.java ## @@ -161,8 +162,8 @@ public ByteBuffer subscriptionUserData(final Set topics) { userEndPoint(), taskManager.getTaskOffsetSums(), uniqueField, -0 -).encode(); +0, +Collections.emptyMap()).encode(); Review comment: Makes sense. Added non-empty map as a static final field.
[GitHub] [kafka] ijuma commented on pull request #10835: KAFKA-12905: Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest
ijuma commented on pull request #10835: URL: https://github.com/apache/kafka/pull/10835#issuecomment-856794295 @cadonna The build changes look good.
[GitHub] [kafka] ijuma commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
ijuma commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647474072 ## File path: core/src/main/scala/kafka/server/KafkaConfig.scala ## @@ -402,43 +402,43 @@ object KafkaConfig { val RackProp = "broker.rack" /** * Log Configuration ***/ val NumPartitionsProp = "num.partitions" - val LogDirsProp = "log.dirs" - val LogDirProp = "log.dir" - val LogSegmentBytesProp = "log.segment.bytes" - - val LogRollTimeMillisProp = "log.roll.ms" - val LogRollTimeHoursProp = "log.roll.hours" - - val LogRollTimeJitterMillisProp = "log.roll.jitter.ms" - val LogRollTimeJitterHoursProp = "log.roll.jitter.hours" - - val LogRetentionTimeMillisProp = "log.retention.ms" - val LogRetentionTimeMinutesProp = "log.retention.minutes" - val LogRetentionTimeHoursProp = "log.retention.hours" - - val LogRetentionBytesProp = "log.retention.bytes" - val LogCleanupIntervalMsProp = "log.retention.check.interval.ms" - val LogCleanupPolicyProp = "log.cleanup.policy" - val LogCleanerThreadsProp = "log.cleaner.threads" - val LogCleanerIoMaxBytesPerSecondProp = "log.cleaner.io.max.bytes.per.second" - val LogCleanerDedupeBufferSizeProp = "log.cleaner.dedupe.buffer.size" - val LogCleanerIoBufferSizeProp = "log.cleaner.io.buffer.size" - val LogCleanerDedupeBufferLoadFactorProp = "log.cleaner.io.buffer.load.factor" - val LogCleanerBackoffMsProp = "log.cleaner.backoff.ms" - val LogCleanerMinCleanRatioProp = "log.cleaner.min.cleanable.ratio" - val LogCleanerEnableProp = "log.cleaner.enable" - val LogCleanerDeleteRetentionMsProp = "log.cleaner.delete.retention.ms" - val LogCleanerMinCompactionLagMsProp = "log.cleaner.min.compaction.lag.ms" - val LogCleanerMaxCompactionLagMsProp = "log.cleaner.max.compaction.lag.ms" - val LogIndexSizeMaxBytesProp = "log.index.size.max.bytes" - val LogIndexIntervalBytesProp = "log.index.interval.bytes" - val LogFlushIntervalMessagesProp = "log.flush.interval.messages" - val LogDeleteDelayMsProp = "log.segment.delete.delay.ms" - val 
LogFlushSchedulerIntervalMsProp = "log.flush.scheduler.interval.ms" - val LogFlushIntervalMsProp = "log.flush.interval.ms" - val LogFlushOffsetCheckpointIntervalMsProp = "log.flush.offset.checkpoint.interval.ms" - val LogFlushStartOffsetCheckpointIntervalMsProp = "log.flush.start.offset.checkpoint.interval.ms" - val LogPreAllocateProp = "log.preallocate" + val LogDirsProp = LogConfigPrefix + "dirs" + val LogDirProp = LogConfigPrefix + "dir" + val LogSegmentBytesProp = LogConfigPrefix + "segment.bytes" + + val LogRollTimeMillisProp = LogConfigPrefix + "roll.ms" + val LogRollTimeHoursProp = LogConfigPrefix + "roll.hours" + + val LogRollTimeJitterMillisProp = LogConfigPrefix + "roll.jitter.ms" + val LogRollTimeJitterHoursProp = LogConfigPrefix + "roll.jitter.hours" + + val LogRetentionTimeMillisProp = LogConfigPrefix + "retention.ms" + val LogRetentionTimeMinutesProp = LogConfigPrefix + "retention.minutes" + val LogRetentionTimeHoursProp = LogConfigPrefix + "retention.hours" + + val LogRetentionBytesProp = LogConfigPrefix + "retention.bytes" + val LogCleanupIntervalMsProp = LogConfigPrefix + "retention.check.interval.ms" + val LogCleanupPolicyProp = LogConfigPrefix + "cleanup.policy" + val LogCleanerThreadsProp = LogConfigPrefix + "cleaner.threads" + val LogCleanerIoMaxBytesPerSecondProp = LogConfigPrefix + "cleaner.io.max.bytes.per.second" + val LogCleanerDedupeBufferSizeProp = LogConfigPrefix + "cleaner.dedupe.buffer.size" + val LogCleanerIoBufferSizeProp = LogConfigPrefix + "cleaner.io.buffer.size" + val LogCleanerDedupeBufferLoadFactorProp = LogConfigPrefix + "cleaner.io.buffer.load.factor" + val LogCleanerBackoffMsProp = LogConfigPrefix + "cleaner.backoff.ms" + val LogCleanerMinCleanRatioProp = LogConfigPrefix + "cleaner.min.cleanable.ratio" + val LogCleanerEnableProp = LogConfigPrefix + "cleaner.enable" + val LogCleanerDeleteRetentionMsProp = LogConfigPrefix + "cleaner.delete.retention.ms" + val LogCleanerMinCompactionLagMsProp = LogConfigPrefix + 
"cleaner.min.compaction.lag.ms" + val LogCleanerMaxCompactionLagMsProp = LogConfigPrefix + "cleaner.max.compaction.lag.ms" + val LogIndexSizeMaxBytesProp = LogConfigPrefix + "index.size.max.bytes" + val LogIndexIntervalBytesProp = LogConfigPrefix + "index.interval.bytes" + val LogFlushIntervalMessagesProp = LogConfigPrefix + "flush.interval.messages" + val LogDeleteDelayMsProp = LogConfigPrefix + "segment.delete.delay.ms" + val LogFlushSchedulerIntervalMsProp = LogConfigPrefix + "flush.scheduler.interval.ms" + val LogFlushIntervalMsProp = LogConfigPrefix + "flush.interval.ms" + val LogFlushOffsetCheckpointIntervalMsProp = LogConfigPrefix + "flush.offset.checkpoint.interval.ms" + val LogFlushStartOffsetCheckpointIntervalMsProp = LogConfigPrefix + "flush.start.offset.checkpoint.interval.ms" + val LogPreAllocateProp = LogConfigPrefix + "preallocate" Review comment: This
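The refactor in the diff above derives every log config key from a shared prefix constant instead of repeating the `"log."` literal. A standalone Scala sketch of the pattern with a few representative keys (the prefix value `"log."` is inferred from the externally visible names like `log.dirs`):

```scala
// Derive config key names from one shared prefix constant, as in the diff.
val LogConfigPrefix = "log."

val LogDirsProp = LogConfigPrefix + "dirs"
val LogRetentionTimeMillisProp = LogConfigPrefix + "retention.ms"
val LogCleanupPolicyProp = LogConfigPrefix + "cleanup.policy"

// The derived names must match the externally visible config keys exactly,
// since users set these strings in server.properties.
println(LogDirsProp) // log.dirs
```

The benefit is purely structural: a typo in the prefix now fails everywhere at once rather than silently diverging in one key.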
[GitHub] [kafka] ijuma commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
ijuma commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647474443 ## File path: core/src/main/scala/kafka/tools/ConsoleConsumer.scala ## @@ -248,7 +248,7 @@ object ConsoleConsumer extends Logging { val timeoutMsOpt = parser.accepts("timeout-ms", "If specified, exit if no message is available for consumption for the specified interval.") .withRequiredArg .describedAs("timeout_ms") - .ofType(classOf[java.lang.Integer]) + .ofType(classOf[java.lang.Long]) Review comment: Why are we changing this?
[GitHub] [kafka] lkokhreidze commented on pull request #10802: KAFKA-6718 / Part1: Update SubscriptionInfoData with clientTags
lkokhreidze commented on pull request #10802: URL: https://github.com/apache/kafka/pull/10802#issuecomment-856797369 Hi @cadonna I've addressed all of your comments.
[GitHub] [kafka] lkokhreidze commented on pull request #10802: KAFKA-6718 / Part1: Update SubscriptionInfoData with clientTags
lkokhreidze commented on pull request #10802: URL: https://github.com/apache/kafka/pull/10802#issuecomment-856813901 Hi @cadonna Giving it a bit more thought around the order of the PRs, logically it makes more sense to have this PR first, as the TaskAssignor gets its data from the subscription info. Coming back to your point about bumping the version increasing the number of rebalances in a rolling upgrade scenario - considering that the version for the 3.0 release was already bumped either way via https://github.com/apache/kafka/pull/10609, do you think it's still a problem? I would prefer to finalise this PR first and avoid more context switching, but of course if it's really needed I can switch to the task assignor implementation. Thanks again for the feedback!
[GitHub] [kafka] lkokhreidze edited a comment on pull request #10802: KAFKA-6718 / Part1: Update SubscriptionInfoData with clientTags
lkokhreidze edited a comment on pull request #10802: URL: https://github.com/apache/kafka/pull/10802#issuecomment-856813901 Hi @cadonna Giving it a bit more thought around the order of the PRs, logically it makes more sense to have this PR first, as the TaskAssignor gets its data from the subscription info. Coming back to your point about bumping the version increasing the number of rebalances in a rolling upgrade scenario - considering that the protocol version for 3.0 was already increased via https://github.com/apache/kafka/pull/10609, do you think it's still a problem? I would prefer to finalise this PR first and avoid more context switching, but of course if it's really needed I can switch to the task assignor implementation. Thanks again for the feedback!
[GitHub] [kafka] chia7712 commented on a change in pull request #10835: KAFKA-12905: Replace EasyMock and PowerMock with Mockito for NamedCacheMetricsTest
chia7712 commented on a change in pull request #10835: URL: https://github.com/apache/kafka/pull/10835#discussion_r647503080 ## File path: streams/src/test/java/org/apache/kafka/streams/state/internals/metrics/NamedCacheMetricsTest.java ## @@ -49,37 +41,28 @@ private static final String HIT_RATIO_MIN_DESCRIPTION = "The minimum cache hit ratio"; private static final String HIT_RATIO_MAX_DESCRIPTION = "The maximum cache hit ratio"; -private final StreamsMetricsImpl streamsMetrics = createMock(StreamsMetricsImpl.class); -private final Sensor expectedSensor = mock(Sensor.class); +private Sensor expectedSensor = Mockito.mock(Sensor.class); Review comment: How about adding ‘final’ back?
[GitHub] [kafka] kpatelatwork opened a new pull request #10841: KAFKA-12482 Remove deprecated rest.host.name and rest.port configs
kpatelatwork opened a new pull request #10841: URL: https://github.com/apache/kafka/pull/10841 The following Connect worker configuration properties were deprecated 3 years ago and 3.0.0 seems like a good major release to remove them as part of this PR: - rest.host.name (deprecated in KIP-208) - rest.port (deprecated in KIP-208) Ran unit and integration tests locally ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[GitHub] [kafka] vamossagar12 opened a new pull request #10842: KAFKA-12848: kafka streams jmh benchmarks
vamossagar12 opened a new pull request #10842: URL: https://github.com/apache/kafka/pull/10842 *More detailed description of your change, if necessary. The PR title and PR message become the squashed commit message, so use a separate comment to ping reviewers.* *Summary of testing strategy (including rationale) for the feature or bug fix. Unit and/or integration tests are expected for any behaviour change and system tests should be considered for larger changes.* ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[jira] [Created] (KAFKA-12914) StreamSourceNode.toString() throws with StreamsBuilder.stream(Pattern) ctor
Will Bartlett created KAFKA-12914:
-------------------------------------
Summary: StreamSourceNode.toString() throws with StreamsBuilder.stream(Pattern) ctor
Key: KAFKA-12914
URL: https://issues.apache.org/jira/browse/KAFKA-12914
Project: Kafka
Issue Type: Bug
Components: streams
Affects Versions: 2.8.0
Reporter: Will Bartlett

Hi, I came across what looks like a bug.

h2. Repro
{code:java}
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.kstream.Consumed
import java.util.*
import java.util.regex.Pattern

fun main() {
    val builder = StreamsBuilder()
    builder.stream(Pattern.compile("foo"), Consumed.with(Serdes.Long(), Serdes.Long()))
    val streams = KafkaStreams(builder.build(), Properties())
    streams.start()
}
{code}
{code:bash}
SLF4J: Failed toString() invocation on an object of type [java.util.LinkedHashSet]
Reported exception: java.lang.NullPointerException
	at java.base/java.util.Collections$UnmodifiableCollection.<init>(Collections.java:1030)
	at java.base/java.util.Collections$UnmodifiableSet.<init>(Collections.java:1132)
	at java.base/java.util.Collections.unmodifiableSet(Collections.java:1122)
	at org.apache.kafka.streams.kstream.internals.graph.SourceGraphNode.topicNames(SourceGraphNode.java:55)
	at org.apache.kafka.streams.kstream.internals.graph.StreamSourceNode.toString(StreamSourceNode.java:65)
	at java.base/java.lang.String.valueOf(String.java:3352)
	at java.base/java.lang.StringBuilder.append(StringBuilder.java:166)
	at java.base/java.util.AbstractCollection.toString(AbstractCollection.java:457)
	at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:277)
	at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:249)
	at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:211)
	at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:161)
	at ch.qos.logback.classic.spi.LoggingEvent.getFormattedMessage(LoggingEvent.java:293)
	at ch.qos.logback.classic.spi.LoggingEvent.prepareForDeferredProcessing(LoggingEvent.java:206)
	at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:223)
	at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:102)
	at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
	at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
	at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
	at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
	at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
	at ch.qos.logback.classic.Logger.filterAndLog_2(Logger.java:414)
	at ch.qos.logback.classic.Logger.debug(Logger.java:490)
	at org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:305)
	at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:624)
	at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:613)
	at ApplicationKt.main(Application.kt:11)
	at ApplicationKt.main(Application.kt)
SLF4J: Failed toString() invocation on an object of type [org.apache.kafka.streams.kstream.internals.graph.StreamSourceNode]
Reported exception: java.lang.NullPointerException
	at java.base/java.util.Collections$UnmodifiableCollection.<init>(Collections.java:1030)
	at java.base/java.util.Collections$UnmodifiableSet.<init>(Collections.java:1132)
	at java.base/java.util.Collections.unmodifiableSet(Collections.java:1122)
	at org.apache.kafka.streams.kstream.internals.graph.SourceGraphNode.topicNames(SourceGraphNode.java:55)
	at org.apache.kafka.streams.kstream.internals.graph.StreamSourceNode.toString(StreamSourceNode.java:65)
	at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:277)
	at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:249)
	at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:211)
	at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:161)
	at ch.qos.logback.classic.spi.LoggingEvent.getFormattedMessage(LoggingEvent.java:293)
	at ch.qos.logback.classic.spi.LoggingEvent.prepareForDeferredProcessing(LoggingEvent.java:206)
	at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:223)
	at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:102)
	at ch.qos.logback.core.Unsyn
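The NPE in the report comes from wrapping a null topic-name set in `Collections.unmodifiableSet` when the source node was built from a `Pattern` rather than a list of topics. A minimal Python sketch of the null-safe shape such a `toString()` needs (class and field names mirror the Java ones but are illustrative; the real fix belongs in the Java `SourceGraphNode`/`StreamSourceNode` classes):

```python
class StreamSourceNode:
    """Toy model of a source node built from either topic names or a pattern."""

    def __init__(self, topic_names=None, topic_pattern=None):
        self._topic_names = topic_names      # None when built from a Pattern
        self._topic_pattern = topic_pattern

    def topic_names(self):
        # Guard before wrapping: returning None is safer than wrapping None
        # in an unmodifiable view, which is what raises the NPE in Java.
        if self._topic_names is None:
            return None
        return frozenset(self._topic_names)

    def __str__(self):
        # Print the pattern when there is no topic list to enumerate.
        if self._topic_pattern is not None:
            return "StreamSourceNode(topicPattern=%s)" % self._topic_pattern
        return "StreamSourceNode(topicNames=%s)" % sorted(self._topic_names)
```

With this guard, building the topology from a pattern no longer breaks logging: `str()` falls back to the pattern instead of dereferencing the missing topic set.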
[jira] [Commented] (KAFKA-12848) Add some basic benchmarks for Kafka Streams
[ https://issues.apache.org/jira/browse/KAFKA-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359432#comment-17359432 ] Sagar Rao commented on KAFKA-12848: --- [~ableegoldman], I have added benchmarks for persistent stores for Kafka Streams: [https://github.com/apache/kafka/pull/10842.|https://github.com/apache/kafka/pull/10842] If these look fine, I will add them for other kinds of state stores. This is just a continuation of the example PR by John. BTW, a couple of questions: 1) Is there a way to run these benchmarks in a scheduled manner on the CI/CD tool you use? I have seen something similar done on Kubernetes. Do you think that would make sense? 2) The other thing is I need to use these RocksDB-related state store benchmarks for a couple of other tickets: Direct ByteBuffer and the RocksDB merge() API. So, can we merge just this while we keep adding other JMH benchmarks, or do we wait for everything while I keep switching locally? Or, based upon what we want to cover in the benchmarks, we can add sub-tasks? WDYT? > Add some basic benchmarks for Kafka Streams > --- > > Key: KAFKA-12848 > URL: https://issues.apache.org/jira/browse/KAFKA-12848 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: A. Sophie Blee-Goldman >Assignee: Sagar Rao >Priority: Major > Labels: newbie, newbie++ > > As the title suggests, we often want to test out improvements or verify that > a bugfix does not introduce a serious regression. While there are existing > benchmarks that are run for quality assurance by various contributors, there > are no publicly available benchmarks for Kafka Streams in AK itself. > It would be great if we had a simple jmh suite (or something) with various > Streams features which could be run on a one-off basis by developers. -- This message was sent by Atlassian Jira (v8.3.4#803005)
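On the Java side, JMH is the tool the ticket points at for this kind of one-off developer benchmark. Purely to illustrate the shape of such a put/get state-store micro-benchmark, here is a hedged Python sketch using `timeit`, with a plain dict standing in for a RocksDB-backed store (all names are illustrative, not Kafka APIs):

```python
import timeit

def bench_put_get(n=10_000):
    """Time n put/get round-trips against an in-memory stand-in store."""
    store = {}

    def workload():
        for i in range(n):
            store[i] = i * 2          # "put"
        for i in range(n):
            assert store[i] == i * 2  # "get"

    # number=1: a single one-off run, matching the ticket's
    # "run on a one-off basis by developers" use case.
    return timeit.timeit(workload, number=1)

elapsed = bench_put_get()
```

A real JMH suite would additionally handle warm-up iterations, forked JVMs, and dead-code elimination, which is why JMH rather than a hand-rolled loop is the right choice for the actual PR.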
[GitHub] [kafka] wenbingshen commented on a change in pull request #10841: KAFKA-12482 Remove deprecated rest.host.name and rest.port configs
wenbingshen commented on a change in pull request #10841: URL: https://github.com/apache/kafka/pull/10841#discussion_r647551596 ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java ## @@ -146,25 +146,6 @@ + "data to be committed in a future attempt."; public static final long OFFSET_COMMIT_TIMEOUT_MS_DEFAULT = 5000L; -/** - * @deprecated As of 1.1.0. Only used when listeners is not set. Use listeners instead. - */ -@Deprecated -public static final String REST_HOST_NAME_CONFIG = "rest.host.name"; Review comment: Should we add a description about removing this option in the upgrade document? ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java ## @@ -146,25 +146,6 @@ + "data to be committed in a future attempt."; public static final long OFFSET_COMMIT_TIMEOUT_MS_DEFAULT = 5000L; -/** - * @deprecated As of 1.1.0. Only used when listeners is not set. Use listeners instead. - */ -@Deprecated -public static final String REST_HOST_NAME_CONFIG = "rest.host.name"; -private static final String REST_HOST_NAME_DOC -= "Hostname for the REST API. If this is set, it will only bind to this interface.\n" + -"Deprecated, only used when listeners is not set. Use listeners instead."; - -/** - * @deprecated As of 1.1.0. Only used when listeners is not set. Use listeners instead. - */ -@Deprecated -public static final String REST_PORT_CONFIG = "rest.port"; Review comment: Same as above.
[jira] [Created] (KAFKA-12915) Online TV Channel
mushfiqur rahoman created KAFKA-12915: - Summary: Online TV Channel Key: KAFKA-12915 URL: https://issues.apache.org/jira/browse/KAFKA-12915 Project: Kafka Issue Type: Bug Reporter: mushfiqur rahoman Online TV Channel
[GitHub] [kafka] rhauch commented on a change in pull request #10841: KAFKA-12482 Remove deprecated rest.host.name and rest.port configs
rhauch commented on a change in pull request #10841: URL: https://github.com/apache/kafka/pull/10841#discussion_r647588529 ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java ## @@ -304,8 +285,6 @@ protected static ConfigDef baseConfigDef() { Importance.LOW, OFFSET_COMMIT_INTERVAL_MS_DOC) .define(OFFSET_COMMIT_TIMEOUT_MS_CONFIG, Type.LONG, OFFSET_COMMIT_TIMEOUT_MS_DEFAULT, Importance.LOW, OFFSET_COMMIT_TIMEOUT_MS_DOC) -.define(REST_HOST_NAME_CONFIG, Type.STRING, null, Importance.LOW, REST_HOST_NAME_DOC) -.define(REST_PORT_CONFIG, Type.INT, REST_PORT_DEFAULT, Importance.LOW, REST_PORT_DOC) .define(LISTENERS_CONFIG, Type.LIST, null, Importance.LOW, LISTENERS_DOC) Review comment: Should we add more validation here, via a custom validator, to ensure that at least one listener is required? ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java ## @@ -105,32 +105,19 @@ public RestServer(WorkerConfig config) { createConnectors(listeners, adminListeners); } -@SuppressWarnings("deprecation") -List parseListeners() { -List listeners = config.getList(WorkerConfig.LISTENERS_CONFIG); -if (listeners == null || listeners.size() == 0) { -String hostname = config.getString(WorkerConfig.REST_HOST_NAME_CONFIG); - -if (hostname == null) -hostname = ""; - -listeners = Collections.singletonList(String.format("%s://%s:%d", PROTOCOL_HTTP, hostname, config.getInt(WorkerConfig.REST_PORT_CONFIG))); -} - -return listeners; -} - /** * Adds Jetty connector for each configured listener */ public void createConnectors(List listeners, List adminListeners) { List connectors = new ArrayList<>(); -for (String listener : listeners) { -if (!listener.isEmpty()) { -Connector connector = createConnector(listener); -connectors.add(connector); -log.info("Added connector for {}", listener); +if (listeners != null && !listeners.isEmpty()) { +for (String listener : listeners) { +if (!listener.isEmpty()) { +Connector connector 
= createConnector(listener); +connectors.add(connector); +log.info("Added connector for {}", listener); +} Review comment: Are we just relying upon worker config validation to ensure that at least one listener is required? If so, then do we need the outer condition? If not, then should we add those checks via validators so configuration errors are checked immediately upon startup?
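A custom validator of the kind suggested would reject a missing or empty listener list at worker startup rather than later at connector-creation time. A hedged Python sketch of that check (the real hook would be a Java `ConfigDef.Validator` on the `listeners` property; the function name and message text here are illustrative):

```python
def validate_listeners(listeners):
    """Fail fast unless at least one well-formed listener is configured."""
    # Treat None, [], and whitespace-only entries alike: no usable listener.
    cleaned = [l.strip() for l in (listeners or []) if l and l.strip()]
    if not cleaned:
        raise ValueError("At least one REST listener must be set via 'listeners'")
    for listener in cleaned:
        # A listener must carry an explicit protocol, e.g. http://host:8083.
        if "://" not in listener:
            raise ValueError(
                "Listener '%s' must look like protocol://host:port" % listener)
    return cleaned
```

Running this at config-parse time would let the outer null/empty checks in `createConnectors` be dropped, which is the trade-off the review comment raises.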
[GitHub] [kafka] jsancio commented on a change in pull request #10786: KAFKA-12787: Integrate controller snapshoting with raft client
jsancio commented on a change in pull request #10786: URL: https://github.com/apache/kafka/pull/10786#discussion_r647596275 ## File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala ## @@ -247,6 +247,34 @@ FileRawSnapshotWriter.create(log.dir.toPath, snapshotId, Optional.of(this)) } + override def createSnapshotFromEndOffset(endOffset: Long): RawSnapshotWriter = { +val highWatermarkOffset = highWatermark.offset +if (endOffset > highWatermarkOffset) { + throw new IllegalArgumentException( +s"Cannot create a snapshot for an end offset ($endOffset) greater than the high-watermark ($highWatermarkOffset)" + ) +} + +if (endOffset < startOffset) { + throw new IllegalArgumentException( +s"Cannot create a snapshot for an end offset ($endOffset) less or equal to the log start offset ($startOffset)" Review comment: Fixed as part of the merge commit.
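The invariant under review is simple to state: a snapshot end offset must lie within [log start offset, high watermark]. A hedged Python sketch of that range check (the real code is the Scala above; note the quoted diff's message says "less or equal to" while its condition is a strict `<`, the wording mismatch the reviewer reports as fixed):

```python
def validate_snapshot_end_offset(end_offset, log_start_offset, high_watermark):
    """Raise unless end_offset is within [log_start_offset, high_watermark]."""
    if end_offset > high_watermark:
        # Cannot snapshot records that are not yet committed cluster-wide.
        raise ValueError(
            "Cannot create a snapshot for an end offset (%d) greater than the "
            "high-watermark (%d)" % (end_offset, high_watermark))
    if end_offset < log_start_offset:
        # Records before the log start offset are already gone.
        raise ValueError(
            "Cannot create a snapshot for an end offset (%d) less than the "
            "log start offset (%d)" % (end_offset, log_start_offset))
    return end_offset
```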
[jira] [Commented] (KAFKA-12870) RecordAccumulator stuck in a flushing state
[ https://issues.apache.org/jira/browse/KAFKA-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359447#comment-17359447 ] Matthias J. Sax commented on KAFKA-12870: - Could this be related to https://issues.apache.org/jira/browse/KAFKA-10888 ? > RecordAccumulator stuck in a flushing state > --- > > Key: KAFKA-12870 > URL: https://issues.apache.org/jira/browse/KAFKA-12870 > Project: Kafka > Issue Type: Bug > Components: producer , streams >Affects Versions: 2.5.1, 2.8.0, 2.7.1, 2.6.2 >Reporter: Niclas Lockner >Priority: Major > Attachments: RecordAccumulator.log, full.log > > > After a Kafka Stream with exactly once enabled has performed its first > commit, the RecordAccumulator within the stream's internal producer gets > stuck in a state where all subsequent ProducerBatches that get allocated are > immediately flushed instead of being held in memory until they expire, > regardless of the stream's linger or batch size config. > This is reproduced in the example code found at > [https://github.com/niclaslockner/kafka-12870] which can be run with > ./gradlew run --args= > The example has a producer that sends 1 record/sec to one topic, and a Kafka > stream with EOS enabled that forwards the records from that topic to another > topic with the configuration linger = 5 sec, commit interval = 10 sec. > > The expected behavior when running the example is that the stream's > ProducerBatches will expire (or get flushed because of the commit) every 5th > second, and that the stream's producer will send a ProduceRequest every 5th > second with an expired ProducerBatch that contains 5 records. > The actual behavior is that the ProducerBatch is made immediately available > for the Sender, and the Sender sends one ProduceRequest for each record. 
> > The example code contains a copy of the RecordAccumulator class (copied from > kafka-clients 2.8.0) with some additional logging added to > * RecordAccumulator#ready(Cluster, long) > * RecordAccumulator#beginFlush() > * RecordAccumulator#awaitFlushCompletion() > These log entries show (see the attached RecordsAccumulator.log) > * that the batches are considered sendable because a flush is in progress > * that Sender.maybeSendAndPollTransactionalRequest() calls > RecordAccumulator's beginFlush() without also calling awaitFlushCompletion(), > and that this makes RecordAccumulator's flushesInProgress jump between 1-2 > instead of the expected 0-1. > > This issue is not reproducible in version 2.3.1 or 2.4.1.
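The imbalance the report describes, `beginFlush()` invoked without a matching `awaitFlushCompletion()`, can be modelled with a simple counter: once the counter never returns to zero, every batch is treated as immediately sendable and linger is bypassed. A hedged Python sketch of just that bookkeeping (the real class is the Java `RecordAccumulator`; this is a toy model of its `flushesInProgress` field):

```python
class FlushCounter:
    """Minimal model of RecordAccumulator's flushesInProgress bookkeeping."""

    def __init__(self):
        self.flushes_in_progress = 0

    def begin_flush(self):
        self.flushes_in_progress += 1

    def await_flush_completion(self):
        self.flushes_in_progress -= 1

    def flush_in_progress(self):
        # While true, batches are considered sendable immediately,
        # bypassing linger.ms and batch.size.
        return self.flushes_in_progress > 0

acc = FlushCounter()
# Balanced usage: the accumulator returns to idle.
acc.begin_flush(); acc.await_flush_completion()
balanced_idle = not acc.flush_in_progress()
# Unbalanced usage, as the report describes: the counter oscillates
# between 1 and 2 and never reaches 0 again, so flushing never ends.
acc.begin_flush(); acc.begin_flush(); acc.await_flush_completion()
stuck_flushing = acc.flush_in_progress()
```

The model makes the reported symptom concrete: with one unmatched `begin_flush()`, `flush_in_progress()` stays true forever, so the Sender sends one ProduceRequest per record instead of batching.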
[GitHub] [kafka] thomaskwscott commented on a change in pull request #10760: KAFKA-12541 Extend ListOffset to fetch offset with max timestamp
thomaskwscott commented on a change in pull request #10760: URL: https://github.com/apache/kafka/pull/10760#discussion_r647639491 ## File path: clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java ## @@ -4225,76 +4232,84 @@ public ListOffsetsResult listOffsets(Map topicPartit } } -for (final Map.Entry> entry : leaders.entrySet()) { -final int brokerId = entry.getKey().id(); +for (final Map.Entry>> versionedEntry : leaders.entrySet()) { +for (final Map.Entry> entry : versionedEntry.getValue().entrySet()) { +final int brokerId = versionedEntry.getKey().id(); -calls.add(new Call("listOffsets on broker " + brokerId, context.deadline(), new ConstantNodeIdProvider(brokerId)) { +calls.add(new Call("listOffsets on broker " + brokerId, context.deadline(), new ConstantNodeIdProvider(brokerId)) { -final List partitionsToQuery = new ArrayList<>(entry.getValue().values()); +final List partitionsToQuery = new ArrayList<>(entry.getValue().values()); -@Override -ListOffsetsRequest.Builder createRequest(int timeoutMs) { -return ListOffsetsRequest.Builder +@Override -ListOffsetsRequest.Builder createRequest(int timeoutMs) { +ListOffsetRequestVersion requestVersion = entry.getKey(); +if (requestVersion == ListOffsetRequestVersion.V7AndAbove) { +return ListOffsetsRequest.Builder + .forMaxTimestamp(context.options().isolationLevel()) +.setTargetTimes(partitionsToQuery); +} Review comment: @dajac I changed to use the approach discussed here, I'd appreciate it if you can give this another review.
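The approach in the diff groups partitions per leader by the request version they need, so MAX_TIMESTAMP lookups (which require ListOffsets v7+) go into a separate request from older-style offset specs. A hedged Python sketch of that bucketing step (the bucket names echo the diff's `ListOffsetRequestVersion` enum, but the spec strings and function are illustrative, not the actual AdminClient API):

```python
def group_partitions_by_version(offset_specs):
    """Split partition -> spec pairs into v7+ and pre-v7 request buckets.

    The 'max_timestamp' spec is only understood by ListOffsets v7 and
    above, so it must not share a request with older offset specs.
    """
    buckets = {"V7AndAbove": {}, "BelowV7": {}}
    for partition, spec in offset_specs.items():
        key = "V7AndAbove" if spec == "max_timestamp" else "BelowV7"
        buckets[key][partition] = spec
    # One ListOffsets request is later built per (leader, bucket) pair;
    # drop empty buckets so no empty request is sent.
    return {k: v for k, v in buckets.items() if v}
```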
[jira] [Created] (KAFKA-12916) Add new AUTO_CREATE ACL for auto topic creation
Christopher L. Shannon created KAFKA-12916: -- Summary: Add new AUTO_CREATE ACL for auto topic creation Key: KAFKA-12916 URL: https://issues.apache.org/jira/browse/KAFKA-12916 Project: Kafka Issue Type: Improvement Components: core Affects Versions: 2.8.0 Reporter: Christopher L. Shannon Currently Kafka supports creating new topics through a CreateTopicsRequest or by auto creation during a topic MetadataRequest (if enabled on the cluster). Kafka supports ACLs for creation, but currently the CREATE ACL is used to grant access to both types of topic creation. This is problematic because it may be desirable to allow a user to auto create topics but not to submit a create topic request. The difference is that auto creation uses the cluster defaults for the new topic settings, whereas a topic creation request allows the user to configure all of the different topic settings, which may not be what was intended. The proposed change is to create a new ACL operation called AUTO_CREATE that will be checked to see if a user is authorized to auto create topics, instead of using the existing CREATE operation. This new operation will apply both cluster-wide (allowed to create a topic of any name) and topic-wide (validated by topic name or prefix). The CREATE operation will still be used for the existing CreateTopicsRequest command. Going forward this will allow an administrator to grant permission to auto create topics with cluster defaults but not to explicitly create topics. This change will be fully backwards compatible and will not break existing users. The AclAuthorizer class will be updated so that any user that is granted the CREATE operation is also implicitly granted AUTO_CREATE. This means that any existing configurations that grant CREATE will still work, because when the new check is done for AUTO_CREATE the CREATE operation will be implied and return true.
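The backward-compatibility rule proposed in the ticket, CREATE implies AUTO_CREATE, reduces to one extra branch in the authorization check. A hedged Python sketch of that implication (operation names follow the ticket; the real change would live in the Scala `AclAuthorizer`):

```python
def implies(granted_op, requested_op):
    """Return True if a granted ACL operation satisfies the requested one."""
    if granted_op == requested_op:
        return True
    # Proposed inheritance: existing CREATE grants keep working for
    # auto topic creation, so the change does not break existing users.
    return granted_op == "CREATE" and requested_op == "AUTO_CREATE"

def is_authorized(granted_ops, requested_op):
    """Authorize if any granted operation implies the requested one."""
    return any(implies(g, requested_op) for g in granted_ops)
```

Note the implication is one-directional: a user granted only AUTO_CREATE can auto create topics with cluster defaults but is still denied an explicit CreateTopicsRequest, which is the whole point of the new operation.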
[GitHub] [kafka] vvcephei commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
vvcephei commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647659103 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/api/RecordMetadata.java ## @@ -16,19 +16,51 @@ */ package org.apache.kafka.streams.processor.api; +import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier; + public interface RecordMetadata { /** - * @return The topic of the original record received from Kafka + * Returns the topic name of the current input record; could be {@code null} if it is not + * available. Review comment: Oh boy. My recollection is that we were trying to keep a null context as a sentinel so that we would be able to return a “not present” Optional. But we are also populating “dummy” contexts in a few places, which might defeat that logic. You’d have to trace through the code to figure out whether or not this can happen. We’d better hurry up and deprecate the old API by the feature freeze so that we can simplify these code paths. Ultimately, I agree: we shouldn’t need nullable members inside an Optional container. In the mean time, I don’t think the warning is harmful. It might cause people to insert null checks that we can’t prove are unnecessary right now, but if someone wants to comb through the codebase to prove it, we can always update the Java doc later to say “never null”.
[GitHub] [kafka] YiDing-Duke opened a new pull request #10843: Minor: Log formatting for exceptions during configuration related operations
YiDing-Duke opened a new pull request #10843: URL: https://github.com/apache/kafka/pull/10843 Format configuration logging during exceptions or errors. Also make sure it redacts sensitive information or unknown values. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[GitHub] [kafka] vincent81jiang opened a new pull request #10844: MINOR: rename parameter 'startOffset' in Log.read to 'fetchStartOffset'
vincent81jiang opened a new pull request #10844: URL: https://github.com/apache/kafka/pull/10844 'fetchStartOffset' is easier to read than 'startOffset' in Log.scala, since 'startOffset' is easily confused with logStartOffset. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[GitHub] [kafka] mjsax commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
mjsax commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647662697 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/api/RecordMetadata.java ## @@ -16,19 +16,51 @@ */ package org.apache.kafka.streams.processor.api; +import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier; + public interface RecordMetadata { /** - * @return The topic of the original record received from Kafka + * Returns the topic name of the current input record; could be {@code null} if it is not + * available. Review comment: Cool. Works for me.
[GitHub] [kafka] YiDing-Duke commented on pull request #10843: Minor: Log formatting for exceptions during configuration related operations
YiDing-Duke commented on pull request #10843: URL: https://github.com/apache/kafka/pull/10843#issuecomment-856973287 @dajac @kowshik @showuon could you help review this PR? thank you!
[GitHub] [kafka] jlprat commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
jlprat commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647670096 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/api/ProcessorContext.java ## @@ -53,9 +53,9 @@ TaskId taskId(); /** - * The metadata of the source record, if is one. Processors may be invoked to + * Return the metadata of the current record if available. Processors may be invoked to Review comment: ```suggestion * Returns the metadata of the current record if available. Processors may be invoked to ``` An `s` is missing.
[GitHub] [kafka] jlprat commented on pull request #10840: KAFKA-12849: KIP-744 TaskMetadata ThreadMetadata StreamsMetadata as API
jlprat commented on pull request #10840: URL: https://github.com/apache/kafka/pull/10840#issuecomment-856977741 Failure was:
```
[2021-06-08T14:01:59.439Z] FAILURE: Build failed with an exception.
[2021-06-08T14:01:59.439Z]
[2021-06-08T14:01:59.439Z] * What went wrong:
[2021-06-08T14:01:59.439Z] Execution failed for task ':core:integrationTest'.
[2021-06-08T14:01:59.439Z] > Process 'Gradle Test Executor 127' finished with non-zero exit value 1
[2021-06-08T14:01:59.439Z] This problem might be caused by incorrect test process configuration.
[2021-06-08T14:01:59.439Z] Please refer to the test execution section in the User Manual at https://docs.gradle.org/7.0.2/userguide/java_testing.html#sec:test_execution
```
On top of the common `RaftClusterTest` ones
[GitHub] [kafka] cshannon opened a new pull request #10845: KAFKA-12916: Add new AUTO_CREATE ACL for auto topic creation
cshannon opened a new pull request #10845: URL: https://github.com/apache/kafka/pull/10845 This will authorize a user to auto-create a topic with cluster defaults but prevent manual creation with overridden settings. The change is backwards compatible, as being granted CREATE also implies AUTO_CREATE. Ran through tests in the core module. Updated AclAuthorizerTest to test the new ACL inheritance. The full test suite still needs to be run, and some more tests may need to be added. ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
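The implication rule described in the PR (being granted CREATE also implies AUTO_CREATE) can be sketched as a simple set expansion. This is a minimal, self-contained illustration; the `AclOperation` enum and `expand` helper here are hypothetical stand-ins, not Kafka's actual authorizer API.

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical subset of ACL operations relevant to the change.
enum AclOperation { CREATE, AUTO_CREATE }

final class AclImplication {
    // Expand a set of granted operations with the operations they imply:
    // CREATE implies AUTO_CREATE, but not the other way around.
    static Set<AclOperation> expand(Set<AclOperation> granted) {
        Set<AclOperation> expanded = granted.isEmpty()
                ? EnumSet.noneOf(AclOperation.class)
                : EnumSet.copyOf(granted);
        if (expanded.contains(AclOperation.CREATE)) {
            expanded.add(AclOperation.AUTO_CREATE);
        }
        return expanded;
    }
}
```

A user holding only AUTO_CREATE would still be unable to create topics manually with overridden settings, which is the asymmetry the PR relies on for backwards compatibility.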
[GitHub] [kafka] mjsax commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
mjsax commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647695386 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/RecordContext.java ## @@ -17,39 +17,94 @@ package org.apache.kafka.streams.processor; import org.apache.kafka.common.header.Headers; +import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier; /** * The context associated with the current record being processed by * an {@link Processor} */ public interface RecordContext { + /** - * @return The offset of the original record received from Kafka; - * could be -1 if it is not available + * Returns the topic name of the current input record; could be {@code null} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated topic. + * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid topic name, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the topic name */ -long offset(); +String topic(); /** - * @return The timestamp extracted from the record received from Kafka; - * could be -1 if it is not available + * Returns the partition id of the current input record; could be {@code -1} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated partition id. 
+ * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid partition id, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the partition id */ -long timestamp(); +int partition(); /** - * @return The topic the record was received on; - * could be null if it is not available + * Returns the offset of the current input record; could be {@code -1} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated offset. + * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid offset, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the offset */ -String topic(); +long offset(); /** - * @return The partition the record was received on; - * could be -1 if it is not available + * Returns the current timestamp. + * + * If it is triggered while processing a record streamed from the source processor, + * timestamp is defined as the timestamp of the current input record; the timestamp is extracted from + * {@link org.apache.kafka.clients.consumer.ConsumerRecord ConsumerRecord} by {@link TimestampExtractor}. Review comment: Yes. Otherwise it renders `org.apache.kafka.clients.consumer.ConsumerRecord` but we only want to have the short `ConsumerRecord` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [kafka] mjsax commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
mjsax commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647696003 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/api/ProcessorContext.java ## @@ -53,9 +53,9 @@ TaskId taskId(); /** - * The metadata of the source record, if is one. Processors may be invoked to + * Return the metadata of the current record if available. Processors may be invoked to Review comment: We write JavaDocs imperatively. No `s` is intended.
[GitHub] [kafka] mjsax commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
mjsax commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647702594 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/RecordContext.java ## @@ -17,39 +17,94 @@ package org.apache.kafka.streams.processor; import org.apache.kafka.common.header.Headers; +import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier; /** * The context associated with the current record being processed by * an {@link Processor} */ public interface RecordContext { + /** - * @return The offset of the original record received from Kafka; - * could be -1 if it is not available + * Returns the topic name of the current input record; could be {@code null} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated topic. + * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid topic name, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the topic name */ -long offset(); +String topic(); /** - * @return The timestamp extracted from the record received from Kafka; - * could be -1 if it is not available + * Returns the partition id of the current input record; could be {@code -1} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated partition id. 
+ * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid partition id, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the partition id */ -long timestamp(); +int partition(); /** - * @return The topic the record was received on; - * could be null if it is not available + * Returns the offset of the current input record; could be {@code -1} if it is not + * available. + * + * For example, if this method is invoked within a {@link Punctuator#punctuate(long) + * punctuation callback}, or while processing a record that was forwarded by a punctuation + * callback, the record won't have an associated offset. + * Another example is + * {@link org.apache.kafka.streams.kstream.KTable#transformValues(ValueTransformerWithKeySupplier, String...)} + * (and siblings), that do not always guarantee to provide a valid offset, as they might be + * executed "out-of-band" due to some internal optimizations applied by the Kafka Streams DSL. + * + * @return the offset */ -String topic(); +long offset(); /** - * @return The partition the record was received on; - * could be -1 if it is not available + * Returns the current timestamp. Review comment: For `RecordContext` interface, from my understanding it cannot be `-1` (because `RecordContext` is only used in `TopicNameExtractor`) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
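The JavaDoc changes above establish a convention callers must honor: `topic()` may be `null` and `partition()`/`offset()` may be `-1` when no input record is associated (e.g. inside a punctuation callback). A minimal sketch of defensive use, assuming a hypothetical `RecordMetadataView` interface standing in for Kafka's `RecordContext`:

```java
// Hypothetical stand-in for the RecordContext accessors discussed above.
interface RecordMetadataView {
    String topic();   // null when not available
    int partition();  // -1 when not available
    long offset();    // -1 when not available
}

final class MetadataAwareProcessor {
    // Treat null / -1 sentinels as "no record metadata" before using them.
    static String describe(RecordMetadataView ctx) {
        if (ctx.topic() == null || ctx.partition() < 0 || ctx.offset() < 0) {
            return "no record metadata (e.g. punctuation-forwarded record)";
        }
        return ctx.topic() + "-" + ctx.partition() + "@" + ctx.offset();
    }
}
```

The same guard applies to records flowing through `KTable#transformValues` and siblings, which, as noted above, may execute "out-of-band".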
[jira] [Resolved] (KAFKA-12912) fdgfdgfg
[ https://issues.apache.org/jira/browse/KAFKA-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax resolved KAFKA-12912. - Resolution: Invalid > fdgfdgfg > > > Key: KAFKA-12912 > URL: https://issues.apache.org/jira/browse/KAFKA-12912 > Project: Kafka > Issue Type: Bug >Reporter: mushfiqur rahoman >Priority: Major > > fdgdfgdfgdfgdfgdfgfdgdfgdfgdfgdfgdgfgdfgdf -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [kafka] jlprat commented on a change in pull request #10810: MINOR: Improve Kafka Streams JavaDocs with regard to record metadata
jlprat commented on a change in pull request #10810: URL: https://github.com/apache/kafka/pull/10810#discussion_r647715006 ## File path: streams/src/main/java/org/apache/kafka/streams/processor/api/ProcessorContext.java ## @@ -53,9 +53,9 @@ TaskId taskId(); /** - * The metadata of the source record, if is one. Processors may be invoked to + * Return the metadata of the current record if available. Processors may be invoked to Review comment: I've seen you have changed all the others to say `Return`, so this is fine
[jira] [Assigned] (KAFKA-12914) StreamSourceNode.toString() throws with StreamsBuilder.stream(Pattern) ctor
[ https://issues.apache.org/jira/browse/KAFKA-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias J. Sax reassigned KAFKA-12914: --- Assignee: Matthias J. Sax > StreamSourceNode.toString() throws with StreamsBuilder.stream(Pattern) ctor > --- > > Key: KAFKA-12914 > URL: https://issues.apache.org/jira/browse/KAFKA-12914 > Project: Kafka > Issue Type: Bug > Components: streams >Affects Versions: 2.8.0 >Reporter: Will Bartlett >Assignee: Matthias J. Sax >Priority: Minor > > Hi, > I came across what looks like a bug. > h2. Repro > {code:java} > import org.apache.kafka.common.serialization.Serdes > import org.apache.kafka.streams.KafkaStreams > import org.apache.kafka.streams.StreamsBuilder > import org.apache.kafka.streams.kstream.Consumed > import java.util.* > import java.util.regex.Pattern > fun main() { > val builder = StreamsBuilder() > builder.stream(Pattern.compile("foo"), Consumed.with(Serdes.Long(), > Serdes.Long())) > val streams = KafkaStreams(builder.build(), Properties()) > streams.start() > } > {code} > {code:bash} > SLF4J: Failed toString() invocation on an object of type > [java.util.LinkedHashSet] > Reported exception: > java.lang.NullPointerException > at > java.base/java.util.Collections$UnmodifiableCollection.(Collections.java:1030) > at > java.base/java.util.Collections$UnmodifiableSet.(Collections.java:1132) > at > java.base/java.util.Collections.unmodifiableSet(Collections.java:1122) > at > org.apache.kafka.streams.kstream.internals.graph.SourceGraphNode.topicNames(SourceGraphNode.java:55) > at > org.apache.kafka.streams.kstream.internals.graph.StreamSourceNode.toString(StreamSourceNode.java:65) > at java.base/java.lang.String.valueOf(String.java:3352) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:166) > at > java.base/java.util.AbstractCollection.toString(AbstractCollection.java:457) > at > org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:277) > at > 
org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:249) > at > org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:211) > at > org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:161) > at > ch.qos.logback.classic.spi.LoggingEvent.getFormattedMessage(LoggingEvent.java:293) > at > ch.qos.logback.classic.spi.LoggingEvent.prepareForDeferredProcessing(LoggingEvent.java:206) > at > ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:223) > at > ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:102) > at > ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84) > at > ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51) > at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270) > at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257) > at > ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421) > at ch.qos.logback.classic.Logger.filterAndLog_2(Logger.java:414) > at ch.qos.logback.classic.Logger.debug(Logger.java:490) > at > org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:305) > at > org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:624) > at > org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:613) > at ApplicationKt.main(Application.kt:11) > at ApplicationKt.main(Application.kt) > SLF4J: Failed toString() invocation on an object of type > [org.apache.kafka.streams.kstream.internals.graph.StreamSourceNode] > Reported exception: > java.lang.NullPointerException > at > java.base/java.util.Collections$UnmodifiableCollection.(Collections.java:1030) > at > java.base/java.util.Collections$UnmodifiableSet.(Collections.java:1132) > at > java.base/java.util.Collections.unmodifiableSet(Collections.java:1122) > at > 
org.apache.kafka.streams.kstream.internals.graph.SourceGraphNode.topicNames(SourceGraphNode.java:55) > at > org.apache.kafka.streams.kstream.internals.graph.StreamSourceNode.toString(StreamSourceNode.java:65) > at > org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:277) > at > org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:249) > at > org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:211) > at > org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:161) > a
[GitHub] [kafka] mjsax opened a new pull request #10846: KAFKA-12914: StreamSourceNode should return `null` topic name for pattern subscription
mjsax opened a new pull request #10846: URL: https://github.com/apache/kafka/pull/10846 Call for review @cadonna
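The stack trace in KAFKA-12914 shows the root cause: with a `Pattern` subscription the topic-name set is `null`, and wrapping it in `Collections.unmodifiableSet(null)` throws an NPE from `toString()`. A simplified sketch of the shape of the fix (the `SourceNodeSketch` class is an illustrative stand-in, not the real `StreamSourceNode`):

```java
import java.util.Collections;
import java.util.Optional;
import java.util.Set;
import java.util.regex.Pattern;

final class SourceNodeSketch {
    private final Set<String> topics;  // null when subscribed by pattern
    private final Pattern pattern;     // null when subscribed by topic names

    SourceNodeSketch(Set<String> topics, Pattern pattern) {
        this.topics = topics;
        this.pattern = pattern;
    }

    // Fixed accessor: never passes null to Collections.unmodifiableSet().
    Optional<Set<String>> topicNames() {
        return topics == null
                ? Optional.empty()
                : Optional.of(Collections.unmodifiableSet(topics));
    }

    @Override
    public String toString() {
        // Falls back to the pattern instead of throwing during logging.
        return topicNames().map(Object::toString)
                           .orElseGet(() -> "pattern: " + pattern.pattern());
    }
}
```

With this guard, the debug logging in `InternalStreamsBuilder.buildAndOptimizeTopology` that triggered the original NPE would render the pattern instead of failing.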
[GitHub] [kafka] kpatelatwork commented on a change in pull request #10841: KAFKA-12482 Remove deprecated rest.host.name and rest.port configs
kpatelatwork commented on a change in pull request #10841: URL: https://github.com/apache/kafka/pull/10841#discussion_r647722430 ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java ## @@ -146,25 +146,6 @@ + "data to be committed in a future attempt."; public static final long OFFSET_COMMIT_TIMEOUT_MS_DEFAULT = 5000L; -/** - * @deprecated As of 1.1.0. Only used when listeners is not set. Use listeners instead. - */ -@Deprecated -public static final String REST_HOST_NAME_CONFIG = "rest.host.name"; Review comment: Thanks for noticing this, I added the description. ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java ## @@ -146,25 +146,6 @@ + "data to be committed in a future attempt."; public static final long OFFSET_COMMIT_TIMEOUT_MS_DEFAULT = 5000L; -/** - * @deprecated As of 1.1.0. Only used when listeners is not set. Use listeners instead. - */ -@Deprecated -public static final String REST_HOST_NAME_CONFIG = "rest.host.name"; -private static final String REST_HOST_NAME_DOC -= "Hostname for the REST API. If this is set, it will only bind to this interface.\n" + -"Deprecated, only used when listeners is not set. Use listeners instead."; - -/** - * @deprecated As of 1.1.0. Only used when listeners is not set. Use listeners instead. - */ -@Deprecated -public static final String REST_PORT_CONFIG = "rest.port"; Review comment: Thanks for noticing this, I added the description. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] kpatelatwork commented on a change in pull request #10841: KAFKA-12482 Remove deprecated rest.host.name and rest.port configs
kpatelatwork commented on a change in pull request #10841: URL: https://github.com/apache/kafka/pull/10841#discussion_r647724197 ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java ## @@ -304,8 +285,6 @@ protected static ConfigDef baseConfigDef() { Importance.LOW, OFFSET_COMMIT_INTERVAL_MS_DOC) .define(OFFSET_COMMIT_TIMEOUT_MS_CONFIG, Type.LONG, OFFSET_COMMIT_TIMEOUT_MS_DEFAULT, Importance.LOW, OFFSET_COMMIT_TIMEOUT_MS_DOC) -.define(REST_HOST_NAME_CONFIG, Type.STRING, null, Importance.LOW, REST_HOST_NAME_DOC) -.define(REST_PORT_CONFIG, Type.INT, REST_PORT_DEFAULT, Importance.LOW, REST_PORT_DOC) .define(LISTENERS_CONFIG, Type.LIST, null, Importance.LOW, LISTENERS_DOC) Review comment: Thanks @rhauch , It was an oversight, I just fixed it. ## File path: connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java ## @@ -105,32 +105,19 @@ public RestServer(WorkerConfig config) { createConnectors(listeners, adminListeners); } -@SuppressWarnings("deprecation") -List parseListeners() { -List listeners = config.getList(WorkerConfig.LISTENERS_CONFIG); -if (listeners == null || listeners.size() == 0) { -String hostname = config.getString(WorkerConfig.REST_HOST_NAME_CONFIG); - -if (hostname == null) -hostname = ""; - -listeners = Collections.singletonList(String.format("%s://%s:%d", PROTOCOL_HTTP, hostname, config.getInt(WorkerConfig.REST_PORT_CONFIG))); -} - -return listeners; -} - /** * Adds Jetty connector for each configured listener */ public void createConnectors(List listeners, List adminListeners) { List connectors = new ArrayList<>(); -for (String listener : listeners) { -if (!listener.isEmpty()) { -Connector connector = createConnector(listener); -connectors.add(connector); -log.info("Added connector for {}", listener); +if (listeners != null && !listeners.isEmpty()) { +for (String listener : listeners) { +if (!listener.isEmpty()) { +Connector connector = createConnector(listener); 
+connectors.add(connector); +log.info("Added connector for {}", listener); +} Review comment: Added the tests and validator
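After removing the deprecated `rest.host.name` and `rest.port` configs, the worker relies solely on the `listeners` list, with a fallback when none are configured. A minimal sketch of that fallback logic; the class name and the assumed default listener URL (`http://:8083`, matching the old `rest.port` default) are illustrative, not Kafka's actual constants:

```java
import java.util.Collections;
import java.util.List;

final class ListenerConfigSketch {
    // Assumed default when no listeners are configured.
    static final String DEFAULT_LISTENER = "http://:8083";

    // Return the configured listeners, or the default when the list is
    // null or empty (the case previously covered by rest.host.name/rest.port).
    static List<String> effectiveListeners(List<String> configured) {
        if (configured == null || configured.isEmpty()) {
            return Collections.singletonList(DEFAULT_LISTENER);
        }
        return configured;
    }
}
```

Adding a validator on `listeners` (as done in the PR) moves the error from connector-creation time to config-parsing time, which gives users a clearer failure message.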
[GitHub] [kafka] kpatelatwork commented on pull request #10841: KAFKA-12482 Remove deprecated rest.host.name and rest.port configs
kpatelatwork commented on pull request #10841: URL: https://github.com/apache/kafka/pull/10841#issuecomment-857065222 @rhauch and @wenbingshen I updated the PR with all review comments. When you get time, could you please check if it looks good now?
[jira] [Created] (KAFKA-12917) Watch Animated 'America: The Motion Picture' Trailer Full Hd Download-Now
mushfiqur rahoman created KAFKA-12917: - Summary: Watch Animated 'America: The Motion Picture' Trailer Full Hd Download-Now Key: KAFKA-12917 URL: https://issues.apache.org/jira/browse/KAFKA-12917 Project: Kafka Issue Type: Bug Reporter: mushfiqur rahoman America’ Trailer: Channing Tatum Voices Foul-Mouthed George Washington in Netflix Comedy Channing Tatum Leads a Revolution in America: The Motion Picture Trailer Channing Tatum, Simon Pegg, Killer Mike Star in First Trailer for America: The Motion Picture: Watch Online Full HD Free ### Watch Here ▶️▶️ [https://streamsable.com/movies/] Download Here ▶️▶️ [https://streamsable.com/movies/] ### To help usher in a (mostly Covid-free?) 4th of July weekend, Netflix has something special lined up: an animated film that’s an R-Rated take on the American Revolution. “America: The Motion Picture” offers a radically different take on the familiar history of America’s inception as a country. George Washington and other founding fathers rally the colonial troops to victory against the British but in a totally wild and anachronistic fashion. Here’s the official synopsis: READ MORE: Netflix Unveils Massive Summer Slate Teaser Trailer: New Films With Felicity Jones, Jason Mamoa, Shailene Woodley, Kevin Hart, Liam Neeson & More In this wildly tongue-in-cheek animated revisionist history, a chainsaw-wielding George Washington assembles a team of rabble-rousers — including beer-loving bro Sam Adams, famed scientist Thomas Edison, acclaimed horseman Paul Revere, and a very pissed off Geronimo — to defeat Benedict Arnold and King James in the American Revolution. Who will win? No one knows, but you can be sure of one thing: these are not your father’s Founding… uh, Fathers. Channing Tatum leads the voice cast as George Washington. Alongside him is Simon Pegg as King James, Bobby Moynihan as Paul Revere, Raoul Trujillo as Geronimo, and Jason Mantzoukas as Sam Adams. 
Judy Greer is also on board as Martha Dandridge, as is Olivia Munn as Thomas Edison (yes, you read that right). Will Forte, Andy Samberg, rapper Killer Mike, and Amber Nash are also part of the cast. READ MORE: The 100 Most Anticipated Films of 2021 Matt Thompson, one of the executive producers of the cult animated show “Archer,” directs David Callaham‘s (‘Wonder Woman;’ ‘Shang-Chi And The Legend Of The Ten Rings‘) screenplay. Thompson and Callaham also serve as producers with Adam Reed, also on the “Archer” team. Tatum also has a producer’s credit with Peter Kiernan and Reed Carolin under his Free Association company. Phil Lord and Christopher Miller, the dream team behind “The Lego Movie,” also serve as producers with Will Allegra through Lord Miller. READ MORE: Channing Tatum Reuniting With Lord And Miller For Universal Monster Movie What other crazy surprises does “America: The Motion Picture” has in store for its audience? Find out on June 30, when the film hits Netflix. Check out the trailer below. Channing Tatum's R-rated George Washington and the rest of the Founding Fathers unite in a trailer for Netflix's America: The Motion Picture. The trailer begins by reminding us this animated film comes "From the Founding Fathers who brought you Archer, Spider-Man: Into the Spider-Verse, The Expendables and Magic Mike." The Magic Mike part then comes into play when a scene of gyrating dancers with neon clothing is quickly shown. Next, we are introduced to Tatum's George Washington, who delivers the surprising declaration, "I'm George Washington. Let's go start a fucking revolution." Netflix has released a ridiculous trailer for its star-studded animated comedy “America: The Motion Picture,” which stars Channing Tatum as the voice of a beefed-up and vulgar George Washington in a satirical take on the American Revolution. The movie hails from “Archer” producer Matt Thompson, who directs a script by “Wonder Woman” writer Dave Callahan. 
With Tatum in an executive producer role alongside partner Reid Carolin as well as Phil Lord and Chris Miller, the wacky historical comedy is sure to be a hit with its target audience. Here’s the official synopsis: “For, like, thousands of years, the origins of the United States of America have remained shrouded in mystery, lost to the sands of time. Who built this ‘country tis of thee,’ and why? Only the dinosaurs know… until now. For the first time in human history, the incredible, completely true story of America’s origins are revealed in ‘America: The Motion Picture’ — a once-in-a-lifetime cultural event available the only way the Founding Fathers ever intended their story be told.” Netflix has released the official trailer and key art for their newest animated film America: The Motion Picture, featuring a voice cast led by Channing Tatum as George
[jira] [Created] (KAFKA-12918) Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd Download
mushfiqur rahoman created KAFKA-12918: - Summary: Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd Download Key: KAFKA-12918 URL: https://issues.apache.org/jira/browse/KAFKA-12918 Project: Kafka Issue Type: Bug Reporter: mushfiqur rahoman America’ Trailer: Channing Tatum Voices Foul-Mouthed George Washington in Netflix Comedy Channing Tatum Leads a Revolution in America: The Motion Picture Trailer Channing Tatum, Simon Pegg, Killer Mike Star in First Trailer for America: The Motion Picture: Watch Online Full HD Free ### Watch Here ▶️▶️ [https://streamsable.com/movies/] Download Here ▶️▶️ [https://streamsable.com/movies/] ### To help usher in a (mostly Covid-free?) 4th of July weekend, Netflix has something special lined up: an animated film that’s an R-Rated take on the American Revolution. “America: The Motion Picture” offers a radically different take on the familiar history of America’s inception as a country. George Washington and other founding fathers rally the colonial troops to victory against the British but in a totally wild and anachronistic fashion. Here’s the official synopsis: READ MORE: Netflix Unveils Massive Summer Slate Teaser Trailer: New Films With Felicity Jones, Jason Mamoa, Shailene Woodley, Kevin Hart, Liam Neeson & More In this wildly tongue-in-cheek animated revisionist history, a chainsaw-wielding George Washington assembles a team of rabble-rousers — including beer-loving bro Sam Adams, famed scientist Thomas Edison, acclaimed horseman Paul Revere, and a very pissed off Geronimo — to defeat Benedict Arnold and King James in the American Revolution. Who will win? No one knows, but you can be sure of one thing: these are not your father’s Founding… uh, Fathers. Channing Tatum leads the voice cast as George Washington. Alongside him is Simon Pegg as King James, Bobby Moynihan as Paul Revere, Raoul Trujillo as Geronimo, and Jason Mantzoukas as Sam Adams. 
Judy Greer is also on board as Martha Dandridge, as is Olivia Munn as Thomas Edison (yes, you read that right). Will Forte, Andy Samberg, rapper Killer Mike, and Amber Nash are also part of the cast. READ MORE: The 100 Most Anticipated Films of 2021 Matt Thompson, one of the executive producers of the cult animated show “Archer,” directs David Callaham‘s (‘Wonder Woman;’ ‘Shang-Chi And The Legend Of The Ten Rings‘) screenplay. Thompson and Callaham also serve as producers with Adam Reed, also on the “Archer” team. Tatum also has a producer’s credit with Peter Kiernan and Reed Carolin under his Free Association company. Phil Lord and Christopher Miller, the dream team behind “The Lego Movie,” also serve as producers with Will Allegra through Lord Miller. READ MORE: Channing Tatum Reuniting With Lord And Miller For Universal Monster Movie What other crazy surprises does “America: The Motion Picture” has in store for its audience? Find out on June 30, when the film hits Netflix. Check out the trailer below. Channing Tatum's R-rated George Washington and the rest of the Founding Fathers unite in a trailer for Netflix's America: The Motion Picture. The trailer begins by reminding us this animated film comes "From the Founding Fathers who brought you Archer, Spider-Man: Into the Spider-Verse, The Expendables and Magic Mike." The Magic Mike part then comes into play when a scene of gyrating dancers with neon clothing is quickly shown. Next, we are introduced to Tatum's George Washington, who delivers the surprising declaration, "I'm George Washington. Let's go start a fucking revolution." Netflix has released a ridiculous trailer for its star-studded animated comedy “America: The Motion Picture,” which stars Channing Tatum as the voice of a beefed-up and vulgar George Washington in a satirical take on the American Revolution. The movie hails from “Archer” producer Matt Thompson, who directs a script by “Wonder Woman” writer Dave Callahan. 
With Tatum in an executive producer role alongside partner Reid Carolin as well as Phil Lord and Chris Miller, the wacky historical comedy is sure to be a hit with its target audience. Here’s the official synopsis: “For, like, thousands of years, the origins of the United States of America have remained shrouded in mystery, lost to the sands of time. Who built this ‘country tis of thee,’ and why? Only the dinosaurs know… until now. For the first time in human history, the incredible, completely true story of America’s origins are revealed in ‘America: The Motion Picture’ — a once-in-a-lifetime cultural event available the only way the Founding Fathers ever intended their story be told.” Netflix has released the official trailer and key art for their newest animated film America: The Motion Picture, featuring a voice cast led by Channing Tatum as
[jira] [Created] (KAFKA-12919) Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd Download-Here
mushfiqur rahoman created KAFKA-12919: - Summary: Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd Download-Here Key: KAFKA-12919 URL: https://issues.apache.org/jira/browse/KAFKA-12919 Project: Kafka Issue Type: Bug Reporter: mushfiqur rahoman
[jira] [Resolved] (KAFKA-12917) Watch Animated 'America: The Motion Picture' Trailer Full Hd Download-Now
[ https://issues.apache.org/jira/browse/KAFKA-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Justine Olshan resolved KAFKA-12917. Resolution: Invalid > Watch Animated 'America: The Motion Picture' Trailer Full Hd Download-Now > Key: KAFKA-12917 > URL: https://issues.apache.org/jira/browse/KAFKA-12917 > Project: Kafka > Issue Type: Bug >Reporter: mushfiqur rahoman >Priority: Major
[jira] [Resolved] (KAFKA-12918) Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd Download
[ https://issues.apache.org/jira/browse/KAFKA-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guozhang Wang resolved KAFKA-12918. > Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd > Download > Key: KAFKA-12918 > URL: https://issues.apache.org/jira/browse/KAFKA-12918 > Project: Kafka > Issue Type: Bug >Reporter: mushfiqur rahoman >Priority: Major
[jira] [Closed] (KAFKA-12918) Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd Download
[ https://issues.apache.org/jira/browse/KAFKA-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guozhang Wang closed KAFKA-12918. > Watch-Free Animated 'America: The Motion Picture' Trailer 2021 Full Hd > Download > Key: KAFKA-12918 > URL: https://issues.apache.org/jira/browse/KAFKA-12918 > Project: Kafka > Issue Type: Bug >Reporter: mushfiqur rahoman >Priority: Major
[GitHub] [kafka] hachikuji commented on a change in pull request #10786: KAFKA-12787: Integrate controller snapshoting with raft client
hachikuji commented on a change in pull request #10786: URL: https://github.com/apache/kafka/pull/10786#discussion_r647803084 ## File path: metadata/src/main/java/org/apache/kafka/controller/SnapshotGenerator.java ## @@ -74,56 +74,49 @@ String name() { this.batch = null; this.section = null; this.numRecords = 0; -this.numWriteTries = 0; } /** * Returns the epoch of the snapshot that we are generating. */ long epoch() { -return writer.epoch(); +return writer.lastOffset(); Review comment: Is this correct? Seems likely to cause confusion if it is. ## File path: metadata/src/main/java/org/apache/kafka/controller/QuorumController.java ## @@ -1009,7 +999,7 @@ private QuorumController(LogContext logContext, snapshotRegistry, sessionTimeoutNs, replicaPlacer); this.featureControl = new FeatureControlManager(supportedFeatures, snapshotRegistry); this.producerIdControlManager = new ProducerIdControlManager(clusterControl, snapshotRegistry); -this.snapshotGeneratorManager = new SnapshotGeneratorManager(snapshotWriterBuilder); +this.snapshotGeneratorManager = new SnapshotGeneratorManager(raftClient::createSnapshot); Review comment: Passing through the function is a tad odd. We actually could just use the implicit reference to `raftClient`. Was this done for testing or something? ## File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala ## @@ -233,18 +233,40 @@ final class KafkaMetadataLog private ( log.topicId.get } - override def createSnapshot(snapshotId: OffsetAndEpoch): RawSnapshotWriter = { -// Do not let the state machine create snapshots older than the latest snapshot -latestSnapshotId().ifPresent { latest => - if (latest.epoch > snapshotId.epoch || latest.offset > snapshotId.offset) { -// Since snapshots are less than the high-watermark absolute offset comparison is okay. 
-throw new IllegalArgumentException( - s"Attempting to create a snapshot ($snapshotId) that is not greater than the latest snapshot ($latest)" -) - } + override def createSnapshot(snapshotId: OffsetAndEpoch): Optional[RawSnapshotWriter] = { +if (snapshots.contains(snapshotId)) { + Optional.empty() +} else { + Optional.of(FileRawSnapshotWriter.create(log.dir.toPath, snapshotId, Optional.of(this))) +} + } + + override def createSnapshotFromEndOffset(endOffset: Long): Optional[RawSnapshotWriter] = { +val highWatermarkOffset = highWatermark.offset +if (endOffset > highWatermarkOffset) { + throw new IllegalArgumentException( +s"Cannot create a snapshot for an end offset ($endOffset) greater than the high-watermark ($highWatermarkOffset)" + ) +} + +if (endOffset < startOffset) { + throw new IllegalArgumentException( +s"Cannot create a snapshot for an end offset ($endOffset) less than the log start offset ($startOffset)" + ) +} + +val epoch = log.leaderEpochCache.flatMap(_.findEpochEntryByEndOffset(endOffset)) match { + case Some(epochEntry) => +epochEntry.epoch + case None => +// Assume that the end offset falls in the current epoch since based on the check above: Review comment: This confuses me a little bit. The logic in `findEpochEntryByEndOffset` returns the first epoch which has a start offset less than the end offset. Wouldn't that already cover the case of the current epoch? It seems like the case that is uncovered is when the offset is smaller than the start offset of the first cached epoch, but that should be an error? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
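The bounds checks and the epoch fallback being debated in this diff can be modeled independently of the log implementation. The following is a hypothetical sketch, not the real `KafkaMetadataLog`: the names, and the assumption that the epoch cache is a list of `(epoch, startOffset)` entries sorted by start offset, are mine. The end offset must lie in `[logStartOffset, highWatermark]`, and the epoch resolves to the last cached entry whose start offset precedes the end offset, falling back to the current epoch when no entry qualifies.

```java
import java.util.List;
import java.util.Optional;

final class SnapshotOffsets {
    // Simplified stand-in for a leader-epoch cache entry.
    record EpochEntry(int epoch, long startOffset) { }

    // Pick the last cached epoch whose start offset is below the end offset;
    // fall back to the current epoch when no entry qualifies.
    static int epochForEndOffset(List<EpochEntry> entries, long endOffset, int currentEpoch) {
        Optional<EpochEntry> match = entries.stream()
            .filter(e -> e.startOffset() < endOffset)
            .reduce((first, second) -> second); // keep the last matching entry
        return match.map(EpochEntry::epoch).orElse(currentEpoch);
    }

    // Mirrors the two IllegalArgumentException checks in the diff above.
    static void validateEndOffset(long endOffset, long logStartOffset, long highWatermark) {
        if (endOffset > highWatermark)
            throw new IllegalArgumentException("Cannot create a snapshot for an end offset ("
                + endOffset + ") greater than the high-watermark (" + highWatermark + ")");
        if (endOffset < logStartOffset)
            throw new IllegalArgumentException("Cannot create a snapshot for an end offset ("
                + endOffset + ") less than the log start offset (" + logStartOffset + ")");
    }
}
```

In this model, hachikuji's question translates to: the fallback branch only fires when the end offset is at or below the start offset of the earliest cached entry, which is arguably an error rather than "current epoch".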
[GitHub] [kafka] gharris1727 commented on pull request #8259: KAFKA-7421: Ensure Connect's PluginClassLoader is truly parallel capable and resolve deadlock occurrences
gharris1727 commented on pull request #8259: URL: https://github.com/apache/kafka/pull/8259#issuecomment-857215526 @rhauch @gharris1727 I've applied the above feedback. PTAL, thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-12920) Consumer's cooperative sticky assignor need to clear generation / assignment data upon `onPartitionsLost`
Guozhang Wang created KAFKA-12920: - Summary: Consumer's cooperative sticky assignor need to clear generation / assignment data upon `onPartitionsLost` Key: KAFKA-12920 URL: https://issues.apache.org/jira/browse/KAFKA-12920 Project: Kafka Issue Type: Improvement Reporter: Guozhang Wang The consumer's cooperative-sticky assignor does not track the owned partitions inside the assignor --- i.e. when it resets its state in the event of ``onPartitionsLost``, the ``memberAssignment`` and ``generation`` inside the assignor are not cleared. This causes a member to join with an empty generation at the protocol level while its user data still encodes the old assignment (and hence passes the validation check on the broker side during JoinGroup), eventually causing a single partition to be assigned to multiple consumers within a generation. We should have the assignor also clear its assignment/generation when ``onPartitionsLost`` is triggered in order to avoid this scenario. Note that 1) for the regular sticky assignor the generation would still have an older value, and this would cause the previously owned partitions to be discarded during the assignment, and 2) for Streams' sticky assignor, its encoding would indeed be cleared along with ``onPartitionsLost``. Hence only the consumer's cooperative-sticky assignor has this issue to solve. -- This message was sent by Atlassian Jira (v8.3.4#803005)
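The proposed fix can be captured in a minimal model (this is a sketch, not the real `CooperativeStickyAssignor`; all names here are hypothetical): the assignor-side state has to be reset on `onPartitionsLost` so that the next JoinGroup does not advertise a stale generation or stale owned partitions in its user data.

```java
import java.util.Set;

// Toy model of assignor-side state that must be cleared on onPartitionsLost.
final class AssignorState {
    static final int DEFAULT_GENERATION = -1;

    private int generation = DEFAULT_GENERATION;
    private Set<String> ownedPartitions = Set.of();

    void onAssignment(Set<String> partitions, int generation) {
        this.ownedPartitions = partitions;
        this.generation = generation;
    }

    // Without this reset, a member that lost its partitions would rejoin with
    // stale user data and could end up sharing a partition with the new owner.
    void onPartitionsLost() {
        ownedPartitions = Set.of();
        generation = DEFAULT_GENERATION;
    }

    int generation() { return generation; }
    Set<String> ownedPartitions() { return ownedPartitions; }
}
```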
[jira] [Updated] (KAFKA-12920) Consumer's cooperative sticky assignor need to clear generation / assignment data upon `onPartitionsLost`
[ https://issues.apache.org/jira/browse/KAFKA-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guozhang Wang updated KAFKA-12920: -- Issue Type: Bug (was: Improvement) > Consumer's cooperative sticky assignor need to clear generation / assignment > data upon `onPartitionsLost` > - > > Key: KAFKA-12920 > URL: https://issues.apache.org/jira/browse/KAFKA-12920 > Project: Kafka > Issue Type: Bug >Reporter: Guozhang Wang >Priority: Major > Labels: bug, consumer > > Consumer's cooperative-sticky assignor does not track the owned partitions > inside the assignor --- i.e. when it reset its state in event of > ``onPartitionsLost``, the ``memberAssignment`` and ``generation`` inside the > assignor would not be cleared. This would cause a member to join with empty > generation on the protocol while with non-empty user-data encoding the old > assignment still (and hence pass the validation check on broker side during > JoinGroup), and eventually cause a single partition to be assigned to > multiple consumers within a generation. > We should let the assignor to also clear its assignment/generation when > ``onPartitionsLost`` is triggered in order to avoid this scenario. > Note that 1) for the regular sticky assignor the generation would still have > an older value, and this would cause the previously owned partitions to be > discarded during the assignment, and 2) for Streams' sticky assignor, it’s > encoding would indeed be cleared along with ``onPartitionsLost``. Hence only > Consumer's cooperative-sticky assignor have this issue to solve. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [kafka] gharris1727 edited a comment on pull request #8259: KAFKA-7421: Ensure Connect's PluginClassLoader is truly parallel capable and resolve deadlock occurrences
gharris1727 edited a comment on pull request #8259: URL: https://github.com/apache/kafka/pull/8259#issuecomment-857215526 @rhauch @kkonstantine I've applied the above feedback. PTAL, thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] dongjinleekr commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
dongjinleekr commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647902119 ## File path: core/src/main/scala/kafka/tools/ConsoleConsumer.scala ## @@ -248,7 +248,7 @@ object ConsoleConsumer extends Logging { val timeoutMsOpt = parser.accepts("timeout-ms", "If specified, exit if no message is available for consumption for the specified interval.") .withRequiredArg .describedAs("timeout_ms") - .ofType(classOf[java.lang.Integer]) + .ofType(classOf[java.lang.Long]) Review comment: As I described above, it causes a type problem in Scala: > As of present, `ConsoleConsumer` is taking timeout ms parameter as `Integer`. (see `ConsumerConfig#timeoutMsOpt`) For this reason, `ConsumerConfig#timeoutMs` is `Integer` and in turn, `timeoutMs` variable in `ConsoleConsumer#run` becomes `Any` - since it can either of `Integer` or `Long`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] dongjinleekr commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
dongjinleekr commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647902481 ## File path: core/src/main/scala/kafka/server/KafkaConfig.scala ## @@ -402,43 +402,43 @@ object KafkaConfig { val RackProp = "broker.rack" /** * Log Configuration ***/ val NumPartitionsProp = "num.partitions" - val LogDirsProp = "log.dirs" - val LogDirProp = "log.dir" - val LogSegmentBytesProp = "log.segment.bytes" - - val LogRollTimeMillisProp = "log.roll.ms" - val LogRollTimeHoursProp = "log.roll.hours" - - val LogRollTimeJitterMillisProp = "log.roll.jitter.ms" - val LogRollTimeJitterHoursProp = "log.roll.jitter.hours" - - val LogRetentionTimeMillisProp = "log.retention.ms" - val LogRetentionTimeMinutesProp = "log.retention.minutes" - val LogRetentionTimeHoursProp = "log.retention.hours" - - val LogRetentionBytesProp = "log.retention.bytes" - val LogCleanupIntervalMsProp = "log.retention.check.interval.ms" - val LogCleanupPolicyProp = "log.cleanup.policy" - val LogCleanerThreadsProp = "log.cleaner.threads" - val LogCleanerIoMaxBytesPerSecondProp = "log.cleaner.io.max.bytes.per.second" - val LogCleanerDedupeBufferSizeProp = "log.cleaner.dedupe.buffer.size" - val LogCleanerIoBufferSizeProp = "log.cleaner.io.buffer.size" - val LogCleanerDedupeBufferLoadFactorProp = "log.cleaner.io.buffer.load.factor" - val LogCleanerBackoffMsProp = "log.cleaner.backoff.ms" - val LogCleanerMinCleanRatioProp = "log.cleaner.min.cleanable.ratio" - val LogCleanerEnableProp = "log.cleaner.enable" - val LogCleanerDeleteRetentionMsProp = "log.cleaner.delete.retention.ms" - val LogCleanerMinCompactionLagMsProp = "log.cleaner.min.compaction.lag.ms" - val LogCleanerMaxCompactionLagMsProp = "log.cleaner.max.compaction.lag.ms" - val LogIndexSizeMaxBytesProp = "log.index.size.max.bytes" - val LogIndexIntervalBytesProp = "log.index.interval.bytes" - val LogFlushIntervalMessagesProp = "log.flush.interval.messages" - val LogDeleteDelayMsProp = 
"log.segment.delete.delay.ms" - val LogFlushSchedulerIntervalMsProp = "log.flush.scheduler.interval.ms" - val LogFlushIntervalMsProp = "log.flush.interval.ms" - val LogFlushOffsetCheckpointIntervalMsProp = "log.flush.offset.checkpoint.interval.ms" - val LogFlushStartOffsetCheckpointIntervalMsProp = "log.flush.start.offset.checkpoint.interval.ms" - val LogPreAllocateProp = "log.preallocate" + val LogDirsProp = LogConfigPrefix + "dirs" + val LogDirProp = LogConfigPrefix + "dir" + val LogSegmentBytesProp = LogConfigPrefix + "segment.bytes" + + val LogRollTimeMillisProp = LogConfigPrefix + "roll.ms" + val LogRollTimeHoursProp = LogConfigPrefix + "roll.hours" + + val LogRollTimeJitterMillisProp = LogConfigPrefix + "roll.jitter.ms" + val LogRollTimeJitterHoursProp = LogConfigPrefix + "roll.jitter.hours" + + val LogRetentionTimeMillisProp = LogConfigPrefix + "retention.ms" + val LogRetentionTimeMinutesProp = LogConfigPrefix + "retention.minutes" + val LogRetentionTimeHoursProp = LogConfigPrefix + "retention.hours" + + val LogRetentionBytesProp = LogConfigPrefix + "retention.bytes" + val LogCleanupIntervalMsProp = LogConfigPrefix + "retention.check.interval.ms" + val LogCleanupPolicyProp = LogConfigPrefix + "cleanup.policy" + val LogCleanerThreadsProp = LogConfigPrefix + "cleaner.threads" + val LogCleanerIoMaxBytesPerSecondProp = LogConfigPrefix + "cleaner.io.max.bytes.per.second" + val LogCleanerDedupeBufferSizeProp = LogConfigPrefix + "cleaner.dedupe.buffer.size" + val LogCleanerIoBufferSizeProp = LogConfigPrefix + "cleaner.io.buffer.size" + val LogCleanerDedupeBufferLoadFactorProp = LogConfigPrefix + "cleaner.io.buffer.load.factor" + val LogCleanerBackoffMsProp = LogConfigPrefix + "cleaner.backoff.ms" + val LogCleanerMinCleanRatioProp = LogConfigPrefix + "cleaner.min.cleanable.ratio" + val LogCleanerEnableProp = LogConfigPrefix + "cleaner.enable" + val LogCleanerDeleteRetentionMsProp = LogConfigPrefix + "cleaner.delete.retention.ms" + val 
LogCleanerMinCompactionLagMsProp = LogConfigPrefix + "cleaner.min.compaction.lag.ms" + val LogCleanerMaxCompactionLagMsProp = LogConfigPrefix + "cleaner.max.compaction.lag.ms" + val LogIndexSizeMaxBytesProp = LogConfigPrefix + "index.size.max.bytes" + val LogIndexIntervalBytesProp = LogConfigPrefix + "index.interval.bytes" + val LogFlushIntervalMessagesProp = LogConfigPrefix + "flush.interval.messages" + val LogDeleteDelayMsProp = LogConfigPrefix + "segment.delete.delay.ms" + val LogFlushSchedulerIntervalMsProp = LogConfigPrefix + "flush.scheduler.interval.ms" + val LogFlushIntervalMsProp = LogConfigPrefix + "flush.interval.ms" + val LogFlushOffsetCheckpointIntervalMsProp = LogConfigPrefix + "flush.offset.checkpoint.interval.ms" + val LogFlushStartOffsetCheckpointIntervalMsProp = LogConfigPrefix + "flush.start.offset.checkpoint.interval.ms" + val LogPreAllocateProp = LogConfigPrefix + "preallocate" Review comment:
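The refactoring in this diff is behavior-preserving only if each prefix-derived name equals the old string literal. A tiny sketch of that invariant (the real constant lives in `KafkaConfig`; its value `"log."` is assumed here):

```java
// Property names derived from a shared prefix instead of repeated literals.
final class LogProps {
    static final String LOG_CONFIG_PREFIX = "log."; // assumed prefix value

    static final String LOG_DIRS_PROP = LOG_CONFIG_PREFIX + "dirs";
    static final String LOG_RETENTION_TIME_MILLIS_PROP = LOG_CONFIG_PREFIX + "retention.ms";
    static final String LOG_CLEANUP_POLICY_PROP = LOG_CONFIG_PREFIX + "cleanup.policy";
}
```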
[GitHub] [kafka] mjsax commented on pull request #10846: KAFKA-12914: StreamSourceNode should return `null` topic name for pattern subscription
mjsax commented on pull request #10846: URL: https://github.com/apache/kafka/pull/10846#issuecomment-857329691 > Just a thought that do you think it's better to return empty set when null? For `SourceGraphNode` we set either `topicName` or `pattern` to `null`, and we also return `null` for the pattern case if `topicName` is used and `pattern` is `null`. So I thought it might be better aligned to just return `null` if `topicName` is `null`, too. But I don't have a strong opinion. As an afterthought, it might even be better to change both return types to `Optional`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] showuon commented on pull request #10846: KAFKA-12914: StreamSourceNode should return `null` topic name for pattern subscription
showuon commented on pull request #10846: URL: https://github.com/apache/kafka/pull/10846#issuecomment-857330899

> As an afterthought, it might even be better to change both return types to `Optional`?

Sounds good!
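The `Optional` idea agreed on above can be sketched as follows. This is a hypothetical illustration, not the actual `SourceGraphNode` implementation; the class and method names are simplified stand-ins.

```java
import java.util.Optional;
import java.util.regex.Pattern;

// Hypothetical sketch: exactly one of topicName / pattern is set, and both
// accessors return Optional so callers must handle the absent case explicitly
// instead of risking a NullPointerException on a plain null return.
class SourceNodeSketch {
    private final String topicName;   // null when a pattern subscription is used
    private final Pattern pattern;    // null when fixed topic names are used

    SourceNodeSketch(final String topicName, final Pattern pattern) {
        this.topicName = topicName;
        this.pattern = pattern;
    }

    Optional<String> topicName() {
        return Optional.ofNullable(topicName);
    }

    Optional<Pattern> topicPattern() {
        return Optional.ofNullable(pattern);
    }
}
```

The design benefit over returning `null` directly is that the absence of a topic name for pattern subscriptions becomes part of the method signature.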
[jira] [Created] (KAFKA-12921) Upgrade ZSTD JNI from 1.4.9-1 to 1.5.0-1
David Christle created KAFKA-12921:
--
Summary: Upgrade ZSTD JNI from 1.4.9-1 to 1.5.0-1
Key: KAFKA-12921
URL: https://issues.apache.org/jira/browse/KAFKA-12921
Project: Kafka
Issue Type: Improvement
Components: build
Affects Versions: 3.0.0
Reporter: David Christle
Fix For: 3.0.0
--
This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [kafka] mjsax commented on a change in pull request #10813: KAFKA-9559: Change default serde to be `null`
mjsax commented on a change in pull request #10813: URL: https://github.com/apache/kafka/pull/10813#discussion_r647928413

## File path: streams/src/main/java/org/apache/kafka/streams/processor/internals/SinkNode.java

```diff
@@ -60,8 +60,8 @@
 public void init(final InternalProcessorContext context) {
     this.context = context;
     final Serializer contextKeySerializer = ProcessorContextUtils.getKeySerializer(context);
     final Serializer contextValueSerializer = ProcessorContextUtils.getValueSerializer(context);
-    keySerializer = prepareKeySerializer(keySerializer, contextKeySerializer, contextValueSerializer);
-    valSerializer = prepareValueSerializer(valSerializer, contextKeySerializer, contextValueSerializer);
+    keySerializer = prepareKeySerializer(keySerializer, contextKeySerializer, contextValueSerializer, this.name());
+    valSerializer = prepareValueSerializer(valSerializer, contextKeySerializer, contextValueSerializer, this.name());
```

Review comment: My argument was mainly about unifying code, i.e., trying to avoid `null`-checks in different places and instead doing the `null`-check in one unified place (to avoid forgetting the `null`-check). Might be good to get the opinion of others on this question.
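The "unified `null`-check" argument above can be sketched as a single helper through which every serializer resolution flows. This is a hedged illustration with made-up names, not Kafka's actual `prepareKeySerializer` implementation.

```java
// Hypothetical sketch of centralizing the null-check: the explicit serializer
// wins, the context default is the fallback, and the "nothing configured"
// error is raised in exactly one place so no call site can forget the check.
final class SerializerPrep {
    static <T> T prepare(final T explicit, final T fromContext, final String nodeName) {
        final T result = explicit != null ? explicit : fromContext;
        if (result == null) {
            // Single, unified failure point for a missing default serde.
            throw new IllegalStateException(
                "No serializer set for node " + nodeName + " and no default was configured");
        }
        return result;
    }

    private SerializerPrep() { }
}
```

Passing the node name into the helper (as the diff does with `this.name()`) lets this one error message say which node is misconfigured.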
[GitHub] [kafka] mjsax commented on a change in pull request #10813: KAFKA-9559: Change default serde to be `null`
mjsax commented on a change in pull request #10813: URL: https://github.com/apache/kafka/pull/10813#discussion_r647929736

## File path: streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordDeserializer.java

```diff
@@ -78,6 +79,10 @@
     throw new StreamsException("Fatal user code error in deserialization error callback", fatalUserException);
 }
+if (deserializationException instanceof ConfigException) {
```

Review comment: My thought was that for a `ConfigException` we are doomed to fail anyway, so it does not seem to make sense to call the handler and allow the user to "swallow" the exception by returning `CONTINUE`. Also, even if the user does return `CONTINUE`, it seems we could ignore it, rethrow the `ConfigException`, and die anyway, which seems to defeat the purpose of calling the exception handler to begin with.
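The control flow being argued for above, i.e. treating a configuration error as fatal before the user handler can swallow it, can be sketched like this. The classes here are simplified stand-ins defined locally for illustration, not Kafka's real `ConfigException` or deserialization handler API.

```java
// Local stand-in for a configuration error (the real one lives in
// org.apache.kafka.common.config).
class ConfigException extends RuntimeException {
    ConfigException(final String msg) { super(msg); }
}

// Stand-in for the user handler's verdict.
enum HandlerResponse { CONTINUE, FAIL }

final class DeserializationFlow {
    static HandlerResponse handle(final RuntimeException deserializationException) {
        // A ConfigException means the application is misconfigured and doomed
        // to fail anyway, so rethrow immediately: the user handler never gets
        // a chance to return CONTINUE and swallow it.
        if (deserializationException instanceof ConfigException) {
            throw deserializationException;
        }
        // For ordinary deserialization errors, the real code would consult the
        // configured DeserializationExceptionHandler here.
        return HandlerResponse.CONTINUE;
    }

    private DeserializationFlow() { }
}
```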
[GitHub] [kafka] dchristle opened a new pull request #10847: KAFKA-12921: Upgrade ZSTD JNI from 1.4.9-1 to 1.5.0-1
dchristle opened a new pull request #10847: URL: https://github.com/apache/kafka/pull/10847

This PR aims to upgrade `zstd-jni` from `1.4.9-1` to `1.5.0-1`. This change will incorporate a number of bug fixes and performance improvements made in the `1.4.x` branch and `1.5.0` of `zstd`:
- https://github.com/facebook/zstd/releases/tag/v1.5.0
- https://github.com/facebook/zstd/releases/tag/v1.4.9
- https://github.com/facebook/zstd/releases/tag/v1.4.8
- https://github.com/facebook/zstd/releases/tag/v1.4.7
- https://github.com/facebook/zstd/releases/tag/v1.4.5
- https://github.com/facebook/zstd/releases/tag/v1.4.4
- https://github.com/facebook/zstd/releases/tag/v1.4.3
- https://github.com/facebook/zstd/releases/tag/v1.4.2

The most recent `1.5.0` release offers +25%-140% (compression) and +15% (decompression) performance improvements. Since this is a dependency change, this should pass all the existing CIs.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
[GitHub] [kafka] showuon commented on a change in pull request #10794: KAFKA-12677: parse envelope response to check if not_controller error existed
showuon commented on a change in pull request #10794: URL: https://github.com/apache/kafka/pull/10794#discussion_r647946747

## File path: build.gradle

```diff
@@ -829,6 +829,7 @@
 testImplementation(libs.jfreechart) {
     exclude group: 'junit', module: 'junit'
 }
+testImplementation libs.mockitoInline // supports mocking static methods, final classes, etc.
```

Review comment: @ijuma, I've removed the `mockitoInline` library and mocked with the original Mockito lib instead. Thanks.
[GitHub] [kafka] showuon commented on a change in pull request #10794: KAFKA-12677: parse envelope response to check if not_controller error existed
showuon commented on a change in pull request #10794: URL: https://github.com/apache/kafka/pull/10794#discussion_r647946840

## File path: core/src/main/scala/kafka/server/BrokerToControllerChannelManager.scala

```diff
@@ -350,7 +372,8 @@
 } else if (response.wasDisconnected()) {
     updateControllerAddress(null)
     requestQueue.putFirst(queueItem)
-} else if (response.responseBody().errorCounts().containsKey(Errors.NOT_CONTROLLER)) {
+} else if (response.responseBody().errorCounts().containsKey(Errors.NOT_CONTROLLER) ||
+    maybeCheckNotControllerErrorInsideEnvelopeResponse(queueItem.requestHeader, response.responseBody())) {
```

Review comment: @wenbingshen, thanks for the comment. I've addressed it. Thank you.
[GitHub] [kafka] ijuma commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
ijuma commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647966061

## File path: core/src/main/scala/kafka/tools/ConsoleConsumer.scala

```diff
@@ -248,7 +248,7 @@
 val timeoutMsOpt = parser.accepts("timeout-ms", "If specified, exit if no message is available for consumption for the specified interval.")
   .withRequiredArg
   .describedAs("timeout_ms")
-  .ofType(classOf[java.lang.Integer])
+  .ofType(classOf[java.lang.Long])
```

Review comment: You could change the code in `run` to return `Int.MaxValue` instead of `Long.MaxValue`. That would not impact the command line parameters.
[GitHub] [kafka] socutes commented on pull request #10749: KAFKA-12773: Use UncheckedIOException when wrapping IOException
socutes commented on pull request #10749: URL: https://github.com/apache/kafka/pull/10749#issuecomment-857382741

@mumrah Code format fix completed! Thank you very much for your review.
[GitHub] [kafka] dchristle commented on pull request #10847: KAFKA-12921: Upgrade ZSTD JNI from 1.4.9-1 to 1.5.0-1
dchristle commented on pull request #10847: URL: https://github.com/apache/kafka/pull/10847#issuecomment-857396796

retest this please
[GitHub] [kafka] dongjinleekr commented on a change in pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
dongjinleekr commented on a change in pull request #10678: URL: https://github.com/apache/kafka/pull/10678#discussion_r647996559

## File path: core/src/main/scala/kafka/tools/ConsoleConsumer.scala

```diff
@@ -248,7 +248,7 @@
 val timeoutMsOpt = parser.accepts("timeout-ms", "If specified, exit if no message is available for consumption for the specified interval.")
   .withRequiredArg
   .describedAs("timeout_ms")
-  .ofType(classOf[java.lang.Integer])
+  .ofType(classOf[java.lang.Long])
```

Review comment: Oh, I found a much simpler solution: call `toLong` in `ConsoleConsumer#run`.

```scala
val timeoutMs = if (conf.timeoutMs >= 0) conf.timeoutMs.toLong else Long.MaxValue
```
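The same idea expressed in the Scala snippet above, i.e. keep the CLI option as an int and widen to `long` with a `Long.MAX_VALUE` sentinel only where the value is consumed, looks like this in Java (illustrative helper, not Kafka code):

```java
final class TimeoutArg {
    // A negative CLI value means "no timeout": widen the int to long and
    // substitute the Long.MAX_VALUE sentinel, so the option type on the
    // command line stays an Integer.
    static long effectiveTimeoutMs(final int cliTimeoutMs) {
        return cliTimeoutMs >= 0 ? (long) cliTimeoutMs : Long.MAX_VALUE;
    }

    private TimeoutArg() { }
}
```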
[GitHub] [kafka] dongjinleekr commented on pull request #10678: TRIVIAL: Fix type inconsistencies, unthrown exceptions, etc
dongjinleekr commented on pull request #10678: URL: https://github.com/apache/kafka/pull/10678#issuecomment-857413304

Rebased onto the latest trunk, addressing @ijuma's comment.
[GitHub] [kafka] satishd opened a new pull request #10848: MINOR Updated transaction index as optional in LogSegmentData.
satishd opened a new pull request #10848: URL: https://github.com/apache/kafka/pull/10848

- Updated the transaction index to be optional in `LogSegmentData`.
- Added a unit test for the introduced change.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
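An optional transaction index as described above might be modeled like this. This is a loose, hypothetical sketch of the shape of such a class; the real `LogSegmentData` carries more fields and may use different types.

```java
import java.nio.file.Path;
import java.util.Optional;

// Hypothetical sketch: the transaction index is wrapped in Optional because a
// segment may legitimately have no transaction index, and callers should be
// forced to handle that case rather than receive a null path.
class SegmentDataSketch {
    private final Path logSegment;
    private final Optional<Path> transactionIndex;

    SegmentDataSketch(final Path logSegment, final Optional<Path> transactionIndex) {
        this.logSegment = logSegment;
        this.transactionIndex = transactionIndex;
    }

    Path logSegment() {
        return logSegment;
    }

    Optional<Path> transactionIndex() {
        return transactionIndex;
    }
}
```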
[GitHub] [kafka] satishd commented on pull request #10848: MINOR Updated transaction index as optional in LogSegmentData.
satishd commented on pull request #10848: URL: https://github.com/apache/kafka/pull/10848#issuecomment-857423198

@junrao: Please take a look at this minor PR, which makes the transaction index optional in `LogSegmentData`, as we discussed earlier.
[GitHub] [kafka] socutes edited a comment on pull request #10749: KAFKA-12773: Use UncheckedIOException when wrapping IOException
socutes edited a comment on pull request #10749: URL: https://github.com/apache/kafka/pull/10749#issuecomment-857382741

@mumrah Code format fix completed! Thank you very much for your review. Before submitting the code, I ran `./gradlew checkstyleMain checkstyleTest` to verify that the code style conforms to the standard. As for code formatting, do you have any documents or tools to suggest?