Re: [DISCUSS] KIP-1066: Mechanism to cordon brokers and log directories

2024-07-14 Thread Kamal Chandraprakash
Hi Mickael,

In the BrokerHeartbeatRequest.json, the flexibleVersions are bumped from
"0+" to "1+". Is it a typo?


On Fri, Jul 12, 2024 at 11:42 PM David Arthur  wrote:

> Mickael, thanks for the KIP! I think this could be quite a useful feature.
>
> DA1: Having to know each of the log dirs for a broker seems a bit
> inconvenient for cases where we want to cordon off a whole broker. I do
> think having the ability to cordon off a specific log dir is useful for
> JBOD, but I imagine a common case might be to cordon off the whole broker.
>
> DA2: Looks like the new "cordoned.log.dirs" can be configured statically
> and updated dynamically per-broker. What do you think about a new metadata
> record and RPC instead of using a config? From my understanding, the
> BrokerRegistration/Heartbeat is more about the lifecycle of a broker
> whereas cordoning a broker is an operator driven action. It might make
> sense to have a separate record for this. We could include additional
> fields like a timestamp, a reason/comment field (e.g., "decommissioning",
> "disk failure", "new broker" etc), stuff like that.
>
> This would also allow cordoning to be done while a broker is offline or
> before it has been provisioned. Not sure how likely that is, but might be
> useful?
>
> DA3: Can we consider having a configuration to enable/disable the new
> replica placer behavior? This would be separate from the new
> MetadataVersion for the RPC/record changes.
>
> DA4: In the Motivation section, you mention the cluster expansion scenario.
> For this scenario, is the expectation that the operator will cordon off the
> existing full brokers so placements only happen on the new brokers?
>
> Cheers,
> David
>
> On Fri, Jul 12, 2024 at 8:53 AM Mickael Maison 
> wrote:
>
> > Hi Kamal,
> >
> > Thanks for taking a look at the KIP!
> >
> > I briefly considered that option initially but I found it not very
> > practical once you have more than a few cordoned log directories.
> > I find your example already not very easy to read, and it only has
> > 2 entries. Also, if the configuration is at the cluster level, it's
> > not easy to see if a broker has all its log directories cordoned, and
> > you still need to describe a specific broker's configuration to find
> > the "name" of a log directory you want to cordon.
> >
> > I think an easy way to get an overall view of the cordoned log
> > directories/brokers will be via the kafka-log-dirs.sh tool. I am also
> > considering adding metrics like we have today for LogDirectoryOffline
> > to ease monitoring.
> >
> > Thanks,
> > Mickael
> >
> > On Thu, Jul 11, 2024 at 8:41 PM Kamal Chandraprakash
> >  wrote:
> > >
> > > Hi Mickael,
> > >
> > > Thanks for the KIP!
> > >
> > > This is a useful feature which helps to decommission nodes by
> > > essentially
> > > creating an exclude-broker list for new replica placement.
> > >
> > > To cordon a list of brokers, we have to apply the config on each of the
> > > broker nodes,
> > > and similarly, to see the list of cordoned brokers, we have to either
> > query
> > > each individual broker's
> > > config in the cluster or query each broker's log directories.
> > >
> > > Can we move the configuration from the broker to cluster level? (eg)
> > >
> > > bin/kafka-configs.sh --bootstrap-server localhost:9092
> --broker-defaults
> > > --alter --add-config
> > > cordoned.log.dirs=[broker.id.0]:[log.dir.0],[broker.id.1]:[log.dir.1]
> > >
> > > This will provide a consistent view of the list of cordoned brokers in
> > the
> > > cluster.
> > >
> > > --
> > > Kamal
> > >
> > > On Wed, Jul 10, 2024 at 7:53 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Luke,
> > > >
> > > > 4. You're right this scenario can happen. In this case I think the
> > > > broker should enforce its new state and not create the replica as all
> > > > its log directories are now cordoned. The replica will be offline and
> > > > an administrator would need to reassign it to another broker. I
> expect
> > > > most users will rely on kafka-configs.sh to manage the cordoned log
> > > > directories instead of updating the broker properties, so it should
> > > > not be a common issue.
> > > >
> > > > 6. To resolve this issue, the user can uncordon a log directory and
> > > > retry an operation that triggers the creation of the internal topics.
> > > > For example, starting a consumer that uses a group should make the
> > > > broker retry creating the __consumer_offsets topic.
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > > On Wed, Jul 10, 2024 at 4:14 PM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > > >
> > > > > Hi Chia-Ping,
> > > > >
> > > > > Question 1) Yes that's a good summary. I'd also add that managing
> > > > > cordoned log directories is intended to be done by cluster
> > > > > administrators who also know about operations in-progress or
> planned
> > > > > such as scaling or adding/removing log directories. In practice you
> > > > > can't expect use

Re: [DISCUSS] KIP-1051 Statically configured log replication throttling

2024-07-14 Thread Harry Fallows
Hi Kamal,

Thank you for reading KIP-1051!

Yes, it's true that it can impact regular replication traffic. However, network 
throughput is bounded so regardless of whether we allow it as a config in Kafka 
or not, there is always a chance that replication traffic will get throttled. 
Having it as a config will at least ensure that the entire bandwidth is not 
taken up by replication traffic.

I agree, the nature of the leader replication throttling is dependent on how 
many followers there are; however, I don't think it's dependent on the 
partition assignment strategy or the number of brokers; it should only be 
dependent on the replication factor. I think it's key to point out here that 
these configurations do not need to be "optimised" for use cases with different 
replication factors; they just need to be set to match the infrastructure that 
they are deployed in. For example, if you have a maximum network bandwidth of 
200MB/s and a replication factor of 3, you may set 
follower.replication.throttled.rate to 150MB/s to reserve some bandwidth 
for other traffic (e.g. producing and consuming). In this case, if you start 
with all replicas in sync, I don't think it's possible for the follower 
throttling to be the sole cause of a replica falling out of sync. It may be the 
case that it takes longer for an out-of-sync replica to become in sync, but in 
that case the replication throttling just serves to prevent other traffic 
(e.g. producer traffic to a different partition) from getting throttled. Even so, it 
is possible that misconfiguring these values could cause issues, so the 
potential consequences should be clearly documented.

I think the concern about producing spikes causing ISR issues is only an issue 
if these values are poorly configured. I think in general if these values are 
always configured as >= (replicationFactor/(replicationFactor+1))*maxBandwidth 
(e.g. like the above example: 3/(3+1) * 200 = 150), then even if 100% of the 
non-replication traffic is producer traffic, all followers should be able to 
stay in sync.
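
To make the arithmetic concrete, here is a small illustrative snippet (the class and method names are made up; it simply restates the rule of thumb above and is not an actual Kafka config or API):

```java
public class ThrottleHeadroom {
    // Reserve 1/(replicationFactor + 1) of the link for non-replication traffic,
    // as argued above: throttle = RF / (RF + 1) * maxBandwidth.
    static long followerThrottleBytesPerSec(long maxBandwidthBytesPerSec, int replicationFactor) {
        return maxBandwidthBytesPerSec * replicationFactor / (replicationFactor + 1);
    }

    public static void main(String[] args) {
        long maxBandwidth = 200L * 1024 * 1024; // 200 MB/s link
        long throttle = followerThrottleBytesPerSec(maxBandwidth, 3);
        // Prints 150 (MB/s), leaving 50 MB/s of headroom for produce/consume traffic.
        System.out.println(throttle / (1024 * 1024) + " MB/s");
    }
}
```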

I like the idea of emitting a metric for when a quota is breached. What do you 
think about having it as a gauge for the number of partitions that are currently 
leader- or follower-throttled (similar to the URP metric)?

Kind regards,
Harry

On Thursday, 11 July 2024 at 19:02, Kamal Chandraprakash 
 wrote:

> Hi Harry Fallows,
>
> Thanks for the KIP!
>
> I went over both KIP-1051 and KIP-1009. Assume that
> leader.replication.throttled.replicas
> and follower.replication.throttled.replicas are set to the wildcard (*) so they
> apply to all the partitions on the
> broker. If we set a static value for the leader and follower replication
> throttled rate, then it might impact
> the normal replication traffic.
>
> Throttling rate depends on the number of brokers in the cluster. If the
> cluster contains 100+ brokers, then
> the leader.replication.throttled.rate is shared across all the followers.
> The number of followers reading
> data from the leader depends on the partition assignment strategy. If the
> leader replication throttle is breached,
> then the followers might fail to catch up with the leader.
>
> If there are sudden spikes in a specific set of topics/partitions in the
> cluster, then the replicas might fail to join
> the ISR, which can impact cluster reliability. If we are going with this
> proposal, then we may also have to emit
> a metric to inform the administrator that the leader/follower replication
> quota is breached.
>
> --
> Kamal
>
> On Thu, Jul 4, 2024 at 8:10 PM Harry Fallows
> harryfall...@protonmail.com.invalid wrote:
>
> > Hi everyone,
> >
> > Bumping this one last time before I call a vote. Please take a look if
> > you're interested in replication throttling and/or static/dynamic config.
> >
> > Kind regards,
> > Harry
> >
> > On Thursday, 13 June 2024 at 19:39, Harry Fallows <
> > harryfall...@protonmail.com.INVALID> wrote:
> >
> > > Hi Hector,
> > >
> > > I did see your colleague's KIP, and I actually mentioned it in the KIP
> > > that I have written. As I see it, both of these KIPs move towards more
> > > easily configurable replication throttling and both should be implemented.
> > > KIP-1009 makes it easier to enable throttling and KIP-1051 makes it easier
> > > to apply a throttle rate. I did try to look at supporting KIP-1009 in the
> > > discussion thread, however, I only subscribed to the mailing list after it
> > > was published and I couldn't figure out how to respond to it in Pony mail.
> > > I would definitely be interested in partnering up to get both changes
> > > across the line, whether that be by combining them or supporting both
> > > individually (I'm not sure which is best, this is my first contribution!).
> > >
> > > I also see that KAFKA-10190 is mentioned in KIP-1009 as a related
> > > ticket. Coincidentally, I raised a PR to address this bug a couple of days
> > > ago (https://github.com/apache/kafka/pull/16280). I thi

Re: Proposal to Include Record Headers in ProducerInterceptor onAcknowledgement Method

2024-07-14 Thread Matthias J. Sax

Hey,

Resurrecting KIP-512 might be a good idea. Just follow up on the Jira 
ticket and/or old DISCUSS thread, and ask to take over -- if the 
original author does not reply in a few days, just take it on; given 
that the KIP is many years old, I would be surprised if anybody objects 
that you resurrect it and take over.




However, I would personally not add Headers to RecordMetadata, but 
follow the idea of passing Headers as an additional parameter (which was your 
original proposal, and was also mentioned as an alternative on the KIP-512 
discuss thread).


As pointed out in the KIP-512 discussion, RecordMetadata is "Kafka 
metadata" about the records (sent back by the broker), like the offset. 
Headers seem to be of a different quality because they are 
"user/application metadata", and I believe it would become a convoluted 
API to add them to RecordMetadata, where Headers IMHO do not belong.


But it seems we are already starting a discussion which should actually 
take place on the KIP discuss thread :) -- And it's of course just my 
personal opinion. In the end, the question will be what the majority of 
people believe is better.




The other argument on KIP-512 was that no change is necessary at all 
because Headers already exist client side. While I see the point, I 
don't really agree with it... In the end, while technically correct, it's 
quite complicated and clumsy to keep a copy of the Headers in the 
application code, link them back, and pass them into an interceptor or 
callback to be used there. So letting the producer pass them into the 
interceptor and/or callback seems to be a good improvement for user 
experience and ease of use.
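
For concreteness, a rough sketch of what such an additional-parameter variant could look like. The three-argument overload below is hypothetical and only meant for the KIP discussion; today's interface only has the two-argument onAcknowledgement:

```java
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.header.Headers;

public interface ProducerInterceptor<K, V> extends Configurable {

    ProducerRecord<K, V> onSend(ProducerRecord<K, V> record);

    // Existing callback, unchanged.
    void onAcknowledgement(RecordMetadata metadata, Exception exception);

    // Hypothetical overload: the producer would pass the record's Headers
    // alongside the broker-provided metadata. A default implementation keeps
    // existing interceptors source- and binary-compatible.
    default void onAcknowledgement(RecordMetadata metadata, Exception exception, Headers headers) {
        onAcknowledgement(metadata, exception);
    }

    void close();
}
```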




Overall, it seems that the motivation must be clearly explained and the 
argument of "not needed" must be overcome in a convincing way to get the 
KIP accepted. The API design question seems to be secondary, and 
I am sure an agreement can be found if people agree that a change is 
needed in the first place.




-Matthias


On 7/13/24 5:15 PM, Rich C. wrote:

Hi Matthias,

Thank you for the positive feedback. I am new to Kafka contributions and
have just requested a Confluence account for creating the KIP. I am
currently waiting for the account creation.

I noticed that KIP-512,
which addressed a similar issue with headers, was proposed five years ago.
Do you suggest that I edit and improve KIP-512 (pending agreement from the
original author) or create a new KIP?

My previous email focused on WHY we need this feature. This follow-up email
will discuss **HOW** we can implement it.

**Implementation Options**

1. **Option 1: Add Headers to RecordMetadata** (Original KIP-512 Approach)
- **Prototype**: [
https://github.com/jychen7/kafka/pull/1](https://github.com/jychen7/kafka/pull/1)

2. **Option 2: Add a New onAcknowledgement Method with an Additional
Parameter for Headers or Producer Record**
- This option was brainstormed from the original thread

- **Prototype**: [
https://github.com/jychen7/kafka/pull/2](https://github.com/jychen7/kafka/pull/2)

I prefer Option 1 because it is backward compatible, does not introduce any
new API, and does not require deprecating any existing APIs. In my opinion,
headers are a form of metadata, and RecordMetadata is created on the client
side without needing extra information from the broker.

What are your thoughts on these options? If there are other suggestions, I
am open to considering them as well.

Best regards,
Rich


On Sat, Jul 13, 2024 at 1:50 PM Matthias J. Sax  wrote:


Hi Rich,

thanks for reaching out. I don't see any reason to object to this
proposal. The best way forward would be to write a KIP and collect
feedback from there.


-Matthias

On 7/12/24 9:14 PM, Rich C. wrote:

Dear Kafka Development Community,

I propose enhancing the Kafka ProducerInterceptor interface, specifically
to make Record Headers available in the onAcknowledgement method. I would
appreciate your feedback on this proposal. If the feedback is positive, I
will follow up with a detailed discussion on implementing this feature.

*Current State*

At present, the topic, partition, offset, and timestamp are available in
the onAcknowledgement method. However, headers are not accessible.

*Why This Feature Is Important*

Two primary use cases highlight the importance of making headers

available

in the onAcknowledgement method:

1. *Latency Measurement*

Latency measurement is crucial for understanding the time taken for
messages to travel from the producer to Kafka and back to the producer as
an acknowledgment. The current setup does not allow for precise

measurement

of the producer-side latency (a) in the following scenario:

```
producer send -> (a) -> Kafka -> (b) -> acknowledge
```

 - If using CreateTime, the calculation is: `now - message.timestamp =
(a) + (b)`

[jira] [Created] (KAFKA-17134) Restarting a server (same JVM) configured for OAUTHBEARER fails with RejectedExecutionException

2024-07-14 Thread Keith Wall (Jira)
Keith Wall created KAFKA-17134:
--

 Summary: Restarting a server (same JVM) configured for OAUTHBEARER 
fails with RejectedExecutionException
 Key: KAFKA-17134
 URL: https://issues.apache.org/jira/browse/KAFKA-17134
 Project: Kafka
  Issue Type: Bug
Reporter: Keith Wall


If you programmatically restart a server (3.7.1) configured for OAUTHBEARER 
within the same JVM, the startup attempt fails with the stack trace given 
below.

The issue is that a closed VerificationKeyResolver gets left behind in the
OAuthBearerValidatorCallbackHandler.VERIFICATION_KEY_RESOLVER_CACHE
after the server is shut down. On restart, as the server's config is unchanged, 
the closed VerificationKeyResolver gets reused. The 
ScheduledThreadPoolExecutor is already in a closed state, so the init call 
fails.
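
The underlying mechanics can be shown with a plain JDK example (a standalone sketch, not the Kafka code itself; the class name and comments are illustrative):

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ClosedExecutorReuse {
    public static void main(String[] args) {
        // First "server start": the resolver creates its refresh executor.
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);

        // "Server shutdown": the resolver is closed and its executor terminated,
        // but the resolver object stays behind in the static cache.
        executor.shutdownNow();

        // "Server restart" with unchanged config: the cached, already-closed
        // resolver is reused and init() tries to schedule the JWKS refresh task.
        try {
            executor.scheduleAtFixedRate(() -> { }, 0, 60, TimeUnit.SECONDS);
        } catch (RejectedExecutionException e) {
            // Same failure as surfaced in the stack trace below.
            System.out.println("init failed: " + e);
        }
    }
}
```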
 
A reproducer for this problem is found here: 
[https://github.com/k-wall/oauth_bearer_leak/blob/main/src/main/java/OAuthBearerValidatorLeak.java#L51]

The reproducer can be used with this OAuth Server:

docker run --rm -p 8080:8080 ghcr.io/navikt/mock-oauth2-server:2.1.8

 

Exception in thread "main" org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: The OAuth validator configuration encountered an error when initializing the VerificationKeyResolver
    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:184)
    at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
    at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
    at kafka.network.Processor.<init>(SocketServer.scala:973)
    at kafka.network.Acceptor.newProcessor(SocketServer.scala:879)
    at kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:849)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
    at kafka.network.Acceptor.addProcessors(SocketServer.scala:848)
    at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:523)
    at kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:251)
    at kafka.network.SocketServer.$anonfun$new$31(SocketServer.scala:175)
    at kafka.network.SocketServer.$anonfun$new$31$adapted(SocketServer.scala:175)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:576)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:574)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
    at kafka.network.SocketServer.<init>(SocketServer.scala:175)
    at kafka.server.BrokerServer.startup(BrokerServer.scala:255)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
    at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
    at scala.Option.foreach(Option.scala:437)
    at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
    at OAuthBearerValidatorLeak.main(OAuthBearerValidatorLeak.java:51)
Caused by: org.apache.kafka.common.KafkaException: The OAuth validator configuration encountered an error when initializing the VerificationKeyResolver
    at org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallbackHandler.init(OAuthBearerValidatorCallbackHandler.java:146)
    at org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallbackHandler.configure(OAuthBearerValidatorCallbackHandler.java:136)
    at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:151)
    ... 21 more
Caused by: java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4f66ffc8[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@1bc49bc5[Wrapped task = org.apache.kafka.common.security.oauthbearer.internals.secured.RefreshingHttpsJwks$$Lambda/0x0001373c7c88@7b6e5c12]] rejected from java.util.concurrent.ScheduledThreadPoolExecutor@39e67516[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2081)
    at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:841)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.scheduleAtFixedRate(ScheduledThreadPoolExecutor.java:632)
    at java.base/java.util.concurrent.Executors$DelegatedScheduledExecutorService.scheduleAtFixedRate(Executors.java:870)
    at org.apache.kafka.common.security.oauthbearer.internals.secured.RefreshingHttpsJwks.init(RefreshingHttpsJwks.java:198)

Re: [PR] Update powered-by_adding SPITHA.html [kafka-site]

2024-07-14 Thread via GitHub


VictorParkM commented on PR #611:
URL: https://github.com/apache/kafka-site/pull/611#issuecomment-2227518263

   Hi Matthias,
   
   I fixed all the flaws regarding ASF guidelines.
   
   
   Regards,
   
   Victor Park
   
   JeongHun Park
   
   Business Development / Team Manager
   
   SPITHA INC.
   43, Daesagwan-ro, Yongsan-gu, 04401
   SEOUL, SOUTH KOREA
   M) +82.010.4321.2320
   E) ***@***.*** ***@***.***>
   
   *spitha.io *
   
   
   
   The content of this email is confidential and intended for the recipient
   specified in message only. It is strictly forbidden to share any part of
   this message with any third party, without a written consent of the sender.
   If you received this message by mistake, please reply to this message and
   follow with its deletion, so that we can ensure such a mistake does not
   occur in the future.
   
   
   On Tue, Jul 9, 2024 at 9:55 AM, Matthias J. Sax ***@***.***> wrote:
   
   > @VictorParkM  -- thanks for updating your
   > webpage adding ® to Apache Kafka. However, you should also have a
   > disclaimer on the web page about it.
   >
   > Apache®, Apache Kafka®, Kafka®, and associated open source project names
   > are trademarks of the Apache Software Foundation 
   >
   > Cf confluent.io, which has a similar thing at
   > the bottom of their landing page.
   >
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: Proposal to Include Record Headers in ProducerInterceptor onAcknowledgement Method

2024-07-14 Thread Rich C.
Hi Matthias,

Thank you for your response. I have left a comment in JIRA to the original
author about taking over KIP-512. In the meantime, I am still waiting for
my Confluence account to be created to edit the KIP.

Overall, it seems that the motivation must be clearly explained and the
> argument of "not needed" must be overcome in a convincing way, to get the
> KIP accepted


I agree completely. That’s why my initial email focused on explaining why
this feature is necessary.

But it seems we are already starting a discussion which should actually
> take place on the KIP discuss thread


Once I have edited KIP-512, I will start a new KIP discussion thread, and
we can continue our discussion there.

Regards,
Rich


On Sun, Jul 14, 2024 at 1:05 PM Matthias J. Sax  wrote:

> Hey,
>
> Resurrecting KIP-512 might be a good idea. Just follow up on the Jira
> ticket and/or old DISCUSS thread, and ask to take over -- if the
> original author does not reply in a few days, just take it on; given
> that the KIP is many years old, I would be surprised if anybody objects
> that you resurrect it and take over.
>
>
>
> However, I would personally not add Headers to RecordMetadata, but
> follow the idea of passing Headers as an additional parameter (which was your
> original proposal, and was also mentioned as an alternative on the KIP-512
> discuss thread).
>
> As pointed out in the KIP-512 discussion, RecordMetadata is "Kafka
> metadata" about the records (sent back by the broker), like the offset.
> Headers seem to be of a different quality because they are
> "user/application metadata", and I believe it would become a convoluted
> API to add them to RecordMetadata, where Headers IMHO do not belong.
>
> But it seems we are already starting a discussion which should actually
> take place on the KIP discuss thread :) -- And it's of course just my
> personal opinion. In the end, the question will be what the majority of
> people believe is better.
>
>
>
> The other argument on KIP-512 was that no change is necessary at all
> because Headers already exist client side. While I see the point, I
> don't really agree with it... In the end, while technically correct, it's
> quite complicated and clumsy to keep a copy of the Headers in the
> application code, link them back, and pass them into an interceptor or
> callback to be used there. So letting the producer pass them into the
> interceptor and/or callback seems to be a good improvement for user
> experience and ease of use.
>
>
>
> Overall, it seems that the motivation must be clearly explained and the
> argument of "not needed" must be overcome in a convincing way to get the
> KIP accepted. The API design question seems to be secondary, and
> I am sure an agreement can be found if people agree that a change is
> needed in the first place.
>
>
>
> -Matthias
>
>
> On 7/13/24 5:15 PM, Rich C. wrote:
> > Hi Matthias,
> >
> > Thank you for the positive feedback. I am new to Kafka contributions and
> > have just requested a Confluence account for creating the KIP. I am
> > currently waiting for the account creation.
> >
> > I noticed that KIP-512
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3AAdding+headers+to+RecordMetaData
> >,
> > which addressed a similar issue with headers, was proposed five years
> ago.
> > Do you suggest that I edit and improve KIP-512 (pending agreement from
> the
> > original author) or create a new KIP?
> >
> > My previous email focused on WHY we need this feature. This follow-up
> email
> > will discuss **HOW** we can implement it.
> >
> > **Implementation Options**
> >
> > 1. **Option 1: Add Headers to RecordMetadata** (Original KIP-512
> Approach)
> > - **Prototype**: [
> >
> https://github.com/jychen7/kafka/pull/1](https://github.com/jychen7/kafka/pull/1)
> >
> > 2. **Option 2: Add a New onAcknowledgement Method with an Additional
> > Parameter for Headers or Producer Record**
> > - This option was brainstormed from the original thread
> > 
> > - **Prototype**: [
> >
> https://github.com/jychen7/kafka/pull/2](https://github.com/jychen7/kafka/pull/2)
> >
> > I prefer Option 1 because it is backward compatible, does not introduce
> any
> > new API, and does not require deprecating any existing APIs. In my
> opinion,
> > headers are a form of metadata, and RecordMetadata is created on the
> client
> > side without needing extra information from the broker.
> >
> > What are your thoughts on these options? If there are other suggestions,
> I
> > am open to considering them as well.
> >
> > Best regards,
> > Rich
> >
> >
> > On Sat, Jul 13, 2024 at 1:50 PM Matthias J. Sax 
> wrote:
> >
> >> Hi Rich,
> >>
> >> thanks for reaching out. I don't see any reason to object to this
> >> proposal. The best way forward would be to write a KIP and collect
> >> feedback from there.
> >>
> >>
> >> -Matthias
> >>
> >> On 7/12/24 9:14 PM, Rich C. wrote:
> >>> Dear Kafka Develop

[jira] [Created] (KAFKA-17135) Add unit test for `ProducerStateManager#readSnapshot` and `ProducerStateManager#writeSnapshot`

2024-07-14 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17135:
--

 Summary: Add unit test for `ProducerStateManager#readSnapshot` and 
`ProducerStateManager#writeSnapshot`
 Key: KAFKA-17135
 URL: https://issues.apache.org/jira/browse/KAFKA-17135
 Project: Kafka
  Issue Type: Test
Reporter: Chia-Ping Tsai


We are going to introduce generated code to `ProducerStateManager`, so it would 
be nice to increase the test coverage for now.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-17129) Revisit FindCoordinatorResponse in KafkaConsumerTest

2024-07-14 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17129.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Revisit FindCoordinatorResponse in KafkaConsumerTest
> 
>
> Key: KAFKA-17129
> URL: https://issues.apache.org/jira/browse/KAFKA-17129
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: PoAn Yang
>Assignee: PoAn Yang
>Priority: Major
> Fix For: 3.9.0
>
>
> Currently, we have many test cases that put 
> `client.prepareResponseFrom(FindCoordinatorResponse.prepareResponse(...), 
> ...);` after `newConsumer`. If `FutureResponse` is not in `MockClient`, the 
> request can't get a response. This may cause some flaky tests.
>  
> In our KafkaConsumerTest design, when starting a `newConsumer` for 
> `AsyncKafkaConsumer`, it always sends `FindCoordinatorRequest`.
>  
> In `MockClient#send`, if a `FutureResponse` is missing, the request will be 
> added to `requests`. Even if `client.prepareResponseFrom` adds a new 
> `FutureResponse`, it can't be matched to an existing request, so the request 
> can't get a response.
>  
> It's better to put 
> `client.prepareResponseFrom(FindCoordinatorResponse.prepareResponse(...), 
> ...);` before `newConsumer`, so we don't miss any `FindCoordinatorRequest`.
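
A rough sketch of the ordering described above, as a fragment inside an existing test method (helper names such as newConsumer, groupId, and coordinator node follow KafkaConsumerTest conventions; exact signatures may differ):

```java
// Prepare the FindCoordinator response BEFORE creating the consumer, so the
// FindCoordinatorRequest sent during construction matches a FutureResponse
// instead of sitting unanswered in MockClient's request queue.
client.prepareResponseFrom(
        FindCoordinatorResponse.prepareResponse(Errors.NONE, groupId, node), node);

KafkaConsumer<String, String> consumer = newConsumer(time, client, subscription,
        metadata, assignor, true, groupInstanceId);
```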



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3106

2024-07-14 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-17136) Added Documentation for Shell Methods

2024-07-14 Thread Arnav Dadarya (Jira)
Arnav Dadarya created KAFKA-17136:
-

 Summary: Added Documentation for Shell Methods
 Key: KAFKA-17136
 URL: https://issues.apache.org/jira/browse/KAFKA-17136
 Project: Kafka
  Issue Type: Task
  Components: docs
Reporter: Arnav Dadarya


Added documentation to the longer (and harder to follow) methods in the shell 
programs

GitHub Pull Request:



--
This message was sent by Atlassian Jira
(v8.20.10#820010)