This is indeed very helpful.  Thank you!

It seems that I have 2 paths available to me:
1. Generate interest.
2. Externalize my needs to some sort of proxy/preprocessing layer, such as Kafka Connect (a rough sketch of that path is below).

The issue with #2 is that it requires more complex infrastructure, which is
costly in architecture, deployment, and operations.
When we are talking about doing this at scale, it's painful.
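
Just to make that path concrete for myself, here is a minimal sketch of what the Kafka Connect option might look like: a Single Message Transform that enforces a required header on every record flowing through a connector. The class name and the "header.key" config are purely illustrative, not from the KIP or from any existing connector.

    import java.util.Map;
    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.ConnectRecord;
    import org.apache.kafka.connect.errors.DataException;
    import org.apache.kafka.connect.transforms.Transformation;

    // Hypothetical SMT: rejects any record that is missing a required header.
    public class RequireHeader<R extends ConnectRecord<R>> implements Transformation<R> {

        private String headerKey;

        @Override
        public void configure(Map<String, ?> configs) {
            headerKey = (String) configs.get("header.key");
        }

        @Override
        public R apply(R record) {
            if (record.headers().lastWithName(headerKey) == null) {
                throw new DataException("Missing required header: " + headerKey);
            }
            return record; // pass the record through unchanged
        }

        @Override
        public ConfigDef config() {
            return new ConfigDef().define("header.key", ConfigDef.Type.STRING,
                    ConfigDef.Importance.HIGH, "Header that every record must carry");
        }

        @Override
        public void close() {
        }
    }

Of course, this runs in a Connect worker, not on the brokers, which is exactly the extra moving part I was complaining about above.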

Again, thank you for finding the link, I'll study it. Maybe I can whip up a
meaningful demo and get buy-in that way.
Really appreciate your feedback.
Thank you, Andrew.


> On Dec 3, 2024, at 3:10 PM, Andrew Schofield 
> <andrew_schofield_j...@outlook.com> wrote:
> 
> Hi Max,
> Thanks for the KIP.
> 
> Discussion on KIPs essentially progresses by community members being interested
> in the ideas being proposed. A KIP can only become accepted once interest and
> consensus have been built. That is not necessarily an easy thing to achieve.
> The type of KIP that you are proposing is very difficult. People are nervous
> about running user-supplied code in such a critical code path.
> 
> As mentioned earlier in the discussion thread, there have been several ideas
> to do with server-side interceptors and none of them managed to get
> sufficient support to become accepted. The most recent KIP like this
> that I can remember was KIP-940.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-940:+Broker+extension+point+for+validating+record+contents+at+produce+time
> 
> That one dealt with the data at a higher level of abstraction than your KIP
> (records not requests), and it got as far as voting, but it didn't manage to
> get the votes to become accepted. Actual request parsing seems too low-level
> and concrete to me. The Kafka protocol changes all the time. There have
> already been 12 versions of the Produce request/response.
> 
> People quite often use Kafka proxies to intercept and manipulate requests
> on the way to the broker. I wonder if you can achieve what you need without
> running code on the Kafka brokers themselves.
> 
> I hope this context is helpful.
> 
> Thanks,
> Andrew
> ________________________________________
> From: Maxim Fortun <m...@maxf.net>
> Sent: 02 December 2024 22:00
> To: dev@kafka.apache.org <dev@kafka.apache.org>
> Subject: Re: [DISCUSS] KIP-1086: Add ability to specify a custom produce request parser.
> 
> I could totally use some guidance on how (and with whom) to have a discussion
> about this feature.
> Are KIPs addressed in the order they are submitted?
> What is the likelihood of getting this, or a similar feature, into the 4.0
> release?
> Again, the purpose of this is to allow specifying a custom produce (maybe
> others?) request parser.
> Thanks
> Thanks
> 
>> On Sep 3, 2024, at 4:42 PM, Maxim Fortun <m...@maxf.net> wrote:
>> 
>> Based on feedback from Kirk and Colin, I added configuring the parser class
>> name via server.properties, added some tests, and updated the docs to
>> reflect this.
>> I find the config file name by re-parsing the command line. If anyone knows
>> a better way of passing KafkaConfig to static initialization, I'd appreciate
>> a nudge in the right direction. It's not the most efficient way of
>> retrieving configs, but it is done only once at load time, so the overhead
>> should be negligible while providing a consistent location for all configs.
>> I have also left in the system prop and env ways of passing this config; a
>> sketch of the three options is below.
>> Hopefully this is ok and is not considered code bloat.
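>>
>> To make that concrete, something along these lines is what I have in mind.
>> The property, system property, and env var names here are placeholders for
>> illustration only, not necessarily what the KIP/PR actually uses:
>>
>>     # server.properties (placeholder key name)
>>     produce.request.parser.class=com.example.MyProduceRequestParser
>>
>>     # or as a system property at broker startup (placeholder name)
>>     -Dproduce.request.parser.class=com.example.MyProduceRequestParser
>>
>>     # or as an environment variable (placeholder name)
>>     PRODUCE_REQUEST_PARSER_CLASS=com.example.MyProduceRequestParser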
>> 
>> Thanks!
>> Max
>> 
>> 
>>> On Aug 29, 2024, at 10:25 AM, Maxim Fortun <m...@maxf.net> wrote:
>>> 
>>> Hi all,
>>> I would like to introduce a minor code change to allow custom produce 
>>> request parsers.
>>> 
>>> KIP: 
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=318606528
>>> JIRA: https://issues.apache.org/jira/browse/KAFKA-17348
>>> PR: https://github.com/apache/kafka/pull/16812
>>> 
>>> There are many potential benefits to this feature. A custom produce
>>> request parser would make it possible to intercept all incoming messages
>>> before they get into the broker and apply broker-wide logic to them. This
>>> could be a trace, a filter, a transform (such as lineage), forcing required
>>> headers across all messages, compression, signing, encryption, or any other
>>> manipulation of the message before it reaches the broker.
>>> 
>>> Please take a look.
>>> Any and all feedback is greatly appreciated.
>>> Thanks,
>>> Max
>>> 
>>> 
>>> 
>>> 
>> 
> 
