Re: [EXTERNAL] Re: Security vulnerabilities in kafka:2.13-2.6.0/2.7.0 docker image

2021-11-01 Thread Colin McCabe
It seems like your image does not show up on the mailing list. best, Colin On Wed, Sep 1, 2021, at 06:26, Ashish Patil wrote: > Hi Team > > I tried upgrading it to 2.13_2.8.0 but still have these vulnerabilities. > > What is your suggestion on this? > > Thanks > Ashish > > *From:*

Re: Stream to KTable internals

2021-11-01 Thread Guozhang Wang
Hello Chad, From your earlier comment, you mentioned "In my scenario the records were written to the KTable topic before the record was written to the KStream topic." So I think Matthias and others have excluded this possibility while trying to help investigate. If only the matching records from

Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Guozhang Wang
Hello Pushkar, I'm assuming you have the same Kafka version (2.5.1) on the Streams client side here: in those old versions, Kafka Streams relies on the embedded Producer clients to handle timeouts, which requires users to configure such values correctly. In newer versions (2.8+) we have made Kafka
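Guozhang's two points can be illustrated with a small config sketch. The property names below are real Kafka configs (Streams forwards any key with the `producer.` prefix to its embedded producers, and KIP-572 added `task.timeout.ms` in 2.8); the values are illustrative assumptions, not recommendations:

```java
import java.util.Properties;

public class StreamsTimeoutConfigSketch {
    static Properties build() {
        Properties props = new Properties();
        // On 2.5.x, timeouts are handled by the embedded producer, so the
        // producer-level settings must be tuned directly. Streams passes any
        // "producer."-prefixed key through to its internal producers.
        props.put("producer.delivery.timeout.ms", "120000"); // default: 2 min
        props.put("producer.request.timeout.ms", "30000");   // default: 30 s
        // From 2.8 on (KIP-572), Streams retries timed-out operations itself,
        // bounded by task.timeout.ms (default: 5 minutes).
        props.put("task.timeout.ms", "300000");
        return props;
    }
}
```

The difference matters for upgrades: on 2.5.x only the producer-level knobs apply, while on 2.8+ `task.timeout.ms` becomes the primary bound.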

Re: Stream to KTable internals

2021-11-01 Thread Matthias J. Sax
Timestamp synchronization is not perfect, and as a matter of fact, we fixed a few gaps in the 3.0.0 release. We actually hope that we closed the last gaps in 3.0.0... *fingers-crossed* :) > We are using a timestamp extractor that returns 0. You can do this, and it effectively "disables" timestamp
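The "extractor that returns 0" approach can be sketched as follows. To keep the sketch compilable on its own, a minimal local stand-in is declared for `org.apache.kafka.streams.processor.TimestampExtractor` (the real interface's `extract()` takes a `ConsumerRecord<Object, Object>`); the constant return value is the only point being illustrated:

```java
// Stand-in for org.apache.kafka.streams.processor.TimestampExtractor,
// declared locally so the sketch compiles without the kafka-streams jar.
interface TimestampExtractor {
    long extract(Object record, long partitionTime);
}

// Returning the same timestamp for every record means no record is ever
// "ahead" of another, which effectively disables timestamp synchronization
// between the KStream and KTable sides of the join.
class ZeroTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(Object record, long partitionTime) {
        return 0L;
    }
}
```

With the real API, such a class is registered through the `default.timestamp.extractor` Streams config.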

Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Matthias J. Sax
As the error message suggests, you can increase `max.block.ms` for this case: If a broker is down, it may take some time for the producer to fail over to a different broker (before the producer can fail over, the broker must elect a new partition leader, and only afterward can inform the produc
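The `max.block.ms` increase Matthias suggests is a plain producer config; in Kafka Streams it is passed with the `producer.` prefix. A minimal sketch (the 5-minute value is an illustrative assumption sized to outlast a leader election, not a recommendation):

```java
import java.util.Properties;

public class MaxBlockConfigSketch {
    // max.block.ms bounds how long Producer#send() and related calls may
    // block, e.g. while waiting for fresh metadata after a leader change.
    // The default is 60000 ms; raising it gives the cluster more time to
    // elect a new partition leader before the producer gives up.
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put("producer.max.block.ms", "300000"); // 5 min, illustrative
        return props;
    }
}
```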

Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Matthias J. Sax
The `Producer#send()` call is actually not covered by the KIP because it may result in data loss if we try to handle the timeout directly. -- Kafka Streams does not have a copy of the data in the producer's send buffer and thus we cannot retry the `send()`. -- Instead, it's necessary to re-proc

Re: Producer Timeout issue in kafka streams task

2021-11-01 Thread Luke Chen
Hi Pushkar, In addition to Matthias's and Guozhang's clear answers and explanations, I think there's still one thing you should focus on: > I could see that 2 of the 3 brokers restarted at the same time. It's a 3-broker cluster in total, and suddenly 2 of them are broken. You should try to find out th