configure brokers with 2 listeners

2021-07-15 Thread Claudia Kesslau
Hi, I tried to add a second listener to my Kafka brokers running in Docker, like this:
listeners=INTERNAL://{{getenv "KAFKA_SERVER_IP"}}:{{getenv "KAFKA_PORT" "9092"}},EXTERNAL://0.0.0.0:{{getenv "KAFKA_PORT_EXTERNAL" "9096"}}
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
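A complete dual-listener setup usually also needs `advertised.listeners` (the addresses clients are told to connect back to) and `inter.broker.listener.name`. A minimal sketch, with placeholder host names (`broker1` and `broker1.example.com` are illustrative, not from the original post):

```properties
# server.properties (sketch; host names are placeholders)
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9096
# What clients receive in metadata responses; must be reachable from each client's network.
advertised.listeners=INTERNAL://broker1:9092,EXTERNAL://broker1.example.com:9096
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

With Docker, the usual pitfall is advertising the container-internal address on the EXTERNAL listener; clients outside the Docker network must be given a host name or IP they can actually resolve and reach.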

Confluent's parallel consumer

2021-07-15 Thread Pushkar Deole
Hi All, and Antony (author of the article below): I came across this article, which seemed interesting: Introducing Confluent’s Parallel Consumer Message Processing Client. I would like to use the key-level ordering

Re: Confluent's parallel consumer

2021-07-15 Thread Israel Ekpo
Hi Pushkar, If you are selecting key-based ordering, you should not be concerned about other keys from the same partition being committed first. If that is a concern for your use case, then you should go with partition-based ordering to ensure that the events are processed in the sequence the
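The difference between the two ordering modes can be sketched without the library itself. This is a conceptual model, not the actual parallel-consumer API: key ordering shards a partition's records into one queue per key (queues can run concurrently, but each key stays in offset order), while partition ordering keeps one strictly sequential queue per partition.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Conceptual sketch of key-level vs partition-level ordering (not the library's code).
public class OrderingSketch {
    record Rec(int partition, long offset, String key) {}

    // Key ordering: one queue per key; distinct keys may be processed concurrently,
    // but records sharing a key remain in offset order within their queue.
    static Map<String, List<Rec>> byKey(List<Rec> recs) {
        Map<String, List<Rec>> queues = new LinkedHashMap<>();
        for (Rec r : recs) queues.computeIfAbsent(r.key(), k -> new ArrayList<>()).add(r);
        return queues;
    }

    // Partition ordering: one queue per partition, processed strictly sequentially.
    static Map<Integer, List<Rec>> byPartition(List<Rec> recs) {
        Map<Integer, List<Rec>> queues = new LinkedHashMap<>();
        for (Rec r : recs) queues.computeIfAbsent(r.partition(), k -> new ArrayList<>()).add(r);
        return queues;
    }

    public static void main(String[] args) {
        List<Rec> recs = List.of(
            new Rec(1, 30, "key1"), new Rec(1, 40, "key2"), new Rec(1, 50, "key1"));
        // Key ordering: key1's queue is [30, 50], key2's is [40]. key2@40 may finish
        // before key1@30, but key1@50 never runs before key1@30.
        System.out.println(byKey(recs));
        // Partition ordering: partition 1's queue is [30, 40, 50], fully sequential.
        System.out.println(byPartition(recs));
    }
}
```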

Re: Confluent's parallel consumer

2021-07-15 Thread Pushkar Deole
Well... with key-level ordering, I am mainly concerned about event loss, if any, in the scenario below:
1. event1 with key1 and event2 with key2 are both part of the same partition1
2. the key1 event has offset 30 while the key2 event has offset 40
3. key2 is processed by a background thread and offset

Re: Confluent's parallel consumer

2021-07-15 Thread Israel Ekpo
Hi Pushkar, When you use the term “node/instance”, are you referring to the Kafka brokers or the consuming clients that are retrieving events from the broker? Could you please elaborate/clarify? On Thu, Jul 15, 2021 at 10:00 AM Pushkar Deole wrote: > Well... with key-level ordering, i am mainly

Re: Confluent's parallel consumer

2021-07-15 Thread Pushkar Deole
It is the consumer client node that has received the events and is processing them... On Thu, Jul 15, 2021 at 8:49 PM Israel Ekpo wrote: > Hi Pushkar, > > When you use the term “node/instance” are you referring to the Kafka > Brokers or the consuming clients that are retrieving events from the > broker

Re: Confluent's parallel consumer

2021-07-15 Thread Israel Ekpo
Hi Pushkar, Based on what I understand about the library, I don't think you need to worry about data loss, because there are mechanisms in place to track which offsets have been processed in the event that something goes wrong during processing. If the processing client goes offline or is unresponsive
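A minimal sketch of the kind of tracking mechanism meant here (an illustration of the idea, not the library's actual implementation): record which offsets have completed, possibly out of order, and only ever commit the highest contiguous one, so an earlier still-in-flight offset is never skipped past.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: commit only the highest contiguous processed offset, so an
// unfinished earlier offset can never be lost by a later commit.
public class OffsetTracker {
    private final Set<Long> done = new HashSet<>(); // completed, possibly out of order
    private long nextToCommit;                      // lowest offset not yet known processed

    OffsetTracker(long startOffset) { this.nextToCommit = startOffset; }

    void markProcessed(long offset) {
        done.add(offset);
        // Advance the commit point only across a contiguous run of completed offsets.
        while (done.remove(nextToCommit)) nextToCommit++;
    }

    // Offset that would be committed to Kafka: everything below it is safely processed.
    long commitOffset() { return nextToCommit; }

    public static void main(String[] args) {
        OffsetTracker t = new OffsetTracker(30);
        t.markProcessed(31);                  // a later offset finishes first...
        System.out.println(t.commitOffset()); // prints 30: offset 30 is still in flight
        t.markProcessed(30);                  // ...now the gap is filled
        System.out.println(t.commitOffset()); // prints 32: 30 and 31 are both covered
    }
}
```

On a crash after the first step, a restart would resume from offset 30, so the unprocessed record is redelivered rather than lost; the record at 31 may then be seen twice, which is the usual at-least-once trade-off.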

Re: does kafka streams guarantee EOS with external producer used in application?

2021-07-15 Thread Matthias J. Sax
The app cannot know. That is why you need to use sync-writes. Kafka Streams won't commit offsets as long as custom code is executing; thus, if you call `producer.send()`, wait for the ack, and block within `Processor.process()`, you can be sure that no commit happens in between, i.e., you can be sure
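The blocking pattern described above can be sketched with a stand-in for the producer's future (the `sendAsync`/`process` names here are hypothetical; the real `KafkaProducer.send()` likewise returns a `Future` you can block on with `.get()`):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the sync-write pattern: block on the ack inside process(), so the
// offset commit that follows process() cannot overtake the external write.
public class SyncWriteSketch {
    // Stand-in for KafkaProducer.send(): completes asynchronously with an "ack".
    static Future<String> sendAsync(String value, ExecutorService io) {
        return io.submit(() -> { Thread.sleep(10); return "ack:" + value; });
    }

    // Stand-in for Processor.process(): does not return until the ack arrives.
    static String process(String value, ExecutorService io) throws Exception {
        return sendAsync(value, io).get(); // the blocking .get() is the key step
    }

    public static void main(String[] args) throws Exception {
        ExecutorService io = Executors.newSingleThreadExecutor();
        System.out.println(process("event-1", io)); // prints "ack:event-1"
        io.shutdown();
    }
}
```

The cost of this pattern is throughput: each record waits for a round trip to the external producer before the next one is processed.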