Hello Guozhang,
thanks for the response. I have some doubts about the "N-1
producer-consumer" case you mentioned, and about whether and how I would
need to configure the transactional id there. Is this the case of N
consumers sharing the same producer?
My current implementation is creating a consumer per
Hi Gabriel,
What I meant is that with KIP-447, the fencing is achieved at commit
time, using the consumer metadata. If, within a transaction, the
producer would always try to commit at least once on behalf of the
consumer, AND a zombie of the producer would always come from a zombie of a
c
Hi everyone,
We’re very excited to announce our Call for Speakers for Current 2022: The
Next Generation of Kafka Summit!
With the permission of the ASF, Current will include Kafka Summit as part
of the event.
We’re looking for talks about all aspects of event-driven design, streaming
technology,
Last question: the fencing occurs with sendOffsetsToTransaction, which
includes the ConsumerGroupMetadata. I guess the generation.id is what
matters here, since it is bumped with each rebalance.
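To make sure I'm reading the API right, here is a minimal sketch of the
read-process-write loop as I understand it (assuming Kafka 2.5+ and the
sendOffsetsToTransaction(offsets, groupMetadata) overload; the class and
method names are mine, not from any existing code):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class Kip447Loop {

    // For each input partition, record the NEXT offset to consume
    // (last processed offset + 1).
    static Map<TopicPartition, OffsetAndMetadata> nextOffsets(
            ConsumerRecords<String, String> records) {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> rec : records) {
            offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                        new OffsetAndMetadata(rec.offset() + 1));
        }
        return offsets;
    }

    // One read-process-write cycle under KIP-447.
    static void processBatch(KafkaConsumer<String, String> consumer,
                             KafkaProducer<String, String> producer,
                             String outputTopic) {
        ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofMillis(500));
        if (records.isEmpty()) return;

        producer.beginTransaction();
        for (ConsumerRecord<String, String> rec : records) {
            producer.send(new ProducerRecord<>(outputTopic, rec.key(), rec.value()));
        }
        // Fencing happens here: groupMetadata() carries the current
        // generation.id, and the group coordinator rejects the commit
        // if a rebalance has bumped the generation since this poll.
        producer.sendOffsetsToTransaction(nextOffsets(records),
                                          consumer.groupMetadata());
        producer.commitTransaction();
    }
}
```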
But couldn't this happen?
1. Client A consumes from topic partition P1 with generation.id = 1 and a
produc
No problem.
The key is that at step 4, when the consumer re-joins, it will be aware
that it has lost its previously assigned partitions and will trigger
`onPartitionsLost` on the rebalance callback. And since in your scenario
it's a 1-1 mapping from consumer to producer, it means the producer has
b
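For reference, the callback wiring could look roughly like this (a
sketch only; `resetProducer` is a hypothetical hook you would implement
to close and recreate your paired producer, not part of the Kafka API):

```java
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

public class ProducerResetListener implements ConsumerRebalanceListener {

    private final Runnable resetProducer;

    public ProducerResetListener(Runnable resetProducer) {
        this.resetProducer = resetProducer;
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Normal assignment after a rebalance; nothing to reset.
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Cooperative revocation; the consumer is still in the group.
    }

    @Override
    public void onPartitionsLost(Collection<TopicPartition> partitions) {
        // The consumer missed a rebalance: its generation is stale, so any
        // in-flight transaction from the paired producer will be fenced at
        // commit time. Close and recreate the producer before resuming.
        resetProducer.run();
    }
}
```

You would register it at subscription time, e.g.
`consumer.subscribe(topics, new ProducerResetListener(this::recreateProducer))`.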