The pattern you apply seems to be an anti-pattern.
If you have only 2 partitions, only 2 consumer instances within a group
can consume data -- hence, you should either use only 2 pods, or you
should increase the number of partitions.
What you are trying to do at the moment won't work smoothly no matter what you do.
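To illustrate why a third pod sits idle, here is a minimal sketch (the names `assign_round_robin`, `pod-a` etc. are illustrative, not Kafka's actual implementation) of how a group coordinator's round-robin strategy hands partitions to consumers in a group:

```python
# Sketch: round-robin assignment of topic partitions to the consumers
# in one consumer group. With fewer partitions than consumers, the
# surplus consumers receive nothing.

def assign_round_robin(partitions, consumers):
    """Map each partition to a consumer in round-robin order."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

result = assign_round_robin([0, 1], ["pod-a", "pod-b", "pod-c"])
print(result)  # {'pod-a': [0], 'pod-b': [1], 'pod-c': []}
```

With 2 partitions and 3 group members, one member always ends up with an empty assignment, which is why adding partitions (or dropping to 2 pods) is the clean fix.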
Hi,
If you choose not to close the consumer, but instead keep polling it from a
background thread, the rebalance wouldn't take place -- but only as long as
you keep sending heartbeats (via Poll()) within the timeout period. Also, if
one of the pods dies, a rebalance has to happen, so that each remaining
consumer gets a fair share of the work.
An example would be a loop that calls Poll() on every iteration, so heartbeats keep flowing, and only closes the consumer on shutdown.
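The heartbeat/timeout behavior can be sketched with a toy model (the names below are assumptions for illustration, not a Kafka API): the broker evicts any group member whose last heartbeat is older than `session.timeout.ms`, and the eviction triggers a rebalance among the survivors.

```python
# Toy model of session-timeout eviction: a member that stops calling
# poll() (and therefore stops heartbeating) is dropped from the group.

SESSION_TIMEOUT_MS = 10_000  # analogous to session.timeout.ms

def members_after_timeout_check(last_heartbeat_ms, now_ms):
    """Return the members still considered alive at time now_ms."""
    return {m: t for m, t in last_heartbeat_ms.items()
            if now_ms - t <= SESSION_TIMEOUT_MS}

heartbeats = {"pod-a": 95_000, "pod-b": 99_000, "pod-c": 80_000}
alive = members_after_timeout_check(heartbeats, 100_000)
print(sorted(alive))  # ['pod-a', 'pod-b'] -- pod-c missed the window
```

In this model pod-c last heartbeated 20 seconds ago, so it is evicted and its partitions would be rebalanced to pod-a and pod-b -- exactly what closing the consumer after every pull provokes over and over.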
Hi,
I have 2 partitions in a topic and 3 pods/instances of my microservice
running in my k8s cluster. I wanted all 3 pods to pull messages from the 2
partitions. (I'm using the same group id for all 3 pods/instances.)
I have achieved this by closing the consumer as soon as I pulled a message
from a partition.