Curious about a couple of questions...

Are most people (are you?) using the simple consumer or the high-level
consumer in production?


What is the common processing paradigm for maintaining a full pipeline of
Kafka consumers with at-least-once messaging? E.g. you pull a batch of 1000
messages and:

option 1.
you wait for the slowest worker to finish its message; once you get back
1000 acks internally, you commit your offset and pull another batch

option 2.
you feed your workers n messages at a time in sequence and advance your
offset as you work through the batch

option 3.
you maintain a full stream of (ideally) 1000 in-flight messages; as acks
come back from your workers, you check whether you can advance your offset,
then pull n more messages to keep the pipeline full, so you're not
(probabilistically) blocked by the slowest worker
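For option 3 the tricky part is that acks arrive out of order, but the committed offset must only ever advance through a contiguous prefix of acked messages, or you break the at-least-once guarantee on restart. A minimal sketch of that bookkeeping (the `OffsetTracker` name and interface are my own, not from any Kafka client library):

```python
import heapq

class OffsetTracker:
    """Tracks out-of-order acks and reports the highest offset safe to commit.

    Workers ack offsets in any order; the committable offset only advances
    through the contiguous run of acked offsets, so a slow worker never
    causes a later, acked-but-unprocessed-on-restart message to be skipped.
    """

    def __init__(self, start_offset):
        self.committable = start_offset - 1  # last offset safe to commit
        self.acked = []                      # min-heap of acked offsets > committable

    def ack(self, offset):
        heapq.heappush(self.acked, offset)
        # Advance through the contiguous prefix of acked offsets.
        while self.acked and self.acked[0] == self.committable + 1:
            self.committable = heapq.heappop(self.acked)
        return self.committable
```

So if workers ack offsets 2 and 3 before 1, the committable offset stays put; once 1 is acked it jumps straight to 3, and the freed slots let you pull more messages to keep the pipeline full.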


Any good docs or articles on the subject would be great, thanks!