I understand replication uses a multi-fetch concept to maintain the replicas of 
each partition. I have a use case where it might be beneficial to grab a 
“batch” of messages from a Kafka topic and process them as one unit into a 
source system – in my use case, sending the messages to a Flume source.

My questions:

  *   Is it possible to fetch a batch of messages when you may not know the 
exact message sizes?
  *   If so, how are the offsets managed?

I am trying to avoid queuing them in memory and batching in my process for 
several reasons.
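
To make the intent concrete, here is a minimal sketch (plain Python, no Kafka 
client) of the batch-then-commit pattern I have in mind: pull up to N messages, 
hand the whole batch to the sink (a Flume source in my case), and only advance 
the offsets once the batch succeeds. All names here are hypothetical.

```python
def consume_in_batches(messages, sink, max_batch=100):
    """messages: iterable of (partition, offset, payload) tuples.

    Returns a dict of partition -> next offset to fetch, advanced only
    for batches that the sink accepted without raising.
    """
    committed = {}  # partition -> next offset to fetch
    batch = []
    for partition, offset, payload in messages:
        batch.append((partition, offset, payload))
        if len(batch) >= max_batch:
            _flush(batch, sink, committed)
            batch = []
    if batch:  # deliver any trailing partial batch
        _flush(batch, sink, committed)
    return committed


def _flush(batch, sink, committed):
    # Deliver the whole batch as one unit; if this raises, offsets
    # are not advanced and the batch can be re-fetched.
    sink([payload for _, _, payload in batch])
    for partition, offset, _ in batch:
        committed[partition] = offset + 1
```

The question is essentially whether Kafka's fetch API lets me get this batching 
for free, and where the offset bookkeeping above would live in that case.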

Thanks in advance…