Hi Kafka team,

We have a use case where we need to consume from ~20 topics (each with 24 
partitions). Our potential maximum message size is 20MB, so we've set the 
consumer fetch.size to 20MB, but that is causing very poor performance on our 
consumer (most of our messages are in the 10-100KB range). Is it possible to 
set the fetch size lower than the maximum message size and handle larger 
messages gracefully (for example as a trapped exception) in order to improve 
our throughput? Roughly along the lines of the sketch below.
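
For illustration, this is a rough sketch of what we're hoping for, using the 
Java consumer. The property max.partition.fetch.bytes and the idea that an 
oversized record surfaces as a RecordTooLargeException from poll() are our 
assumptions and likely depend on the client version; the topic, group, and 
broker names are placeholders.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.RecordTooLargeException;

public class SmallFetchConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Fetch size tuned for the common 10-100KB case,
        // well below the 20MB worst-case message size.
        props.put("max.partition.fetch.bytes", "262144"); // 256KB

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                try {
                    ConsumerRecords<String, byte[]> records =
                            consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, byte[]> record : records) {
                        process(record);
                    }
                } catch (RecordTooLargeException e) {
                    // Hoped-for path: trap the rare oversized message here and
                    // handle it separately (skip it, park it, re-fetch it with a
                    // larger limit) instead of paying a 20MB fetch size on every
                    // request.
                    System.err.println("Oversized record: " + e.getMessage());
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, byte[]> record) {
        System.out.printf("%s-%d@%d: %d bytes%n",
                record.topic(), record.partition(), record.offset(),
                record.value().length);
    }
}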

Thank you in advance for your help.
CJ Woolard
