Hello CJ,

You have to set the fetch size to be >= the maximum possible message size; otherwise, consumption will block when it encounters one of these large messages.
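For reference, a minimal sketch of where that setting lives, assuming the 0.8.x high-level (ZooKeeper-based) consumer; the ZooKeeper address and group id below are placeholders, not taken from your setup, and the property name differs in other consumer versions:

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class LargeMessageConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder address
            props.put("group.id", "example-group");           // hypothetical group id

            // Per-partition fetch size: must be >= the largest message the brokers
            // will accept (20MB here), otherwise the consumer blocks when it hits
            // an oversized message.
            props.put("fetch.message.max.bytes", String.valueOf(20 * 1024 * 1024));

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // ... build message streams from the connector and consume as usual ...
            connector.shutdown();
        }
    }
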
Also, could you clarify what you mean exactly by "poor performance"? Are you seeing low throughput, and can you share your consumer config values?

Guozhang

On Sun, Feb 8, 2015 at 7:39 AM, Cj <cjwool...@gmail.com> wrote:
>
> Hi Kafka team,
>
> We have a use case where we need to consume from ~20 topics (each with 24
> partitions). We have a potential max message size of 20MB, so we've set our
> consumer fetch.size to 20MB, but that's causing very poor performance on our
> consumer (most of our messages are in the 10-100k range). Is it possible to
> set the fetch size to a lower number than the max message size and
> gracefully handle larger messages (as a trapped exception, for example) in
> order to improve our throughput?
>
> Thank you in advance for your help,
> CJ Woolard

--
-- Guozhang