We have some legacy applications using an old (0.10.0.0) version of the 
consumer that are hitting RecordTooLargeExceptions with the following message:

org.apache.kafka.common.errors.RecordTooLargeException: There are some messages 
at [Partition=Offset]: {mytopic-0=13920987} whose size is larger than the fetch 
size 1048576 and hence cannot be ever returned. Increase the fetch size, or 
decrease the maximum message size the broker will allow.

We have not increased the maximum message size at either the broker or the
topic level, and I'm quite confident no messages approaching that size are in
the topic. Further, even if I increase max.partition.fetch.bytes to a very
large value such as Integer.MAX_VALUE, the error still occurs. I stumbled
across https://issues.apache.org/jira/browse/KAFKA-4762, which seems to match
what we're seeing, except that our messages are not compressed. Sure enough, a
test application using the 0.10.1.0 consumer is able to consume the topic with
no issues. Unfortunately, upgrading our legacy applications is difficult for
other reasons. Any ideas what's happening here?
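In case it helps, here is a minimal sketch of how the legacy (0.10.0.0)
consumer is set up. The broker address, group id, topic, and deserializers
below are placeholders rather than our exact values:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LegacyConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder
        props.put("group.id", "legacy-group");         // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Raising this far above the default of 1048576 does not help:
        props.put("max.partition.fetch.bytes", String.valueOf(Integer.MAX_VALUE));

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("mytopic")); // placeholder
        while (true) {
            // The RecordTooLargeException above is thrown from poll()
            ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
            records.forEach(r -> System.out.println(r.offset()));
        }
    }
}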



--

Tommy Becker
Principal Engineer
Pronouns: he/him/his



