I'm running kafka_2.11-0.9.0.0 with a Java-based producer and consumer. With messages of ~70 KB everything works fine. However, after the producer enqueues a larger, 70 MB message, Kafka appears to stop delivering messages to the consumer: not only is the large message never delivered, but neither are the subsequent smaller ones. I know the producer succeeds because I use the Kafka callback for confirmation and I can see the messages in the Kafka message log.
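For context, this is roughly how the producer side is configured and how the send-with-callback confirmation works. The broker address and the `max.request.size` value are assumptions (the producer must raise `max.request.size` above its 1 MB default for a 70 MB send to succeed at all); the actual `KafkaProducer.send` call is shown in a comment so the snippet stays self-contained:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Assumed setting: the producer-side request limit must also be
        // raised, or a 70 MB record is rejected before reaching the broker.
        props.put("max.request.size", "200000000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = build();
        // The confirmation callback described above, sketched:
        // new KafkaProducer<String, String>(props)
        //     .send(record, (metadata, exception) -> {
        //         if (exception == null) { /* broker acknowledged the write */ }
        //     });
        System.out.println(props.getProperty("max.request.size"));
    }
}
```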
Custom changes in the Kafka broker config:
message.max.bytes=200000000
replica.fetch.max.bytes=200000000
Consumer config:
props.put("fetch.message.max.bytes", "200000000");
props.put("max.partition.fetch.bytes", "200000000");
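Note that those two properties belong to different consumers: `fetch.message.max.bytes` is read by the old high-level consumer, while the new 0.9 Java consumer reads `max.partition.fetch.bytes`. A minimal sketch of how the new-consumer properties fit together (bootstrap address and group id are placeholders):

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "demo-group");              // placeholder group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Only this setting applies to the new (0.9) consumer; it must be
        // at least as large as the biggest message in any single partition.
        props.put("max.partition.fetch.bytes", "200000000");
        // Old high-level consumer equivalent, ignored by the new consumer:
        props.put("fetch.message.max.bytes", "200000000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("max.partition.fetch.bytes"));
    }
}
```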