Deleted the topic and recreated it (with max bytes set), but that did not help. What 
did help was upping the Java heap size. I monitored the consumer with jstat and 
noticed two full garbage collection attempts right after publishing the large 
message. After that the consumer appeared dormant. Upping the Java heap size 
allowed the message to be consumed. Wondering why the consumer remained silent, 
i.e. no OutOfMemoryError or anything. 
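For reference, a quick way to reason about the stall described above is to compare the JVM's maximum heap against the configured fetch size before starting the consumer. This is only a sketch; the helper name and the 2x headroom rule of thumb are my own assumptions, not part of Kafka's API:

```java
public class HeapCheck {
    // Hypothetical helper (not a Kafka API): returns true when the max heap
    // leaves enough headroom for the configured per-partition fetch size.
    static boolean hasHeadroom(long maxHeapBytes, long maxFetchBytes) {
        // Assumption: leave at least 2x the fetch size free, since
        // decompression and deserialization copy the payload on-heap.
        return maxHeapBytes >= 2L * maxFetchBytes;
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory(); // roughly -Xmx
        long maxFetch = 200_000_000L; // matches max.partition.fetch.bytes below
        System.out.println("max heap: " + maxHeap
                + ", fetch size: " + maxFetch
                + ", headroom ok: " + hasHeadroom(maxHeap, maxFetch));
    }
}
```

With a 200 MB fetch cap, a default-sized heap fails this check, which is consistent with the consumer stalling until the heap was raised.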

    On Tuesday, February 2, 2016 8:35 PM, Joe Lawson 
<[email protected]> wrote:
 

 Make sure the topic is created after message.max.bytes is set.
On Feb 2, 2016 9:04 PM, "Tech Bolek" <[email protected]> wrote:

> I'm running kafka_2.11-0.9.0.0 and a java-based producer/consumer. With
> messages ~70 KB everything works fine. However, after the producer enqueues
> a larger, 70 MB message, kafka appears to stop delivering messages to
> the consumer. I.e., not only is the large message not delivered but also
> subsequent smaller messages. I know the producer succeeds because I use
> the kafka callback for confirmation and I can see the messages in the kafka
> message log.
> kafka config custom changes:
>     message.max.bytes=200000000
>     replica.fetch.max.bytes=200000000
> consumer config:
>     props.put("fetch.message.max.bytes", "200000000");
>     props.put("max.partition.fetch.bytes", "200000000");
>
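The two consumer properties quoted above can be assembled like this. A minimal sketch: the bootstrap server and group id are placeholders I added, not values from the original post. Note that `fetch.message.max.bytes` applies to the old high-level consumer, while the 0.9 `KafkaConsumer` reads `max.partition.fetch.bytes`; setting both, as the post does, covers either client:

```java
import java.util.Properties;

public class LargeMessageConsumerConfig {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "large-message-test");      // placeholder
        // New-consumer (0.9 KafkaConsumer) per-partition fetch cap;
        // must be at least as large as the biggest message on the broker.
        props.put("max.partition.fetch.bytes", "200000000");
        // Old high-level consumer equivalent, kept to match the post.
        props.put("fetch.message.max.bytes", "200000000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```

For the broker side, `message.max.bytes` and `replica.fetch.max.bytes` must also be raised (as the post shows), otherwise the broker rejects or fails to replicate the large message regardless of the consumer settings.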


  
