Kyle, the new producer will surface this failure with a specific
exception type, and the callback function can handle it
accordingly. Could you give the new producer client a try and see if this
case is handled better now?
On Tue, Sep 23, 2014 at 8:30 PM, Kyle Banker wrote:
> Thanks so much, Jun. That seems to have fixed the problem.
Kyle,
We have developed a new (pure java) producer in trunk. It should have
better error messages. Could you give it a try and see if it points out
the problem more clearly?
Thanks,
Jun
On Tue, Sep 23, 2014 at 8:30 PM, Kyle Banker wrote:
> Thanks so much, Jun. That seems to have fixed the problem.
Thanks so much, Jun. That seems to have fixed the problem. I increased both
message.max.bytes and replica.fetch.max.bytes on the broker.
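For future readers, the broker-side change described here might look like the following server.properties sketch. The values are illustrative assumptions, not the ones actually used in this thread; the point is only that both limits must exceed the largest message:

```properties
# server.properties (illustrative values, not from the thread)

# Largest message the broker will accept; must exceed the ~2.5 MB test messages
message.max.bytes=5242880

# Replica fetch size should be at least message.max.bytes,
# otherwise followers cannot fetch the largest messages and replication stalls
replica.fetch.max.bytes=5242880
```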
For the benefit of future Kafka users, how hard would it be to build out
some clearer error messaging for this case?
On Mon, Sep 22, 2014 at 10:38 PM, Jun Rao wrote:
Also, don't forget to increase replica.fetch.max.bytes to be larger than
the max message size.
Thanks,
Jun
On Mon, Sep 22, 2014 at 9:35 PM, Jun Rao wrote:
> What version of Kafka are you using? Have you increased the max message
> size on the broker (default to 1MB)?
>
> Thanks,
>
> Jun
>
> On Mon, Sep 22, 2014 at 3:41 PM, Kyle Banker wrote:
What version of Kafka are you using? Have you increased the max message
size on the broker (default to 1MB)?
Thanks,
Jun
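A quick back-of-the-envelope check of the limit Jun mentions (a minimal sketch; the exact default varies by Kafka version, so the 1,000,000-byte figure below is just an illustrative stand-in for "about 1 MB"):

```python
# Why ~2.5 MB messages are dropped under a ~1 MB broker limit.
DEFAULT_MESSAGE_MAX_BYTES = 1_000_000  # illustrative stand-in for the ~1MB default

def broker_accepts(message_size_bytes, limit=DEFAULT_MESSAGE_MAX_BYTES):
    """Return True if a message of the given size fits under the broker limit."""
    return message_size_bytes <= limit

msg_size = int(2.5 * 1024 * 1024)  # ~2.5 MB, matching the test data set
print(broker_accepts(msg_size))                    # False: rejected by default
print(broker_accepts(msg_size, 5 * 1024 * 1024))   # True once the limit is raised
```

This also explains the roughly uniform loss across trials: every message in the test set is over the default limit, so any that slip through do so only via retries or partial batches.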
On Mon, Sep 22, 2014 at 3:41 PM, Kyle Banker wrote:
> I have a test data set of 1500 messages (~2.5 MB each) that I'm using to
> test Kafka throughput. I'm pushing this data using 40 Kafka producers, and
> I'm losing about 10% of the messages on each trial.
I have a test data set of 1500 messages (~2.5 MB each) that I'm using to
test Kafka throughput. I'm pushing this data using 40 Kafka producers, and
I'm losing about 10% of the messages on each trial.
I'm seeing errors of the following form:
Failed to send producer request with correlation id 80 to