Thank you Dana and David!

That was my problem. The records stored in Kafka are very small (~2,100 bytes
each), but the work done to process each message takes long enough that the
consumer doesn't get a chance to poll for new messages before the session
times out. I decreased the fetch bytes and the problem hasn't recurred.
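For anyone else hitting this, a quick back-of-envelope check shows why the polls were spaced too far apart. The record size and the default fetch size come from this thread; the per-record processing time below is just an assumed figure for illustration:

```python
# Why a large fetch delays the next poll():
# each poll() hands back up to max.partition.fetch.bytes of records per
# partition, and all of them are processed before poll() is called again.

DEFAULT_FETCH_BYTES = 1_048_576   # max.partition.fetch.bytes default
RECORD_SIZE = 2_100               # approx. bytes per record (from this thread)
PROCESS_MS_PER_RECORD = 25        # assumed processing cost per message

records_per_poll = DEFAULT_FETCH_BYTES // RECORD_SIZE
gap_between_polls_ms = records_per_poll * PROCESS_MS_PER_RECORD

print(records_per_poll)        # 499 records returned by a single poll()
print(gap_between_polls_ms)    # 12475 ms of processing before the next poll()
```

With those numbers a single poll() returns ~499 records per partition, so the consumer goes ~12.5 seconds between polls; if that exceeds the group's timeout, the consumer is kicked out and a rebalance happens. Dropping max.partition.fetch.bytes to 250,000, as David suggests below, cuts that to ~119 records per poll, which is why shrinking the fetch size fixed it.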

Sadly, the error I received wasn't very clear about where the problem
actually was.

Cheers

On Mon, May 2, 2016 at 11:53 PM, David Buschman <david.busch...@timeli.io>
wrote:

> To add to what Dana said, we fixed this issue on AWS by setting
> “max.partition.fetch.bytes” to a smaller value so our consumer would poll
> more frequently.
>
> Try setting “max.partition.fetch.bytes” to “750000”, then “500000”, then
> “250000”, … until the error stops occurring. The default is 1,048,576.
>
> Thanks,
>         DaVe.
>
>
> > On May 2, 2016, at 8:48 PM, Dana Powers <dana.pow...@gmail.com> wrote:
> >
> > It means there was a consumer group rebalance that this consumer missed.
> > You may be spending too much time in msg processing between poll() calls.
> >
> > -Dana
>
>


-- 
-Richard L. Burton III
@rburton
