Awesome, that is what I thought. The answer seems simple: speed up flush :-D,
which we should be able to do.
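
A minimal sketch of that bounded-flush idea, assuming the task buffers batches
in put() (BoundedFlushSinkTask, pendingBatches, and uploadBatchToS3 are
hypothetical names, not part of the Connect API):

import java.util.Deque;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkTask;

public abstract class BoundedFlushSinkTask extends SinkTask {

    // Leave generous headroom under consumer.session.timeout.ms (180s below).
    private static final long FLUSH_DEADLINE_MS = 60_000L;

    // Hypothetical buffer of batches accumulated in put().
    protected abstract Deque<List<byte[]>> pendingBatches();

    // Hypothetical writer; stands in for whatever the sink does per batch.
    protected abstract void uploadBatchToS3(List<byte[]> batch);

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
        final long deadline = System.currentTimeMillis() + FLUSH_DEADLINE_MS;
        Deque<List<byte[]>> batches = pendingBatches();
        while (!batches.isEmpty() && System.currentTimeMillis() < deadline) {
            uploadBatchToS3(batches.peekFirst());
            batches.pollFirst(); // drop a batch only after a successful upload
        }
        // Whatever is left stays buffered for the next flush; returning here
        // hands control back to the framework so the consumer can heartbeat.
    }
}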

On Fri, Apr 15, 2016 at 10:15 AM Liquan Pei <liquan...@gmail.com> wrote:

> Hi Scott,
>
> It seems that your flush takes longer than
> consumer.session.timeout.ms.
> The consumers used in the SinkTasks for a SinkConnector are in the same
> consumer group. If your flush method takes longer than
> consumer.session.timeout.ms, the consumer for a SinkTask may be kicked
> out of the group by the coordinator.
>
> In this case, you may want to increase consumer.session.timeout.ms or
> add a timeout mechanism to the implementation of the flush method that
> returns control to the framework in time for it to send heartbeats to
> the coordinator.
>
> Thanks,
> Liquan
>
> On Fri, Apr 15, 2016 at 9:56 AM, Scott Reynolds <sreyno...@twilio.com>
> wrote:
>
> > List,
> >
> > We are struggling with Kafka Connect settings. The processes start up,
> > handle a bunch of messages, and flush. Then slowly the group coordinator
> > removes them.
> >
> > This has to be an interplay between Connect's flush interval and the
> > call to poll for each of these tasks. Here are my current settings that I
> > think are relevant.
> >
> > Any insights someone could share with us ?
> >
> > # on shutdown wait this long for the tasks to finish their flush.
> > task.shutdown.graceful.timeout.ms=600000
> >
> > # Flush records to S3 every half hour
> > offset.flush.interval.ms=1800000
> >
> > # Max time to wait for flushing to finish. Wait at *most* this long every
> > # offset.flush.interval.ms.
> > offset.flush.timeout.ms=600000
> >
> > # Take your time on session timeouts. We do a lot of work. These control
> > # the length of time a lock on a TopicPartition can be held by the
> > # coordinator broker.
> > session.timeout.ms=180000
> > request.timeout.ms=190000
> > consumer.session.timeout.ms=180000
> > consumer.request.timeout.ms=190000
> >
>
>
>
> --
> Liquan Pei
> Software Engineer, Confluent Inc
>