Hi Matteo,

Glad to hear that you are building a connector. To better understand the
issue, can you provide the exact steps to reproduce it? One thing I am
confused about: when a worker is shut down, you should not need to restart
the connector through the REST API; the failover logic should handle
shutting down and starting up the connector and its tasks.
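To see whether failover actually rebalanced the work, you can poll the
worker's REST API instead of issuing a restart. A rough sketch (the
connector name "my-sink" and the worker address localhost:8083 are just
placeholders for your setup; the status endpoint requires a reasonably
recent Kafka version):

```shell
# List the connectors the cluster knows about
curl -s http://localhost:8083/connectors

# Show the state of the connector and each of its tasks
# (RUNNING / FAILED / UNASSIGNED), including which worker owns each task
curl -s http://localhost:8083/connectors/my-sink/status
```

If a task shows FAILED here, the status output usually includes the stack
trace, which is more informative than restarting blindly.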

The offset storage topic is used to store offsets for source connectors.
For a sink connector, the offsets are simply Kafka consumer offsets and
are stored in the __consumer_offsets topic.
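That also gives you a way to check whether your sink tasks are consuming:
each sink connector's tasks join a consumer group named after the
connector. A sketch, again assuming a connector named "my-sink" and a
broker on localhost:9092 (adjust for your cluster; older Kafka versions
may need the --new-consumer flag):

```shell
# Sink tasks for connector "my-sink" commit offsets under the consumer
# group "connect-my-sink"; describe it to see per-partition lag
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group connect-my-sink
```

If the lag keeps growing on some partitions after your restart, that
would point to the stuck tasks you are describing.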

Thanks,
Liquan

On Wed, May 11, 2016 at 1:31 AM, Matteo Luzzi <matteo.lu...@gmail.com>
wrote:

> Hi,
> I'm working on a custom implementation of a sink connector for the Kafka
> Connect framework. I'm testing the connector for fault tolerance by killing
> the worker process and restarting the connector through the REST API, and
> occasionally I notice that some tasks no longer receive messages from
> the internal consumers. I don't get any errors from the log and the tasks
> seem to be initialised correctly, but some of them just don't process
> messages anymore. Normally when I restart the connector again, the tasks
> read all the messages skipped before. I'm running Kafka Connect in
> distributed mode.
>
> Could it be a problem of the cleanup function invoked when closing the
> connector causing a leak in consumer connections with the broker? Any
> ideas?
>
> And also, from the documentation I read that the connector saves the
> offsets of the tasks in a special topic in Kafka (the one specified via
> offset.storage.topic), but it is empty even though the connector processes
> messages. Is that normal?
>
> Thanks,
> Matteo
>



-- 
Liquan Pei
Software Engineer, Confluent Inc
