Hi,

Thanks for the response.
I'm actually using a different consumer group id for each consumer.
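
Each source is created roughly like this (a minimal sketch, not my exact
code; the group id prefix, bootstrap address, and schema are
placeholders):

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    private static FlinkKafkaConsumer<String> buildConsumer(String topic) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        // a distinct consumer group per topic/consumer
        props.setProperty("group.id", "my-job-" + topic);
        return new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props);
    }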

RK.

On Fri, Jan 31, 2020 at 12:20 PM Tzu-Li (Gordon) Tai <tzuli...@apache.org>
wrote:

> Hi,
>
> There is no upper limit on the number of Kafka consumers per job.
>
> For each of your FlinkKafkaConsumers, are you using the same group.id?
> That could explain why you are seeing higher commit times as you add
> more FlinkKafkaConsumers: AFAIK, on the broker side, the commit
> operations for the same consumer group are enqueued together.
>
> As a side note, as the warning message already mentions, this does not
> affect Flink's exactly-once guarantees.
> If the only reason you want to commit the offsets back to Kafka is to
> have a way to monitor progress, it should be fine to define a
> different consumer group id for each FlinkKafkaConsumer.
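>
> For example (a minimal sketch; the topic names, group ids, bootstrap
> address, and schema below are placeholders, not from your job):
>
>   Properties propsA = new Properties();
>   propsA.setProperty("bootstrap.servers", "localhost:9092");
>   propsA.setProperty("group.id", "my-job-topic-a"); // unique per consumer
>   FlinkKafkaConsumer<String> consumerA =
>       new FlinkKafkaConsumer<>("topic-a", new SimpleStringSchema(), propsA);
>
>   Properties propsB = new Properties();
>   propsB.setProperty("bootstrap.servers", "localhost:9092");
>   propsB.setProperty("group.id", "my-job-topic-b"); // unique per consumer
>   FlinkKafkaConsumer<String> consumerB =
>       new FlinkKafkaConsumer<>("topic-b", new SimpleStringSchema(), propsB);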
>
> Hope this helps,
> Gordon
>
> On Sat, Feb 1, 2020 at 12:54 AM RKandoji <rkand...@gmail.com> wrote:
>
>> Can someone please help me here?
>>
>> Thanks
>> RK
>>
>>
>> On Thu, Jan 30, 2020 at 7:51 PM RKandoji <rkand...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I'm running into the strange issue pasted below:
>>>
>>> Committing offsets to Kafka takes longer than the checkpoint interval.
>>> Skipping commit of previous offsets because newer complete checkpoint
>>> offsets are available. This does not compromise Flink's checkpoint
>>> integrity.
>>>
>>>
>>> I read data from more than 10 different Kafka topics, and I started
>>> noticing this issue as I integrated more FlinkKafkaConsumers, each
>>> reading from its respective topic.
>>>
>>> Is there an upper limit on the number of Kafka consumers (Kafka
>>> topics) per job?
>>>
>>> If not, could someone please shed some light on why this could be
>>> happening?
>>>
>>> Thanks,
>>> RK
>>>
>>
