Hi Surendra,

I think this behaviour is documented at
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#consumer-offset-committing

Best regards,

Martijn
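
For context, the documented behaviour is that with checkpointing enabled the Kafka consumer commits offsets back to Kafka only when a checkpoint completes, so broker-side lag metrics can lag behind the job's actual read position. A minimal sketch of how that is typically configured in the Flink 1.13 DataStream API (the topic name, bootstrap servers, group id, and checkpoint interval below are placeholders, not values from this thread); it needs the flink-connector-kafka dependency on the classpath:

```java
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OffsetCommitExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // With checkpointing enabled, offsets are committed to Kafka on
        // checkpoint completion; without it, committing falls back to the
        // Kafka client's periodic auto-commit (enable.auto.commit).
        env.enableCheckpointing(60_000); // placeholder interval (ms)

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
        // Default is true; shown explicitly: offsets are committed back to
        // Kafka only when a checkpoint completes, so external lag metrics
        // update at checkpoint granularity, not continuously.
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("offset-commit-example");
    }
}
```

Note that these committed offsets are not used for Flink's own fault tolerance (the checkpointed state is); they only make progress visible to external tools that read consumer-group offsets.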

On Tue, Dec 13, 2022 at 5:28 PM Surendra Lalwani via user <
user@flink.apache.org> wrote:

> Hi Team,
>
> I am on Flink version 1.13.6. I am reading a couple of streams from Kafka
> and applying an interval join with an interval of 2 hours. However, when I
> check KafkaConsumer_records_lag_max it comes to some thousands, but when I
> check the Flink UI there is no backpressure and the metrics inside the
> Flink UI show the lag as 0. Can anybody tell me what the reason could be?
>
> Thanks,
> Surendra
> --
> Thanks and Regards ,
> Surendra Lalwani
>
>
