I am using the Flink Kafka connector to read from a Kafka stream. I ran into
a problem where the Flink job went down due to an application error. It was
down for some time, and meanwhile the Kafka topic kept growing, as expected,
since no consumer was consuming from the given group. When I restarted the
Flink job it began consuming the messages, no problem so far, but the
consumer lag was huge since the producer is fast, about 4500 events/sec.

My question: is there any Flink connector configuration that can force it to
read from the latest offset when the Flink application starts? In my
application logic I do not care about older events.
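
For context, my source is created roughly like the sketch below (broker
address, topic, and group id are just placeholders, and I am using the
universal FlinkKafkaConsumer with a plain string schema for illustration):

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    public class KafkaReadJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Consumer properties; broker and group id are placeholders
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "my-consumer-group");

            // Kafka source reading string values from a placeholder topic
            FlinkKafkaConsumer<String> consumer =
                    new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);

            DataStream<String> events = env.addSource(consumer);
            events.print();

            env.execute("Kafka read job");
        }
    }

By default this picks up from the committed group offsets, which is why the
job starts working through the whole backlog after a restart.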

balaji
