Hi Mejri

> I’m wondering if this is strictly necessary, since the Kafka broker
> itself keeps track of offsets (if I am not mistaken). In other words, if we
> redeploy the job, will it automatically resume from the last Kafka offset,
> or should we still rely on Flink’s checkpoint/savepoint mechanism to ensure
> correct offset recovery?

This depends on the starting offset you set in the source config[1]. You
can configure it to start from the earliest offset, the latest offset, the
group's last committed offsets, or specific offsets.
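For illustration, here is a minimal sketch of configuring the starting
offsets (the broker address, topic, and group id below are placeholders):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class KafkaStartingOffsetsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")   // placeholder broker address
                .setTopics("input-topic")             // placeholder topic
                .setGroupId("my-consumer-group")      // placeholder group id
                .setValueOnlyDeserializer(new SimpleStringSchema())
                // Start from the group's committed offsets in the broker; if
                // none exist yet, fall back to EARLIEST. Alternatives include
                // OffsetsInitializer.earliest(), .latest(), and .offsets(...).
                .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
           .print();

        env.execute("kafka-starting-offsets-example");
    }
}

Note that the starting-offsets setting only applies when the job starts
without restored state; if you restore from a checkpoint or savepoint, the
offsets stored in Flink's own state take precedence, and the source does
not rely on broker-committed offsets for fault tolerance.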

I am not 100% sure about RabbitMQ, but IIRC its source relies on completed
checkpoints to acknowledge consumed messages, unlike Kafka.
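If that is right, the setup would look roughly like this (a sketch assuming
the flink-connector-rabbitmq RMQSource; host, credentials, and queue name
are placeholders):

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.connectors.rabbitmq.common.RMQConnectionConfig;

public class RabbitMQCheckpointAckExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // RMQSource acknowledges messages to RabbitMQ only when the checkpoint
        // containing them completes, so checkpointing must be enabled to get
        // at-least-once delivery.
        env.enableCheckpointing(60_000); // placeholder interval in ms

        RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
                .setHost("rabbitmq-host")  // placeholder host
                .setPort(5672)
                .setVirtualHost("/")
                .setUserName("user")       // placeholder credentials
                .setPassword("password")
                .build();

        env.addSource(new RMQSource<>(
                connectionConfig,
                "my-queue",   // placeholder queue name
                false,        // usesCorrelationId: true only if producers set them
                new SimpleStringSchema()))
           .print();

        env.execute("rabbitmq-checkpoint-ack-example");
    }
}

IIRC, with checkpointing disabled the source falls back to automatically
acknowledging messages on receipt, so you lose the at-least-once guarantee.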


[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#starting-offset

Best Regards
Ahmed Hamdy


On Tue, 18 Mar 2025 at 22:20, mejri houssem <mejrihousse...@gmail.com>
wrote:

>
> Hello everyone,
>
> We have a stateless Flink job that uses a Kafka source with at-least-once
> guarantees. We’ve enabled checkpoints so that, in the event of a restart,
> Flink can restore from the last committed offset stored in a successful
> checkpoint. Now we’re considering enabling savepoints for our production
> deployment.
>
> I’m wondering if this is strictly necessary, since the Kafka broker itself
> keeps track of offsets (if I am not mistaken). In other words, if we redeploy
> the job, will it automatically resume from the last Kafka offset, or should
> we still rely on Flink’s checkpoint/savepoint mechanism to ensure correct
> offset recovery?
>
> Additionally, we have another job that uses a RabbitMQ source with
> checkpoints enabled to manage manual acknowledgments. Does the same logic
> apply in that case as well?
>
> Thanks in advance for any guidance!
>
>
> Best Regards.
>
