Hi,
Is there a chance of data loss if there is a failure between the checkpoint
completion and the point when "notifyCheckpointComplete" is invoked?
The pending files are moved to their final state in the "notifyCheckpointComplete"
method. So what happens if there is a failure in this method, or just before the
method is invoked?
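For context, the pattern in question looks roughly like the sketch below: records are written to pending files, and a file is only promoted to its final name once the checkpoint that covers it has been confirmed. This is a simplified illustration, not the actual sink code; everything except notifyCheckpointComplete is made up, and a real sink also has to handle the case where the callback never arrives (for example by re-examining pending files on restore).

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.apache.flink.runtime.state.CheckpointListener; // package differs in newer Flink versions
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    // Simplified two-phase file sink (illustrative only).
    public class PendingFileSink extends RichSinkFunction<String> implements CheckpointListener {

        // checkpoint id -> pending files written up to that checkpoint (hypothetical bookkeeping)
        private final Map<Long, List<String>> pendingPerCheckpoint = new HashMap<>();
        private final List<String> currentPending = new ArrayList<>();

        @Override
        public void invoke(String value) throws Exception {
            // write 'value' into the current pending file (omitted)
        }

        // hypothetical hook called when a checkpoint is triggered
        public void onCheckpoint(long checkpointId) {
            pendingPerCheckpoint.put(checkpointId, new ArrayList<>(currentPending));
            currentPending.clear();
        }

        @Override
        public void notifyCheckpointComplete(long checkpointId) throws Exception {
            // only now are pending files moved to their final names;
            // if the job fails before this runs, the files stay in the pending state
            List<String> files = pendingPerCheckpoint.remove(checkpointId);
            if (files != null) {
                for (String pendingFile : files) {
                    // rename pendingFile to its final name (omitted); should be idempotent
                }
            }
        }
    }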
Hi,
I have a Flink streaming job that reads from Kafka and performs an aggregation
in a window. It ran fine for a while; however, when the number of events in a
window crossed a certain limit, the YARN containers failed with Out Of
Memory errors. The job was running with 10G containers.
We have about 64G of memory...
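For reference, a minimal sketch of a job with this shape, assuming a Kafka 0.9 source and a keyed tumbling window; the topic, brokers, and parsing are placeholders, not the actual job. Using an incremental ReduceFunction on the window means Flink keeps one running value per key and window instead of buffering every event until the window fires, which can matter when the event count per window grows.

    import java.util.Properties;

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class WindowedAggregationJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000);

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
            props.setProperty("group.id", "window-agg-job");       // placeholder

            env.addSource(new FlinkKafkaConsumer09<>("events", new SimpleStringSchema(), props))
               .map(new MapFunction<String, Tuple2<String, Long>>() {
                   @Override
                   public Tuple2<String, Long> map(String line) {
                       // placeholder parsing: first field is the key, each record counts as 1
                       return Tuple2.of(line.split(",")[0], 1L);
                   }
               })
               .keyBy(0)
               .timeWindow(Time.minutes(10))
               // incremental aggregation: one running Tuple2 per key and window,
               // instead of a buffered list of all events
               .reduce(new ReduceFunction<Tuple2<String, Long>>() {
                   @Override
                   public Tuple2<String, Long> reduce(Tuple2<String, Long> a, Tuple2<String, Long> b) {
                       return Tuple2.of(a.f0, a.f1 + b.f1);
                   }
               })
               .print();

            env.execute("windowed aggregation");
        }
    }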
Hi Jamie,
Thanks for the reply.
Yeah, I looked at savepoints. I want to start my job only from the last
checkpoint, which means I would have to keep track of when the checkpoint was
taken and then trigger a savepoint; I am not sure this is the way to go. My
state backend is HDFS, and I can see that the checkpoint...
Thanks for the reply. It would be great to have the feature to restart a
failed job from the last checkpoint.
Is there a way to pass the initial set of partition offsets to the
Kafka client? In that case I can maintain a list of last-processed offsets
from within my window operation (possibly stored in the operator's state)...
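The plain Kafka 0.9 consumer can be pointed at explicit starting offsets with assign() and seek(); whether the Flink connector exposes this is the open question here. Below is a sketch of the raw-client mechanism only, with the broker address, topic, and offsets as placeholder values.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class SeekToStoredOffsets {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "manual-offset-group");    // placeholder
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            // last processed offsets, recovered from the job's own bookkeeping (placeholder values)
            Map<TopicPartition, Long> lastProcessed = new HashMap<>();
            lastProcessed.put(new TopicPartition("events", 0), 12345L);
            lastProcessed.put(new TopicPartition("events", 1), 67890L);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.assign(new ArrayList<>(lastProcessed.keySet()));
                for (Map.Entry<TopicPartition, Long> e : lastProcessed.entrySet()) {
                    // resume at the offset after the last record that was processed
                    consumer.seek(e.getKey(), e.getValue() + 1);
                }
                ConsumerRecords<String, String> records = consumer.poll(1000);
                // process 'records' ...
            }
        }
    }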
From the code in Kafka09Fetcher.java:

    // if checkpointing is enabled, we are not automatically committing to Kafka.
    kafkaProperties.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,
            Boolean.toString(!runtimeContext.isCheckpointingEnabled()));

If Flink checkpointing...
Hi Stephan,
The Flink Kafka 0.9 connector does not commit offsets to Kafka when
checkpointing is turned on. Is there a way to monitor the offset lag in this
case?
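One possible workaround when nothing is committed back to Kafka is to compute the lag externally: compare the offsets the job has processed (which it knows) against the brokers' current log-end offsets. The sketch below uses the plain Kafka client; note that endOffsets() needs a newer kafka-clients than 0.9 (on 0.9 you would seekToEnd() and read position() instead), and the class name, topic, and bootstrap address are placeholders.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    public class OffsetLagChecker {

        // fetch the current log-end offset of every partition of the topic
        public static Map<TopicPartition, Long> logEndOffsets(String bootstrapServers, String topic) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                List<TopicPartition> partitions = consumer.partitionsFor(topic).stream()
                        .map(p -> new TopicPartition(p.topic(), p.partition()))
                        .collect(Collectors.toList());
                // endOffsets() requires kafka-clients >= 0.10.1
                return consumer.endOffsets(partitions);
            }
        }

        // lag for one partition = log-end offset minus the job's last processed offset
        public static long lag(long logEndOffset, long lastProcessedOffset) {
            return logEndOffset - lastProcessedOffset;
        }
    }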
I am starting up a Flink job that reads data from Kafka (which has about a week's
worth of data, around 7 TB); currently, the approximate way that I...