Hi Feng,
Thanks for your response.
1. We have configured checkpointing to upload to an S3 location, and we can see
metadata files being created there. However, we are unsure whether the job
actually restores from that checkpoint in case of failure. Is there a way to
test this? Also d
When we use 1.13.2, we get the following error:
FileNotFoundException: Cannot find meta data file '_metadata' in directory
'hdfs://xx/f408dbe327f9e5053e76d7b5323d6e81/chk-173'.
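For reference, a minimal sketch of how retained checkpoints and an automatic restart
strategy can be wired up in the DataStream API, so that recovery from the latest
completed checkpoint can be observed after an induced failure (for example, by killing
a TaskManager). The S3 path, intervals, and job name are placeholders, not values from
this thread:

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointRecoverySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 60s to S3 and retain completed checkpoints on cancellation,
        // so the chk-N directories stay available for manual recovery tests.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);
        env.getCheckpointConfig().setCheckpointStorage("s3://my-bucket/flink-checkpoints");
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        // With a restart strategy in place, an induced failure should make the job
        // restart from the latest completed checkpoint; the JobManager log then
        // contains a "Restoring job ... from Checkpoint ..." line.
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 10_000L));

        // Placeholder pipeline so the sketch runs; replace with the real job graph.
        env.fromSequence(1, 1_000_000).print();
        env.execute("checkpoint-recovery-test");
    }
}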
Hi Elakiya,
1. Can you confirm whether the checkpoint for the task has been triggered
normally?
2. Also, if you stop the job, you need to use "STOP WITH SAVEPOINT" and
specify the path to the savepoint when starting the Flink job for recovery.
This is necessary to continue consuming from the historical
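For illustration, a minimal sketch of restarting a job from a savepoint. The CLI flags
in the comments and the execution.savepoint.path option are standard Flink facilities,
but the bucket paths, job ID, and placeholder pipeline here are assumptions:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ResumeFromSavepointSketch {
    public static void main(String[] args) throws Exception {
        // CLI equivalent (all paths and IDs are placeholders):
        //   flink stop --savepointPath s3://my-bucket/savepoints <jobId>
        //   flink run -s s3://my-bucket/savepoints/savepoint-xxxx-yyyy <job jar> ...
        Configuration conf = new Configuration();
        conf.setString("execution.savepoint.path",
                "s3://my-bucket/savepoints/savepoint-xxxx-yyyy");

        // The restored job rebuilds its operator state from the savepoint before it
        // starts consuming, so it continues from the recorded Kafka offsets rather
        // than from whatever the startup mode would otherwise dictate.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);

        // ... rebuild the same pipeline (same operator UIDs) here, then execute ...
        env.fromSequence(1, 10).print();
        env.execute("resume-from-savepoint");
    }
}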
Hi Surendra,
there are no exceptions in the logs, nor anything salient at
INFO/WARN/ERROR levels. The checkpoints are definitely completing; we even
set the config
execution.checkpointing.tolerable-failed-checkpoints: 1
Regards,
Alexis.
On Thu, Sep 28, 2023 at 09:32, Surendra Sin
Hi Alexis,
Could you please check the TaskManager log for any exceptions?
Thanks
Surendra
On Thu, Sep 28, 2023 at 7:06 AM Alexis Sarda-Espinosa <
sarda.espin...@gmail.com> wrote:
> Hello,
>
> We are using ABFSS for RocksDB's backend as well as the storage dir
> required for Kubernetes HA. In t
Hi team,
I have a Kafka topic named employee which uses a Confluent Avro schema and
emits a payload like the one below:
{
  "id": "emp_123456",
  "employee": {
    "id": "123456",
    "name": "sampleName"
  }
}
I am using the upsert-kafka connector to consume the events from the above
Kafka topic as below using the F
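Since the original DDL is cut off above, here is a minimal sketch of what such an
upsert-kafka table definition could look like when registered through the Table API.
The broker address, schema-registry URL, and key format are assumptions, and the exact
avro-confluent option names can vary slightly between Flink versions:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class EmployeeUpsertKafkaSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // The nested "employee" object maps to a ROW type; upsert-kafka requires a
        // PRIMARY KEY, which is read from the Kafka message key.
        tEnv.executeSql(
            "CREATE TABLE employee_source (\n"
            + "  id STRING,\n"
            + "  employee ROW<id STRING, name STRING>,\n"
            + "  PRIMARY KEY (id) NOT ENFORCED\n"
            + ") WITH (\n"
            + "  'connector' = 'upsert-kafka',\n"
            + "  'topic' = 'employee',\n"
            + "  'properties.bootstrap.servers' = 'broker:9092',\n"
            + "  'key.format' = 'raw',\n"
            + "  'value.format' = 'avro-confluent',\n"
            + "  'value.avro-confluent.url' = 'http://schema-registry:8081'\n"
            + ")");

        // Changelog semantics: the latest value per key wins; deletions arrive as tombstones.
        tEnv.executeSql("SELECT id, employee.name FROM employee_source").print();
    }
}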
Thanks! I saw the first change but missed the third one; that is the one
that most probably explains my problem. Most probably the metrics I was
sending with the twitter/finagle statsReceiver ended up in the singleton
default registry and were exposed by Flink along with all the other Flink
metrics, but
Hi Ram,
Thanks for that. We configure a path with the ABFSS scheme in the following
settings:
- state.checkpoints.dir
- state.savepoints.dir
- high-availability.storageDir
We use RocksDB with incremental checkpointing every minute.
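For reference, a minimal sketch of that configuration expressed programmatically (the
same keys normally live in flink-conf.yaml); the abfss:// container and account names
are placeholders:

import org.apache.flink.configuration.Configuration;

public class AbfssCheckpointConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // RocksDB state backend with incremental checkpoints, triggered every minute.
        conf.setString("state.backend", "rocksdb");
        conf.setString("state.backend.incremental", "true");
        conf.setString("execution.checkpointing.interval", "60s");
        // Checkpoint, savepoint, and HA storage locations on ABFSS.
        conf.setString("state.checkpoints.dir",
                "abfss://container@account.dfs.core.windows.net/checkpoints");
        conf.setString("state.savepoints.dir",
                "abfss://container@account.dfs.core.windows.net/savepoints");
        conf.setString("high-availability.storageDir",
                "abfss://container@account.dfs.core.windows.net/ha");
        System.out.println(conf);
    }
}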
I found the metrics from Azure in the storage account under Monitor