[
https://issues.apache.org/jira/browse/FLINK-27681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17789418#comment-17789418
]
Piotr Nowojski commented on FLINK-27681:
----------------------------------------
Thanks [~fanrui] for pinging me, and double thanks to [~mayuehappy] for
tackling this issue.
{quote}
In our production environment, most files are damaged due to hardware failures
on the machine where the file is written
{quote}
Wouldn't the much more reliable and faster solution be to enable CRC on the
local filesystem/disk that Flink's using? Benefits of this approach:
* no changes to Flink/no increased complexity of our code base
* would protect not only from errors that happen to occur between writing the
file and uploading it to the DFS, but also from errors that happen at any other
point in time
* would amortise the performance hit. Instead of amplifying reads by 100%,
error-detection/correction bits are only a small fraction of the payload, so the
penalty is paid on every read/write access but amounts to a very small fraction
of the total cost of reading
Assuming we indeed want to verify files, doing so during the checkpoint's async
phase and failing the whole checkpoint if verification fails is a good enough
solution that doesn't complicate the code too much.
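A minimal sketch of what such an async-phase verification could look like
(illustrative only, not the actual Flink code path; the class/method names and
the way the reference checksum is obtained are assumptions):
{code:java}
// Hypothetical pre-upload check: recompute a CRC32 over the local SST file and
// compare it against a checksum recorded when RocksDB finished writing the file.
// A mismatch would only fail the current checkpoint attempt, not the job.
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

final class SstPreUploadVerifier {

    /** Recomputes the CRC32 of the file as it currently is on local disk. */
    static long computeCrc32(Path sstFile) throws IOException {
        CRC32 crc = new CRC32();
        byte[] buffer = new byte[64 * 1024];
        try (InputStream in = Files.newInputStream(sstFile)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                crc.update(buffer, 0, read);
            }
        }
        return crc.getValue();
    }

    /** Called from the async phase, right before the file is uploaded to the DFS. */
    static void verifyBeforeUpload(Path sstFile, long expectedCrc32) throws IOException {
        long actual = computeCrc32(sstFile);
        if (actual != expectedCrc32) {
            throw new IOException("Checksum mismatch for " + sstFile
                    + ": expected " + expectedCrc32 + " but was " + actual);
        }
    }
}
{code}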
{quote}
If this file was uploaded to HDFS before, Flink tries to download it and lets
RocksDB become healthy again.
If this file isn't uploaded to HDFS, the Flink job should fail directly, right?
If we only fail the current checkpoint and
execution.checkpointing.tolerable-failed-checkpoints > 0, the job will continue
to run. The Flink job will then fail later (when this file is read), recover
from the latest checkpoint, and consume more duplicate data than if the job had
failed directly.
{quote}
[~fanrui] In what scenario could a file that we are verifying before the upload
have already been uploaded before?
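For reference, the behaviour [~fanrui] describes is controlled by an existing
knob; a rough sketch of how a user can make a failed checkpoint fail the job
immediately (standard DataStream API, nothing specific to this ticket):
{code:java}
// Sketch of the existing configuration relevant to the discussion above.
// execution.checkpointing.tolerable-failed-checkpoints maps to this setter;
// with 0, the first failed checkpoint (e.g. one failed by a pre-upload
// verification) fails the job instead of letting it run on and replay more
// duplicate data later.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointFailureConfigExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // checkpoint every 60 seconds
        // Tolerate no failed checkpoints: any checkpoint failure restarts the job.
        env.getCheckpointConfig().setTolerableCheckpointFailureNumber(0);
    }
}
{code}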
> Improve the availability of Flink when the RocksDB file is corrupted.
> ---------------------------------------------------------------------
>
> Key: FLINK-27681
> URL: https://issues.apache.org/jira/browse/FLINK-27681
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / State Backends
> Reporter: Ming Li
> Assignee: Yue Ma
> Priority: Critical
> Labels: pull-request-available
> Attachments: image-2023-08-23-15-06-16-717.png
>
>
> We have encountered several cases where the RocksDB checksum does not match or
> the block verification fails when the job is restored. The reason is generally
> that there is some problem with the machine where the task is located, which
> causes the files uploaded to HDFS to be incorrect, but a long time (a dozen
> minutes to half an hour) has passed by the time we find the problem. I'm not
> sure if anyone else has had a similar problem.
> Since this file is referenced by incremental checkpoints for a long time, once
> the maximum number of retained checkpoints is exceeded we can only keep using
> this file until it is no longer referenced. When the job fails, it cannot be
> recovered.
> Therefore we consider:
> 1. Can RocksDB periodically check whether all files are correct and find the
> problem in time? (A sketch of what such a check could look like follows this
> list.)
> 2. Can Flink automatically roll back to the previous checkpoint when there is
> a problem with the checkpoint data? Even with manual intervention, all we can
> do is try to recover from an existing checkpoint or discard the entire state.
> 3. Can we increase the maximum number of references to a file based on the
> maximum number of retained checkpoints? When the number of references exceeds
> the maximum number of retained checkpoints - 1, the Task side is required to
> upload a new file for this reference. I'm not sure whether this will ensure
> that the newly uploaded file is correct.
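> A minimal sketch of what the periodic check in point 1 could look like
> (illustrative only; it assumes the RocksJava binding used by the state backend
> exposes {{verifyChecksum()}}, the wrapper around RocksDB's DB::VerifyChecksum()):
> {code:java}
> // Illustrative sketch: periodically ask RocksDB to re-verify the block
> // checksums of all its live files, so corruption is detected close to when it
> // happens instead of only at restore time.
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
> import java.util.function.Consumer;
> import org.rocksdb.RocksDB;
> import org.rocksdb.RocksDBException;
>
> final class PeriodicChecksumVerifier implements AutoCloseable {
>     private final ScheduledExecutorService scheduler =
>             Executors.newSingleThreadScheduledExecutor();
>
>     PeriodicChecksumVerifier(RocksDB db, long intervalMinutes,
>                              Consumer<RocksDBException> corruptionHandler) {
>         scheduler.scheduleAtFixedRate(() -> {
>             try {
>                 db.verifyChecksum(); // re-reads SST files and checks block checksums
>             } catch (RocksDBException e) {
>                 // Hand the error to the caller, e.g. to fail the task or checkpoint.
>                 corruptionHandler.accept(e);
>             }
>         }, intervalMinutes, intervalMinutes, TimeUnit.MINUTES);
>     }
>
>     @Override
>     public void close() {
>         scheduler.shutdownNow();
>     }
> }
> {code}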
--
This message was sent by Atlassian Jira
(v8.20.10#820010)