On 02.04.2025 at 10:37, Vladimir Sementsov-Ogievskiy wrote:
> Actually a block job is not complete without the final flush. It's
> rather unexpected to have a broken target when the job was successfully
> completed long ago and now we fail to flush, or the process just
> crashed or was killed.
>
> Mirror job already has mirror_flush() for this. So, it's OK.
>
> Do this for stream, commit and backup jobs too.
>
> Note that the jobs behave a bit differently around the IGNORE action:
> backup and commit just retry the operation, whereas stream skips the
> failed operation and stores the error to report later. Keep these
> different behaviors for the final flush too.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <[email protected]>
> ---
>
> v2 was old "[PATCH v2 0/3] block-jobs: add final flush"[1]
> v3: follow Kevin's suggestion to introduce block_job_handle_error()
> (still, it's not obvious how to rewrite the commit and stream operation
> loops reusing this helper without making things more complicated...
> I decided to keep them as is, using the new helper only for the final flush.)
>
> [1]
> https://patchew.org/QEMU/[email protected]/
> Supersedes: <[email protected]>
>
> block/backup.c | 8 ++++++++
> block/commit.c | 6 +++++-
> block/stream.c | 8 ++++++--
> blockjob.c | 34 ++++++++++++++++++++++++++++++++++
> include/block/blockjob.h | 9 +++++++++
> 5 files changed, 62 insertions(+), 3 deletions(-)
>
> diff --git a/block/backup.c b/block/backup.c
> index 79652bf57b..13fcdaa320 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -221,6 +221,14 @@ static int coroutine_fn backup_loop(BackupBlockJob *job)
> }
>
> out:
> +    if (ret == 0) {
> +        do {
> +            WITH_GRAPH_RDLOCK_GUARD() {
> +                ret = bdrv_co_flush(job->target_bs);
> +            }
> +        } while (block_job_handle_error(&job->common, ret,
> +                                        job->on_target_error,
> +                                        true, true));
> +    }
Backup doesn't flush if there was already an error before. This makes
sense to me because then the job won't complete successfully anyway.

is_read=true feels wrong for a flush. (Same in all three jobs.)

retry_on_ignore=true means that this is a potential tight loop that will
keep the process at 100% CPU if the request fails repeatedly. (In
file-posix, flush failing once means that it will keep failing forever.)
I wonder if we shouldn't use retry_on_ignore=false for all end-of-job
flushes because there isn't really a chance to retry successfully later.
> block_copy_call_free(s);
> job->bg_bcs_call = NULL;
> return ret;
> diff --git a/block/commit.c b/block/commit.c
> index 5df3d05346..711093504f 100644
> --- a/block/commit.c
> +++ b/block/commit.c
> @@ -201,7 +201,11 @@ static int coroutine_fn commit_run(Job *job, Error **errp)
> }
> }
>
> -    return 0;
> +    do {
> +        ret = blk_co_flush(s->base);
> +    } while (block_job_handle_error(&s->common, ret, s->on_error,
> +                                    true, true));
Why does commit still flush even if we return an error?
> +    return ret;
> }
>
> static const BlockJobDriver commit_job_driver = {
> diff --git a/block/stream.c b/block/stream.c
> index 999d9e56d4..3d374094f5 100644
> --- a/block/stream.c
> +++ b/block/stream.c
> @@ -160,6 +160,7 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
>      int64_t offset = 0;
>      int error = 0;
>      int64_t n = 0; /* bytes */
> +    int ret = -1;
> 
>      WITH_GRAPH_RDLOCK_GUARD() {
>          unfiltered_bs = bdrv_skip_filters(s->target_bs);
> @@ -177,7 +178,6 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
> 
>      for ( ; offset < len; offset += n) {
>          bool copy;
> -        int ret = -1;
> 
>          /* Note that even when no rate limit is applied we need to yield
>           * with no pending I/O here so that bdrv_drain_all() returns.
These two hunks looked suspicious at first because ret is now set only
once instead of in each loop iteration. However, it doesn't matter: the
initialisation was dead code anyway and the value was never read.
Of course, it is still dead code, and -1 isn't a good error code anyway
(it should be a negative errno), so maybe remove the initialisation?
> @@ -235,8 +235,12 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
> }
> }
>
> +    do {
> +        ret = blk_co_flush(s->blk);
> +    } while (block_job_handle_error(&s->common, ret, s->on_error,
> +                                    true, false));
Same here about flushing even on error.
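Something like this might be more consistent (untested sketch, keeping
the parameters from the patch), so that the flush is skipped when the
job fails anyway:

    if (error == 0) {
        do {
            ret = blk_co_flush(s->blk);
        } while (block_job_handle_error(&s->common, ret, s->on_error,
                                        true, false));
    }

"error ?: ret" below would then still report the original error first.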
> +
>      /* Do not remove the backing file if an error was there but ignored. */
> -    return error;
> +    return error ?: ret;
> }
>
> static const BlockJobDriver stream_job_driver = {
> diff --git a/blockjob.c b/blockjob.c
> index 32007f31a9..70a7af2077 100644
> --- a/blockjob.c
> +++ b/blockjob.c
> @@ -626,3 +626,37 @@ AioContext *block_job_get_aio_context(BlockJob *job)
> GLOBAL_STATE_CODE();
> return job->job.aio_context;
> }
> +
> +bool coroutine_fn
> +block_job_handle_error(BlockJob *job, int ret, BlockdevOnError on_err,
> +                       bool is_read, bool retry_on_ignore)
> +{
> +    assert(ret <= 0);
> +
> +    if (ret == 0) {
> +        return false;
> +    }
> +
> +    if (job_is_cancelled(&job->job)) {
> +        return false;
> +    }
> +
> +    BlockErrorAction action =
> +        block_job_error_action(job, on_err, is_read, -ret);
> +    switch (action) {
> +    case BLOCK_ERROR_ACTION_REPORT:
> +        return false;
> +    case BLOCK_ERROR_ACTION_IGNORE:
> +        if (!retry_on_ignore) {
> +            return false;
> +        }
> +        /* fallthrough */
What is the idea behind having a pause point for "ignore"? Is it to at
least avoid QEMU hanging completely if it goes into an infinite loop
with retry_on_ignore?
> +    case BLOCK_ERROR_ACTION_STOP:
> +        job_pause_point(&job->job);
> +        break;
> +    default:
> +        abort();
> +    }
> +
> +    return true;
> +}
> diff --git a/include/block/blockjob.h b/include/block/blockjob.h
> index 7061ab7201..8362143ed7 100644
> --- a/include/block/blockjob.h
> +++ b/include/block/blockjob.h
> @@ -235,4 +235,13 @@ bool block_job_is_internal(BlockJob *job);
> */
> const BlockJobDriver *block_job_driver(BlockJob *job);
>
> +/**
> + * block_job_handle_error:
> + *
> + * Return, wherter the operation should be retried.
s/Return, wherter/Returns whether/
> + */
> +bool coroutine_fn
> +block_job_handle_error(BlockJob *job, int ret, BlockdevOnError on_err,
> +                       bool is_read, bool retry_on_ignore);
> +
> #endif
Kevin