On 24/07/2020 18:49, Kanchan Joshi wrote:
> From: SelvaKumar S <selvakuma...@samsung.com>
> 
> Repurpose [cqe->res, cqe->flags] into cqe->res64 (signed) to report
> 64bit written-offset for zone-append. The appending-write which requires
> reporting written-location (conveyed by IOCB_ZONE_APPEND flag) is
> ensured not to be a short-write; this avoids the need to report
> number-of-bytes-copied.
> append-offset is returned by lower-layer to io-uring via ret2 of
> ki_complete interface. Make changes to collect it and send to user-space
> via cqe->res64.
> 
> Signed-off-by: SelvaKumar S <selvakuma...@samsung.com>
> Signed-off-by: Kanchan Joshi <josh...@samsung.com>
> Signed-off-by: Nitesh Shetty <nj.she...@samsung.com>
> Signed-off-by: Javier Gonzalez <javier.g...@samsung.com>
> ---
>  fs/io_uring.c                 | 49 ++++++++++++++++++++++++++++++++++++-------
>  include/uapi/linux/io_uring.h |  9 ++++++--
>  2 files changed, 48 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 7809ab2..6510cf5 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
...
> @@ -1244,8 +1254,15 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
>               req->flags &= ~REQ_F_OVERFLOW;
>               if (cqe) {
>                       WRITE_ONCE(cqe->user_data, req->user_data);
> -                     WRITE_ONCE(cqe->res, req->result);
> -                     WRITE_ONCE(cqe->flags, req->cflags);
> +                     if (unlikely(req->flags & REQ_F_ZONE_APPEND)) {
> +                             if (likely(req->result > 0))
> +                             WRITE_ONCE(cqe->res64, req->rw.append_offset);
> +                             else
> +                                     WRITE_ONCE(cqe->res64, req->result);
> +                     } else {
> +                             WRITE_ONCE(cqe->res, req->result);
> +                             WRITE_ONCE(cqe->flags, req->cflags);
> +                     }
>               } else {
>                       WRITE_ONCE(ctx->rings->cq_overflow,
>                               atomic_inc_return(&ctx->cached_cq_overflow));
> @@ -1284,8 +1301,15 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
>       cqe = io_get_cqring(ctx);
>       if (likely(cqe)) {
>               WRITE_ONCE(cqe->user_data, req->user_data);
> -             WRITE_ONCE(cqe->res, res);
> -             WRITE_ONCE(cqe->flags, cflags);
> +             if (unlikely(req->flags & REQ_F_ZONE_APPEND)) {
> +                     if (likely(res > 0))
> +                             WRITE_ONCE(cqe->res64, req->rw.append_offset);

1. As I mentioned before, it's not nice to ignore @cflags
2. This isn't the right place for opcode-specific handling
3. It doesn't work with overflowed reqs, see the final else below

For this scheme, I'd pass @append_offset as an argument. That should
also remove this extra if from the fast path, which Jens mentioned.

> +                     else
> +                             WRITE_ONCE(cqe->res64, res);
> +             } else {
> +                     WRITE_ONCE(cqe->res, res);
> +                     WRITE_ONCE(cqe->flags, cflags);
> +             }
>       } else if (ctx->cq_overflow_flushed) {
>               WRITE_ONCE(ctx->rings->cq_overflow,
>                               atomic_inc_return(&ctx->cached_cq_overflow));
> @@ -1943,7 +1967,7 @@ static inline void req_set_fail_links(struct io_kiocb *req)
>               req->flags |= REQ_F_FAIL_LINK;
>  }
>  


-- 
Pavel Begunkov
