On Fri, 29 Sep 2023, Giovanni Cabiddu wrote:
> On Fri, Sep 22, 2023 at 06:34:16PM +0200, Mikulas Patocka wrote:
> >
> > > Also, deserves a fixes tag.
> >
> > "Fixes" tag is for something that worked and that was broken in some
> > previous commit.
> That's right.
>
> > A quick search through git shows that QAT backlogging has been broken
> > since the introduction of QAT.
> The driver was moved from drivers/crypto/qat to drivers/crypto/intel/qat;
> that's why you see a single patch.
> This fixes 386823839732 ("crypto: qat - add backlog mechanism")
But backlogging didn't work before 386823839732 either: the driver
returned -EBUSY without queuing the request, and the caller deadlocked.
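
To spell out why that deadlocks: a caller that sets
CRYPTO_TFM_REQ_MAY_BACKLOG treats -EBUSY as "the request was accepted into
the backlog and its completion callback will run later", and simply goes
to sleep. A minimal sketch of such a caller (illustrative only, not lifted
from dm code; allocation and setup of the skcipher request "req" are
elided):

	/* "req" is a struct skcipher_request set up elsewhere (not shown). */
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	skcipher_request_set_callback(req,
				      CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);

	/*
	 * crypto_wait_req() sleeps on both -EINPROGRESS and -EBUSY and only
	 * wakes up when crypto_req_done() runs.  If the driver returns
	 * -EBUSY without putting the request on its backlog, that callback
	 * never fires and we wait here forever.
	 */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
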
> Thanks - when I proposed the rework I was thinking of a solution without
> gotos. Here is a draft:
Yes - it is possible to fix it this way.
> ------------8<----------------
> .../intel/qat/qat_common/qat_algs_send.c | 40 ++++++++++---------
> 1 file changed, 22 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/crypto/intel/qat/qat_common/qat_algs_send.c b/drivers/crypto/intel/qat/qat_common/qat_algs_send.c
> index bb80455b3e81..18c6a233ab96 100644
> --- a/drivers/crypto/intel/qat/qat_common/qat_algs_send.c
> +++ b/drivers/crypto/intel/qat/qat_common/qat_algs_send.c
> @@ -40,17 +40,7 @@ void qat_alg_send_backlog(struct qat_instance_backlog *backlog)
> spin_unlock_bh(&backlog->lock);
> }
>
> -static void qat_alg_backlog_req(struct qat_alg_req *req,
> - struct qat_instance_backlog *backlog)
> -{
> - INIT_LIST_HEAD(&req->list);
> -
> - spin_lock_bh(&backlog->lock);
> - list_add_tail(&req->list, &backlog->list);
> - spin_unlock_bh(&backlog->lock);
> -}
> -
> -static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
> +static bool qat_alg_try_enqueue(struct qat_alg_req *req)
> {
> struct qat_instance_backlog *backlog = req->backlog;
> struct adf_etr_ring_data *tx_ring = req->tx_ring;
> @@ -58,22 +48,36 @@ static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
>
> /* If any request is already backlogged, then add to backlog list */
> if (!list_empty(&backlog->list))
> - goto enqueue;
> + return false;
>
> /* If ring is nearly full, then add to backlog list */
> if (adf_ring_nearly_full(tx_ring))
> - goto enqueue;
> + return false;
>
> /* If adding request to HW ring fails, then add to backlog list */
> if (adf_send_message(tx_ring, fw_req))
> - goto enqueue;
> + return false;
>
> - return -EINPROGRESS;
> + return true;
> +}
>
> -enqueue:
> - qat_alg_backlog_req(req, backlog);
>
> - return -EBUSY;
> +static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
> +{
> + struct qat_instance_backlog *backlog = req->backlog;
> + int ret = -EINPROGRESS;
> +
> + if (qat_alg_try_enqueue(req))
> + return ret;
> +
> + spin_lock_bh(&backlog->lock);
> + if (!qat_alg_try_enqueue(req)) {
> + list_add_tail(&req->list, &backlog->list);
> + ret = -EBUSY;
> + }
> + spin_unlock_bh(&backlog->lock);
> +
> + return ret;
> }
>
> int qat_alg_send_message(struct qat_alg_req *req)
> --
> 2.41.0
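
One more note for the archive, on why the re-check under backlog->lock in
the draft above matters: the completion path (qat_alg_send_backlog())
flushes the backlog list under that same lock. Without repeating the check
inside the critical section, the flush could run between the lockless
qat_alg_try_enqueue() and list_add_tail(); the request would then land on
the list after the last flush and, with no further completions due, it
would never be submitted. A rough userspace analogue of the pattern
(made-up names, pthreads instead of spinlocks, not QAT code):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

struct req {
	struct req *next;		/* single-linked backlog list */
};

struct backlog {
	pthread_mutex_t lock;
	struct req *head;		/* requests waiting for ring space */
	bool ring_full;			/* stand-in for the HW ring state */
};

/* Stand-in for adf_send_message(): accept unless the "ring" is full. */
static bool try_ring(struct backlog *b, struct req *r)
{
	(void)r;
	return !b->ring_full;
}

int send_maybacklog(struct backlog *b, struct req *r)
{
	int ret = -EINPROGRESS;

	/* Fast path, lockless: nothing backlogged and the ring takes it. */
	if (!b->head && try_ring(b, r))
		return ret;

	/*
	 * Slow path: repeat the check under the lock, so deciding to
	 * backlog and inserting into the list are atomic with respect to
	 * drain_backlog() below.
	 */
	pthread_mutex_lock(&b->lock);
	if (b->head || !try_ring(b, r)) {
		r->next = b->head;	/* the real code appends; kept simple */
		b->head = r;
		ret = -EBUSY;
	}
	pthread_mutex_unlock(&b->lock);

	return ret;
}

/* Completion path, analogous to qat_alg_send_backlog(): resubmit
 * backlogged requests while the ring has room, under the same lock. */
void drain_backlog(struct backlog *b)
{
	pthread_mutex_lock(&b->lock);
	while (b->head && try_ring(b, b->head))
		b->head = b->head->next;
	pthread_mutex_unlock(&b->lock);
}
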
Reviewed-by: Mikulas Patocka <mpato...@redhat.com>
Mikulas