>-----Original Message-----
>From: dev <dev-boun...@dpdk.org> On Behalf Of Andrzej Ostruszka
>Sent: Thursday, March 19, 2020 3:07 PM
>To: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx
>
>On 3/6/20 5:35 PM, pbhagavat...@marvell.com wrote:
>> From: Pavan Nikhilesh <pbhagavat...@marvell.com>
>>
>> Add device arguments to lock Rx/Tx contexts.
>> Application can either choose to lock Rx or Tx contexts by using
>> 'lock_rx_ctx' or 'lock_tx_ctx' respectively per each port.
>>
>> Example:
>>      -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavat...@marvell.com>
>> ---
>>  doc/guides/nics/octeontx2.rst               |  14 ++
>>  drivers/net/octeontx2/otx2_ethdev.c         | 179 +++++++++++++++++++-
>>  drivers/net/octeontx2/otx2_ethdev.h         |   2 +
>>  drivers/net/octeontx2/otx2_ethdev_devargs.c |  16 +-
>>  drivers/net/octeontx2/otx2_rss.c            |  23 +++
>>  5 files changed, 231 insertions(+), 3 deletions(-)
>>
>> diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
>> index 819d09e11..6a13c9b26 100644
>> --- a/doc/guides/nics/octeontx2.rst
>> +++ b/doc/guides/nics/octeontx2.rst
>> @@ -207,6 +207,20 @@ Runtime Config Options
>>     With the above configuration, application can enable inline IPsec processing
>>     on 128 SAs (SPI 0-127).
>>
>> +- ``Lock Rx contexts in NDC cache``
>> +
>> +   Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
>> +
>> +   For example::
>> +      -w 0002:02:00.0,lock_rx_ctx=1
>
>"=1" is needed because of kvargs require it?  If that is the case then
>I'll think about extending kvargs to accept simple keys - this syntax
>doesn't feel right when all one really wants is just to test the
>presence of flag (for 1/true) or its lack (for 0/false).
>

Kvargs requires a key-value pair with RTE_KVARGS_KV_DELIM (`=`) as the delimiter.
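For reference, this is roughly what the handler looks like on our side (sketch based on the existing parse_flag helper in otx2_ethdev_devargs.c):

    static int
    parse_flag(const char *key, const char *value, void *extra_args)
    {
            RTE_SET_USED(key);

            /* kvargs always hands the handler a (key, value) pair, so the
             * devarg has to be written as "lock_rx_ctx=1" even though only
             * the presence of the key matters here.
             */
            *(uint16_t *)extra_args = atoi(value);

            return 0;
    }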

>BTW - extra line break after "::"

Will fix in v2.
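The snippet would then read along these lines (blank line and indentation after "::"):

   For example::

      -w 0002:02:00.0,lock_rx_ctx=1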

>
>> +
>> +- ``Lock Tx contexts in NDC cache``
>> +
>> +   Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
>> +
>> +   For example::
>> +      -w 0002:02:00.0,lock_tx_ctx=1
>
>Same as above
>
>[...]
>> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
>> index e60f4901c..592fef458 100644
>> --- a/drivers/net/octeontx2/otx2_ethdev.c
>> +++ b/drivers/net/octeontx2/otx2_ethdev.c
>> @@ -381,6 +381,38 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
>>              goto fail;
>>      }
>>
>> +    if (dev->lock_rx_ctx) {
>> +            aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +            aq->qidx = qid;
>> +            aq->ctype = NIX_AQ_CTYPE_CQ;
>> +            aq->op = NIX_AQ_INSTOP_LOCK;
>> +
>> +            aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +            if (!aq) {
>> +                    /* The shared memory buffer can be full.
>> +                     * Flush it and retry
>> +                     */
>> +                    otx2_mbox_msg_send(mbox, 0);
>> +                    rc = otx2_mbox_wait_for_rsp(mbox, 0);
>> +                    if (rc < 0) {
>> +                            otx2_err("Failed to LOCK cq context");
>> +                            return 0;
>
>Similar comments as for the previous patch.  Is this not a failure?  If
>so why "return 0"?  If not failure don't log it as an error. 

Since NDC locking is done on a FIFO basis, locking might fail, but that is not a
catastrophic failure as NIX will still function normally. I'd still like to log
it as an error for visibility.

> BTW here
>the failure is not for locking but for flushing of the mbox (right?) 

otx2_mbox_msg_send() will never fail. We might get a timeout while waiting for the
response, which we should consider an AQ instruction failure.

>so maybe change the err-msg to differentiate from the case below.
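
Something along these lines could work for the cq/rq path in v2 (rough sketch, message wording not final):

    aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
    if (!aq) {
            /* Shared memory buffer is full: flush it and retry.
             * A timeout below means the already queued CQ LOCK
             * instruction failed; log it but don't fail the queue
             * setup since NIX works fine without the NDC lock.
             */
            otx2_mbox_msg_send(mbox, 0);
            rc = otx2_mbox_wait_for_rsp(mbox, 0);
            if (rc < 0) {
                    otx2_err("Failed to LOCK cq context");
                    return 0;
            }

            aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
            if (!aq) {
                    otx2_err("Failed to get mbox memory to LOCK rq context");
                    return 0;
            }
    }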


>
>> +                    }
>> +
>> +                    aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +                    if (!aq) {
>> +                            otx2_err("Failed to LOCK rq context");
>> +                            return 0;
>
>Same as above - error or not?
>
>> +                    }
>> +            }
>> +            aq->qidx = qid;
>> +            aq->ctype = NIX_AQ_CTYPE_RQ;
>> +            aq->op = NIX_AQ_INSTOP_LOCK;
>> +            rc = otx2_mbox_process(mbox);
>> +            if (rc < 0)
>> +                    otx2_err("Failed to LOCK rq context");
>
>Ditto.
>
>> +    }
>> +
>>      return 0;
>>  fail:
>>      return rc;
>
>Same comments for *rq_uninit (below) as those for *rq_init (above).
>
>> @@ -430,6 +462,38 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
>>              return rc;
>>      }
>>
>> +    if (dev->lock_rx_ctx) {
>> +            aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +            aq->qidx = rxq->rq;
>> +            aq->ctype = NIX_AQ_CTYPE_CQ;
>> +            aq->op = NIX_AQ_INSTOP_UNLOCK;
>> +
>> +            aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +            if (!aq) {
>> +                    /* The shared memory buffer can be full.
>> +                     * Flush it and retry
>> +                     */
>> +                    otx2_mbox_msg_send(mbox, 0);
>> +                    rc = otx2_mbox_wait_for_rsp(mbox, 0);
>> +                    if (rc < 0) {
>> +                            otx2_err("Failed to UNLOCK cq context");
>> +                            return 0;
>> +                    }
>> +
>> +                    aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +                    if (!aq) {
>> +                            otx2_err("Failed to UNLOCK rq context");
>> +                            return 0;
>> +                    }
>> +            }
>> +            aq->qidx = rxq->rq;
>> +            aq->ctype = NIX_AQ_CTYPE_RQ;
>> +            aq->op = NIX_AQ_INSTOP_UNLOCK;
>> +            rc = otx2_mbox_process(mbox);
>> +            if (rc < 0)
>> +                    otx2_err("Failed to UNLOCK rq context");
>> +    }
>> +
>>      return 0;
>>  }
>>
>
>And the same set of comments apply below to *sqb_lock/unlock - in
>addition one comment about err-msg.
>
>> @@ -715,6 +779,90 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
>>      return flags;
>>  }
>>
>> +static int
>> +nix_sqb_lock(struct rte_mempool *mp)
>> +{
>> +    struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
>> +    struct npa_aq_enq_req *req;
>> +    int rc;
>> +
>> +    req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> +    req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> +    req->ctype = NPA_AQ_CTYPE_AURA;
>> +    req->op = NPA_AQ_INSTOP_LOCK;
>> +
>> +    req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> +    if (!req) {
>> +            /* The shared memory buffer can be full.
>> +             * Flush it and retry
>> +             */
>> +            otx2_mbox_msg_send(npa_lf->mbox, 0);
>> +            rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
>> +            if (rc < 0) {
>> +                    otx2_err("Failed to LOCK AURA context");
>> +                    return 0;
>> +            }
>> +
>> +            req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> +            if (!req) {
>> +                    otx2_err("Failed to LOCK POOL context");
>
>Apart from the general err-or-not-err comment here you have not
>attempted to lock pool yet, you do that below.  Just like before use
>different msg 6 lines above (since you have only flushed - not
>attempted
>to lock) and here use the "Failed to LOCK AURA context".  The same
>comment applies below for unlock.
>

The AURA message would have already been delivered to the AQ here, so I think a more
appropriate message would be "Failed to get mbox memory for locking pool ctx".
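
i.e. roughly (sketch only):

    req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
    if (!req) {
            /* The AURA LOCK is already queued at this point; only the
             * POOL LOCK is being dropped here.
             */
            otx2_err("Failed to get mbox memory to LOCK pool context");
            return 0;
    }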

>> +                    return 0;
>> +            }
>> +    }
>> +
>> +    req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> +    req->ctype = NPA_AQ_CTYPE_POOL;
>> +    req->op = NPA_AQ_INSTOP_LOCK;
>> +
>> +    rc = otx2_mbox_process(npa_lf->mbox);
>> +    if (rc < 0)
>> +            otx2_err("Unable to lock POOL in NDC");
>> +
>> +    return 0;
>> +}
>> +
>> +static int
>> +nix_sqb_unlock(struct rte_mempool *mp)
>> +{
>> +    struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
>> +    struct npa_aq_enq_req *req;
>> +    int rc;
>> +
>> +    req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> +    req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> +    req->ctype = NPA_AQ_CTYPE_AURA;
>> +    req->op = NPA_AQ_INSTOP_UNLOCK;
>> +
>> +    req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> +    if (!req) {
>> +            /* The shared memory buffer can be full.
>> +             * Flush it and retry
>> +             */
>> +            otx2_mbox_msg_send(npa_lf->mbox, 0);
>> +            rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
>> +            if (rc < 0) {
>> +                    otx2_err("Failed to UNLOCK AURA context");
>> +                    return 0;
>> +            }
>> +
>> +            req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> +            if (!req) {
>> +                    otx2_err("Failed to UNLOCK POOL context");
>> +                    return 0;
>> +            }
>> +    }
>> +    req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> +    req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> +    req->ctype = NPA_AQ_CTYPE_POOL;
>> +    req->op = NPA_AQ_INSTOP_UNLOCK;
>> +
>> +    rc = otx2_mbox_process(npa_lf->mbox);
>> +    if (rc < 0)
>> +            otx2_err("Unable to UNLOCK AURA in NDC");
>> +
>> +    return 0;
>> +}
>> +
>>  static int
>>  nix_sq_init(struct otx2_eth_txq *txq)
>>  {
>> @@ -757,7 +905,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
>>      /* Many to one reduction */
>>      sq->sq.qint_idx = txq->sq % dev->qints;
>>
>> -    return otx2_mbox_process(mbox);
>> +    rc = otx2_mbox_process(mbox);
>> +    if (rc < 0)
>> +            return rc;
>> +
>> +    if (dev->lock_tx_ctx) {
>> +            sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +            sq->qidx = txq->sq;
>> +            sq->ctype = NIX_AQ_CTYPE_SQ;
>> +            sq->op = NIX_AQ_INSTOP_LOCK;
>> +
>> +            rc = otx2_mbox_process(mbox);
>> +    }
>> +
>> +    return rc;
>>  }
>>
>>  static int
>> @@ -800,6 +961,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
>>      if (rc)
>>              return rc;
>>
>> +    if (dev->lock_tx_ctx) {
>> +            /* Unlock sq */
>> +            aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +            aq->qidx = txq->sq;
>> +            aq->ctype = NIX_AQ_CTYPE_SQ;
>> +            aq->op = NIX_AQ_INSTOP_UNLOCK;
>> +
>> +            rc = otx2_mbox_process(mbox);
>> +            if (rc < 0)
>> +                    return rc;
>> +
>> +            nix_sqb_unlock(txq->sqb_pool);
>> +    }
>> +
>>      /* Read SQ and free sqb's */
>>      aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>>      aq->qidx = txq->sq;
>> @@ -921,6 +1096,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
>>      }
>>
>>      nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
>> +    if (dev->lock_tx_ctx)
>> +            nix_sqb_lock(txq->sqb_pool);
>>
>>      return 0;
>>  fail:
>> diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
>> index e5684f9f0..71f8cf729 100644
>> --- a/drivers/net/octeontx2/otx2_ethdev.h
>> +++ b/drivers/net/octeontx2/otx2_ethdev.h
>> @@ -287,6 +287,8 @@ struct otx2_eth_dev {
>>      uint16_t scalar_ena;
>>      uint16_t rss_tag_as_xor;
>>      uint16_t max_sqb_count;
>> +    uint16_t lock_rx_ctx;
>> +    uint16_t lock_tx_ctx;
>>      uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
>>      uint64_t rx_offloads;
>>      uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
>> diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
>> index bc11f54b5..0857d8247 100644
>> --- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
>> +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
>> @@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
>>  #define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
>>  #define OTX2_SWITCH_HEADER_TYPE "switch_header"
>>  #define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
>> +#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
>> +#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
>>
>>  int
>>  otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct
>otx2_eth_dev *dev)
>> @@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
>>      uint16_t switch_header_type = 0;
>>      uint16_t flow_max_priority = 3;
>>      uint16_t ipsec_in_max_spi = 1;
>> -    uint16_t scalar_enable = 0;
>>      uint16_t rss_tag_as_xor = 0;
>> +    uint16_t scalar_enable = 0;
>>      struct rte_kvargs *kvlist;
>> +    uint16_t lock_rx_ctx = 0;
>> +    uint16_t lock_tx_ctx = 0;
>>
>>      if (devargs == NULL)
>>              goto null_devargs;
>> @@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
>>                         &parse_switch_header_type, &switch_header_type);
>>      rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
>>                         &parse_flag, &rss_tag_as_xor);
>> +    rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
>> +                       &parse_flag, &lock_rx_ctx);
>> +    rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
>> +                       &parse_flag, &lock_tx_ctx);
>>      otx2_parse_common_devargs(kvlist);
>>      rte_kvargs_free(kvlist);
>>
>> @@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
>>      dev->scalar_ena = scalar_enable;
>>      dev->rss_tag_as_xor = rss_tag_as_xor;
>>      dev->max_sqb_count = sqb_count;
>> +    dev->lock_rx_ctx = lock_rx_ctx;
>> +    dev->lock_tx_ctx = lock_tx_ctx;
>>      dev->rss_info.rss_size = rss_size;
>>      dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
>>      dev->npc_flow.flow_max_priority = flow_max_priority;
>> @@ -187,4 +197,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
>>                            OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
>>                            OTX2_FLOW_MAX_PRIORITY "=<1-32>"
>>                            OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
>> -                          OTX2_RSS_TAG_AS_XOR "=1");
>> +                          OTX2_RSS_TAG_AS_XOR "=1"
>> +                          OTX2_LOCK_RX_CTX "=<1-65535>"
>> +                          OTX2_LOCK_TX_CTX "=<1-65535>");
>
>AFAIU the "=1" is required due to kvargs parsing limitation.  But why
>the range here is 1-65535 when all you want is just a boolean flag (if
>the key is present or not).
>

My bad, we were debating whether it should be per Rx/Tx queue, which would
use a mask. I will fix it in v2.
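i.e. v2 would change the registration to something like:

    -                             OTX2_LOCK_RX_CTX "=<1-65535>"
    -                             OTX2_LOCK_TX_CTX "=<1-65535>");
    +                             OTX2_LOCK_RX_CTX "=1"
    +                             OTX2_LOCK_TX_CTX "=1");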

>> diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
>> index 7a8c8f3de..34005ef02 100644
>> --- a/drivers/net/octeontx2/otx2_rss.c
>> +++ b/drivers/net/octeontx2/otx2_rss.c
>> @@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
>>              req->qidx = (group * rss->rss_size) + idx;
>>              req->ctype = NIX_AQ_CTYPE_RSS;
>>              req->op = NIX_AQ_INSTOP_INIT;
>> +
>> +            if (!dev->lock_rx_ctx)
>> +                    continue;
>> +
>> +            req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +            if (!req) {
>> +                    /* The shared memory buffer can be full.
>> +                     * Flush it and retry
>> +                     */
>> +                    otx2_mbox_msg_send(mbox, 0);
>> +                    rc = otx2_mbox_wait_for_rsp(mbox, 0);
>> +                    if (rc < 0)
>> +                            return rc;
>> +
>> +                    req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> +                    if (!req)
>> +                            return -ENOMEM;
>> +            }
>> +            req->rss.rq = ind_tbl[idx];
>> +            /* Fill AQ info */
>> +            req->qidx = (group * rss->rss_size) + idx;
>> +            req->ctype = NIX_AQ_CTYPE_RSS;
>> +            req->op = NIX_AQ_INSTOP_LOCK;
>>      }
>>
>>      otx2_mbox_msg_send(mbox, 0);
>>
>
>And here you treat the locking errors as errors - so I think you need to
>just adapt to this style and fix the previous comments.

I think here we should continue creating the RSS table without locking it, as
locking is not mandatory. I will modify it accordingly in v2.
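
Roughly, the lock step inside the loop would then warn and move on instead of returning (sketch only):

            req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
            if (!req) {
                    /* Shared memory buffer can be full: flush it and
                     * retry.  Locking is best effort here, so on any
                     * failure just warn and keep building the table.
                     */
                    otx2_mbox_msg_send(mbox, 0);
                    rc = otx2_mbox_wait_for_rsp(mbox, 0);
                    if (rc < 0) {
                            otx2_err("Failed to LOCK rss context");
                            continue;
                    }

                    req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
                    if (!req) {
                            otx2_err("Failed to get mbox memory to LOCK rss context");
                            continue;
                    }
            }
            req->rss.rq = ind_tbl[idx];
            req->qidx = (group * rss->rss_size) + idx;
            req->ctype = NIX_AQ_CTYPE_RSS;
            req->op = NIX_AQ_INSTOP_LOCK;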

>
>With regards
>Andrzej Ostruszka

Thanks,
Pavan.
