On 22/09/2021 03:16, Zhang, AlvinX wrote:
-----Original Message-----
From: Kevin Traynor <ktray...@redhat.com>
Sent: Tuesday, September 21, 2021 5:21 PM
To: Zhang, AlvinX <alvinx.zh...@intel.com>; Zhang, Qi Z
<qi.z.zh...@intel.com>; Guo, Junfeng <junfeng....@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] net/ice: add ability to reduce the Rx latency
On 18/09/2021 02:33, Zhang, AlvinX wrote:
-----Original Message-----
From: Kevin Traynor <ktray...@redhat.com>
Sent: Saturday, September 18, 2021 1:25 AM
To: Zhang, AlvinX <alvinx.zh...@intel.com>; Zhang, Qi Z
<qi.z.zh...@intel.com>; Guo, Junfeng <junfeng....@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] net/ice: add ability to reduce the Rx
latency
On 14/09/2021 02:31, Alvin Zhang wrote:
This patch adds a devarg parameter to enable/disable reducing the Rx
latency.
Signed-off-by: Alvin Zhang <alvinx.zh...@intel.com>
---
doc/guides/nics/ice.rst | 8 ++++++++
drivers/net/ice/ice_ethdev.c | 26 +++++++++++++++++++++++---
drivers/net/ice/ice_ethdev.h | 1 +
3 files changed, 32 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 5bc472f..3db0430 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -219,6 +219,14 @@ Runtime Config Options
These ICE_DBG_XXX are defined in
``drivers/net/ice/base/ice_type.h``.
+- ``Reduce Rx interrupts and latency`` (default ``0``)
+
+ vRAN workloads require low latency DPDK interface for the front
+ haul interface connection to Radio. Now we can reduce Rx
+ interrupts and latency by specifying ``1`` for parameter ``rx-low-latency``::
+
+ -a 0000:88:00.0,rx-low-latency=1
+
When would a user select this and when not? What is the trade-off?
The text is a bit unclear. It looks below like it reduces the
interrupt latency, but not the number of interrupts. Maybe I got it wrong.
Yes, it reduces the interrupt latency. We will refine the doc in the next
patch.
Thanks, the text in v2 is clearer.
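Just for illustration, the devarg rides on the per-device EAL allow-list option; assuming dpdk-testpmd and the PCI address used in the doc above, an invocation would look something like:

dpdk-testpmd -a 0000:88:00.0,rx-low-latency=1 -- -i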
Driver compilation and testing
------------------------------
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c..85662e4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -29,12 +29,14 @@
#define ICE_PIPELINE_MODE_SUPPORT_ARG "pipeline-mode-support"
#define ICE_PROTO_XTR_ARG "proto_xtr"
#define ICE_HW_DEBUG_MASK_ARG "hw_debug_mask"
+#define ICE_RX_LOW_LATENCY "rx-low-latency"
static const char * const ice_valid_args[] = {
ICE_SAFE_MODE_SUPPORT_ARG,
ICE_PIPELINE_MODE_SUPPORT_ARG,
ICE_PROTO_XTR_ARG,
ICE_HW_DEBUG_MASK_ARG,
+ ICE_RX_LOW_LATENCY,
NULL
};
@@ -1827,6 +1829,9 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
if (ret)
goto bail;
+ ret = rte_kvargs_process(kvlist, ICE_RX_LOW_LATENCY,
+ &parse_bool, &ad->devargs.rx_low_latency);
+
bail:
rte_kvargs_free(kvlist);
return ret;
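For reference, this hunk reuses the driver's existing parse_bool kvargs handler, which is not shown in the diff. A minimal sketch of a handler with the arg_handler_t shape that rte_kvargs_process() expects (assuming <string.h>/<stdlib.h> and the driver's PMD_DRV_LOG macro) would be roughly:

static int
parse_bool(const char *key, const char *value, void *args)
{
	int *i = (int *)args;

	/* accept only "0" or "1" and store the result in the devargs field */
	if (value == NULL ||
	    (strcmp(value, "0") != 0 && strcmp(value, "1") != 0)) {
		PMD_DRV_LOG(WARNING,
			    "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
			    value, key);
		return -1;
	}

	*i = atoi(value);
	return 0;
}

rte_kvargs_process() invokes the handler for each occurrence of the key, passing the opaque pointer (&ad->devargs.rx_low_latency here) through as args.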
@@ -3144,8 +3149,9 @@ static int ice_init_rss(struct ice_pf *pf) {
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint32_t val, val_tx;
- int i;
+ int rx_low_latency, i;
+ rx_low_latency = vsi->adapter->devargs.rx_low_latency;
for (i = 0; i < nb_queue; i++) {
/*do actual bind*/
val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
@@ -3155,8 +3161,21 @@ static int ice_init_rss(struct ice_pf *pf)
PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
base_queue + i, msix_vect);
+
/* set ITR0 value */
- ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x2);
+ if (rx_low_latency) {
+ /**
+ * Empirical configuration for optimal real time
+ * latency: reduce interrupt throttling to 2us
+ */
+ ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x1);
Why not set this to 0? "Setting the INTERVAL to zero enables
immediate interrupt."
Didn't see a reply to this comment?
I'm not requesting a change, just asking if there is a reason you didn't choose
the lowest latency setting, and if you should?
Setting the INTERVAL to zero enables immediate interrupts, which will cause
more interrupts at high packet rates, and more interrupts will consume more
PCI bandwidth and CPU cycles.
Setting it to 2us is a performance trade-off.
ok, thanks.
+ ICE_WRITE_REG(hw, QRX_ITR(base_queue + i),
+ QRX_ITR_NO_EXPR_M);
+ } else {
+ ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x2);
+ ICE_WRITE_REG(hw, QRX_ITR(base_queue + i), 0);
+ }
+
ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
}
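On the values themselves: if I read the base code right, the ITR interval field counts in 2 us units, which is why 0x2 (the old default, ~4 us) vs 0x1 (~2 us) vs 0 (immediate) is the trade-off being discussed above. A hypothetical helper, not part of the patch, just to make the unit conversion explicit:

/* Illustrative only: the ITR interval register counts in 2 us steps,
 * so 0x0 = immediate interrupt, 0x1 ~= 2 us and 0x2 ~= 4 us of
 * interrupt throttling.
 */
static inline uint32_t
itr_us_to_reg(uint32_t usec)
{
	return usec / 2;
}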
@@ -5314,7 +5333,8 @@ static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
ICE_HW_DEBUG_MASK_ARG "=0xXXX"
ICE_PROTO_XTR_ARG
"=[queue:]<vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset>"
ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>"
- ICE_PIPELINE_MODE_SUPPORT_ARG "=<0|1>");
+ ICE_PIPELINE_MODE_SUPPORT_ARG "=<0|1>"
+ ICE_RX_LOW_LATENCY "=<0|1>");
RTE_LOG_REGISTER_SUFFIX(ice_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(ice_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index b4bf651..c61cc1f 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -463,6 +463,7 @@ struct ice_pf {
* Cache devargs parse result.
*/
struct ice_devargs {
+ int rx_low_latency;
int safe_mode_support;
uint8_t proto_xtr_dflt;
int pipe_mode_support;