On 30-Jun-21 11:29 AM, Ananyev, Konstantin wrote:


Use the new multi-monitor intrinsic to allow monitoring multiple ethdev
Rx queues while entering the energy efficient power state. The multi
version will be used unconditionally if supported, and the UMWAIT one
will only be used when multi-monitor is not supported by the hardware.

Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
---

Notes:
     v4:
     - Fix possible out of bounds access
     - Added missing index increment

  doc/guides/prog_guide/power_man.rst |  9 ++--
  lib/power/rte_power_pmd_mgmt.c      | 81 ++++++++++++++++++++++++++++-
  2 files changed, 85 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/power_man.rst b/doc/guides/prog_guide/power_man.rst
index ec04a72108..94353ca012 100644
--- a/doc/guides/prog_guide/power_man.rst
+++ b/doc/guides/prog_guide/power_man.rst
@@ -221,13 +221,16 @@ power saving whenever empty poll count reaches a certain number.
  The "monitor" mode is only supported in the following configurations and scenarios:

  * If ``rte_cpu_get_intrinsics_support()`` function indicates that
+  ``rte_power_monitor_multi()`` function is supported by the platform, then
+  monitoring multiple Ethernet Rx queues for traffic will be supported.
+
+* If ``rte_cpu_get_intrinsics_support()`` function indicates that only
  ``rte_power_monitor()`` is supported by the platform, then monitoring will be
    limited to a mapping of 1 core 1 queue (thus, each Rx queue will have to be
    monitored from a different lcore).

-* If ``rte_cpu_get_intrinsics_support()`` function indicates that the
-  ``rte_power_monitor()`` function is not supported, then monitor mode will not
-  be supported.
+* If ``rte_cpu_get_intrinsics_support()`` function indicates that neither of the
+  two monitoring functions are supported, then monitor mode will not be
+  supported.

  * Not all Ethernet drivers support monitoring, even if the underlying
    platform may support the necessary CPU instructions. Please refer to
diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index fccfd236c2..2056996b9c 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -124,6 +124,32 @@ queue_list_take(struct pmd_core_cfg *cfg, const union queue *q)
       return found;
  }

+static inline int
+get_monitor_addresses(struct pmd_core_cfg *cfg,
+             struct rte_power_monitor_cond *pmc, size_t len)
+{
+     const struct queue_list_entry *qle;
+     size_t i = 0;
+     int ret;
+
+     TAILQ_FOREACH(qle, &cfg->head, next) {
+             const union queue *q = &qle->queue;
+             struct rte_power_monitor_cond *cur;
+
+             /* attempted out of bounds access */
+             if (i >= len) {
+                     RTE_LOG(ERR, POWER, "Too many queues being monitored\n");
+                     return -1;
+             }
+
+             cur = &pmc[i++];
+             ret = rte_eth_get_monitor_addr(q->portid, q->qid, cur);
+             if (ret < 0)
+                     return ret;
+     }
+     return 0;
+}
+
  static void
  calc_tsc(void)
  {
@@ -190,6 +216,45 @@ lcore_can_sleep(struct pmd_core_cfg *cfg)
       return true;
  }

+static uint16_t
+clb_multiwait(uint16_t port_id __rte_unused, uint16_t qidx __rte_unused,
+             struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
+             uint16_t max_pkts __rte_unused, void *arg)
+{
+     const unsigned int lcore = rte_lcore_id();
+     struct queue_list_entry *queue_conf = arg;
+     struct pmd_core_cfg *lcore_conf;
+     const bool empty = nb_rx == 0;
+
+     lcore_conf = &lcore_cfgs[lcore];
+
+     /* early exit */
+     if (likely(!empty))
+             queue_reset(lcore_conf, queue_conf);
+     else {
+             struct rte_power_monitor_cond pmc[RTE_MAX_ETHPORTS];

As discussed, I still think it needs to be pmc[lcore_conf->n_queues];
Or, if a VLA is not an option, alloca(), a dynamically allocated lcore_conf->pmc[], or...


Apologies, this was a rebase mistake. Thanks for catching it! Will fix in v6.
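
For reference, here is a minimal sketch of the fix along the lines suggested
above, replacing the quoted "else" branch: it sizes the array by
lcore_conf->n_queues instead of RTE_MAX_ETHPORTS. Sketch only, the actual v6
change may differ:

        else {
                /*
                 * Size the array by the number of queues in this lcore's
                 * monitoring group rather than RTE_MAX_ETHPORTS. If a VLA is
                 * not acceptable, alloca() or a dynamically allocated
                 * lcore_conf->pmc[] would work as well.
                 */
                struct rte_power_monitor_cond pmc[lcore_conf->n_queues];
                int ret;

                /* gather wakeup conditions for every queue in the group */
                ret = get_monitor_addresses(lcore_conf, pmc,
                                lcore_conf->n_queues);
                if (ret < 0)
                        return nb_rx;

                /* monitor all addresses at once; UINT64_MAX means no TSC-based wakeup */
                rte_power_monitor_multi(pmc, lcore_conf->n_queues, UINT64_MAX);
        }

Bounding the array by n_queues avoids overflowing it when an lcore monitors
more queues than RTE_MAX_ETHPORTS, and keeps the stack footprint proportional
to what is actually being monitored.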

--
Thanks,
Anatoly
