On 4/12/2020 7:51 AM, Hideyuki Yamashita wrote:
In general, a DPDK application consumes CPU because it polls for
incoming packets using the rx_burst API in an infinite loop.
This makes it difficult to estimate how much CPU is really
used to send/receive packets by the DPDK application.

For example, even if no incoming packets are arriving, CPU usage
looks nearly 100% when observed with the top command.

It is beneficial if developers can observe the real CPU usage of the
DPDK application.
Such information can be exported to a monitoring application like
Prometheus/Grafana and displayed graphically.

To achieve above, this patch set provides apistats functionality.
apistats provides the following two counters for each lcore:
- rx_burst_counts[RTE_MAX_LCORE]
- tx_burst_counts[RTE_MAX_LCORE]
These accumulate the rx_burst/tx_burst call counts since the application starts.

By using those values, developers can roughly estimate CPU usage.
Let us assume a DPDK application that simply forwards packets.
It calls tx_burst only if it receives packets.
If rx_burst_counts=1000 and tx_burst_counts=1000 during a certain
period of time, one can assume CPU usage is 100%.
If rx_burst_counts=1000 and tx_burst_counts=100 during a certain
period of time, one can assume CPU usage is 10%.
Here we assume that tx_burst_counts equals the number of rx_burst
calls that actually received incoming packets.


This patch set provides the following.
- basic API counting functionality (apistats) in librte_ethdev
- code in testpmd to accumulate the counter information
- code in proc-info to retrieve the above counter information
- a description of the --apistats parameter in the proc-info document
- modify MAINTAINERS file for apistats.c and apistats.h

Hideyuki Yamashita (5):
   maintainers: update maintainers file for apistats
   app/proc-info: add to use apistats
   app/test-pmd: add to use apistats
   docs: add description of apistats parameter into proc-info
   librte_ethdev: add to use apistats

  MAINTAINERS                      |  3 ++
  app/proc-info/main.c             | 46 +++++++++++++++++++++++
  app/test-pmd/testpmd.c           |  4 ++
  doc/guides/tools/proc_info.rst   | 10 ++++-
  lib/librte_ethdev/meson.build    |  6 ++-
  lib/librte_ethdev/rte_apistats.c | 64 ++++++++++++++++++++++++++++++++
  lib/librte_ethdev/rte_apistats.h | 64 ++++++++++++++++++++++++++++++++
  lib/librte_ethdev/rte_ethdev.h   |  7 ++++
  lib/librte_ethdev/version.map    |  5 +++
  9 files changed, 205 insertions(+), 4 deletions(-)
  create mode 100644 lib/librte_ethdev/rte_apistats.c
  create mode 100644 lib/librte_ethdev/rte_apistats.h


Hi Hideyuki,

I have a few questions on the patch set.

How does this compare to the mechanism added to l3fwd-power which counts the number of empty, partial and full polls, and uses them to calculate busyness? We saw pretty good tracking of busyness using those metrics. I would be concerned that just looking at the number of rx_bursts and tx_bursts may be limited to only a few use-cases. The l3fwd-power example uses branchless increments to capture empty, non-empty, full, and non-full polls.

Why not use the existing telemetry library to store the stats? It would be good if whatever metrics were counted were made available in a standard way, so that external entities such as collectd could pick them up, rather than having to parse the new struct. The l3fwd-power example registers the necessary new metrics, and exposes them through the telemetry library.

And a comment on the patch set in general: The order of the patch set seems reversed. The earlier patch do not compile, because they depend on rte_apistats.h, which is introduced in the final patch. Each patch as it is applied needs to build successfully.

Also, I would suggest a different name. rte_apistats seems very generic and could apply to anything. How about something like rte_ethdev_stats.h or rte_burst_stats.h?

Rgds,
Dave.