On 17-Jul-22 10:56 AM, Morten Brørup wrote:
From: Honnappa Nagarahalli [mailto:honnappa.nagaraha...@arm.com]
Sent: Sunday, 17 July 2022 05.10

<snip>

Subject: RE: [PATCH v1 1/2] eal: add lcore busyness telemetry

From: Anatoly Burakov [mailto:anatoly.bura...@intel.com]
Sent: Friday, 15 July 2022 15.13

Currently, there is no way to measure lcore busyness in a passive way, without any modifications to the application. This patch adds a new EAL API that will be able to passively track core busyness.

The busyness is calculated by relying on the fact that most DPDK APIs will poll for packets.
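As I read the approach, the accounting amounts to counting non-empty polls against total polls per lcore. A minimal sketch of that idea (the struct and function names here are illustrative, not the actual patch's API):

```c
#include <stdint.h>

/* Hypothetical per-lcore busyness counters: every poll reports
 * whether it returned work; busyness is the share of non-empty
 * polls over the window. Not the actual EAL API. */
struct lcore_busyness {
	uint64_t total_polls;
	uint64_t busy_polls;
};

/* Called after each poll with the number of packets it returned. */
static inline void
record_poll(struct lcore_busyness *b, unsigned int nb_pkts)
{
	b->total_polls++;
	if (nb_pkts > 0)	/* a non-empty poll counts as busy */
		b->busy_polls++;
}

/* Busyness as an integer percentage of non-empty polls. */
static inline unsigned int
busyness_pct(const struct lcore_busyness *b)
{
	if (b->total_polls == 0)
		return 0;
	return (unsigned int)(b->busy_polls * 100 / b->total_polls);
}
```

The attraction is that nothing in the application changes: the instrumented poll path calls record_poll() itself, so busyness falls out "for free" wherever polling already happens.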

This is an "alternative fact"! Only run-to-completion applications poll for RX.
Pipelined applications do not poll for packets in every pipeline stage.
I guess you meant polling for packets from the NIC. Pipeline stages still need to receive packets from queues. We could do a similar thing for the rte_ring APIs.

The ring API is already instrumented to report telemetry in the same way, so any rte_ring-based pipeline will be able to track it. Obviously, non-DPDK APIs will have to be instrumented too; we really can't do anything about that from inside DPDK.
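To make the point concrete, here is a toy sketch of the same busy/idle accounting applied at a ring dequeue, so a ring-fed pipeline stage reports busyness the same way an RX-polling stage does. This is illustrative C only, not the actual rte_ring instrumentation:

```c
#include <stdint.h>

/* Toy single-producer/single-consumer ring, 8 slots, for
 * illustration only. */
struct toy_ring {
	void *objs[8];
	unsigned int head;	/* next free slot (producer) */
	unsigned int tail;	/* next to dequeue (consumer) */
};

/* Per-stage poll counters, same shape as lcore busyness stats. */
struct stage_stats {
	uint64_t polls;
	uint64_t busy;
};

/* Dequeue up to n objects and update the stage's busyness counters:
 * an empty dequeue counts as an idle poll, a non-empty one as busy. */
static unsigned int
toy_ring_dequeue(struct toy_ring *r, void **out, unsigned int n,
		 struct stage_stats *s)
{
	unsigned int avail = r->head - r->tail;
	unsigned int got = n < avail ? n : avail;

	for (unsigned int i = 0; i < got; i++)
		out[i] = r->objs[(r->tail + i) % 8];
	r->tail += got;

	s->polls++;
	if (got > 0)
		s->busy++;
	return got;
}
```

With the hook inside the dequeue path, a middle pipeline stage that never touches the NIC still accumulates the same kind of busy/idle counts as an RX stage.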


But it would mix apples, pears and bananas.

Let's say you have a pipeline with three ingress preprocessing threads, two 
advanced packet processing threads in the next pipeline stage and one egress 
thread as the third pipeline stage.

Now, the metrics reflect busyness for six threads, but three of them are 
apples, two of them are pears, and one is bananas.

I just realized another example where this patch might give misleading results 
on a run-to-completion application:

One thread handles a specific type of packets received on an Ethdev ingress 
queue set up by the rte_flow APIs, and another thread handles ingress packets 
from another Ethdev ingress queue. E.g. the first queue may contain packets for 
well known flows, where packets can be processed quickly, and the other queue 
for other packets requiring more scrutiny. Both threads are run-to-completion 
and handle Ethdev ingress packets.

*So: Only applications where the threads perform the exact same task can use 
this patch.*

I do not see how that follows. I think you're falling for an "it's not 100% useful, therefore it's 0% useful" fallacy here. Some use cases would obviously make the telemetry more informative than others, that's true; however, I do not see why it's a mandatory requirement for lcore busyness to report the same thing across all threads. We can document the limitations and assumptions made, can we not?

It is true that this patchset is mostly written from the standpoint of a run-to-completion application, but can we improve it? What would be your suggestions to make it better suit the use cases you are familiar with?


Also, rings may be used for other purposes than queueing packets between 
pipeline stages. E.g. our application uses rings for fast bulk allocation and 
freeing of other resources.


Well, this is the tradeoff for simplicity. Of course we could add all sorts of stuff like dynamic enable/disable of this and that and the other... but the end goal was something easy and automatic and that doesn't require any work to implement, not something that suits 100% of the cases 100% of the time. Having such flexibility as you described comes at a cost that this patch was not meant to pay!

--
Thanks,
Anatoly
