> From: Burakov, Anatoly [mailto:anatoly.bura...@intel.com]
> Sent: Monday, 18 July 2022 11.44
> 
> On 17-Jul-22 10:56 AM, Morten Brørup wrote:
> >> From: Honnappa Nagarahalli [mailto:honnappa.nagaraha...@arm.com]
> >> Sent: Sunday, 17 July 2022 05.10
> >>
> >> <snip>
> >>
> >>> Subject: RE: [PATCH v1 1/2] eal: add lcore busyness telemetry
> >>>
> >>>> From: Anatoly Burakov [mailto:anatoly.bura...@intel.com]
> >>>> Sent: Friday, 15 July 2022 15.13
> >>>>
> >>>> Currently, there is no way to measure lcore busyness in a passive
> >>>> way, without any modifications to the application. This patch adds
> >>>> a new EAL API that will be able to passively track core busyness.
> >>>>
> >>>> The busyness is calculated by relying on the fact that most DPDK
> >>>> APIs will poll for packets.
> >>>
> >>> This is an "alternative fact"! Only run-to-completion applications
> >>> poll for RX. Pipelined applications do not poll for packets in
> >>> every pipeline stage.
> >> I guess you meant polling for packets from the NIC. They still need
> >> to receive packets from queues. We could do a similar thing for the
> >> rte_ring APIs.
> 
> The ring API is already instrumented to report telemetry in the same
> way, so any rte_ring-based pipeline will be able to track it.
> Obviously, non-DPDK APIs will have to be instrumented too; we really
> can't do anything about that from inside DPDK.
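
(Side note, to make sure we're talking about the same mechanism: as I 
understand it, the instrumentation conceptually boils down to something like 
the sketch below. All names here are illustrative only, not the actual API 
added by the patch.)

/*
 * Illustrative only -- not the actual patch code. The idea: every
 * instrumented DPDK "poll" style API reports whether it returned any
 * work, and the cycles elapsed since the previous report are accounted
 * as either busy or idle for the calling lcore. (First-call
 * initialization of last_tsc is omitted for brevity.)
 */
#include <stdint.h>
#include <rte_config.h>
#include <rte_cycles.h>
#include <rte_lcore.h>

struct lcore_poll_stats {
	uint64_t busy_cycles;
	uint64_t idle_cycles;
	uint64_t last_tsc;
};

static struct lcore_poll_stats poll_stats[RTE_MAX_LCORE];

static inline void
lcore_poll_report(unsigned int nb_items)
{
	struct lcore_poll_stats *s = &poll_stats[rte_lcore_id()];
	uint64_t now = rte_rdtsc();

	if (nb_items > 0)
		s->busy_cycles += now - s->last_tsc; /* the poll returned work */
	else
		s->idle_cycles += now - s->last_tsc; /* empty poll */
	s->last_tsc = now;
}
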
> 
> >
> > But it would mix apples, pears and bananas.
> >
> > Let's say you have a pipeline with three ingress preprocessing
> > threads, two advanced packet processing threads in the next pipeline
> > stage, and one egress thread as the third pipeline stage.
> >
> > Now, the metrics reflect busyness for six threads, but three of them
> > are apples, two of them are pears, and one is bananas.
> >
> > I just realized another example, where this patch might give
> > misleading results on a run-to-completion application:
> >
> > One thread handles a specific type of packets received on an Ethdev
> > ingress queue set up by the rte_flow APIs, and another thread handles
> > ingress packets from another Ethdev ingress queue. E.g. the first
> > queue may contain packets for well-known flows, where packets can be
> > processed quickly, and the other queue for other packets requiring
> > more scrutiny. Both threads are run-to-completion and handle Ethdev
> > ingress packets.
> >
> > *So: Only applications where the threads perform the exact same task
> > can use this patch.*
> 
> I do not see how that follows. I think you're falling for an "it's not
> 100% useful, therefore it's 0% useful" fallacy here. Some use cases
> would obviously make telemetry more informative than others, that's
> true, however I do not see how it's a mandatory requirement for lcore
> busyness to report the same thing. We can document the limitations and
> assumptions made, can we not?

I did use strong wording in my email to get my message across. However, I do 
consider the scope "applications where the threads perform the exact same task" 
to cover more than 0 % of all deployed applications, and thus the patch is more 
than 0 % useful. But I certainly don't consider the scope of this patch to be 
100 % of all deployed applications, and perhaps not even 80 %.

I didn't reject the patch or oppose it; I requested that it be updated so the 
names reflect the information it actually provides. I strongly oppose using 
"CPU Busyness" as the telemetry name for something that only reflects ingress 
activity and is zero for a thread that only performs egress or other 
non-ingress tasks. That would be strongly misleading.

If you by "document the limitations and assumptions" also mean rename telemetry 
names and variables/functions in the patch to reflect what it actually does, 
then yes, documenting the limitations and assumptions suffices. However, adding 
a notice in some documentation that "CPU Business" telemetry only is 
correct/relevant for specific applications doesn't suffice.

> 
> It is true that this patchset is mostly written from the standpoint of
> a run-to-completion application, but can we improve it? What would be
> your suggestions to make it better suit use cases you are familiar
> with?

Our application uses our own run-time profiler library to measure the time 
spent in the application's various threads and pipeline stages, and the 
application needs to call the profiler library's functions to feed it the 
information it needs. We still haven't found a good way to transform the 
profiler data into a generic summary CPU Utilization percentage reflecting how 
much of the system's CPU capacity is being used (preferably on a linear scale). 
(Our profiler library is designed specifically for our own purposes, and would 
require a complete rewrite to meet even basic DPDK library standards, so I 
won't even try to contribute it.)
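
To illustrate the kind of application-assisted instrumentation I mean, here is 
a purely hypothetical sketch (not our actual profiler library; all names are 
made up). The application brackets each unit of work itself, so the profiler 
knows which task the cycles belong to:

/*
 * Hypothetical sketch only -- not our actual profiler library.
 * The application explicitly tells the profiler which task it has
 * spent cycles on, so per-task busyness can be reported.
 */
#include <stdint.h>
#include <rte_config.h>
#include <rte_cycles.h>
#include <rte_lcore.h>

enum task_id {
	TASK_INGRESS_FAST,	/* well-known flows, quick processing */
	TASK_INGRESS_SLOW,	/* packets requiring more scrutiny */
	TASK_EGRESS,
	TASK_MAX
};

static uint64_t task_cycles[RTE_MAX_LCORE][TASK_MAX];

static inline void
profile_task_done(enum task_id task, uint64_t start_tsc)
{
	/* Attribute the elapsed cycles to the task named by the application. */
	task_cycles[rte_lcore_id()][task] += rte_rdtsc() - start_tsc;
}

/*
 * Usage in a worker loop (illustrative):
 *
 *	uint64_t t = rte_rdtsc();
 *	nb = rte_eth_rx_burst(port, queue, pkts, BURST);
 *	... process the burst ...
 *	profile_task_done(TASK_INGRESS_FAST, t);
 */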

I don't think it is possible to measure and report detailed CPU Busyness 
without involving the application. Only the application has knowledge about 
what the individual lcores are doing. Even for my example above (with two 
run-to-completion threads serving rte_flow-configured Ethdev ingress queues), 
this patch would not provide information about which of the two types of 
traffic is causing the higher busyness. The telemetry might expose which 
specific thread is busy, but it doesn't tell which of the two tasks is being 
performed by that thread, and thus which kind of traffic is causing the 
busyness.

> 
> >
> > Also, rings may be used for purposes other than queueing packets
> > between pipeline stages. E.g. our application uses rings for fast
> > bulk allocation and freeing of other resources.
> >
> 
> Well, this is the tradeoff for simplicity. Of course we could add all
> sorts of stuff like dynamic enable/disable of this and that and the
> other... but the end goal was something easy and automatic and that
> doesn't require any work to implement, not something that suits 100% of
> the cases 100% of the time. Having such flexibility as you described
> comes at a cost that this patch was not meant to pay!

I do see the benefit of adding instrumentation like this to the DPDK libraries, 
so that the information becomes available with zero application development 
effort. The alternative would be a profiler/busyness library requiring 
application modifications.

I only request that:
1. The patch clearly reflects what it does, and
2. The instrumentation can be omitted at build time, so it has zero performance 
impact on applications where it is useless (a rough sketch of what I mean 
follows below).
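
To clarify request 2, I mean something along these lines -- a rough sketch with 
a made-up build option name, not a concrete proposal for how the patch should 
spell it:

/*
 * Rough sketch of build-time omission; the option name
 * RTE_LCORE_POLL_BUSYNESS_TELEMETRY is made up for this example, and
 * lcore_poll_report() refers to the illustrative hook sketched earlier
 * in this mail. When the option is disabled, the hot-path hook compiles
 * to nothing, so applications that don't want the telemetry pay zero
 * cost.
 */
#ifdef RTE_LCORE_POLL_BUSYNESS_TELEMETRY
#define LCORE_POLL_BUSYNESS_REPORT(nb_items) \
	lcore_poll_report(nb_items)
#else
#define LCORE_POLL_BUSYNESS_REPORT(nb_items) do { } while (0)
#endif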

> 
> --
> Thanks,
> Anatoly

PS: The busyness counters in the DPDK Service Cores library are also being 
updated [1].

[1] 
http://inbox.dpdk.org/dev/20220711131825.3373195-2-harry.van.haa...@intel.com/T/#u
