On 2024-01-29 13:50, Van Haaren, Harry wrote:
-----Original Message-----
From: Mattias Rönnblom <hof...@lysator.liu.se>
Sent: Saturday, January 27, 2024 7:32 PM
To: Morten Brørup <m...@smartsharesystems.com>; Mattias Rönnblom
<mattias.ronnb...@ericsson.com>; dev@dpdk.org
Cc: Van Haaren, Harry <harry.van.haa...@intel.com>; Stefan Sundkvist
<stefan.sundkv...@ericsson.com>
Subject: Re: [RFC] service: extend service function call statistics

Hi Mattias,

Thanks for the patch, looks good to me! Some responses to the discussion below.

Acked-by: Harry van Haaren <harry.van.haa...@intel.com>

On 2024-01-26 11:07, Morten Brørup wrote:
From: Mattias Rönnblom [mailto:hof...@lysator.liu.se]
Sent: Friday, 26 January 2024 09.28

On 2024-01-26 00:19, Morten Brørup wrote:
From: Mattias Rönnblom [mailto:mattias.ronnb...@ericsson.com]
Sent: Thursday, 25 January 2024 20.15

Add two new per-service counters.

RTE_SERVICE_ATTR_IDLE_CALL_COUNT tracks the number of service function
invocations where no work was performed.

RTE_SERVICE_ATTR_ERROR_CALL_COUNT tracks the number of invocations
resulting in an error.

The semantics of RTE_SERVICE_ATTR_CALL_COUNT remains the same (i.e.,
counting all invocations, regardless of return value).

The new statistics may be useful for both debugging and profiling
(e.g., calculating the average per-call processing latency for
non-idle service calls).
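
To make the profiling use case concrete, here is a minimal sketch
(error handling omitted) that reads the counters via
rte_service_attr_get() and derives the average cycle cost of non-idle
calls. RTE_SERVICE_ATTR_IDLE_CALL_COUNT is the attribute added by this
patch; the other two attributes already exist.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_service.h>

static void
print_avg_busy_cycles(uint32_t service_id)
{
	uint64_t calls, idle_calls, cycles;

	rte_service_attr_get(service_id, RTE_SERVICE_ATTR_CALL_COUNT, &calls);
	rte_service_attr_get(service_id, RTE_SERVICE_ATTR_IDLE_CALL_COUNT,
			     &idle_calls);
	rte_service_attr_get(service_id, RTE_SERVICE_ATTR_CYCLES, &cycles);

	/* Assuming CYCLES covers only calls that performed work, as
	 * described in this thread, dividing by the non-idle call count
	 * yields the average per-call processing latency. */
	uint64_t busy_calls = calls - idle_calls;
	if (busy_calls > 0)
		printf("service %" PRIu32 ": %" PRIu64 " cycles/busy call\n",
		       service_id, cycles / busy_calls);
}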

Service core tests are extended to cover the new counters, and
coverage for RTE_SERVICE_ATTR_CALL_COUNT is improved.

OK to all of the above. Good stuff.


The documentation for the CYCLES attributes is updated to reflect
their actual semantics.

If this is intended behavior, then updating the documentation seems
appropriate - I would even go so far as to consider it a bug fix.

However, quite a few cycles may be consumed by a service before it
can conclude that it had no work to do. Shouldn't that be considered
time spent by the service? I.e. should the code be fixed instead of the
documentation?


Generally, polling for new work in the service is cheap, in my
experience. But there's nothing in principle that prevents the
situation you describe from occurring. You could add an "IDLE_CYCLES"
counter in case that would ever be a real-world problem.

That wouldn't be a fix, but rather just a return to the old, subtly
broken (pre-22.11?) semantics.

Have a look at 809bd24 to see the rationale for the change. There's an
example in 4689c57.

This ambiguity stems from the fact that the platform/OS (i.e., DPDK)
doesn't know which service to "wake up" (which in turn is the result
of the platform not being able to track input sources, like NIC RX
queues or timer wheels) and thus must ask every service to
check if it has something to do.
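
To illustrate, here is a much-simplified sketch of what a service
lcore loop has to do (this is not the actual rte_service.c runner, and
the -EAGAIN idle convention shown is an assumption based on this
patch):

#include <errno.h>
#include <stdint.h>

/* Illustrative types only, not the real rte_service internals. */
struct svc {
	int (*cb)(void *userdata); /* the registered service function */
	void *cb_userdata;
	uint64_t calls, idle_calls, error_calls;
};

static void
service_runner(struct svc *svc, uint32_t n_mapped)
{
	/* With no knowledge of input sources (NIC RX queues, timer
	 * wheels, ...), the loop must call every mapped service and
	 * rely on the return value to learn whether work was found. */
	for (;;) {
		for (uint32_t i = 0; i < n_mapped; i++) {
			int rc = svc[i].cb(svc[i].cb_userdata);

			svc[i].calls++;
			if (rc == -EAGAIN)
				svc[i].idle_calls++;
			else if (rc != 0)
				svc[i].error_calls++;
		}
	}
}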

OK. Makes good sense.
So definitely fix the documentation, not the code. :-)

Agreed.


Alternatively, keep the behavior (for backwards compatibility) and
fix the documentation, as this patch does, and add an IDLE_CYCLES
counter for time spent in idle calls.

PS: We're not using DPDK service cores in our applications, so I'm
not familiar with the details. We are using something somewhat similar
(but homegrown), also for profiling and power management purposes, and
my feedback is based on my experience with our own variant of service
cores.


When are you making the switch to service cores? :)

Our own "service cores" implementation has some slightly different properties,
which we are still experimenting with.

E.g. in addition to the special return value "idle (no work to do)",
we also have a special return value for "incomplete (more work
urgently pending)", used when a service has processed a full burst and
still has more work pending in its input queue.

We are also considering returning a value indicating at what time the
service needs to be called again. This concept is only an idea, and we
haven't started experimenting with it yet.
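
Roughly, and with all names invented for this sketch (it is neither
our actual implementation nor any DPDK API), the conventions look
like:

#include <stdint.h>

enum svc_result {
	SVC_IDLE,       /* polled, no work found */
	SVC_DONE,       /* processed all currently pending work */
	SVC_INCOMPLETE, /* full burst processed, more work still queued */
};

/* The "when to call me again" idea could then extend the return value
 * into something like: */
struct svc_status {
	enum svc_result result;
	uint64_t next_call_tsc; /* earliest time the service must run again */
};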


From a high-level perspective, the service cores library is much like
an operating system's CPU scheduler, although based on voluntary time
sharing. Many algorithms and many parameters can be considered. It can
also tie into power management and prioritization of different tasks.

This was brought up as a concern when initially adding it to DPDK;
the scope of service-cores could quite easily grow from "a way to run
a dataplane function on a core, to abstract away the different
environments that DPDK runs in" to "a userspace scheduler with
bells-and-whistles".


Yes, and in a hypothetical DPDK without Eventdev or competing deferred-work mechanisms, that would have been a natural evolution.

The reason service-cores was built was to allow applications to run
with HW & SW eventdev PMDs, and not have to handle the different
threading requirements at the application level. This goal is achieved
by the current service-cores infrastructure.


It's a generic and public API. I'm not sure it matters much what purpose it was initially built for. There's no "PMD use only" label on it, which is a good thing.


Service cores in their current form are more like a primitive kernel
bottom-half implementation.

The primary work scheduler in DPDK is Eventdev, but even with
Eventdev you want some way to fit in non-event kinds of processing as
well, and here service cores serve a role.

rte_service.c is a good place for power management. I agree some
means for the services to convey what latency (e.g., sleep time) is
acceptable is needed. Returning a time per service function invocation
would be a flexible way to do so. It could also just be a per-lcore or
global configuration, set by the application.
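
For example, any of the following hypothetical shapes could express
it (none of these functions exist in DPDK today; they are sketches
only):

/* Per invocation: the service function returns the maximum acceptable
 * sleep time (e.g., in microseconds) before it must be called again. */
int32_t my_service_run(void *userdata);

/* Per lcore, set by the application: */
int rte_service_lcore_sleep_latency_set(uint32_t lcore_id, uint64_t max_us);

/* Global, set once by the application: */
int rte_service_sleep_latency_set(uint64_t max_us);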

While I agree it is potentially a place to implement power
management, sleeping, etc., doing so expands the scope of
service-cores significantly; not something that we should do without
considerable thought.

It doesn't have to be significant in terms of lines of code to develop and maintain, and the actual logic could (and probably should) sit elsewhere. Such a feature could start off as just an lcore-is-idle hook into the application, which would then decide what to do, if anything.
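
Such a hook could be as small as this hypothetical sketch (invented
names, not an existing DPDK API): the runner invokes the callback
whenever a full pass over the mapped services finds no work, and the
application decides whether to pause, sleep, or do nothing.

#include <stdint.h>

typedef void (*rte_service_lcore_idle_cb_t)(uint32_t lcore_id,
					    void *userdata);

int rte_service_lcore_idle_cb_register(rte_service_lcore_idle_cb_t cb,
				       void *userdata);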
