On Thu, Sep 29, 2022 at 6:11 PM Kevin Laatz <kevin.la...@intel.com> wrote:
>
> >
> >> 2. Ring does not have callback support, meaning pipelined applications
> >> could not report lcore poll busyness telemetry with this approach.
> > That's another big concern that I have:
> > Why do you consider that all rings will be used for pipelines between
> > threads and should always be accounted by your stats?
> > They could be used for dozens of different purposes.
> > What if that ring is used for a mempool, and ring_dequeue() just means
> > we try to allocate an object from the pool? In that case, why should
> > failing to allocate an object mean the start of a new 'idle cycle'?
>
> Another approach could be taken here if the mempool interactions are of 
> concern.


Another way to solve the problem would be to leverage an existing
trace framework and its existing fastpath tracepoints. Lcore poll
busyness could then be monitored by another application by looking at
the timestamps of the emitted trace records. This also gives the
flexibility to add customer- or application-specific tracepoints as
needed, and to control the enable/disable aspects of tracepoints.
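
As a rough sketch (untested; the app_* names below are made up for
illustration, only the rte_trace_* API is DPDK's):

#include <rte_ethdev.h>
#include <rte_trace_point.h>

/* In a header: define an application-specific fastpath tracepoint. */
RTE_TRACE_POINT_FP(
	app_trace_rx_burst,
	RTE_TRACE_POINT_ARGS(uint16_t port, uint16_t nb_rx),
	rte_trace_point_emit_u16(port);
	rte_trace_point_emit_u16(nb_rx);
)

/* In a .c file: include rte_trace_point_register.h before the header
 * above, then register the tracepoint under a dotted name. */
RTE_TRACE_POINT_REGISTER(app_trace_rx_burst, app.rx.burst)

/* In the poll loop: every record is timestamped by the framework, so
 * an observer can derive busyness from the gaps between records with
 * nb_rx == 0 and records with nb_rx > 0. */
static inline uint16_t
app_rx_burst(uint16_t port, uint16_t queue, struct rte_mbuf **pkts,
	     uint16_t n)
{
	uint16_t nb_rx = rte_eth_rx_burst(port, queue, pkts, n);

	app_trace_rx_burst(port, nb_rx);
	return nb_rx;
}

/* Tracepoints can then be toggled at runtime with e.g.
 * rte_trace_pattern("app.*", true). */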

l2reflect is a similar problem, where the goal is to observe latency.

Use cases like the above (another application needs to observe the code
flow of a DPDK application and analyse it) can be implemented along the
lines of the similar suggestion provided for l2reflect at
https://mails.dpdk.org/archives/dev/2022-September/250583.html

I would suggest taking this path to accommodate more use cases in
future, like:
- finding CPU idle time (see the sketch below)
- latency from crypto/dmadev/eventdev enqueue to dequeue
- histogram of occupancy for different queues
etc.
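
To make the first and third items concrete, here is a sketch of the
analysis side, assuming the trace records have already been decoded
into (timestamp, packet count) pairs; the struct and function names
below are hypothetical:

#include <stddef.h>
#include <stdint.h>

struct poll_event {
	uint64_t tsc;     /* timestamp carried by the trace record */
	uint16_t nb_pkts; /* packets returned by that poll */
};

/* Attribute the gap after an empty poll to idle time and the gap
 * after a non-empty poll to busy time. */
static void
poll_busyness(const struct poll_event *ev, size_t n,
	      uint64_t *busy, uint64_t *idle)
{
	*busy = *idle = 0;
	for (size_t i = 1; i < n; i++) {
		uint64_t gap = ev[i].tsc - ev[i - 1].tsc;

		if (ev[i - 1].nb_pkts == 0)
			*idle += gap;
		else
			*busy += gap;
	}
}

/* Occupancy histogram, assuming bursts of at most 32 packets. */
static void
occupancy_hist(const struct poll_event *ev, size_t n, uint64_t hist[33])
{
	for (size_t i = 0; i < n; i++)
		hist[ev[i].nb_pkts]++;
}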

This would translate to:
1) Adding an app/proc-info style app to pull the live trace from the
primary process
2) Adding a plugin framework to operate on the live trace (a possible
shape is sketched below)
3) Adding a plugin for this specific use case
4) If needed, communication from the secondary to the primary to take
action based on the live analysis, e.g. stopping the primary when
latency exceeds a certain limit
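
None of this exists in DPDK today, but to give an idea of point 2, a
hypothetical plugin interface for such an app could look like:

#include <stddef.h>
#include <stdint.h>

/* A decoded record pulled from the primary's trace buffer. */
struct trace_event {
	uint64_t tsc;        /* timestamp */
	const char *name;    /* tracepoint name, e.g. "app.rx.burst" */
	const void *payload; /* tracepoint-specific fields */
	size_t payload_len;
};

struct trace_plugin {
	const char *name;
	int (*init)(void **ctx);
	/* Called for every record pulled from the primary. */
	int (*on_event)(void *ctx, const struct trace_event *ev);
	/* Return nonzero to ask the app to signal the primary, e.g.
	 * to stop it when latency exceeds a certain limit (point 4). */
	int (*check_action)(void *ctx);
	void (*fini)(void *ctx);
};

int trace_plugin_register(const struct trace_plugin *plugin);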

On the plus side, if we move all analysis and presentation to a new
generic application, your packet forwarding logic can simply move into
testpmd as a new fwd_engine (see app/test-pmd/noisy_vnf.c as an example
of a fwd_engine).
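
For reference, a forwarding engine in testpmd is just a small vtable;
roughly (based on app/test-pmd/testpmd.h, exact signatures vary between
releases, and the "pollstats" engine below is made up):

typedef uint16_t portid_t;  /* as in testpmd.h */
struct fwd_stream;

typedef void (*port_fwd_begin_t)(portid_t pi);
typedef void (*port_fwd_end_t)(portid_t pi);
typedef void (*packet_fwd_t)(struct fwd_stream *fs);

struct fwd_engine {
	const char *fwd_mode_name;       /* selected via "set fwd <name>" */
	port_fwd_begin_t port_fwd_begin; /* NULL if nothing special to do */
	port_fwd_end_t port_fwd_end;     /* NULL if nothing special to do */
	packet_fwd_t packet_fwd;         /* mandatory per-burst loop */
};

/* Your forwarding loop would then plug in as e.g.: */
static void pkt_burst_pollstats(struct fwd_stream *fs); /* hypothetical */

struct fwd_engine pollstats_engine = {
	.fwd_mode_name = "pollstats",    /* hypothetical mode name */
	.packet_fwd = pkt_burst_pollstats,
};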

Ideally, "eal: add lcore poll busyness telemetry" [1] could converge
towards this model.

[1]
https://patches.dpdk.org/project/dpdk/patch/20220914092929.1159773-2-kevin.la...@intel.com/

>
> From our understanding, mempool operations use the "_bulk" APIs, whereas
> polling operations use the "_burst" APIs. Would timestamping only on the
> "_burst" APIs be better here? That way the mempool interactions won't be
> counted towards the busyness.
>
> Including support for pipelined applications using rings is key for a number
> of use cases; this was highlighted as part of the customer feedback when we
> shared the design.
>
> >
> >> Eventdev is another driver which would be completely missed with this
> >> approach.
> > Ok, I see two ways here:
> > - implement CB support for eventdev.
> > - meanwhile, clearly document that these stats are not supported for
> > eventdev scenarios (yet).
