On 2023/8/21 14:06, Honnappa Nagarahalli wrote:

It would be good if you could fix the email on your side to text format. Comments inline.

Seems to be a wrong setting on my email client. Sorry.

*From:* Jack Min <jack...@nvidia.com>
*Sent:* Friday, August 18, 2023 8:35 PM
*To:* Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>; Stephen Hemminger <step...@networkplumber.org>
*Cc:* dev@dpdk.org; Matan Azrad <ma...@nvidia.com>; viachesl...@nvidia.com; Tyler Retzlaff <roret...@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithan...@arm.com>; nd <n...@arm.com>
*Subject:* Re: MLX5 PMD access ring library private data

On 2023/8/18 21:59, Honnappa Nagarahalli wrote:

    *From:* Jack Min <jack...@nvidia.com>
    *Sent:* Friday, August 18, 2023 12:57 AM
    *To:* Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>; Stephen Hemminger <step...@networkplumber.org>
    *Cc:* dev@dpdk.org; Matan Azrad <ma...@nvidia.com>; viachesl...@nvidia.com; Tyler Retzlaff <roret...@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithan...@arm.com>; nd <n...@arm.com>
    *Subject:* Re: MLX5 PMD access ring library private data

    On 2023/8/18 12:30, Honnappa Nagarahalli wrote:

            -----Original Message-----
            From: Jack Min <jack...@nvidia.com>
            Sent: Thursday, August 17, 2023 9:32 PM
            To: Stephen Hemminger <step...@networkplumber.org>; Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
            Cc: dev@dpdk.org; Matan Azrad <ma...@nvidia.com>; viachesl...@nvidia.com; Tyler Retzlaff <roret...@linux.microsoft.com>; Wathsala Wathawana Vithanage <wathsala.vithan...@arm.com>; nd <n...@arm.com>
            Subject: Re: MLX5 PMD access ring library private data

            On 2023/8/17 22:06, Stephen Hemminger wrote:

                On Thu, 17 Aug 2023 05:06:20 +0000
                Honnappa Nagarahalli <honnappa.nagaraha...@arm.com> wrote:

                    Hi Matan, Viacheslav,

                        Tyler pointed out that the function __mlx5_hws_cnt_pool_enqueue_revert is accessing the ring private structure members (prod.head and prod.tail) directly. Even though 'struct rte_ring' is a public structure (mainly because the library provides inline functions), the structure members are considered private to the ring library. So, this needs to be corrected.

                    It looks like the function __mlx5_hws_cnt_pool_enqueue_revert is trying to revert things that were enqueued. It is not clear to me why this functionality is required. Can you provide the use case for this? We can discuss possible solutions.
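For context, a rough sketch of the kind of direct member access being discussed; this is not the actual mlx5 code, just an illustration of winding the producer indexes back on a single-producer/single-consumer ring:

#include <rte_ring.h>

/*
 * Illustration only, not the mlx5 implementation: "un-enqueue" the last
 * n elements by moving the producer head and tail back. This reaches
 * into rte_ring internals (r->prod.head / r->prod.tail), which the ring
 * library treats as private, and it is only workable at all because a
 * single thread both produces and consumes on this ring.
 */
static inline void
ring_sp_enqueue_revert_sketch(struct rte_ring *r, unsigned int n)
{
        uint32_t new_head = r->prod.head - n;

        r->prod.head = new_head;
        r->prod.tail = new_head;
}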

                How can reverting be thread safe? Consumer could have already looked at them?

            Hey,

            In our case, this ring is SC/SP, only accessed by one thread (enqueue/dequeue/revert).

        You could implement a simpler and more efficient ring for this (for example, such an implementation would not need any atomic operations and would require fewer cache lines).

        Is this function being used in the dataplane?

    Yes, we could have our own version of the ring (no atomic operations), but the basic operations would still be the same as rte_ring's.

    Since rte_ring has been well designed and sufficiently tested, there has been no strong reason to re-write a new, simpler version of it until today :)

            The scenario where we have "revert" is:

            We use a ring to manage our HW objects (counters in this case), and each core (thread) has a "cache" (an SC/SP ring) for the sake of performance.

            1. Get objects from the "cache" first; if the cache is empty, we fetch a bulk of free objects from the global ring into the cache.

            2. Put (free) objects into the "cache" first as well; if the cache is full, we flush a bulk of objects into the global ring in order to make some room in the cache.

            However, these HW objects cannot be reused immediately after they are freed. They need time to be reset before they can be used again.

            So when we flush the cache, we want to keep the first enqueued objects in place, because they have a better chance of already having been reset than the most recently enqueued objects.

            Only the most recently enqueued objects are flushed back into the global ring, which acts as "LIFO" behavior.

            This is why we require "revert" of enqueued objects.
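For illustration, a minimal sketch of the allocation side of this scheme; cnt_cache, global_ring and CACHE_FETCH_SZ are made-up names, not the mlx5 identifiers:

#include <rte_ring.h>

#define CACHE_FETCH_SZ 32	/* illustrative refill burst size */

/*
 * Step 1 above: take an object from the per-core cache ring; if the
 * cache is empty, refill it with a burst of free objects from the
 * global ring first.
 */
static inline void *
obj_alloc_sketch(struct rte_ring *cnt_cache, struct rte_ring *global_ring)
{
	void *obj;
	void *burst[CACHE_FETCH_SZ];
	unsigned int n;

	if (rte_ring_dequeue(cnt_cache, &obj) == 0)
		return obj;

	/* Cache is empty: fetch a burst of free objects from the global ring. */
	n = rte_ring_dequeue_burst(global_ring, burst, CACHE_FETCH_SZ, NULL);
	if (n == 0)
		return NULL;

	/* Hand one object back to the caller, park the rest in the cache. */
	obj = burst[--n];
	if (n > 0)
		rte_ring_enqueue_bulk(cnt_cache, burst, n, NULL);
	return obj;
}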

        You could use the 'rte_ring_free_count' API before you enqueue to check for available space.

    Only when the cache is full (rte_ring_free_count() returns zero) do we revert X objects.

    If there is still one free slot, we do not trigger the revert (flush).

    *[Honnappa]* Maybe I was not clear in my recommendation. What I am saying is, you could call 'rte_ring_free_count' to check whether you have enough space on the cache ring. If there is not enough space, you can enqueue the new objects on the global ring. Pseudo code below:

    if (rte_ring_free_count(cache_ring) > n) {
            <enqueue n objects on cache ring>
    } else {
            <enqueue n objects on global ring>
    }
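Spelled out with the actual rte_ring calls, that suggestion would look roughly like the following (ring and variable names are illustrative):

#include <rte_ring.h>

/*
 * If the cache ring has room for all n objects, enqueue them there;
 * otherwise put them straight onto the global ring.
 */
static inline void
enqueue_n_sketch(struct rte_ring *cache_ring, struct rte_ring *global_ring,
		 void **objs, unsigned int n)
{
	if (rte_ring_free_count(cache_ring) > n)
		rte_ring_enqueue_bulk(cache_ring, objs, n, NULL);
	else
		rte_ring_enqueue_bulk(global_ring, objs, n, NULL);
}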

Hey,

Then the next n objects will still be enqueued into the global ring, not into the cache, right? (We keep enqueueing objects continually.)

Our requirement is like this:

if (rte_ring_free_count(cache_ring) > 0) {
        <enqueue this object on cache ring>
} else { /* cache is full */
        <enqueue this object into global ring>
        <move the latest n objects into global ring too>
}
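For comparison, the closest equivalent using only the public rte_ring API would have to drain and re-fill the cache, which is the cost the PMD avoids by reverting the producer indexes directly; a rough sketch, with illustrative names and an illustrative CACHE_SIZE bound:

#include <rte_ring.h>

#define CACHE_SIZE 256	/* illustrative upper bound on cache ring size */

/*
 * Move the n most recently enqueued objects from the cache ring to the
 * global ring while keeping the oldest entries (the ones most likely to
 * be reset already) in the cache. Legal, but O(cache size) per flush.
 */
static inline void
flush_latest_n_sketch(struct rte_ring *cache_ring,
		      struct rte_ring *global_ring, unsigned int n)
{
	void *objs[CACHE_SIZE];
	unsigned int total, keep, got;

	total = rte_ring_count(cache_ring);
	if (total > CACHE_SIZE)
		total = CACHE_SIZE;

	got = rte_ring_dequeue_burst(cache_ring, objs, total, NULL);
	if (got == 0)
		return;

	keep = got > n ? got - n : 0;
	/* The oldest entries go back into the cache, order preserved. */
	if (keep > 0)
		rte_ring_enqueue_bulk(cache_ring, objs, keep, NULL);
	/* The newest entries are flushed to the global ring. */
	rte_ring_enqueue_bulk(global_ring, &objs[keep], got - keep, NULL);
}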

*[Honnappa]* Understood. IMO, this is a unique requirement. The ring library does not support this and is not designed for it. As per the guidelines and past agreements, accessing structure members of ring structures is not allowed.

Alright. Now I'm aware of this.

Do we have a document about this? I probably overlooked it...

*[Honnappa]* I think a simple implementation like [1] would suffice for your needs.

[1] https://patches.dpdk.org/project/dpdk/patch/20230821060420.3509667-1-honnappa.nagaraha...@arm.com/


Yes, mostly.

It would be better if we had a "zero-copy" version, like rte_st_ring_dequeue_zc_at_head_burst_elem_start().

-Jack
