On 9/28/2020 2:58 PM, Dumitru Ceara wrote:
On 9/28/20 3:26 PM, Ferruh Yigit wrote:
On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:


-----Original Message-----
From: Ferruh Yigit <ferruh.yi...@intel.com>
Sent: Monday, September 28, 2020 1:43 PM
To: Ananyev, Konstantin <konstantin.anan...@intel.com>; Dumitru Ceara
<dce...@redhat.com>; dev@dpdk.org
Cc: Richardson, Bruce <bruce.richard...@intel.com>
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment
support.

On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
On 9/22/20 4:21 PM, Ferruh Yigit wrote:
On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
Even though ring interfaces don't support any other TX/RX offloads, they
do support sending multi segment packets and this should be advertised
in order to not break applications that use ring interfaces.


Does ring PMD support sending multi segmented packets?


Yes, sending multi segmented packets works fine with ring PMD.


Define "works fine" :)

All PMDs can put the first mbuf of a chained mbuf into the ring; in that
case, what is the difference between the ones that support
'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?

If the traffic is only from ring PMD to ring PMD, you won't notice the
difference between segmented and non-segmented mbufs, and it will look
like segmented packets work fine.
But if other PMDs are involved in the forwarding, or if the packets need
to be processed, will it still work fine?

As far as I can see, the ring PMD doesn't know about the mbuf segments.


Right, the PMD doesn't care about the mbuf segments, but it implicitly
supports sending multi segmented packets. From what I see this is
actually the case for most PMDs, in the sense that most don't even check
the DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends multi
segment packets they are just accepted.
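For reference, a minimal sketch of what "sending a multi segment packet" means
from the application side: a chained mbuf is built and only the head pointer is
handed to tx_burst. The helper name and the 'mp'/'port_id' variables are
assumptions for illustration, not part of the patch.

#include <rte_mbuf.h>
#include <rte_ethdev.h>

/* Hypothetical helper: build a two-segment packet and transmit it.
 * 'mp' and 'port_id' are assumed to be initialized elsewhere. */
static int
send_two_seg_pkt(struct rte_mempool *mp, uint16_t port_id)
{
	struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
	struct rte_mbuf *tail = rte_pktmbuf_alloc(mp);

	if (head == NULL || tail == NULL) {
		rte_pktmbuf_free(head);
		rte_pktmbuf_free(tail);
		return -1;
	}

	/* Put some payload in each segment. */
	rte_pktmbuf_append(head, 64);
	rte_pktmbuf_append(tail, 64);

	/* Link the second segment behind the first; the PMD only ever
	 * sees the head pointer, the rest hangs off head->next. */
	if (rte_pktmbuf_chain(head, tail) != 0) {
		rte_pktmbuf_free(head);
		rte_pktmbuf_free(tail);
		return -1;
	}

	/* One entry in the burst, even though the packet has two segments. */
	return rte_eth_tx_burst(port_id, 0, &head, 1) == 1 ? 0 : -1;
}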

As far as I can see, if segmented packets are sent, the ring PMD will put
the first mbuf into the ring without doing anything specific with the
next segments.

If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect the PMD to
detect segmented packets and put each chained mbuf into a separate entry
in the ring.

Hmm, I wonder why you think this is necessary?
From my perspective the current behaviour is sufficient for TX-ing
multi-seg packets over the ring.
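For context, the ring PMD Tx path is, roughly, just an enqueue of the mbuf head
pointers, so a segment chain travels through untouched via mbuf->next. The
sketch below is a paraphrase of drivers/net/ring/rte_eth_ring.c from memory,
not a verbatim quote.

#include <rte_ring.h>
#include <rte_mbuf.h>

/* Simplified sketch of the ring PMD Tx path (paraphrased, not verbatim).
 * Only the head mbuf pointers go into the ring; chained segments stay
 * attached via mbuf->next and are never inspected here. */
static uint16_t
eth_ring_tx_sketch(struct rte_ring *rng, struct rte_mbuf **bufs,
		   uint16_t nb_bufs)
{
	return (uint16_t)rte_ring_enqueue_burst(rng, (void **)bufs,
						nb_bufs, NULL);
}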


I was thinking based on what some PMDs are already doing, but right, the
ring PMD may not need to do it.

Also consider the case where one application sends multi segmented
packets to the ring, and another application pulls packets from the ring
and sends them to a PMD that does NOT support multi-seg TX. I thought the
ring PMD claiming multi-seg Tx support would have to serialize the
packets to cover this case, but instead the ring claiming the
'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing the
responsibility to the application.

So in this case the ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS'
& 'DEV_RX_OFFLOAD_SCATTER', what do you think?

Seems so...
Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here if
DEV_RX_OFFLOAD_SCATTER was not specified?
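To illustrate the "push the responsibility to the application" option discussed
above: the consumer of the ring port would have to flatten chained packets
itself before forwarding them to a device that does not advertise multi-seg Tx.
One possible way is rte_pktmbuf_linearize(); the helper name below is
hypothetical.

#include <rte_mbuf.h>
#include <rte_ethdev.h>

/* Hypothetical application-side helper: before forwarding a packet pulled
 * from a ring port to a device without DEV_TX_OFFLOAD_MULTI_SEGS, collapse
 * the chain into a single segment. */
static int
prepare_for_single_seg_tx(struct rte_mbuf *m, uint64_t tx_offload_capa)
{
	if (m->nb_segs > 1 &&
	    (tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) == 0)
		/* Returns 0 on success, -1 if the data cannot fit into
		 * the first segment's buffer. */
		return rte_pktmbuf_linearize(m);
	return 0;
}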


I think it's better to have a new version of the patch that claims both
capabilities together.


OK, I can do that and send a v2 to claim both caps together.

Just so that it's clear to me though, these capabilities will only be
advertised and the current behavior of the ring PMD at tx/rx will remain
unchanged, right?


Yes, PMD behavior won't change; only the PMD's hint to applications about what it supports will change.
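For what it's worth, the v2 change would presumably be confined to the ring
PMD's dev_info callback, along these lines. This is a guess at the shape of the
diff, not the submitted patch.

/* Sketch of the ring PMD dev_info callback advertising both capabilities. */
static int
eth_dev_info(struct rte_eth_dev *dev __rte_unused,
	     struct rte_eth_dev_info *dev_info)
{
	/* Existing fields (max queues, MAC addresses, ...) omitted here. */
	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
	return 0;
}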






However, the fact that the ring PMD doesn't advertise this implicit
support forces applications that use the ring PMD to have a special case
for handling ring interfaces. If the ring PMD advertised
DEV_TX_OFFLOAD_MULTI_SEGS, upper layers could be oblivious to the type of
underlying interface.


This is not about handling a special case for the ring PMD; this is why
we have the offload capability flag. Applications should behave according
to capability flags, not per specific PMD.
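To make "behave according to capability flags, not per specific PMD" concrete,
an application can key off the advertised Tx offload capabilities rather than
the driver name. A small sketch (the helper name is hypothetical):

#include <stdbool.h>
#include <rte_ethdev.h>

/* Hypothetical helper: true if the port can accept chained mbufs on Tx.
 * The decision is driven by the advertised capability, not by checking
 * whether the underlying driver happens to be net_ring. */
static bool
port_supports_multi_seg_tx(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return false;

	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) != 0;
}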

Is there any specific use case you are trying to cover?



