On 10/7/2020 1:36 PM, Gregory Etelson wrote:
Hello Harsha,

-----Original Message-----

[snip]

Tunnel vport is an internal construct used by one specific
application: OVS. So, shouldn't the rte APIs also be application
agnostic, apart from being vendor agnostic? For OVS, the match fields
in the existing datapath flow rules contain enough information to
identify the tunnel type.

The tunnel offload model was inspired by the OVS vport, but vport is not part
of the API itself.
It looks like the API documentation should not use that term, to avoid confusion.

[snip]

[snip]

Wouldn't it be better if the APIs did not refer to vports, to avoid
percolating them down to the PMD? My point here is to avoid bringing
the knowledge of an application-specific virtual object (vport) into the
PMD.


As I have mentioned above, the API description should not mention vport.
I'll post updated documents.

Here are some other issues that I see with the helper APIs and
vendor-specific variable actions:
1) The application needs some kind of validation (or understanding) of
the actions returned by the PMD. The application can't just blindly
use the actions specified by the PMD. That is, the decision to pick
the set of actions can't be left entirely to the PMD.
2) The application needs to learn a PMD-specific way of action
processing for each vendor. For example, how should the application
handle a flow miss, given a different set of actions between two vendors
(e.g., if one vendor has already popped the tunnel header while the
other has not)?
3) The end-users/customers won't have a common interface (as in,
consistent actions) to perform the tunnel decap action. This becomes a
manageability/maintenance issue for the application while working with
different vendors.

IMO, the API shouldn't expect the PMD to understand the notion of a
vport. The goal here is to offload a flow rule to decap the tunnel
header and forward the packet to a HW endpoint. The problem is that
we don't have a way to express the "tnl_pop" datapath action to the HW
(decap flow #1, in the context of br-phy in OVS-DPDK), and we may also
not want the HW to actually pop the tunnel header at that stage. If this
cannot be expressed with the existing rte action types, maybe we should
introduce a new action that clearly defines what is expected of the
PMD.

The tunnel offload API provides a common interface for all HW vendors:
Rule #1: define the tunneled traffic and steer / group traffic related to
that tunnel.
Rule #2: within the tunnel selection, run matchers on all packet headers,
outer and inner, and perform actions on the inner headers in case of a match.
For rule #1 the application provides tunnel matchers and traffic-selection
actions, and for rule #2 the application provides full header matchers and
actions for the inner parts.
The rest is supplied by the PMD according to the HW and the rule type. The
application does not need to understand the exact implementation of the PMD
elements.
The helper's return value notifies the application whether it received the
requested PMD elements or not.
If a helper completed successfully, the application has received the required
elements and can complete the flow rule compilation.
As a result, a packet will either be fully offloaded or returned to the
application with enough information to continue processing in SW.

[snip]

[snip]

Miss handling
-------------
Packets going through multiple rte_flow groups are exposed to HW
misses due to partial packet processing. In such cases, the software
should continue the packet's processing from the point where the
hardware missed.

Whether the packet goes through multiple groups or not for tunnel
decap processing should be left to the PMD/HW. These assumptions
shouldn't be built into the APIs. The encapsulated packet (i.e., with
the outer headers) should be provided to the application, rather than
making SW understand that there was a miss in stage-1, or stage-n, in
HW. That is, the HW either processes the packet entirely, or punts the
whole packet to SW if there's a miss. The packet should then take the
normal processing path in SW (no action offload).

Thanks,
-Harsha

The packet is provided to the application via the standard rte_eth_rx_burst API.
Additional information about the HW packet-processing state is provided to
the application by the suggested rte_flow_get_restore_info API. It is up to the
application whether to use the provided info, or even whether to call this API
at all.
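A rough sketch of that RX-path usage (again assuming the signatures from the
patch series; continue_in_sw() and normal_sw_path() are hypothetical
application callbacks, not part of any API):

#include <rte_ethdev.h>
#include <rte_flow.h>

/* hypothetical application handlers */
static void continue_in_sw(struct rte_mbuf *m,
                           const struct rte_flow_restore_info *info);
static void normal_sw_path(struct rte_mbuf *m);

static void
rx_poll(uint16_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *pkts[32];
        uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

        for (uint16_t i = 0; i < nb; i++) {
                struct rte_flow_restore_info info = { 0 };
                struct rte_flow_error error;

                if (rte_flow_get_restore_info(port_id, pkts[i], &info,
                                              &error) == 0 &&
                    (info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL)) {
                        /* HW matched rule #1 but missed later: info.tunnel
                         * identifies the tunnel, and the
                         * RTE_FLOW_RESTORE_INFO_ENCAPSULATED flag tells
                         * whether the outer headers are still present, so SW
                         * can resume processing from that point. */
                        continue_in_sw(pkts[i], &info);
                } else {
                        /* No HW state to restore: normal SW path. */
                        normal_sw_path(pkts[i]);
                }
        }
}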

[snip]

Regards,
Gregory



Hi Gregory, Sriharsha,

Is there any outcome from the discussion?
