On 1/28/2021 9:36 PM, Lance Richardson wrote:
On Tue, Jan 26, 2021 at 6:01 AM Ferruh Yigit <ferruh.yi...@intel.com> wrote:

On 1/26/2021 3:45 AM, Lance Richardson wrote:
On Mon, Jan 25, 2021 at 7:44 PM Ferruh Yigit <ferruh.yi...@intel.com> wrote:

+       if (rx_offloads != port->dev_conf.rxmode.offloads) {
+               uint16_t qid;
+
+               port->dev_conf.rxmode.offloads = rx_offloads;
+
+               /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
+               for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
+                       if (on)
+                               port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+                       else
+                               port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+               }

Is it correct to set per-queue offloads that aren't advertised by the PMD
as supported in rx_queue_offload_capa?


'port->rx_conf[]' is a testpmd struct, and 'port->dev_conf.rxmode.offloads' values
are reflected in 'port->rx_conf[].offloads' for all queues.

We should set the offload in 'port->rx_conf[].offloads' if it is set in
'port->dev_conf.rxmode.offloads'.

If a port has the 'JUMBO_FRAME' capability, 'port->rx_conf[].offloads' can have
it. And the port-level capability is already checked above.
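
(For reference, here is a minimal sketch of the kind of port-level capability
check meant by "already checked above"; the function name is illustrative and
the exact check in the patch may differ:)

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Illustrative only; not the actual testpmd code. */
    static int
    check_jumbo_capa(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* JUMBO_FRAME must be advertised in the port-level Rx offload
         * capabilities before it is set in port or queue config.
         */
        if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
            return -ENOTSUP;

        return 0;
    }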


I'm still not 100% clear about the per-queue offload question.

With this patch, and a jumbo max packet size configured (on the command
line in this case), I see:

testpmd> show port 0 rx_offload configuration
Rx Offloading Configuration of port 0 :
    Port : JUMBO_FRAME
    Queue[ 0] : JUMBO_FRAME

testpmd> show port 0 rx_offload capabilities
Rx Offloading Capabilities of port 0 :
    Per Queue :
    Per Port  : VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM TCP_LRO
OUTER_IPV4_CKSUM VLAN_FILTER VLAN_EXTEND JUMBO_FRAME SCATTER TIMESTAMP
KEEP_CRC OUTER_UDP_CKSUM RSS_HASH


A port-level offload is applied to all queues on the port, and the testpmd
config structure reflects this logic in its implementation.
If Rx offload X is set for a port, it is set for all Rx queues' offloads; this
is not new behavior and is not related to this patch.

OK, is this purely for display purposes within testpmd? I ask because
it appears that all PMDs supporting per-queue offload configuration already
take care of combining port-level and per-queue offloads within their
tx_queue_setup()/rx_queue_setup() functions and then track the combined
set of offloads within a per-queue field, e.g. this line is common to
e1000/i40e/ionic/ixgbe/octeontx2/thunderx/txgbe rx_queue_setup()
implementations:
     offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
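
A condensed sketch of that pattern, for illustration (the queue struct and
function names here are made up, not taken from any specific driver):

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_malloc.h>

    /* Hypothetical per-queue state; real drivers track much more. */
    struct sketch_rx_queue {
        struct rte_mempool *mp;
        uint64_t offloads;      /* combined port + queue offloads */
    };

    static int
    sketch_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
                          uint16_t nb_desc, unsigned int socket_id,
                          const struct rte_eth_rxconf *rx_conf,
                          struct rte_mempool *mp)
    {
        struct sketch_rx_queue *rxq;

        RTE_SET_USED(nb_desc);

        rxq = rte_zmalloc_socket("sketch_rxq", sizeof(*rxq),
                                 RTE_CACHE_LINE_SIZE, socket_id);
        if (rxq == NULL)
            return -ENOMEM;

        /* Combine per-queue and port-level offloads, then track the
         * combined set in the per-queue field.
         */
        rxq->offloads = rx_conf->offloads |
                        dev->data->dev_conf.rxmode.offloads;
        rxq->mp = mp;

        dev->data->rx_queues[queue_idx] = rxq;
        return 0;
    }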

And rte_ethdev.h says:
    No need to repeat any bit in rx_conf->offloads which has already been
    enabled in rte_eth_dev_configure() at port level. An offloading enabled
    at port level can't be disabled at queue level.

Which I suppose confirms that if testpmd is combining per-port and per-
queue offloads, it's just for the purposes of testpmd.
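
To illustrate that rule from the application side, a minimal sketch (the
helper name, queue/descriptor counts, and max_rx_pkt_len value are
illustrative; error handling trimmed):

    #include <rte_ethdev.h>

    static int
    configure_with_port_level_jumbo(uint16_t port_id, uint16_t nb_rxq,
                                    uint16_t nb_txq, struct rte_mempool *mp)
    {
        struct rte_eth_conf port_conf = {
            .rxmode = {
                .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
                .max_rx_pkt_len = 9000,
            },
        };
        struct rte_eth_rxconf rxq_conf = {
            /* No need to repeat JUMBO_FRAME here: it is already enabled
             * at port level and can't be disabled at queue level anyway.
             */
            .offloads = 0,
        };
        uint16_t q;
        int ret;

        ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
        if (ret != 0)
            return ret;

        for (q = 0; q < nb_rxq; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, 512,
                                         rte_eth_dev_socket_id(port_id),
                                         &rxq_conf, mp);
            if (ret != 0)
                return ret;
        }
        return 0;
    }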


Yes, it is just for purposes of testpmd (the application), but it doesn't need to be limited to display purposes; testpmd keeps the config locally for any need.

Apologies for worrying at this even more, I just wanted to be sure that
I understand what the PMD is expected to do.


PMDs should not be getting these offloads in queue setup, since the ethdev layer strips the port-level offloads. I mean:

If port-level offloads are: X, Y

And the application provides the following for queue_setup():
a) X
b) Y
c) X, Y

For all three, the PMD receives an empty queue offload request.

d) X, Y, Z

The PMD gets Z, if it is in the driver's queue-specific capabilities.
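
A minimal sketch of that stripping behavior, loosely modeled on the logic in
rte_eth_rx_queue_setup() (names are illustrative; this is not the actual
ethdev code):

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    strip_port_level_offloads(const struct rte_eth_dev_info *dev_info,
                              uint64_t port_offloads,
                              struct rte_eth_rxconf *local_conf)
    {
        /* Offloads already enabled at port level (X, Y) apply to every
         * queue, so they are removed from the per-queue request before
         * the PMD sees it: requests a), b) and c) all become empty.
         */
        local_conf->offloads &= ~port_offloads;

        /* Anything left over (Z in case d) must be within the driver's
         * queue-specific capabilities, otherwise queue setup fails.
         */
        if ((local_conf->offloads & dev_info->rx_queue_offload_capa) !=
                local_conf->offloads)
            return -EINVAL;

        return 0;
    }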
