From: Ferruh Yigit
> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > Hi
> >
> > From: Ferruh Yigit
> >> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>> 			RTE_ETHER_MAX_LEN;
> >>> 	}
> >>>
> >>> +	/*
> >>> +	 * If LRO is enabled, check that the maximum aggregated packet
> >>> +	 * size is supported by the configured device.
> >>> +	 */
> >>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> >>> +		ret = check_lro_pkt_size(
> >>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> >>> +				dev_info.max_lro_pkt_size);
> >>> +		if (ret != 0)
> >>> +			goto rollback;
> >>> +	}
> >>> +
> >>
> >> This check forces applications that enable LRO to provide a
> >> 'max_lro_pkt_size' config value.
> >
> > Yes. (We can break the API; we noticed it.)
>
> I am not talking about API/ABI breakage, that part is OK.
> With this check, if the application requested LRO offload but did not
> provide a 'max_lro_pkt_size' value, device configuration will fail.

Yes.

> Can there be a case where the application is fine with whatever the PMD
> can support as the maximum?

Yes, there can be. We can do whatever we want, but it is better to be
consistent: since max_rx_pkt_len is mandatory for the JUMBO offload,
max_lro_pkt_size should be mandatory for the LRO offload.
So your question is actually why both maximum sizes, for non-LRO packets and
for LRO packets, are mandatory... I think they are important values for
network application management, and also useful for mbuf size management.

> >
> >> - Why is it mandatory now? How was it working before if it is a
> >>   mandatory value?
> >
> > It is the same as max_rx_pkt_len, which is mandatory for the jumbo frame
> > offload.
> > So now, when the user configures the LRO offload, he must set the max LRO
> > packet size.
> > We don't want to confuse the user here with the max_rx_pkt_len
> > configuration and behavior; they should follow the same logic.
> >
> > This parameter defines the LRO behavior well.
> > Before this, each PMD took its own interpretation of what the maximum
> > size of LRO aggregated packets should be.
> > Now, the user must state his intention, and the ethdev layer can limit it
> > according to the device capability.
> > This way, the PMD can also organize/optimize its data path better.
> > Also, the application can create different mempools for LRO queues to
> > allow receiving bigger packets for LRO traffic.
> >
> >> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> >
> > Yes, you can see the feature description Dekel added.
> > This patch also updates all the PMDs that support LRO to report a non-0
> > value.
>
> Of course I can see the updates Matan, my point is "What happens if PMD
> doesn't provide 'max_lro_pkt_size'",
> 1) There is no check for it right, so it is acceptable?

There is a check. If the capability is 0, any non-zero configuration will
fail.

> 2) Are we making this field mandatory to provide for PMDs? It is easy to
>    make new fields mandatory for PMDs, but is this really necessary?

Yes, for consistency.

> >
> > Same as max_rx_pkt_len, no?
> >
> >> - What do you think about setting the 'max_lro_pkt_size' config value
> >>   to what the PMD provided if the application doesn't provide it?
> >
> > Same answers as above.
> >
>
> If the application doesn't care about the value, as it has been till now,
> and did not provide an explicit 'max_lro_pkt_size', why not have the ethdev
> level use the value provided by the PMD instead of failing?

Again, the same question can be asked about max_rx_pkt_len.
It looks like the packet size is a very important value which should be set
by the application.
Previous applications had no option to configure it, so they didn't
configure it (and probably worked around it somehow); I think it was a miss
on our side not to supply this option.
Let's do it the same way we do max_rx_pkt_len (that is the main idea of this
patch).
Later, we can change the meaning of both.

Matan
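
For illustration, a rough sketch of the application-side usage being
discussed in this thread, using the field names introduced by this series
(the helper name, the queue counts, and the clamping policy are assumptions
made for the example, not part of the patch; error handling is trimmed):

#include <errno.h>
#include <stdint.h>

#include <rte_common.h>
#include <rte_ethdev.h>

/*
 * Sketch: request LRO and state the wanted maximum aggregated packet size,
 * clamped to the per-device capability reported in dev_info, mirroring the
 * existing JUMBO_FRAME / max_rx_pkt_len rule discussed above.
 */
static int
configure_port_with_lro(uint16_t port_id, uint32_t wanted_lro_size)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };

	/* Return value check omitted for brevity. */
	rte_eth_dev_info_get(port_id, &dev_info);

	if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO) == 0)
		return -ENOTSUP;

	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
	/*
	 * The application states its intention; ethdev rejects values above
	 * dev_info.max_lro_pkt_size, so clamp before configuring.
	 */
	conf.rxmode.max_lro_pkt_size =
		RTE_MIN(wanted_lro_size, dev_info.max_lro_pkt_size);

	/* One Rx and one Tx queue just to keep the sketch short. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

Following the mempool point above, the application could then back its LRO
queues with a mempool whose data room matches the chosen aggregated size.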