Fri, Dec 15, 2023 at 02:46:04AM CET, k...@kernel.org wrote:
>On Thu, 14 Dec 2023 21:29:51 +0100 Paolo Abeni wrote:
>> Together with Simon, I spent some time on the above. We think the
>> ndo_setup_tc(TC_SETUP_QDISC_TBF) hook could be used as a common basis
>> for these offloads, with some small extensions (e.g. adding a
>> 'max_rate' param).
>
>uAPI aside, why would we use ndo_setup_tc(TC_SETUP_QDISC_TBF)
>to implement the common basis?
>
>Is it not cleaner to have a separate driver API, with its ops
>and capabilities?
>
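For illustration, a dedicated driver API along those lines might look
roughly like this (every name below is hypothetical, purely to make
the "ops and capabilities" shape concrete):

	/* Hypothetical dedicated shaping API, decoupled from TC. */
	struct net_shaper_caps {
		u64 max_rate_bps;	/* highest enforceable rate */
		bool per_queue;		/* can shape individual queues */
	};

	struct net_shaper_ops {
		int (*get_caps)(struct net_device *dev,
				struct net_shaper_caps *caps);
		int (*set_rate)(struct net_device *dev, int queue,
				u64 tx_max_bps, u64 tx_share_bps);
	};
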
>> The idea would be:
>> - 'fixing' sch_tbf so that the s/w path becomes a no-op when h/w
>> offload is enabled
>> - extend sch_tbf to support a max rate (a sketch of the extended
>> offload payload follows this list)
>> - do the relevant ice implementation
>> - ndo_set_tx_maxrate could be replaced with the mentioned ndo call (the
>> latter interface is a strict super-set of the former)
>> - ndo_set_vf_rate could also be replaced with the mentioned ndo call
>> (with another small extension to the offload data)
>> 
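For concreteness, the 'max_rate' extension could be as small as one
extra field in the existing offload payload. The peak_rate member
below is hypothetical; the other fields are the current in-tree
definition from include/net/pkt_cls.h:

	struct tc_tbf_qopt_offload_replace_params {
		struct psched_ratecfg rate;		/* committed rate (existing) */
		struct psched_ratecfg peak_rate;	/* hypothetical 'max_rate' knob */
		u32 max_size;
		struct gnet_stats_queue *qstats;
	};

A driver such as ice would then service this from its ndo_setup_tc()
handler under the TC_SETUP_QDISC_TBF case.
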
>> I think mqprio deserves its own separate offload interface, as it
>> covers multiple tasks other than shaping (grouping queues and mapping
>> priorities to classes).
>> 
>> In the long run we could have a generic implementation of
>> ndo_setup_tc(TC_SETUP_QDISC_TBF) in terms of devlink rate, adding a
>> generic way to fetch the devlink_port instance corresponding to a
>> given netdev and mapping the TBF features to the devlink_rate API.
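
Such a generic fallback might look roughly like this (a sketch only:
it assumes the net_device::devlink_port backpointer and a port-level
devlink_rate leaf object; devlink_rate_tx_max_set() is a hypothetical
core helper that would end up calling the driver's existing
rate_leaf_tx_max_set op, and locking is glossed over):

	/* Sketch: serve TC_SETUP_QDISC_TBF out of the devlink rate API. */
	static int tbf_offload_to_devlink_rate(struct net_device *dev,
					       struct tc_tbf_qopt_offload *qopt)
	{
		struct devlink_port *port = dev->devlink_port;

		if (!port || !port->devlink_rate)
			return -EOPNOTSUPP;
		if (qopt->command != TC_TBF_REPLACE)
			return -EOPNOTSUPP;

		/* psched_ratecfg already carries bytes per second */
		return devlink_rate_tx_max_set(port->devlink_rate,
				qopt->replace_params.rate.rate_bytes_ps);
	}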
>> 
>> Not starting this due to what Jiri mentioned [1].
>
>Jiri, AFAIU, is against using devlink rate *uAPI* to configure network
>rate limiting. That's separate from the internal representation.

Devlink rate was introduced for configuring port functions that are
connected to an eswitch port. I don't see any reason to extend it to
configure a netdev on the host. We have the netdev instance and other
means to do that.
