On Tue, 2 Apr 2024 16:37:39 +0800
huangdengdui <huangdeng...@huawei.com> wrote:

> On 2024/4/2 4:07, Thomas Monjalon wrote:
> > 30/03/2024 12:38, huangdengdui:  
> >> But, there are different solutions for the device to report the setting
> >> lane capability, as following:
> >> 1. Like the current patch, report the device capabilities with the speed
> >>    and lanes coupled. However, with this solution we have to couple the
> >>    lanes setting with the speed setting.
> >>
> >> 2. Like Damodharam's RFC patch [1], the device reports the maximum
> >>    number of supported lanes. Users can configure any lane count,
> >>    which is completely separated from the speed.
> >>
> >> 3. Similar to how the FEC capability is reported, the device reports a
> >>    table of the lane counts supported at each speed, for example:
> >>       speed    lanes_capa
> >>       50G      1,2
> >>       100G     1,2,4
> >>       200G     2,4
> >>
> >> Options 1 and 2 have been discussed a lot above.
> >>
> >> For solution 1, the speed and lanes are over-coupled and the implementation
> >> is too complex. But I think it is easier to understand and easier for the
> >> device to report capabilities. In addition, ethtool's capability reporting
> >> also uses this mode.
> >>
> >> For solution 2, as huisong said, the user doesn't know which lanes should
> >> or can be set for a specified speed on a given NIC.
> >>
> >> I think that when the device reports the capability, the lanes should be
> >> associated with the speed. In this way, users can know which lanes are
> >> supported at the current speed and verify the validity of the configuration.
> >>
> >> So I think solution 3 is better. What do you think?  
> > 
> > I don't understand your proposals.
> > Please could you show the function signature for each option?
> > 
> >   
> 
> I agree with separating the lanes setting from the speed setting.
> I have a different proposal for device lanes capability reporting.
> 
> Three interfaces are added to the lib/ethdev like FEC interfaces.
> 1. rte_eth_lanes_get(uint16_t port_id, uint32_t *capa)  /* get current lanes */
> 2. rte_eth_lanes_set(uint16_t port_id, uint32_t capa)
> 3. rte_eth_lanes_get_capa(uint16_t port_id,
>                           struct rte_eth_lanes_capa *speed_lanes_capa)
> 
> /* A structure used to get capabilities per link speed */
> struct rte_eth_lanes_capa {
>       uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
>       uint32_t capa;  /**< lanes capabilities bitmask */
> };
> 
> For example, an ethdev reports the following lanes capability array:
> struct rte_eth_lanes_capa device_capa[] = {
>       { RTE_ETH_SPEED_NUM_50G, 0x0003 },  // supports lanes 1 and 2 for 50G
>       { RTE_ETH_SPEED_NUM_100G, 0x000B }  // supports lanes 1, 2 and 4 for 100G
> };
> 
> The application can know which lanes are supported at a specified speed.
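> 
> As a rough usage sketch (the rte_eth_lanes_* calls are only the proposal
> above, not an existing ethdev API; the array size, the "return the number
> of filled entries" convention and the helper name are my assumptions), an
> application could then pick a lane count like this:
> 
> /*
>  * Hypothetical helper: set 'lanes' only if the device reports it as
>  * supported at 'speed'. Assumes rte_eth_lanes_get_capa() fills the array
>  * and returns the number of entries, and that bit (n - 1) of 'capa'
>  * means "n lanes", as in the example above.
>  */
> static int
> set_lanes_if_supported(uint16_t port_id, uint32_t speed, uint32_t lanes)
> {
>       struct rte_eth_lanes_capa capa[8];
>       int num, i;
> 
>       num = rte_eth_lanes_get_capa(port_id, capa);
>       if (num < 0)
>               return num;
> 
>       for (i = 0; i < num; i++) {
>               if (capa[i].speed == speed &&
>                   (capa[i].capa & (1u << (lanes - 1))) != 0)
>                       return rte_eth_lanes_set(port_id, lanes);
>       }
> 
>       return -ENOTSUP;
> }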
> 
> I think it is better to implement the lanes setting feature in this way.
> 
> Welcome to jump into the discussion.

Wouldn't the best way to handle this be to make lanes as similar as possible
to how link speed is handled in ethdev now? It would mean holding off until the
24.11 release to do it right, and doing things like adding link_lanes to the
rte_eth_link struct.