On 2024/3/22 21:58, Thomas Monjalon wrote:
> 22/03/2024 08:09, Dengdui Huang:
>> -#define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
>> -#define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
>> -#define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
>> -#define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
>> -#define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
>> -#define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
>> -#define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
>> -#define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
>> -#define RTE_ETH_LINK_SPEED_400G RTE_BIT32(16) /**< 400 Gbps */
>> +#define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
>> +#define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps 2 lanes */
>> +#define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
>> +#define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps 4 lanes */
>> +#define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
>> +#define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps 4 lanes */
>> +#define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
>> +#define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps 4 lanes */
>> +#define RTE_ETH_LINK_SPEED_400G RTE_BIT32(16) /**< 400 Gbps 4 lanes */
>> +#define RTE_ETH_LINK_SPEED_10G_4LANES RTE_BIT32(17) /**< 10 Gbps 4 lanes */
>> +#define RTE_ETH_LINK_SPEED_50G_2LANES RTE_BIT32(18) /**< 50 Gbps 2 lanes */
>> +#define RTE_ETH_LINK_SPEED_100G_2LANES RTE_BIT32(19) /**< 100 Gbps 2 lanes */
>> +#define RTE_ETH_LINK_SPEED_100G_4LANES RTE_BIT32(20) /**< 100 Gbps 4 lanes */
>> +#define RTE_ETH_LINK_SPEED_200G_2LANES RTE_BIT32(21) /**< 200 Gbps 2 lanes */
>> +#define RTE_ETH_LINK_SPEED_400G_8LANES RTE_BIT32(22) /**< 400 Gbps 8 lanes */
>
> I don't think it is a good idea to make this more complex.
> It brings nothing as far as I can see, compared to having speed and lanes
> separated.
> Can we have lanes information as a separate value? No need for a bitmask.
>
Hi Thomas, Ajit, roretzla, damodharam,
I also considered this option at the beginning of the design,
but decided against it for the following reasons:
1. For the user, ethtool already couples speed and lanes.
Querying the NIC capability gives, for example:
    Supported link modes:
            100000baseSR4/Full
            100000baseSR2/Full
and the capability is configured as follows:
    ethtool -s eth1 speed 100000 lanes 4 autoneg off
    ethtool -s eth1 speed 100000 lanes 2 autoneg off
So users are already accustomed to speed and lanes being coupled
(a DPDK equivalent is sketched after this list).
2. For the PHY, when the physical layer capability is configured through MDIO,
speed and lanes are also coupled.
For example, Table 45-7 "PMA/PMD control 2 register bit definitions" [1]
encodes both in the PMA/PMD type selection:
    1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
    0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
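
To make the ethtool analogy from point 1 concrete, here is a minimal sketch
(my illustration, not part of the patch) of the application-side equivalent
with the proposed coupled flags. It assumes the RTE_ETH_LINK_SPEED_100G_2LANES
bit from this series; set_100g_2lanes is just a name for the example, and only
existing ethdev calls are used:

#include <errno.h>
#include <rte_ethdev.h>

/* Sketch only: request "100 Gbps over 2 lanes", analogous to
 * "ethtool -s eth1 speed 100000 lanes 2 autoneg off".
 */
static int
set_100g_2lanes(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* One capability bit describes the whole speed/lanes mode. */
	if ((dev_info.speed_capa & RTE_ETH_LINK_SPEED_100G_2LANES) == 0)
		return -ENOTSUP;

	conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED |
			    RTE_ETH_LINK_SPEED_100G_2LANES;
	/* Queue counts and the rest of *conf are assumed to be filled
	 * by the caller.
	 */
	return rte_eth_dev_configure(port_id, 1, 1, conf);
}

With speed and lanes as separate values, the same request would need two
fields plus extra validation that the combination is actually supported.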
Therefore, coupling speed and lanes is easier to understand,
and it is easier for the driver to report the supported lanes.
In addition, this implementation stays compatible with the old API:
a driver that does not support the lanes setting does not need to be
modified at all (see the second sketch below).
So I think coupling speed and lanes is the better approach.
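
To illustrate the compatibility point, a rough sketch with the same assumptions
as above (pick_link_speeds is only a name for this example): an unmodified
driver simply never sets the new lane-specific bits in speed_capa, so the
existing path keeps working unchanged and only lanes-aware code looks at the
new bits.

#include <rte_ethdev.h>

/* Sketch only: capability-driven selection. The fallback branch is what
 * applications already do today; it is untouched when a driver reports
 * no lane-specific bits.
 */
static uint32_t
pick_link_speeds(const struct rte_eth_dev_info *dev_info)
{
	if (dev_info->speed_capa & RTE_ETH_LINK_SPEED_100G_2LANES)
		return RTE_ETH_LINK_SPEED_FIXED |
		       RTE_ETH_LINK_SPEED_100G_2LANES;

	/* Unchanged path for old drivers and lanes-unaware applications. */
	if (dev_info->speed_capa & RTE_ETH_LINK_SPEED_100G)
		return RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_100G;

	return RTE_ETH_LINK_SPEED_AUTONEG;
}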
[1]
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9844436