On Tue, Mar 26, 2024 at 6:47 AM Ajit Khaparde
<ajit.khapa...@broadcom.com> wrote:
>
> On Tue, Mar 26, 2024 at 4:15 AM lihuisong (C) <lihuis...@huawei.com> wrote:
> >
> >
> > On 2024/3/26 18:30, Thomas Monjalon wrote:
> > > 26/03/2024 02:42, lihuisong (C):
> > >> On 2024/3/25 17:30, Thomas Monjalon wrote:
> > >>> 25/03/2024 07:24, huangdengdui:
> > >>>> On 2024/3/22 21:58, Thomas Monjalon wrote:
> > >>>>> 22/03/2024 08:09, Dengdui Huang:
> > >>>>>> -#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
> > >>>>>> -#define RTE_ETH_LINK_SPEED_400G    RTE_BIT32(16) /**< 400 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_10G            RTE_BIT32(8)  /**< 10 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_20G            RTE_BIT32(9)  /**< 20 Gbps 2 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_25G            RTE_BIT32(10) /**< 25 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_40G            RTE_BIT32(11) /**< 40 Gbps 4 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_50G            RTE_BIT32(12) /**< 50 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_56G            RTE_BIT32(13) /**< 56 Gbps 4 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_100G           RTE_BIT32(14) /**< 100 Gbps */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_200G           RTE_BIT32(15) /**< 200 Gbps 4 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_400G           RTE_BIT32(16) /**< 400 Gbps 4 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_10G_4LANES     RTE_BIT32(17) /**< 10 Gbps 4 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_50G_2LANES     RTE_BIT32(18) /**< 50 Gbps 2 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_100G_2LANES    RTE_BIT32(19) /**< 100 Gbps 2 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_100G_4LANES    RTE_BIT32(20) /**< 100 Gbps 4 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_200G_2LANES    RTE_BIT32(21) /**< 200 Gbps 2 lanes */
> > >>>>>> +#define RTE_ETH_LINK_SPEED_400G_8LANES    RTE_BIT32(22) /**< 400 Gbps 8 lanes */
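
Under the coupled encoding above, a fixed 100G link over 2 lanes would be
requested with a single capability bit, e.g. (illustrative use of the
proposed macros together with the existing RTE_ETH_LINK_SPEED_FIXED flag):

    dev_conf.link_speeds = RTE_ETH_LINK_SPEED_100G_2LANES |
                           RTE_ETH_LINK_SPEED_FIXED;
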
> > >>>>> I don't think it is a good idea to make this more complex.
> > >>>>> It brings nothing as far as I can see, compared to having speed and 
> > >>>>> lanes separated.
> > >>>>> Can we have the lanes information as a separate value? No need for a bitmask.
> > >>>>>
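
For comparison, a minimal sketch of the separate-value idea (the struct name
and fields here are hypothetical, not from any patch):

    /* Lanes carried as a plain number next to the speed, instead of new
     * RTE_ETH_LINK_SPEED_*_xLANES bits in the speed bitmask. */
    struct eth_link_conf {
        uint32_t speed; /* RTE_ETH_SPEED_NUM_*, e.g. RTE_ETH_SPEED_NUM_100G */
        uint8_t  lanes; /* number of PHY lanes, e.g. 2 or 4; 0 = driver default */
    };
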
> > >>>> Hi Thomas, Ajit, roretzla, damodharam,
> > >>>>
> > >>>> I also considered this option at the beginning of the design,
> > >>>> but did not use it for the following reasons:
> > >>>> 1. For the user, ethtool couples speed and lanes.
> > >>>> The result of querying the NIC capability is as follows:
> > >>>> Supported link modes:
> > >>>>           100000baseSR4/Full
> > >>>>           100000baseSR2/Full
> > >>>> The NIC capability is configured as follows:
> > >>>> ethtool -s eth1 speed 100000 lanes 4 autoneg off
> > >>>> ethtool -s eth1 speed 100000 lanes 2 autoneg off
> > >>>>
> > >>>> Therefore, users are more accustomed to the coupling of speed and 
> > >>>> lanes.
> > >>>>
> > >>>> 2. For the PHY, when the physical layer capability is configured
> > >>>> through the MDIO, the speed and lanes are also coupled.
> > >>>> For example:
> > >>>> Table 45–7—PMA/PMD control 2 register bit definitions[1]
> > >>>> PMA/PMD type selection
> > >>>>                           1 0 0 1 0 1 0 = 100GBASE-SR2 PMA/PMD
> > >>>>                           0 1 0 1 1 1 1 = 100GBASE-SR4 PMA/PMD
> > >>>>
> > >>>> Therefore, coupling speed and lanes is easier to understand,
> > >>>> and it is easier for the driver to report the supported lanes.
> > >>>>
> > >>>> In addition, the code implementation is compatible with the old
> > >>>> version: when a driver does not support setting lanes, its code
> > >>>> does not need to be modified.
> > >>>>
> > >>>> So I think coupling speed and lanes is better.
> > >>> I don't think so.
> > >>> You are mixing hardware implementation, user tool, and API.
> > >>> Having a separate and simple API is cleaner and not more difficult to 
> > >>> handle
> > >>> in some get/set style functions.
> > >> Having a separate and simple API is cleaner. That's good.
> > >> But the supported lane capabilities depend heavily on the specified
> > >> speed; this is determined by the related specification.
> > >> If we add a separate API for speed lanes, it is probably hard to check
> > >> the validity of the combined speed and lanes configuration.
> > >> And a lane-setting API separated from speed is not good for
> > >> keeping all PMDs' behavior uniform at the ethdev layer.
> > > Please let's be more specific.
> > > There are 3 needs:
> > >       - set PHY lane config
> > >       - get PHY lane config
> > >       - get PHY lane capabilities
> > IMO, these lane capabilities should be reported based on the supported
> > speed capabilities.
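
A hedged sketch of how those three needs could map onto functions (all names
and signatures here are illustrative assumptions, not the actual series):

    /* Set the number of PHY lanes for a port (speed fixed beforehand). */
    int rte_eth_lanes_set(uint16_t port_id, uint32_t lanes);
    /* Get the currently active number of PHY lanes. */
    int rte_eth_lanes_get(uint16_t port_id, uint32_t *lanes);
    /* Fill 'capa' with up to 'num' supported (speed, lanes) entries;
     * return the total number of entries available. */
    int rte_eth_lanes_get_capability(uint16_t port_id,
                                     struct rte_eth_lanes_capa *capa,
                                     unsigned int num);
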
> > >
> > > There is no problem providing a function to get the number of PHY lanes.
> > > It is possible to set the number of PHY lanes after defining a fixed speed.
> > Yes, that's OK.
> > >
> > >> The patch [1] is an example of this separate API.
> > >> I think it is not very good. It cannot tell the user and the PMD the
> > >> following points:
> > >> 1) The user doesn't know what lanes should or can be set for a
> > >> specified speed on a given NIC.
> > > This is about capabilities.
> > > Can we say a HW will support a maximum number of PHY lanes in general?
> > > We may need to associate the maximum speed per lane?
> > > Do we really need to associate PHY lane and PHY speed numbers for 
> > > capabilities?
> > Personally, it should contain at least the relationships below:
> > speed 10G  --> 1 lane | 4 lanes
> > speed 100G --> 2 lanes | 4 lanes
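
One way to report exactly that relationship is one capability entry per
speed, with a bitmap of supported lane counts (a sketch; the struct and the
example table are hypothetical):

    struct rte_eth_lanes_capa {
        uint32_t speed; /* RTE_ETH_SPEED_NUM_* value */
        uint32_t capa;  /* bitmap: bit n set means n lanes supported */
    };

    /* The mapping above would then be reported as: */
    static const struct rte_eth_lanes_capa hw_capa[] = {
        { RTE_ETH_SPEED_NUM_10G,  RTE_BIT32(1) | RTE_BIT32(4) }, /* 1|4 lanes */
        { RTE_ETH_SPEED_NUM_100G, RTE_BIT32(2) | RTE_BIT32(4) }, /* 2|4 lanes */
    };
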
> > > Example: if a HW supports 100G-4-lanes and 200G-2-lanes,
> > > may we assume it is also supporting 200G-4-lanes?
> > I think we cannot assume that the NIC also supports 200G-4-lanes,
> > because that depends heavily on the HW design.
> > >
> > >> 2) how should PMD do for a supported lanes in their HW?
> > > I don't understand this question. Please rephrase.
> > I mean that the PMD doesn't know how many lanes to set when the lane
> > count requested by the user is not supported at a given fixed speed.
> > So the ethdev layer should limit the available lane number based on a
> > fixed speed.
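
Assuming a driver-reported table like the rte_eth_lanes_capa sketch above,
the ethdev layer could enforce that limit generically (hypothetical helper,
not existing code):

    /* Accept a (speed, lanes) pair only if the PMD reported it. */
    static int
    eth_link_lanes_validate(uint32_t speed, uint32_t lanes,
                            const struct rte_eth_lanes_capa *capa,
                            unsigned int num)
    {
        unsigned int i;

        for (i = 0; i < num; i++)
            if (capa[i].speed == speed &&
                (capa[i].capa & RTE_BIT32(lanes)) != 0)
                return 0;
        return -EINVAL; /* combination not supported by this PMD */
    }
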
>
> ethdev layer has generally been opaque. We should keep it that way.
I mis-typed.
%s/opaque/transparent


> The PMD should know what the HW supports.
> So it should show the capabilities correctly. Right?
> And if the user provides incorrect settings, it should reject them.
>
> > >
> > >> Anyway, if we add a speed-lanes setting feature, we must report and
> > >> set speed and lanes capabilities for the user properly.
> > >> Otherwise, the user will be more confused.
> > > Doing it well does not necessarily mean exposing all raw combinations as ethtool does.
> > Agreed.
> > >
> > >> [1] https://patchwork.dpdk.org/project/dpdk/list/?series=31606
