On 11/2/2023 5:12 PM, Morten Brørup wrote:
>> From: Ferruh Yigit [mailto:ferruh.yi...@amd.com]
>> Sent: Thursday, 2 November 2023 18.06
>>
>> On 11/2/2023 4:51 PM, Morten Brørup wrote:
>>>> From: Ferruh Yigit [mailto:ferruh.yi...@amd.com]
>>>> Sent: Thursday, 2 November 2023 17.24
>>>>
>>>> On 11/2/2023 1:59 AM, lihuisong (C) wrote:
>>>>>
>>>>>> On 2023/11/2 0:08, Stephen Hemminger wrote:
>>>>>> On Wed, 1 Nov 2023 10:36:07 +0800
>>>>>> "lihuisong (C)" <lihuis...@huawei.com> wrote:
>>>>>>
>>>>>>>> Do we need to report this size? It's a common feature for all PMDs.
>>>>>>>> It would make sense then to have max_rx_bufsize set to 16K by default
>>>>>>>> in ethdev, and PMD could then raise/lower based on hardware.
>>>>>>> It is not appropriate to set 16K by default in the ethdev layer,
>>>>>>> because I don't see any check for the upper bound in some drivers,
>>>>>>> like axgbe, enetc and so on.
>>>>>>> I'm not sure whether they have no upper bound.
>>>>>>> And some drivers' maximum buffer size is "16384 (16K) - 128".
>>>>>>> So it's better to set it to UINT32_MAX by default.
>>>>>>> What do you think?
>>>>>> The goal is always giving the application a working upper bound, and
>>>>>> enforcing that as much as possible in the ethdev layer. It doesn't
>>>>>> matter which pattern does that. Fortunately, telling the application
>>>>>> an incorrect answer is not fatal.
>>>>>> If overestimated, the application's pool will waste space.
>>>>>> If underestimated, the application will get more fragmented packets.
>>>>> I know what you mean.
>>>>> If we set UINT32_MAX, it just means that the driver doesn't report this
>>>>> upper bound.
>>>>> This is also a very common way of handling it, and it has no effect on
>>>>> the drivers that don't report this value.
>>>>> On the contrary, if we set a default value (like 16K) in ethdev, the
>>>>> user may be misled and confused by it, right?
>>>>> After all, this isn't the real upper bound of all drivers, and this
>>>>> fixed default value may affect the behavior of some drivers whose upper
>>>>> bound I didn't find.
>>>>> So I'd like to keep it as UINT32_MAX.
>>>>>
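
A minimal sketch of the scheme debated above, assuming the max_rx_bufsize
field this series adds to struct rte_eth_dev_info; the callback name and the
hardware limit below are illustrative, not taken from any real PMD:

#include <stdint.h>
#include <rte_ethdev.h>

/* ethdev side: rte_eth_dev_info_get() pre-fills a permissive default
 * meaning "no per-descriptor limit reported by the driver". */
static void
ethdev_prefill_info(struct rte_eth_dev_info *dev_info)
{
        dev_info->max_rx_bufsize = UINT32_MAX;
}

/* PMD side: a driver whose Rx descriptors can address at most
 * 16384 - 128 bytes per buffer overrides that default in its
 * dev_infos_get callback (numbers illustrative). */
static int
example_dev_infos_get(struct rte_eth_dev *dev,
                      struct rte_eth_dev_info *dev_info)
{
        RTE_SET_USED(dev);
        dev_info->max_rx_bufsize = 16384 - 128;
        return 0;
}
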
>>>>
>>>>
>>>> Hi Stephen, Morten,
>>>>
>>>> I saw scattered Rx mentioned; there may be some misalignment.
>>>> The purpose of the patch is not to enable the application to set as big
>>>> a mbuf size as possible so that it can escape from parsing multi-segment
>>>> mbufs.
>>>> Indeed, the application can provide a large mbuf anyway, to the same
>>>> effect, without knowing this information.
>>>>
>>>> The main motivation is the other way around: the device may have a
>>>> restriction on the buffer size that a single descriptor can address,
>>>> independent of whether scattered Rx is used. If the mbuf size is bigger
>>>> than this device limit, each mbuf will have some unused space.
>>>> The patch intends to report this maximum per-mbuf/per-descriptor buffer
>>>> size, so that the application doesn't allocate bigger mbufs and waste
>>>> memory.
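
A minimal sketch of that intended use on the application side, assuming the
max_rx_bufsize field from this series; pool parameters and error handling
are illustrative:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Clamp the mbuf data room to the device's per-descriptor limit so
 * that no part of any mbuf's buffer goes unused. */
static struct rte_mempool *
create_rx_pool(uint16_t port_id, uint32_t wanted_bufsize)
{
        struct rte_eth_dev_info dev_info;
        uint32_t bufsize;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return NULL;

        bufsize = RTE_MIN(wanted_bufsize, dev_info.max_rx_bufsize);

        /* data_room_size is uint16_t; assumes the clamped size fits. */
        return rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                        (uint16_t)(bufsize + RTE_PKTMBUF_HEADROOM),
                        rte_socket_id());
}
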
>>>
>>> Good point!
>>>
>>> Let's categorize this patch series as a memory optimization for
>>> applications that support jumbo frames, but are trying to avoid (or
>>> reduce) scattered RX. :-)
>>>
>>
>> It is a memory optimization patch, but again it has nothing to do with
>> jumbo frames or scattered Rx.
> 
> I expect all NICs to support standard Ethernet frames without scattered RX.
> 
> So I consider this patch related to jumbo frames (and non-scattered RX).
> Is there any other use case?
> 

I was thinking this is mainly for misconfiguration by the application,
but if done intentionally, yes, the intention of the application can be to
receive jumbo frames.

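For that jumbo-frame case, a sketch of how the reported limit relates to
scattered Rx; frame_overhead (L2 header plus CRC) is an assumption for
illustration:

#include <stdbool.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* A frame larger than what one descriptor's buffer can hold can only
 * be received with RTE_ETH_RX_OFFLOAD_SCATTER enabled, arriving as a
 * multi-segment mbuf chain. */
static bool
needs_scattered_rx(const struct rte_eth_dev_info *dev_info,
                   uint16_t mtu, uint32_t frame_overhead)
{
        return (uint32_t)mtu + frame_overhead > dev_info->max_rx_bufsize;
}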