> On Wed, 2 Dec 2020 19:49:58 +0100 Thomas Karlsson wrote:
>> Background:
>> Broadcast and multicast packets are enqueued for later processing.
>> This queue was previously hardcoded to 1000.
>>
>> This proved insufficient for handling very high packet rates.
>> This resulted in packet drops for multicast, while unicast
>> traffic continued to work fine.
>>
>> The change:
>> This patch makes the queue length adjustable to accommodate
>> environments with a very high multicast packet rate, while still
>> keeping the default value of 1000 unless otherwise specified.
>>
>> The queue length is specified as a request per macvlan
>> using the IFLA_MACVLAN_BC_QUEUE_LEN parameter.
>>
>> The queue length actually used will then be the maximum requested
>> by any macvlan connected to the same port. This effective queue
>> length for the port can be retrieved (read only) via the
>> IFLA_MACVLAN_BC_QUEUE_LEN_USED parameter for verification.
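
For anyone following the thread without the patch open: the port-level
behaviour boils down to roughly the sketch below. Struct and field names
here are simplified for illustration and are not the literal ones from
the patch.

	/* Recompute the queue length the port will actually use as the
	 * largest value requested by any macvlan attached to it.
	 * Each macvlan's request defaults to 1000 when the attribute is
	 * not specified, so the port keeps the old behaviour unless
	 * someone asks for something else.
	 * (Simplified sketch; assumes the usual <linux/list.h> helpers.)
	 */
	static void recompute_port_bc_queue_len(struct example_port *port)
	{
		struct example_vlan *vlan;
		u32 max_len = 0;

		list_for_each_entry(vlan, &port->vlans, list)
			max_len = max(max_len, vlan->bc_queue_len_req);

		port->bc_queue_len_used = max_len;
	}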
>>
>> This will be followed up by a patch to iproute2
>> in order to adjust the parameter from userspace.
>>
>> Signed-off-by: Thomas Karlsson <thomas.karls...@paneda.se>
> 
>> @@ -1658,6 +1680,8 @@ static const struct nla_policy macvlan_policy[IFLA_MACVLAN_MAX + 1] = {
>>      [IFLA_MACVLAN_MACADDR] = { .type = NLA_BINARY, .len = MAX_ADDR_LEN },
>>      [IFLA_MACVLAN_MACADDR_DATA] = { .type = NLA_NESTED },
>>      [IFLA_MACVLAN_MACADDR_COUNT] = { .type = NLA_U32 },
>> +    [IFLA_MACVLAN_BC_QUEUE_LEN] = { .type = NLA_U32 },
> 
> I wonder whether we should require that the queue_len is > 0? Is there 
> a valid use case for the queue to be completely disabled? If you agree
> please follow up with a simple patch which adds a NLA_POLICY_MIN() here.

Well, I did consider whether I should put in a limit like that, but came
to the conclusion that it is probably wise not to make assumptions about
everyone else's use cases. I can't really think of a reason why you would
want to disable the queue completely. However, I think it still makes
sense to allow that, in case someone somewhere down the line does find it
useful.
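
For reference, if a minimum is ever wanted, I believe the policy entry
could simply be switched to the existing NLA_POLICY_MIN() helper, roughly
like this (untested sketch, not something I am proposing right now):

	[IFLA_MACVLAN_BC_QUEUE_LEN] = NLA_POLICY_MIN(NLA_U32, 1),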

> Applied to net-next, thank you!
> 

Awesome, thanks!
