On 11/27/2020 3:19 PM, Tobias Waldekranz wrote:
>> The initial design of switchdev was transactions. First there is a
>> prepare call, where you validate that the requested action is
>> possible and allocate the resources needed, but don't actually do
>> it. This prepare call is allowed to fail. Then there is a second
>> call to actually do it, and that call is not allowed to fail. This
>> structure avoids most of the complexity of the unwind: you just free
>> up some resources. Better still if you never had to allocate any
>> resources in the first place.
>
> OK I think I finally see what you are saying. Sorry it took me this
> long. I do not mean to be difficult, I just want to understand.
>
> How about this:
>
> - Add a `lags_max` field to `struct dsa_switch` to let each driver
>   declare the maximum number of offloaded LAGs its hardware
>   supports. By default this would be zero, meaning that LAG
>   offloading is not supported.
>
> - In dsa_tree_setup, we allocate a static array sized to the minimum
>   `lags_max` across all switches in the tree.
>
> - When joining a new LAG, we ensure that a slot is available in
> NETDEV_PRECHANGEUPPER, avoiding the issue you are describing.
>
> - In NETDEV_CHANGEUPPER, we actually mark it as busy and start using it.
>
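A rough sketch of the slot accounting proposed above, where the PRECHANGEUPPER check may fail but the CHANGEUPPER claim cannot. The function names, the `dsa_lag` struct, and the fixed `TREE_LAGS_MAX` are assumptions for illustration only, not the final kernel API:

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Assumed minimum of lags_max across the tree, for the sketch. */
#define TREE_LAGS_MAX 4

struct dsa_lag {
	const void *dev;	/* the bond/team netdev using this slot */
	bool busy;
};

static struct dsa_lag lags[TREE_LAGS_MAX];

/* NETDEV_PRECHANGEUPPER: verify that a slot is available for this
 * LAG device. This step is allowed to fail. */
static int lag_pre_join(const void *dev)
{
	int i;

	for (i = 0; i < TREE_LAGS_MAX; i++)
		if (!lags[i].busy || lags[i].dev == dev)
			return 0;
	return -ENOSPC;
}

/* NETDEV_CHANGEUPPER: mark the slot busy and start using it.
 * Guaranteed to succeed after a successful lag_pre_join(). */
static int lag_join(const void *dev)
{
	int i;

	/* Ports joining an already-offloaded LAG share its slot... */
	for (i = 0; i < TREE_LAGS_MAX; i++)
		if (lags[i].busy && lags[i].dev == dev)
			return i;
	/* ...otherwise claim a free one. */
	for (i = 0; i < TREE_LAGS_MAX; i++) {
		if (!lags[i].busy) {
			lags[i].dev = dev;
			lags[i].busy = true;
			return i;
		}
	}
	return -ENOSPC;	/* unreachable if lag_pre_join() succeeded */
}
```

This keeps all failure handling in PRECHANGEUPPER, so CHANGEUPPER never needs an unwind path.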
Sounds reasonable to me.
--
Florian