Hi Tobias,

On Tue, Oct 27, 2020 at 11:51:13AM +0100, Tobias Waldekranz wrote:
> I would really appreciate feedback on the following:
>
> All LAG configuration is cached in `struct dsa_lag`s. I realize that
> the standard M.O. of DSA is to read back information from hardware
> when required. With LAGs this becomes very tricky though. For example,
> the change of a link state on one switch will require re-balancing of
> LAG hash buckets on another one, which in turn depends on the total
> number of active links in the LAG. Do you agree that this is
> motivated?

I don't really have an issue with that.
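Just to make sure we picture the same thing, I imagine the cached state
per LAG looks something along these lines (purely my guess, the field
names below are mine and not necessarily what your series uses):

struct dsa_lag {
	struct net_device *dev;		/* the team/bond upper device */
	int id;				/* hardware trunk/LAG index */
	unsigned int num_tx_enabled;	/* active lowers, input to rebalancing */
	refcount_t refcount;		/* ports in the tree that joined this LAG */
	struct list_head list;		/* entry in the tree's list of LAGs */
};

i.e. just enough software state that any switch in the tree can
recompute its hash bucket distribution when a single link changes
state, without reading anything back from the hardware.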
> The LAG driver ops all receive the LAG netdev as an argument when this
> information is already available through the port's lag pointer. This
> was done to match the way that the bridge netdev is passed to all VLAN
> ops even though it is in the port's bridge_dev. Is there a reason for
> this or should I just remove it from the LAG ops?

Maybe because on "leave", the bridge/LAG net device pointer inside
struct dsa_port is first set to NULL, then the DSA notifier is called?

> At least on mv88e6xxx, the exact source port is not available when
> packets are received on the CPU. The way I see it, there are two ways
> around that problem:
>
> - Inject the packet directly on the LAG device (what this series
>   does). Feels right because it matches all that we actually know; the
>   packet came in on the LAG. It does complicate dsa_switch_rcv
>   somewhat as we can no longer assume that skb->dev is a DSA port.
>
> - Inject the packet on "the designated port", i.e. some port in the
>   LAG. This lets us keep the current Rx path untouched. The problem is
>   that (a) the port would have to be dynamically updated to match the
>   expectations of the LAG driver (team/bond) as links are
>   enabled/disabled and (b) we would be presenting a lie because
>   packets would appear to ingress on netdevs that they might not in
>   fact have been physically received on.

Since ocelot/felix does not have this restriction, and supports
individual port addressing even under a LAG, you can imagine I am not
very happy to see the RX data path punishing everyone else that is not
mv88e6xxx.

> (mv88e6xxx) What is the policy regarding the use of DSA vs. EDSA? It
> seems like all chips capable of doing EDSA are using that, except for
> the Peridot.

I have no documentation whatsoever for mv88e6xxx, but just wondering,
what is the benefit brought by EDSA here vs DSA? Does DSA have the same
restriction when the ports are in a LAG?
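By the way, to illustrate the "leave" ordering I am referring to above,
this is roughly (paraphrased from memory, so double-check against
net/dsa/port.c) what the bridge leave path does today:

void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br)
{
	struct dsa_notifier_bridge_info info = {
		.tree_index = dp->ds->dst->index,
		.sw_index = dp->ds->index,
		.port = dp->index,
		.br = br,
	};

	/* The port is already unbridged at this point, so by the time
	 * the drivers see the notifier, dp->bridge_dev can no longer
	 * tell them which bridge they are leaving - it has to come via
	 * the notifier info.
	 */
	dp->bridge_dev = NULL;

	dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info);
}

If the LAG ops mirror this pattern, dropping the net_device argument
would leave the drivers looking at a NULL lag pointer on the leave
path.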
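And to make it concrete what "we can no longer assume that skb->dev is
a DSA port" means for the fast path: I imagine dsa_switch_rcv ends up
with a test along these lines (my own sketch of the idea, not taken
from your patches):

	nskb = cpu_dp->rcv(skb, dev, pt);
	if (!nskb) {
		kfree_skb(skb);
		return 0;
	}

	skb = nskb;
	skb_push(skb, ETH_HLEN);
	skb->pkt_type = PACKET_HOST;
	skb->protocol = eth_type_trans(skb, skb->dev);

	if (!dsa_slave_dev_check(skb->dev)) {
		/* The tagger could only resolve the LAG, not the exact
		 * source port, so hand the skb straight to the
		 * team/bond device.
		 */
		netif_rx(skb);
		return 0;
	}

That is the kind of per-packet test that ocelot/felix, which can always
tell you the precise source port even under a LAG, would be paying for
without needing it.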