On Wed, Mar 24, 2021 at 04:07:47PM -0700, Florian Fainelli wrote:
> > What are the benefits of mapping packets to TX queues of the DSA master
> > from the DSA layer?
>
> For systemport and bcm_sf2 this was explained in this commit:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d156576362c07e954dc36e07b0d7b0733a010f7d
>
> In a nutshell, the switch hardware can return the queue status back to
> the systemport's transmit DMA such that it can automatically pace the TX
> completion interrupts. To do that we need to establish a mapping between
> the DSA slave and master that is comprised of the switch port number and
> TX queue number, and tell the HW to inspect the congestion status of
> that particular port and queue.
>
> What this is meant to address is a "lossless" (within the SoC at least)
> behavior when you have user ports that are connected at a speed lower
> than that of your internal connection to the switch, typically Gigabit or
> more. If you send 1Gbit/sec worth of traffic down to a port that is
> connected at 100Mbit/sec there will be roughly 90% packet loss unless
> you have a way to pace the Ethernet controller's transmit DMA, which
> then ultimately limits the TX completion of the socket buffers so things
> work nicely. I believe that per-queue flow control was evaluated before
> and an out-of-band mechanism was preferred, but I do not remember the
> details of that decision to use ACB.
Interesting system design. Just to clarify, this port-to-queue mapping is completely optional, right? You can send packets to a certain switch port through any TX queue of the systemport?
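
To make sure I am reading the commit right, here is roughly how I picture the data path, written as a standalone sketch I can compile and poke at. The macro names, bit layout and queue counts below are placeholders of mine, not the actual bcmsysport/tag_brcm definitions, so please correct me where the real driver differs:

#include <stdint.h>
#include <stdio.h>

/* Assumed encoding: switch port in the upper byte of skb->queue_mapping,
 * TX queue in the lower byte. The real kernel macros may use a different
 * layout.
 */
#define SET_PORT_QUEUE(port, queue)	(((port) << 8) | (queue))
#define GET_PORT(v)			((v) >> 8)
#define GET_QUEUE(v)			((v) & 0xff)

#define PER_PORT_NUM_TX_QUEUES	4	/* assumed per-port queue count */
#define NUM_SWITCH_PORTS	8	/* assumed number of switch ports */

/* Each systemport TX ring is dedicated to one (switch port, queue) pair so
 * the out-of-band congestion feedback (ACB) can pace that ring's TX
 * completion interrupts.
 */
struct tx_ring {
	unsigned int index;
};

static struct tx_ring ring_map[NUM_SWITCH_PORTS * PER_PORT_NUM_TX_QUEUES];

/* What the DSA tagger would do on xmit: stamp the (switch port, queue) pair
 * into the skb's queue_mapping before handing the packet to the master.
 */
static uint16_t tagger_select_queue(unsigned int switch_port, unsigned int queue)
{
	return SET_PORT_QUEUE(switch_port, queue % PER_PORT_NUM_TX_QUEUES);
}

/* What the master's ndo_select_queue would do: decode the pair and pick the
 * dedicated ring whose congestion status the switch reports back.
 */
static unsigned int master_select_ring(uint16_t queue_mapping)
{
	unsigned int q = GET_QUEUE(queue_mapping);
	unsigned int port = GET_PORT(queue_mapping);

	return ring_map[port * PER_PORT_NUM_TX_QUEUES + q].index;
}

int main(void)
{
	unsigned int i;

	for (i = 0; i < NUM_SWITCH_PORTS * PER_PORT_NUM_TX_QUEUES; i++)
		ring_map[i].index = i;

	/* A packet for switch port 5, queue 2 always lands on one fixed
	 * master ring.
	 */
	uint16_t qm = tagger_select_queue(5, 2);
	printf("port 5 queue 2 -> master ring %u\n", master_select_ring(qm));
	return 0;
}

In other words, my understanding is that each (switch port, queue) pair gets pinned to one fixed master ring, and that pinning is what lets ACB pace the completions for that ring.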