https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203856
--- Comment #11 from Eugene Grosbein <eu...@freebsd.org> ---
There seems to be a common misunderstanding of how hardware receive queues work in igb(4) chipsets. First, one should read Intel's datasheet for the NIC. For an 82576-based NIC this is https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/82576eb-gigabit-ethernet-controller-datasheet.pdf

Section 7.1.1.7 of the datasheet states that the NIC "supports a single hash function, as defined by Microsoft RSS". Reading on, one learns this means that only frames carrying IPv4 or IPv6 packets are hashed, using their IP addresses (and, optionally, TCP port numbers) as hash function arguments. This means that incoming PPPoE ethernet frames are NOT hashed by such a NIC in hardware, just like any other frames carrying no plain IPv4 or IPv6 packets. This is why all incoming PPPoE ethernet frames end up in the same (zero) queue. The igb(4) driver has nothing to do with this problem, and the mentioned "patch" cannot solve it either.

However, there are other ways. The most performant way for production use is to combine several igb NICs into a lagg(4) logical channel connected to a managed switch that is configured to distribute traffic flows between the ports of the logical channel based on the source MAC address of each frame. This is useful for mass-servicing of clients, when one has multiple PPPoE clients generating flows of PPPoE frames, each using a distinct MAC address. It is not really useful for a PPPoE client receiving all frames from a single PPPoE server.

There is another way. By default, the FreeBSD kernel performs all processing of a received PPPoE frame within the driver's interrupt context: decapsulation, optional decompression/decryption, network address translation, routing lookups, packet filtering and so on. This can overload a single CPU core in the default configuration, when sysctl net.isr.dispatch=direct.
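A lagg(4) setup of the kind described above might look like this in /etc/rc.conf (a sketch only: the igb0/igb1 interface names and LACP as the aggregation protocol are assumptions, and the switch side must be configured to distribute frames across the channel by source MAC):

```
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 up"
```

Again, this spreads load only when many clients send from distinct MAC addresses; a single PPPoE server's frames would still all land on one port.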
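The Microsoft RSS hash the datasheet refers to is a Toeplitz hash computed only over the IP addresses (and, for TCP, the port numbers); a PPPoE frame contributes no hash input at all, so the NIC falls back to queue zero. A minimal Python sketch of the hash, using the well-known key and IPv4/TCP tuple from Microsoft's RSS verification suite (not anything specific to igb(4)):

```python
import socket
import struct

# Standard 40-byte RSS key from Microsoft's verification suite.
RSS_KEY = bytes.fromhex(
    "6d5a56da255b0ec24167253d43a38fb0"
    "d0ca2bcbae7b30b477cb2da38030f20c"
    "6a42b73bbeac01fa"
)

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """For every set bit of the input (left to right), XOR in the 32-bit
    window of the key starting at that bit position."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for b in range(8):
            if byte & (0x80 >> b):
                shift = key_bits - 32 - (i * 8 + b)
                result ^= (key_int >> shift) & 0xFFFFFFFF
    return result

# Hash input for IPv4/TCP: src addr, dst addr, src port, dst port, in
# network byte order (verification-suite flow 66.9.149.187:2794 ->
# 161.142.100.80:1766). A non-IP frame simply has no such tuple.
data = (socket.inet_aton("66.9.149.187")
        + socket.inet_aton("161.142.100.80")
        + struct.pack(">HH", 2794, 1766))
print(hex(toeplitz_hash(RSS_KEY, data)))
```

The low bits of the result index the NIC's queue indirection table, which is why hashable flows spread across queues while unhashed frames all share queue zero.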
Since FreeBSD 8 we have the netisr(9) network dispatch service, which allows any NIC driver to just enqueue a received ethernet frame and cease further processing, freeing its CPU core. Other kernel threads running on other CPU cores then dequeue the received frames and complete the decapsulation etc., loading all CPU cores evenly. So one just has to make sure that "net.isr.maxthreads" and "net.isr.numthreads" are greater than 1 and switch net.isr.dispatch to "deferred", which permits NIC drivers to use netisr(9) queues to distribute load between CPU cores.
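The netisr(9) tuning above might be sketched as follows (the thread count of 4 is an arbitrary example). net.isr.maxthreads is a boot-time tunable, so it belongs in /boot/loader.conf; on modern FreeBSD net.isr.dispatch can also be changed at runtime, and net.isr.numthreads is read-only, merely reporting how many worker threads were actually started:

```
# /boot/loader.conf -- boot-time tunables
net.isr.maxthreads="4"      # allow up to 4 netisr worker threads (example value)
net.isr.bindthreads="1"     # optionally pin each worker thread to a CPU core

# /etc/sysctl.conf -- runtime settings
net.isr.dispatch=deferred   # enqueue frames instead of direct in-interrupt dispatch
```

After a reboot, "sysctl net.isr.numthreads" should confirm that more than one thread is running.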