Hi Keith,

On Mon, Jun 13, 2016 at 9:35 PM, Wiles, Keith <keith.wiles at intel.com> wrote:
>
> On 6/13/16, 9:07 AM, "dev on behalf of Take Ceara" <dev-bounces at dpdk.org
> on behalf of dumitru.ceara at gmail.com> wrote:
>
>>Hi,
>>
>>I'm reposting here as I didn't get any answers on the dpdk-users mailing list.
>>
>>We're working on a stateful traffic generator (www.warp17.net) using
>>DPDK and we would like to control two XL710 NICs (one on each socket)
>>to maximize CPU usage. It looks like we run into the following
>>limitation:
>>
>>http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html
>>section 7.2, point 3
>>
>>We completely split memory/CPU/NICs across the two sockets. However,
>>the performance with a single CPU and both NICs on the same socket is
>>better.
>>Why do all the NICs have to be on the same socket? Is there a
>>driver/hw limitation?
>
> Normally the limitation is in the hardware, basically how the PCI bus is
> connected to the CPUs (or sockets). How the PCI buses are connected to the
> system depends on the motherboard design. I normally see the buses attached
> to socket 0, but you could have some of the buses attached to the other
> sockets, or all on one socket via a PCI bridge device.
>
> There is no easy way around the problem if some of your PCI buses are split
> or all on a single socket. You need to look at your system docs, or at lspci;
> it has an option to dump the PCI bus as an ASCII tree, at least on Ubuntu.
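In case it helps with debugging on our side, here is a minimal sketch (not
from the WARP17 tree, just an illustration; it only uses the existing
rte_eal_init(), rte_eth_dev_count(), rte_eth_dev_socket_id() and
rte_socket_id() calls) that prints the NUMA socket of each probed port next
to the socket of the lcore running the check, so remote ports stand out
without leaving DPDK:

/*
 * numa_check.c - minimal sketch (illustrative only, not part of WARP17).
 * Prints the NUMA socket of each probed port next to the socket of the
 * lcore running the check. API names are the 16.04-era ones.
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

int
main(int argc, char **argv)
{
    uint8_t port;
    unsigned lcore_socket;

    if (rte_eal_init(argc, argv) < 0) {
        printf("EAL init failed\n");
        return -1;
    }

    lcore_socket = rte_socket_id();

    for (port = 0; port < rte_eth_dev_count(); port++) {
        int port_socket = rte_eth_dev_socket_id(port);

        /* port_socket < 0 means the socket could not be determined */
        printf("port %u: NUMA socket %d, lcore socket %u -> %s\n",
               (unsigned)port, port_socket, lcore_socket,
               (port_socket < 0 || port_socket == (int)lcore_socket) ?
               "local/unknown" : "remote (crosses QPI)");
    }

    return 0;
}

The same attachment should of course also show up in the ASCII tree that
lspci -t dumps.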
This is the motherboard we use on our system:
http://www.supermicro.com/products/motherboard/Xeon/C600/X10DRX.cfm

I need to swap some NICs around (we have now moved everything to socket 1)
before I can share the lspci output.

Thanks,
Dumitru