On Tue, Apr 21, 2015 at 10:44:54AM -0700, Matthew Hall wrote: > On Tue, Apr 21, 2015 at 10:27:48AM +0100, Bruce Richardson wrote: > > Can you perhaps comment on the use-case where you find this binding > > limiting? Modern platforms have multiple NUMA nodes, but they also > > generally > > have PCI slots connected to those multiple NUMA nodes also, so that you can > > have your NIC ports similarly NUMA partitionned? > > Hi Bruce, > > I was wondering if you have tried to do this on COTS (commerical > off-the-shelf) hardware before. What I found each time I tried it was that > PCIe slots are not very evenly distributed across the NUMA nodes unlike what > you'd expect. >
I doubt I've tried it on regular commercial boards as much as you guys have, though it does happen! > Sometimes the PCIe lanes on CPU 0 get partly used up by Super IO or other > integrated peripherals. Other times the motherboards give you 2 x8 when you > needed 1 x16 or they give you a bundh of x4 when you needed x8, etc. Point taken! > > It's actually pretty difficult to find the mapping, for one, and even when > you > do, even harder to get the right slots for your cards and so on. In the ixgbe > kernel driver you'll sometimes get some cryptic debug prints when it's been > munged and performance will suffer. But in the ixgbe PMD driver you're on > your > own mostly. It was to try and make the NUMA mapping of PCI clearer that we added in the printing of the NUMA node on PCI scan: EAL: PCI device 0000:86:00.0 on NUMA socket 1 EAL: probe driver: 8086:154a rte_ixgbe_pmd EAL: PCI memory mapped at 0x7fb452f04000 EAL: PCI memory mapped at 0x7fb453004000 Is there something more than this you feel we could do in the PMD to help with slot identification? /Bruce > > Matthew.