Thanks Bruce. I didn't know that PCI slots have direct socket affinity. Is it static, or configurable through PCI configuration space? On my NUT, a two-node NUMA system, rte_eth_dev_socket_id(portid) always seems to return -1, whether portid is 0, 1, or any other value. I would appreciate it if you could explain more about how to get the affinity.
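In case a concrete snippet helps: below is a minimal sketch of the fallback I am currently using. The helper name port_socket_or_self() is my own, not anything from l2fwd, and the policy of falling back to the calling lcore's socket is just an assumption that keeps the mempool local to the polling core.

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_memory.h>

    /* Minimal sketch (my own helper, not part of l2fwd): pick a NUMA
     * socket for RX queue/mempool setup. When the port reports
     * SOCKET_ID_ANY (-1), fall back to the socket of the calling
     * lcore rather than passing -1 through. */
    static int
    port_socket_or_self(uint8_t portid)
    {
            int socket = rte_eth_dev_socket_id(portid);

            if (socket == SOCKET_ID_ANY)   /* -1: affinity unknown */
                    socket = (int)rte_socket_id();
            return socket;
    }

With that, calling rte_eth_rx_queue_setup(portid, 0, nb_rxd, port_socket_or_self(portid), NULL, l2fwd_pktmbuf_pool) at least avoids passing -1 through when the port's affinity is unknown.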
P.S. I'm using an Intel Xeon processor and a 1G NIC (82576).

On Fri, Oct 16, 2015 at 10:43 PM, Bruce Richardson <bruce.richardson at intel.com> wrote:

> On Thu, Oct 15, 2015 at 11:08:57AM +0900, Moon-Sang Lee wrote:
> > There is code as below in examples/l2fwd/main.c, and I think
> > rte_eth_dev_socket_id(portid) always returns -1 (SOCKET_ID_ANY),
> > since there is no association code between port and lcore in the
> > example code.
>
> Can you perhaps clarify what you mean here? On modern NUMA systems, such
> as those from Intel :-), the PCI slots are directly connected to the CPU
> sockets, so the ethernet ports do indeed have a direct NUMA affinity.
> It's not something that the app needs to specify.
>
> /Bruce
>
> > (i.e. I need to find a matching lcore from lcore_queue_conf[] with
> > portid and call rte_lcore_to_socket_id(lcore_id).)
> >
> >         /* init one RX queue */
> >         fflush(stdout);
> >         ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
> >                                      rte_eth_dev_socket_id(portid),
> >                                      NULL,
> >                                      l2fwd_pktmbuf_pool);
> >         if (ret < 0)
> >                 rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
> >                          ret, (unsigned) portid);
> >
> > It works fine even though memory is allocated on a different NUMA node,
> > but I wonder whether there is a DPDK API that associates an lcore to a
> > port internally, so that rte_eth_devices[portid].pci_dev->numa_node
> > contains the proper node.
> >
> >
> > --
> > Moon-Sang Lee, SW Engineer
> > Email: sang0627 at gmail.com
> > Wisdom begins in wonder. *Socrates*

--
Moon-Sang Lee, SW Engineer
Email: sang0627 at gmail.com
Wisdom begins in wonder. *Socrates*
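P.P.S. To cross-check Bruce's point about slot-to-socket affinity on my box, I also read the numa_node attribute that the kernel exposes in sysfs, which is where the PCI scan picks up the value. A minimal sketch; the PCI address 0000:01:00.0 is a placeholder and should be replaced with the actual BDF of the 82576 port (see lspci):

    #include <stdio.h>

    int main(void)
    {
            /* "0000:01:00.0" is a placeholder BDF, not my real device */
            const char *path = "/sys/bus/pci/devices/0000:01:00.0/numa_node";
            FILE *f = fopen(path, "r");
            int node = -1;  /* -1 means locality unknown */

            if (f != NULL) {
                    if (fscanf(f, "%d", &node) != 1)
                            node = -1;
                    fclose(f);
            }
            printf("numa_node = %d\n", node);
            return 0;
    }

If this prints -1, the BIOS/ACPI tables are not reporting locality for that slot, which would explain why rte_eth_dev_socket_id() returns -1 as well.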