> -----Original Message-----
> From: Ferruh Yigit <ferruh.yi...@intel.com>
> Sent: Monday, October 15, 2018 6:42 PM
> To: Phil Yang (Arm Technology China) <phil.y...@arm.com>; dev@dpdk.org
> Cc: nd <n...@arm.com>; anatoly.bura...@intel.com
> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
>
> On 10/15/2018 10:51 AM, Phil Yang (Arm Technology China) wrote:
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yi...@intel.com>
> >> Sent: Saturday, October 13, 2018 1:13 AM
> >> To: Phil Yang (Arm Technology China) <phil.y...@arm.com>; dev@dpdk.org
> >> Cc: nd <n...@arm.com>; anatoly.bura...@intel.com
> >> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
> >>
> >> On 10/12/2018 10:34 AM, phil.y...@arm.com wrote:
> >>> The cmdline settings of port-numa-config and rxring-numa-config have
> >>> been flushed by the following init_config. If we don't configure
> >>> port-numa-config, the virtual device will allocate the device ports
> >>> to socket 0. This causes a failure when socket 0 is unavailable.
> >>>
> >>> eg:
> >>> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
> >>> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
> >>> --ring-numa-config="(0,1,1),(0,2,1)" -i
> >>>
> >>> ...
> >>> Configuring Port 0 (socket 0)
> >>> Failed to setup RX queue:No mempool allocation on the socket 0
> >>> EAL: Error - exiting with code: 1
> >>> Cause: Start ports failed
> >>>
> >>> Fix by allocating the device ports to the first available socket or
> >>> the socket configured in port-numa-config.
> >>
> >> I confirm this fixes the issue by making vdev allocate from an
> >> available socket instead of the hardcoded socket 0; overall this makes sense.
> >>
> >> But currently there is no way to request a mempool from "socket 0" if
> >> only cores from "socket 1" are provided in "-l", even with
> >> "port-numa-config" and "rxring-numa-config".
> >> Both this behavior and the problem this patch fixes are caused by:
> >> Commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation")
> >>
> >> It is good to have optimized mempool allocation, but I think this
> >> shouldn't limit the tool. If the user wants mempools from a specific
> >> socket, let them have it.
> >>
> >> What about changing the default behavior to:
> >> 1- Allocate mempool only from the sockets that the coremask provides
> >> (current approach)
> >> 2- Plus, allocate mempool from the sockets of attached devices (this is an
> >> alternative solution to this patch; your solution seems better for
> >> virtual devices, but for physical devices allocating from the socket the
> >> device connects to can be better)
> >> 3- Plus, allocate mempool from the sockets provided in "port-numa-config"
> >> and "rxring-numa-config"
> >>
> >> What do you think?
> >
> > Hi Ferruh,
> >
> > Totally agreed with your suggestion.
> >
> > As I understand, allocating mempool from the sockets of attached devices
> > will enable the cross-NUMA scenario for testpmd.
>
> Yes it will.
>
> > Below is my fix for the physical port mempool allocation issue. So, is it
> > better to separate it into a new patch on top of this one, or to rework
> > this one by adding the fix below? I prefer to add a new one because the
> > current patch has already fixed two defects. Anyway, I will follow your
> > comment.
>
> +1 to separate it into a new patch, so I will check the existing patch.
>
> Below looks good, only I am not sure if it should be in
> `set_default_fwd_ports_config`?
> Or perhaps `set_default_fwd_lcores_config`?

Hi Ferruh,

IMO, 'set_default_fwd_lcores_config' aims to update the sockets info and the
core-related info according to the -l <core list> or -c <core mask> input.
So, going through the attached devices to update the ports' socket info in
'set_default_fwd_ports_config' is reasonable.

I think the initialization process in testpmd goes through the steps below:

                          / 1. 'set_default_fwd_lcores_config'  update core related info
  'set_def_fwd_config' ---  2. 'set_default_peer_eth_addrs'     update port address
       |                  \ 3. 'set_default_fwd_ports_config'   update port (or device) related info
       V
  'launch_args_parse'  ---  update port-numa-config settings
       |
       V                  / 1. allocate a mempool for each available socket recorded in socket_ids[]
  'init_config'        ---  2. 'init_fwd_streams' updates port->socket_id info according to port-numa-config
       :
  start_port

Once this patch is applied, the socket_ids[] update order will affect the
default socket of mempool allocation, e.g.:

  socket_ids[0] = <socket of -l core list>
  socket_ids[1] = <socket of attached device>, when it is not a socket listed in <core list>

For virtual devices, the default socket is socket_ids[0]. For physical
devices, the default socket will be socket_ids[1].

>
> And port-numa-config and rxring-numa-config still not covered.

Those configurations have been initialized in 'launch_args_parse' and they
operate on the forwarding streams in 'init_fwd_streams'. So the patch covers
those configurations.

Thanks,
Phil Yang

> >
> > 565 static void
> > 566 set_default_fwd_ports_config(void)
> > 567 {
> > 568 	portid_t pt_id;
> > 569 	int i = 0;
> > 570
> > 571 	RTE_ETH_FOREACH_DEV(pt_id) {
> > 572 		fwd_ports_ids[i++] = pt_id;
> > 573
> > + 574 		/* Update sockets info according to the attached device */
> > + 575 		int socket_id = rte_eth_dev_socket_id(pt_id);
> > + 576 		if (socket_id >= 0 && new_socket_id(socket_id)) {
> > + 577 			if (num_sockets >= RTE_MAX_NUMA_NODES) {
> > + 578 				rte_exit(EXIT_FAILURE,
> > + 579 					"Total sockets greater than %u\n",
> > + 580 					RTE_MAX_NUMA_NODES);
> > + 581 			}
> > + 582 			socket_ids[num_sockets++] = socket_id;
> > + 583 		}
> > + 584 	}
> > + 585
> > 586 	nb_cfg_ports = nb_ports;
> > 587 	nb_fwd_ports = nb_ports;
> > 588 }
> >
> > Thanks
> > Phil Yang
> >
> >>
> >>>
> >>> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
> >>>
> >>> Signed-off-by: Phil Yang <phil.y...@arm.com>
> >>> Reviewed-by: Gavin Hu <gavin...@arm.com>
> >>
> >> <...>
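
P.S. For reference, the snippet above relies on testpmd's 'new_socket_id'
helper to avoid duplicate entries in socket_ids[]. A minimal sketch of what
it does, paraphrased from my reading of app/test-pmd/testpmd.c (please check
the tree for the exact version; the socket_ids[] and num_sockets globals are
the ones testpmd.c already defines):

static int
new_socket_id(unsigned int socket_id)
{
	unsigned int i;

	/* Return 1 only if this socket is not already recorded in socket_ids[]. */
	for (i = 0; i < num_sockets; i++) {
		if (socket_ids[i] == socket_id)
			return 0;
	}
	return 1;
}

Since 'set_default_fwd_lcores_config' runs before 'set_default_fwd_ports_config'
inside 'set_def_fwd_config', the sockets from the -l core list are recorded
first and a device socket is only appended if it is new, which is exactly why
socket_ids[0] comes from the core list and an off-core-list device socket ends
up at socket_ids[1] as described above.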