Hi Konstantin,

On Tue, Jan 16, 2018 at 12:38:35PM +0000, Ananyev, Konstantin wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of wei.guo.si...@gmail.com
> > Sent: Saturday, January 13, 2018 2:35 AM
> > To: Lu, Wenzhuo <wenzhuo...@intel.com>
> > Cc: dev@dpdk.org; Thomas Monjalon <tho...@monjalon.net>; Simon Guo <wei.guo.si...@gmail.com>
> > Subject: [dpdk-dev] [PATCH v5] app/testpmd: add option ring-bind-lcpu to bind Q with CPU
> >
> > From: Simon Guo <wei.guo.si...@gmail.com>
> >
> > Currently the rx/tx queue is allocated from the buffer pool on the socket of:
> > - the port's socket if --port-numa-config is specified
> > - or the ring-numa-config setting per port
> >
> > Both of the above "bind" a queue to a single socket per port configuration.
> > However, better performance can actually be achieved if one port's queues
> > are spread across multiple NUMA nodes, with each rx/tx queue allocated
> > on its polling lcore's socket.
> >
> > This patch adds a new option "--ring-bind-lcpu" (no parameter). With
> > this, testpmd can utilize the PCI-e bus bandwidth on other NUMA
> > nodes.
> >
> > When the --port-numa-config or --ring-numa-config option is specified,
> > this --ring-bind-lcpu option will be suppressed.
>
> Instead of introducing one more option - wouldn't it be better to
> allow the user to manually define flows and assign them to particular lcores?
> Then the user would be able to create any FWD configuration he/she likes.
> Something like:
> lcore X add flow rxq N,Y txq M,Z
>
> Which would mean - on lcore X, receive packets from port=N, rx_queue=Y,
> and send them through port=M, tx_queue=Z.

Thanks for the comment.

Wouldn't that be too complicated a solution for the user, since each lcore would need to be configured individually? We might have hundreds of lcores on current modern platforms.
Thanks,
- Simon