[dpdk-dev] Mellanox Flow Steering
Hi folks,

I'm trying to use the flow steering features of a Mellanox card to make effective use of a multicore server for a benchmark.

The system has a single-port Mellanox ConnectX-3 EN, and I want to use 4 of the 32 cores present and 4 of the 16 RX queues supported by the hardware (i.e. one RX queue per core).

I assign an RX queue to each of those cores, but without flow steering all of the packets land on a single core, because they all carry the same IP and UDP headers and differ only in the destination MAC of their Ethernet headers. I've set up the client so that it sends packets with a different destination MAC for each RX queue (e.g. RX queue 1 should get 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01, and so on).

I try to accomplish this by using ethtool to set flow steering rules (e.g. ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2, ...).

As soon as I set up these rules though, packets matching them simply stop hitting my application. All other packets go through, and removing the rules also makes the matching packets go through again. I'm pretty sure my application is polling all the queues, but I tried changing the rules so there was a rule for every single destination RX queue (0-16), and that doesn't work either.

If it helps, my code is based on the l2fwd sample application, and is here: https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e

Also, I added the following to my /etc/init.d: options mlx4_core log_num_mgm_entry_size=-1, and restarted the driver before any of these tests.

Any ideas what might be causing my packets to drop? In case this is a Mellanox issue, should I be talking to their customer support?

Best,
Raghav Sethi
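For reference, the per-core RX-queue arrangement described above, in the style of the l2fwd sample, looks roughly like the sketch below. This is a minimal illustration, not the poster's gist: the port number, function names, descriptor counts and mempool handling are assumptions.

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_RX_QUEUES 4
#define BURST_SIZE   32

static struct rte_mempool *pktmbuf_pool; /* created elsewhere with rte_pktmbuf_pool_create() */

/* Configure the port with 4 RX queues and 1 TX queue, then start it. */
static int
setup_port(uint8_t port)
{
	struct rte_eth_conf port_conf;
	uint16_t q;
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));
	ret = rte_eth_dev_configure(port, NB_RX_QUEUES, 1, &port_conf);
	if (ret < 0)
		return ret;

	for (q = 0; q < NB_RX_QUEUES; q++) {
		ret = rte_eth_rx_queue_setup(port, q, 128,
				rte_eth_dev_socket_id(port), NULL, pktmbuf_pool);
		if (ret < 0)
			return ret;
	}
	ret = rte_eth_tx_queue_setup(port, 0, 512,
			rte_eth_dev_socket_id(port), NULL);
	if (ret < 0)
		return ret;
	return rte_eth_dev_start(port);
}

/* Each worker lcore polls exactly one RX queue of port 0
 * (launched with rte_eal_remote_launch(), queue id passed as arg). */
static int
lcore_rx_loop(void *arg)
{
	uint16_t queue_id = (uint16_t)(uintptr_t)arg;
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb, i;

	for (;;) {
		nb = rte_eth_rx_burst(0, queue_id, bufs, BURST_SIZE);
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(bufs[i]); /* a real app would process or forward here */
	}
	return 0;
}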
[dpdk-dev] Mellanox Flow Steering
Currently, the DPDK PMD and the NIC kernel driver cannot drive the same NIC device simultaneously. When you use ethtool to set up flow director filters, the rules are written to the NIC via the ethtool support in the kernel driver. But when the DPDK PMD is loaded to drive the same device, the rules previously written through ethtool/kernel_driver become invalid, so you may have to use the DPDK APIs to rewrite your rules to the NIC.

The bifurcated driver is designed to provide a solution for scenarios where the kernel driver and DPDK coexist, but it has security concerns, so the netdev maintainers rejected it.

This should not be a Mellanox hardware problem; if you try it on an Intel NIC, the result is the same.

> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 1:10 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] Mellanox Flow Steering
>
> [...]
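Rewriting the rules "with the DPDK APIs", as suggested above, would look roughly like the following on a PMD that implements the flow director filter type (e.g. Intel ixgbe; as noted later in this thread, the mlx4 PMD does not support it). This is only a hedged sketch against the rte_eth_dev_filter_ctrl() interface available around DPDK 2.0; the function name, port, addresses, UDP port and queue id are made-up illustrative values.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

/* Sketch: steer one IPv4/UDP flow to a given RX queue via flow director.
 * Values below are placeholders, not taken from the thread. */
static int
steer_udp_flow_to_queue(uint8_t port, uint16_t rx_queue)
{
	struct rte_eth_fdir_filter filter;

	/* Bail out on PMDs (such as mlx4 at the time) without fdir support. */
	if (rte_eth_dev_filter_supported(port, RTE_ETH_FILTER_FDIR) < 0)
		return -1;

	memset(&filter, 0, sizeof(filter));
	filter.soft_id = rx_queue;
	filter.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
	filter.input.flow.udp4_flow.ip.dst_ip = rte_cpu_to_be_32(IPv4(10, 0, 0, 1));
	filter.input.flow.udp4_flow.dst_port = rte_cpu_to_be_16(5000);
	filter.action.rx_queue = rx_queue;
	filter.action.behavior = RTE_ETH_FDIR_ACCEPT;
	filter.action.report_status = RTE_ETH_FDIR_NO_REPORT_STATUS;

	return rte_eth_dev_filter_ctrl(port, RTE_ETH_FILTER_FDIR,
				       RTE_ETH_FILTER_ADD, &filter);
}

Unlike the ethtool rules, filters added this way are programmed by the PMD itself, so they apply to the queues the DPDK application actually polls.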
[dpdk-dev] [RFC PATCH 2/4] Add the new common device header and C file.
On Fri, Apr 10, 2015 at 07:25:32PM +0000, Wiles, Keith wrote:
>
>
> On 4/9/15, 6:53 AM, "Neil Horman" wrote:
>
> >On Wed, Apr 08, 2015 at 03:58:38PM -0500, Keith Wiles wrote:
> >> Move a number of device specific define, structures and functions
> >> into a generic device base set of files for all device not just
> >> Ethernet.
> >>
> >> Signed-off-by: Keith Wiles
> >> ---
> >>  lib/librte_eal/common/eal_common_device.c         | 185 +++
> >>  lib/librte_eal/common/include/rte_common_device.h | 617 ++
> >>  2 files changed, 802 insertions(+)
> >>  create mode 100644 lib/librte_eal/common/eal_common_device.c
> >>  create mode 100644 lib/librte_eal/common/include/rte_common_device.h
> >>
> >> diff --git a/lib/librte_eal/common/eal_common_device.c b/lib/librte_eal/common/eal_common_device.c
> >> new file mode 100644
> >> index 000..a9ef925
> >> --- /dev/null
> >> +++ b/lib/librte_eal/common/eal_common_device.c
> >> @@ -0,0 +1,185 @@
> >> +/*-
> >> + *   BSD LICENSE
> >> + *
> >> + *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> >> + *   Copyright(c) 2014 6WIND S.A.
> >> + *   All rights reserved.
> >> + *
> >> + *   Redistribution and use in source and binary forms, with or without
> >> + *   modification, are permitted provided that the following conditions
> >> + *   are met:
> >> + *
> >> + *     * Redistributions of source code must retain the above copyright
> >> + *       notice, this list of conditions and the following disclaimer.
> >> + *     * Redistributions in binary form must reproduce the above copyright
> >> + *       notice, this list of conditions and the following disclaimer in
> >> + *       the documentation and/or other materials provided with the
> >> + *       distribution.
> >> + *     * Neither the name of Intel Corporation nor the names of its
> >> + *       contributors may be used to endorse or promote products derived
> >> + *       from this software without specific prior written permission.
> >> + *
> >> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> >> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> >> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> >> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> >> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> >> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> >> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> >> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> >> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> >> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> >> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> >> + */
> >> +
> >> +#include
> >> +#include
> >> +#include
> >> +#include
> >> +
> >> +#include
> >> +#include
> >> +#include
> >> +#include
> >> +#include
> >> +#include
> >> +#include
> >> +
> >> +#include "rte_common_device.h"
> >> +
> >> +void *
> >> +rte_dev_add_callback(struct rte_dev_rxtx_callback **cbp,
> >> +	void *fn, void *user_param)
> >> +{
> >> +	struct rte_dev_rxtx_callback *cb;
> >> +
> >> +	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
> >> +
> >> +	if (cb == NULL) {
> >> +		rte_errno = ENOMEM;
> >> +		return NULL;
> >> +	}
> >> +
> >> +	cb->fn.vp = fn;
> >> +	cb->param = user_param;
> >> +	cb->next = *cbp;
> >> +	*cbp = cb;
> >> +	return cb;
> >> +}
> >> +
> >> +int
> >> +rte_dev_remove_callback(struct rte_dev_rxtx_callback **cbp,
> >> +	struct rte_dev_rxtx_callback *user_cb)
> >> +{
> >> +	struct rte_dev_rxtx_callback *cb = *cbp;
> >> +	struct rte_dev_rxtx_callback *prev_cb;
> >> +
> >> +	/* Reset head pointer and remove user cb if first in the list. */
> >> +	if (cb == user_cb) {
> >> +		*cbp = user_cb->next;
> >> +		return 0;
> >> +	}
> >> +
> >> +	/* Remove the user cb from the callback list. */
> >> +	do {
> >> +		prev_cb = cb;
> >> +		cb = cb->next;
> >> +
> >> +		if (cb == user_cb) {
> >> +			prev_cb->next = user_cb->next;
> >> +			return 0;
> >> +		}
> >> +	} while (cb != NULL);
> >> +
> >> +	/* Callback wasn't found. */
> >> +	return (-EINVAL);
> >> +}
> >> +
> >> +int
> >> +rte_dev_callback_register(struct rte_dev_cb_list *cb_list,
> >> +	rte_spinlock_t *lock,
> >> +	enum rte_dev_event_type event,
> >> +	rte_dev_cb_fn cb_fn, void *cb_arg)
> >> +{
> >> +	struct rte_dev_callback *cb;
> >> +
> >> +	rte_spinlock_lock(lock);
> >> +
> >> +	TAILQ_FOREACH(cb, cb_list, next) {
> >> +		if (cb->cb_fn == cb_fn &&
> >> +			cb->cb_arg == cb_arg &&
> >> +			cb->event == event) {
> >> +
[dpdk-dev] Mellanox Flow Steering
Hi Danny,

Thanks, that's helpful. However, Mellanox cards don't support Intel Flow Director, so how would one go about installing these rules in the NIC? The only technique the Mellanox User Manual (http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf) lists for using Flow Steering is the ethtool-based method.

Additionally, the mlx4_core driver is used both by the DPDK PMD and by the regular kernel stack (unlike the igb_uio driver, which has to be loaded specifically to use the PMD), and it seems weird that only the packets matched by the rules fail to reach the DPDK application. That indicates to me that the NIC is acting on the rules somehow even though a DPDK application is running.

Best,
Raghav

On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny wrote:

> Currently, the DPDK PMD and the NIC kernel driver cannot drive the same
> NIC device simultaneously. [...]
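A generic way to check whether the steered packets are reaching any queue the DPDK application owns, rather than vanishing before the PMD sees them, is to dump the per-queue RX counters. A minimal sketch, assuming port 0 and default queue-to-counter mapping:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print total and per-queue RX packet counts for one port. */
static void
dump_rx_queue_stats(uint8_t port, uint16_t nb_queues)
{
	struct rte_eth_stats stats;
	uint16_t q;

	rte_eth_stats_get(port, &stats);
	printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64 "\n",
	       port, stats.ipackets, stats.imissed);
	for (q = 0; q < nb_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
		printf("  rxq %u: %" PRIu64 " packets\n", q, stats.q_ipackets[q]);
}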
[dpdk-dev] Mellanox Flow Steering
Hi Raghav,

You are right in your observations: the Mellanox PMD and mlx4_en (the kernel driver) coexist. When a DPDK application is running, all traffic is redirected to the DPDK application. When the DPDK application exits, traffic is received by the mlx4_en driver again.

The ethtool configuration you did influences only the mlx4_en driver; it does not influence the Mellanox PMD queues.

The Mellanox PMD doesn't support Flow Director, as you mention, and we are working to add it. Currently the only way to spread traffic between different PMD queues is RSS.

Best Regards,
Olga

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Raghav Sethi
Sent: Sunday, April 12, 2015 7:18 PM
To: Zhou, Danny; dev at dpdk.org
Subject: Re: [dpdk-dev] Mellanox Flow Steering

Hi Danny,

Thanks, that's helpful. However, Mellanox cards don't support Intel Flow Director, so how would one go about installing these rules in the NIC? [...]
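Spreading traffic across the mlx4 PMD queues with RSS, as Olga suggests, amounts to enabling a hashing mq_mode in the port configuration. The sketch below is illustrative only (queue count and hash fields are assumptions); note that it would not spread the original poster's traffic as described, since his packets differ only in destination MAC, which the RSS hash does not cover; the sender would need to vary IP addresses or UDP ports instead.

#include <string.h>
#include <rte_ethdev.h>

#define NB_RX_QUEUES 4

/* Sketch: enable RSS so received IP/UDP headers are hashed across the
 * 4 RX queues instead of all landing on a single queue. */
static int
configure_rss(uint8_t port)
{
	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
	port_conf.rx_adv_conf.rss_conf.rss_key = NULL;  /* use the PMD's default key */
	port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP | ETH_RSS_UDP;

	/* 4 RX queues, 1 TX queue; each queue still needs
	 * rte_eth_rx_queue_setup() before rte_eth_dev_start(). */
	return rte_eth_dev_configure(port, NB_RX_QUEUES, 1, &port_conf);
}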
[dpdk-dev] Mellanox Flow Steering
Thanks for the clarification, Olga. I assume that when the PMD is upgraded to support flow director, the rules should be set only by the PMD while the DPDK application is running, right? Also, when the DPDK application exits, the rules previously written by the PMD become invalid, and the user then needs to reset the rules with ethtool via the mlx4_en driver.

I think it does not make sense to allow two drivers, one in the kernel and another in user space, to control the same NIC device simultaneously; otherwise a control-plane synchronization mechanism is needed between the two drivers. A single master driver solely responsible for NIC control is what is expected.

> -----Original Message-----
> From: Olga Shern [mailto:olgas at mellanox.com]
> Sent: Monday, April 13, 2015 4:39 AM
> To: Raghav Sethi; Zhou, Danny; dev at dpdk.org
> Subject: RE: [dpdk-dev] Mellanox Flow Steering
>
> [...]