> >> to update key only as the function name says. I.e. keep
> >> rss_hf as is. That could be the reason to get first.
True,
> > I think that was the initial purpose of the command, but patch
> > 8205e241b2b0 added setting the hash type as mandatory. There is
> > no other command to configure the hash type from testpmd AFAICT.
Also for the same initial purpose, some NICs have a hash key per
protocol. By default they use the same key for all of them, but each
can be configured individually, e.g. key0 for all protocols except
IPv4, which uses key1.
> > Also, even without 8205e241b2b0, the function was broken because the
> > key length was overridden.
>
> I see, many thanks for explanations.
--
Nélio Laranjeiro
6WIND
+Shahaf,
Hi Maxime,
On Mon, Sep 13, 2021 at 11:41:04AM +0200, Maxime Coquelin wrote:
> Hi Nélio,
>
> On 9/10/21 4:16 PM, Nélio Laranjeiro wrote:
> > On Fri, Sep 10, 2021 at 01:06:53PM +0300, Andrew Rybchenko wrote:
> >> On 9/10/21 12:57 PM, Maxime Coquelin wrote:
> &g
ntentional, this commit won't apply as is on stable branch.
Thanks for the reminder,
--
Nélio Laranjeiro
6WIND
net/mlx5/mlx5_rxtx.c: In function ‘mlx5_tx_burst’:
> drivers/net/mlx5/mlx5_rxtx.c:523:10: error:
> right shift count >= width of type [-Werror=shift-count-overflow]
> addr >> 32,
> ^~
>
> Please Ferruh, remove the series from next-net.
Hi Thomas,
Wait, I'll submit a fix in a few minutes.
Regards,
--
Nélio Laranjeiro
6WIND
> --
> 1.8.3.1
Hi Shahaf,
This function embeds some HAVE_ETHTOOL_LINK_MODE_* macros to handle the
different Linux kernel versions where those link speeds were added.
With this patch, they become useless.
It would be good, for configuration and compilation time, to remove
them from this file and from the Makefile.
Regards,
--
Nélio Laranjeiro
6WIND
_dev_link_status_handler,
> dev);
> - } else
> - ret = 1;
> + }
> + } else {
> + ret = 1;
> }
> return ret;
> }
> @@ -1178,6 +1173,7 @@ struct priv *
>
> priv_lock(priv);
> assert(priv->pending_alarm == 1);
> + priv->pending_alarm = 0;
> ret = priv_dev_link_status_handler(priv, dev);
> priv_unlock(priv);
> if (ret)
> --
> 1.8.3.1
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
> if (sc & (ETHTOOL_LINK_MODE_10baseKR4_Full_BIT |
> ETHTOOL_LINK_MODE_10baseSR4_Full_BIT |
> ETHTOOL_LINK_MODE_10baseCR4_Full_BIT |
> ETHTOOL_LINK_MODE_10baseLR4_ER4_Full_BIT))
> priv->link_speed_capa |= ETH_LINK_SPEED_100G;
> -#endif
> dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
> ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
> dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> --
> 1.8.3.1
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
On Wed, Feb 01, 2017 at 06:31:17PM +, Ferruh Yigit wrote:
> On 1/31/2017 3:42 PM, Nélio Laranjeiro wrote:
> > On Tue, Jan 31, 2017 at 01:45:29PM +0200, Shahaf Shuler wrote:
> >> Trying to query the link status through new kernel ioctl API
> >> ETHTOOL_GLINKSETTING
")
> >
> > Signed-off-by: Yongseok Koh
> > Signed-off-by: Nelio Laranjeiro
> > ---
> <...>
Regards,
--
Nélio Laranjeiro
6WIND
- mpw.total_len += length;
> elts_head = elts_head_next;
> #ifdef MLX5_PMD_SOFT_COUNTERS
> /* Increment sent bytes counter. */
> --
> 2.11.0
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
r release 7.3
> +* SUSE Enterprise Linux 12
> +* Wind River Linux 8
> +* Ubuntu 16.04
> +* Ubuntu 16.10
> +
> --
> 2.1.0
Hi Yulong, John,
I would like to propose a modification to those sections to improve
readability and avoid the confusion users can encounter while reading
them. They tend to think that all the combinations written here have
been tested; the main idea is to avoid such a situation.
You can find the proposal in series [1] and especially in patch [2].
Regards,
[1] http://dpdk.org/ml/archives/dev/2017-February/057154.html
[2] http://dpdk.org/dev/patchwork/patch/20290/
--
Nélio Laranjeiro
6WIND
stantin
Hi Konstantin,
The benefit is to provide a documented byte ordering for the data
types software manipulates, to determine when network-to-CPU (or
CPU-to-network) conversion must be performed.
Regards,
--
Nélio Laranjeiro
6WIND
the types, though.
I agree; at least the APIs should use this, PMDs can do as they want.
> One thing I'm wondering though, is if we might want to take this
> further. For little endian environments, we could define the big endian
> types as structs using typedefs, and similarly the le types on be
> platforms, so that assigning from the non-native type to the native one
> without a transformation function would cause a compiler error.
>
> /Bruce
If I understand you correctly, this will break hton-like functions,
which expect a uint*_t, not a structure.
--
Nélio Laranjeiro
6WIND
red, with those types
it takes less than a second.
Regards,
--
Nélio Laranjeiro
6WIND
?
--
Nélio Laranjeiro
6WIND
} else {
>
> Not really so important, but as a note, ACTION_TYPE_VOID hits here. It
> pass from validation, but gives error in creation.
>
> > + rte_flow_error_set(error, ENOTSUP,
> > + RTE_FLOW_ERROR_TYPE_ACTION,
> > + actions,
> > + "no possible action found");
> > + goto exit;
> > + }
>
> <...>
Hi Ferruh,
I will send (very soon) a v4 to handle this situation.
Regards,
--
Nélio Laranjeiro
6WIND
s in order to make the code simpler and to reduce a few calculations.
>
> Signed-off-by: Yongseok Koh
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
}
> + }
This loop could be smaller: blk_n can only be equal to 0 in the first
iteration, otherwise it is >= 1.
The first if statement can be merged with the second one:
if (likely(m != NULL)) {
if (likely(blk_n && m->pool == free[0]->pool)) {
...
} else {
...
}
Thanks,
--
Nélio Laranjeiro
6WIND
On Fri, Jun 30, 2017 at 02:30:47PM +0200, Nélio Laranjeiro wrote:
> On Wed, Jun 28, 2017 at 04:04:00PM -0700, Yongseok Koh wrote:
> > When processing Tx completion, it is more efficient to free buffers in bulk
> > using rte_mempool_put_bulk() if buffers are from a same mempool.
>
ucture, it was
useful in the dataplane to retrieve the lkey, but with this new
implementation it becomes useless.
This also helps to keep the memory footprint of this array small. The
control plane can spend some cycles to retrieve the start/end addresses
of the mempool to compare them.
Thanks,
--
Nélio Laranjeiro
6WIND
On Wed, Jun 28, 2017 at 04:04:02PM -0700, Yongseok Koh wrote:
> The callbacks are global to a device but the seletion is made every queue
> configuration, which is redundant.
>
> Signed-off-by: Yongseok Koh
>[...]
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
s in order to make the code simpler and to reduce a few calculations.
>
> Signed-off-by: Yongseok Koh
> ---
Already acked in v1; please keep the ack on the unchanged commits.
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
On Fri, Jun 30, 2017 at 12:23:32PM -0700, Yongseok Koh wrote:
> The callbacks are global to a device but the seletion is made every queue
> configuration, which is redundant.
>
> Signed-off-by: Yongseok Koh
Same here, already acked in v1.
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
On Fri, Jun 30, 2017 at 12:23:30PM -0700, Yongseok Koh wrote:
> When processing Tx completion, it is more efficient to free buffers in bulk
> using rte_mempool_put_bulk() if buffers are from a same mempool.
>
> Signed-off-by: Yongseok Koh
> ---
>[...]
Acked-by: Nelio Lara
r_t)mr->addr &&
> + end <= (uintptr_t)mr->addr + mr->length)
> return;
> }
> txq_mp2mr_reg(&txq_ctrl->txq, mp, i);
if (start >= (uintptr_t)mr->addr &&
end <= (uintptr_t)mr->addr + mr->length)
Is it expected to have a memory region bigger than the memory pool
space? I mean, I was expecting to see strict equality on the addresses.
Regards,
--
Nélio Laranjeiro
6WIND
On Mon, Jul 03, 2017 at 08:54:43PM +, Yongseok Koh wrote:
>
> > On Jul 3, 2017, at 7:06 AM, Nélio Laranjeiro
> > wrote:
> >
> > On Fri, Jun 30, 2017 at 12:23:31PM -0700, Yongseok Koh wrote:
> >> When searching LKEY, if search key is mempool point
(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
> + 0,
> + (uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
> + (uint8_t)(PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED),
> + 0);
> + const __m128i cv_mask =
> + _mm_set_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
> + PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED,
> + PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
> + PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED,
> + PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
> + PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED,
> + PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
> + PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED);
Same here.
> + const __m128i mbuf_init =
> + _mm_loadl_epi64((__m128i *)&rxq->mbuf_initializer);
> + __m128i rearm0, rearm1, rearm2, rearm3;
>[...]
> +/**
> + * Check a TX queue can support vectorized TX.
> + *
> + * @param txq
> + * Pointer to TX queue.
> + *
> + * @return
> + * 1 if supported, negative errno value if not.
> + */
> +int __attribute__((cold))
> +txq_check_vec_tx_support(struct txq *txq)
> +{
> + /* Currently unused, but for possible future use. */
This comment is not useful: in the PMD code style, the function prefix
reflects its first parameter, so it is expected from the function name
that txq is the first parameter even if it is not used (for now).
> + (void)txq;
> + if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
> + return -ENOTSUP;
> + return 1;
> +}
> +
>[...]
Thanks,
--
Nélio Laranjeiro
6WIND
On Tue, Jul 04, 2017 at 05:38:44PM -0700, Yongseok Koh wrote:
> On Tue, Jul 04, 2017 at 10:58:52AM +0200, Nélio Laranjeiro wrote:
> > Yongseok, some comments in this huge and great work,
> >
> > On Fri, Jun 30, 2017 at 12:23:33PM -0700, Yongseok Koh wrote:
> > > To
you could use static_assert, which verifies at compilation time that
the constants are correct.
(I am not asking to change it now; we can make a campaign to change
all these kinds of asserts at once.)
Except for the pragma, which must remain in the Verbs header:
Acked-by: Nelio Laranjeiro
Great work.
Thanks,
--
Nélio Laranjeiro
6WIND
On Wed, Jul 05, 2017 at 11:12:27AM -0700, Yongseok Koh wrote:
> The callbacks are global to a device but the seletion is made every queue
> configuration, which is redundant.
>
> Signed-off-by: Yongseok Koh
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
cqe_cnt].pkt_info);
>
> /* Fix endianness. */
> zip->cqe_cnt = ntohl(cqe->byte_cnt);
> --
> 2.11.0
>
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
tly re-working this part of the code to improve it using
reference counters instead. The cache will remain for performance
purposes. This will fix the issues you are pointing out.
Are you facing some kind of issue? If so, please share it; it can help
to improve things.
Thanks,
--
Nélio Laranjeiro
6WIND
ile were on the subject, what is the plan of detaching mlx5_pmd
> from its MLNX_OFED dependency? Mellanox has been doing a good job
> upstreaming the needed features (rdma-core). CC'ing Leon (who is
> co-maintaining the user-space rdma tree.
This is also in progress on the PMD side; it should be part of the
next DPDK release.
--
Nélio Laranjeiro
6WIND
__u32 link_mode_data[3 *
> ETHTOOL_LINK_MODE_MASK_MAX_KERNEL_NU32];
Checkpatch complains about the line size here.
Thanks,
--
Nélio Laranjeiro
6WIND
breaks
the standard.
> + struct ethtool_link_settings edata;
> + uint32_t link_mode_data[3 *
> + ETHTOOL_LINK_MODE_MASK_MAX_KERNEL_NU32];
> + } ecmd;
Please add this __extension__ and it will be good ;)
Thanks,
--
Nélio Laranjeiro
6WIND
19888 ("net/mlx5: fix support for newer link speeds")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Gowrishankar Muthukrishnan
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
ix target an even older commit?
>
> cf37ca9563d5 ("mlx5: support MTU configuration")
Right,
--
Nélio Laranjeiro
6WIND
net/mlx5: cleanup Rx ring in free functions
>
> http://dpdk.org/dev/patchwork/patch/27303/
Deferred to the next release,
Thanks,
--
Nélio Laranjeiro
6WIND
> 2.11.0
>
Hi Yongseok,
I don't understand why you need the cast for the returned value of
mlx5_dev_start() as it already returns an int and your final variable is
also an int.
Thanks,
--
Nélio Laranjeiro
6WIND
On Fri, Mar 17, 2017 at 10:14:56AM -0700, Yongseok Koh wrote:
> Hi Nelio,
>
> On Fri, Mar 17, 2017 at 02:11:43AM -0700, Nélio Laranjeiro wrote:
> > On Thu, Mar 16, 2017 at 03:40:56PM -0700, Yongseok Koh wrote:
> > > Currently mlx5_dev_rss_reta_update() just up
t; + /* VLAN filters. */
> + struct {
> + unsigned int enabled:1; /* If enabled. */
> + unsigned int id:12; /* VLAN ID (0-4095). */
> + } vlan_filter[MLX4_MAX_VLAN_IDS]; /* VLAN filters table. */
> + /* Device properties. */
> + uint16_t mtu; /* Configured MTU. */
> + uint8_t port; /* Physical port number. */
> + unsigned int started:1; /* Device started, flows enabled. */
> + unsigned int promisc:1; /* Device in promiscuous mode. */
> + unsigned int allmulti:1; /* Device receives all multicast packets. */
> + unsigned int hw_qpg:1; /* QP groups are supported. */
> + unsigned int hw_tss:1; /* TSS is supported. */
> + unsigned int hw_rss:1; /* RSS is supported. */
> + unsigned int hw_csum:1; /* Checksum offload is supported. */
> + unsigned int hw_csum_l2tun:1; /* Same for L2 tunnels. */
> + unsigned int rss:1; /* RSS is enabled. */
> + unsigned int vf:1; /* This is a VF device. */
> + unsigned int pending_alarm:1; /* An alarm is pending. */
> +#ifdef INLINE_RECV
> + unsigned int inl_recv_size; /* Inline recv size */
> +#endif
> + unsigned int max_rss_tbl_sz; /* Maximum number of RSS queues. */
> + /* RX/TX queues. */
> + struct rxq rxq_parent; /* Parent queue when RSS is enabled. */
> + unsigned int rxqs_n; /* RX queues array size. */
> + unsigned int txqs_n; /* TX queues array size. */
> + struct rxq *(*rxqs)[]; /* RX queues. */
> + struct txq *(*txqs)[]; /* TX queues. */
> + struct rte_intr_handle intr_handle; /* Interrupt handler. */
> + rte_spinlock_t lock; /* Lock for control functions. */
> +};
> +
> +void priv_lock(struct priv *priv);
> +void priv_unlock(struct priv *priv);
> +
> #endif /* RTE_PMD_MLX4_H_ */
> --
> 1.8.3.1
>
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
flow->ibv_flow = NULL;
> + DEBUG("Flow %p removed", (void *)flow);
> + }
> +}
> +
> +/**
> + * Add all flows.
> + *
> + * @param priv
> +
/mlx5/mlx5_rss.c| 18 +-
> lib/librte_ether/rte_ethdev.c | 8 +---
> 4 files changed, 15 insertions(+), 23 deletions(-)
>
> --
> 2.11.0
>
Acked-by: Nelio Laranjeiro
For the series, thanks,
--
Nélio Laranjeiro
6WIND
(void *)dev, idx);
> + priv_unlock(priv);
> + return -ENOMEM;
> + }
> + }
> } else {
> txq_ctrl =
> rte_calloc_socket("TXQ", 1,
> --
> 2.11.0
At the same time, can you also fix the indentation, please?
Thanks,
--
Nélio Laranjeiro
6WIND
+ priv_unlock(priv);
> + return -ENOMEM;
> + }
> + }
> } else {
> txq_ctrl =
> rte_calloc_socket("TXQ", 1,
> --
> 2.11.0
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
if (is_tunneled && txq->tunnel_en) {
> tso_header_sz += buf->outer_l2_len +
>buf->outer_l3_len;
> --
> 1.8.3.1
>
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
Is this behavior consistent with your experience using the device
> and/or API?
No, I did not face such an issue; the behavior was consistent, but I
never tried to generate so many rules in the past.
Thanks,
--
Nélio Laranjeiro
6WIND
Hi Allain,
Please see below
On Tue, Mar 28, 2017 at 04:16:08PM +, Legacy, Allain wrote:
> > -Original Message-
> > From: Nélio Laranjeiro [mailto:nelio.laranje...@6wind.com]
> > Sent: Tuesday, March 28, 2017 11:36 AM
> <..>
> > If I understand corr
Hi Allain,
On Wed, Mar 29, 2017 at 12:29:59PM +, Legacy, Allain wrote:
> > -Original Message-
> > From: Nélio Laranjeiro [mailto:nelio.laranje...@6wind.com]
> > Sent: Wednesday, March 29, 2017 5:45 AM
>
> <...>
> > > Almost... the only differe
On Thu, Mar 30, 2017 at 04:53:47PM +, Legacy, Allain wrote:
> > -Original Message-
> > From: Nélio Laranjeiro [mailto:nelio.laranje...@6wind.com]
> > Sent: Thursday, March 30, 2017 9:03 AM
> <...>
> > I found an issue on the id retrieval while recei
tio/virtio_rxtx_simple.h | 6 +-
> > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 5 +-
> > lib/librte_mbuf/rte_mbuf.c | 4 +
> > lib/librte_mbuf/rte_mbuf.h | 123
> > -
> > 19 files changed, 130 insertions(+), 102 deletions(-)
> >
Tested-by: Nelio Laranjeiro
with mlx5 ConnectX-4, two ports, single-thread IO forwarding.
Olivier's patches: performance increase of +0.4 Mpps.
Olivier's + Konstantin's patches: performance increase of +0.8 Mpps.
Regards,
--
Nélio Laranjeiro
6WIND
On Fri, Mar 31, 2017 at 01:16:51PM +, Legacy, Allain wrote:
> > -Original Message-
> > From: Nélio Laranjeiro [mailto:nelio.laranje...@6wind.com]
> > Sent: Friday, March 31, 2017 4:35 AM
> <...>
> > + Olga Shern,
> >
> > Allain,
> >
goto end;
> - priv_mac_addr_add(priv, index,
> + }
> + re = priv_mac_addr_add(priv, index,
> (const uint8_t (*)[ETHER_ADDR_LEN])
> mac_addr->addr_bytes);
> end:
> priv_unlock(priv);
> + return re;
> }
>[...]
Same remarks here,
Thanks,
--
Nélio Laranjeiro
6WIND
meaning of this
> field. My idea is:
>
> - the timestamp is in nanosecond
> - the reference is always the same for a given path: if the timestamp is
> set in a PMD, all the packets for this PMD will have the same
> reference, but for 2 different PMDs (or a sw lib), the reference
> would not be the same.
>
> I think it's enough for many use cases.
> We can later add helpers to compare timestamps with different
> references.
>
> Regards,
> Olivier
Regards,
--
Nélio Laranjeiro
6WIND
t;hdr_type_etc &
> + if (ntohs(cqe->hdr_type_etc) &
> MLX5_CQE_VLAN_STRIPPED) {
> pkt->ol_flags |= PKT_RX_VLAN_PKT |
> PKT_RX_VLAN_STRIPPED;
> --
> 1.8.3.1
>
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
ved", (void *)flow);
> + }
> +}
> +
> +/**
> + * Add all flows.
> + *
> + * @param priv
> + * Pointer to private structure.
> + *
> + * @return
> + * 0
> + struct {
> + unsigned int enabled:1; /* If enabled. */
> + unsigned int id:12; /* VLAN ID (0-4095). */
> + } vlan_filter[MLX4_MAX_VLAN_IDS]; /* VLAN filters table. */
> + /* Device properties. */
> + uint16_t mtu; /* Configured MTU. */
&
On Wed, Feb 22, 2017 at 09:37:42AM +0100, Nélio Laranjeiro wrote:
> On Tue, Feb 21, 2017 at 02:07:03PM +, Vasily Philipov wrote:
> > Adding support for the next items: eth, vlan, ipv4, udp, tcp and for the
> > next actions: queue, drop
> >
> > Sig
t cqe_n:4; /* Number of CQ elements (in log2). */
> uint16_t wqe_n:4; /* Number of of WQ elements (in log2). */
> uint16_t max_inline; /* Multiple of RTE_CACHE_LINE_SIZE to inline. */
> + uint16_t inline_en:1; /* When set inline is enabled. */
> + uint16_t tso_en:1; /* When set hardware TSO is enabled. */
> uint32_t qp_num_8s; /* QP number shifted by 8. */
> volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
> volatile void *wqes; /* Work queue (use volatile to write into). */
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 949035b..995b763 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -342,6 +342,19 @@
>RTE_CACHE_LINE_SIZE);
> attr.init.cap.max_inline_data =
> tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
> + tmpl.txq.inline_en = 1;
> + }
> + if (priv->tso) {
> + uint16_t max_tso_inline = ((MLX5_MAX_TSO_HEADER +
> +(RTE_CACHE_LINE_SIZE - 1)) /
> + RTE_CACHE_LINE_SIZE);
> +
> + attr.init.max_tso_header =
> + max_tso_inline * RTE_CACHE_LINE_SIZE;
> + attr.init.comp_mask |= IBV_EXP_QP_INIT_ATTR_MAX_TSO_HEADER;
> + tmpl.txq.max_inline = RTE_MAX(tmpl.txq.max_inline,
> + max_tso_inline);
> + tmpl.txq.tso_en = 1;
> }
> tmpl.qp = ibv_exp_create_qp(priv->ctx, &attr.init);
> if (tmpl.qp == NULL) {
> --
> 1.8.3.1
Thanks,
--
Nélio Laranjeiro
6WIND
nneled packets are supported. */
> uint32_t qp_num_8s; /* QP number shifted by 8. */
> volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
> volatile void *wqes; /* Work queue (use volatile to write into). */
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 995b763..9d0c00f 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -356,6 +356,8 @@
> max_tso_inline);
> tmpl.txq.tso_en = 1;
> }
> + if (priv->tunnel_en)
> + tmpl.txq.tunnel_en = 1;
> tmpl.qp = ibv_exp_create_qp(priv->ctx, &attr.init);
> if (tmpl.qp == NULL) {
> ret = (errno ? errno : EINVAL);
> --
> 1.8.3.1
Thanks,
--
Nélio Laranjeiro
6WIND
}
> if (unlikely(tso_header_sz >
>MLX5_MAX_TSO_HEADER))
> break;
> --
> 1.8.3.1
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
_rxtx.h
> +++ b/drivers/net/mlx5/mlx5_rxtx.h
> @@ -323,6 +323,8 @@ uint16_t mlx5_tx_burst_mpw_inline(void *, struct rte_mbuf
> **, uint16_t);
> uint16_t mlx5_rx_burst(void *, struct rte_mbuf **, uint16_t);
> uint16_t removed_tx_burst(void *, struct rte_mbuf **, uint16_t);
> uint16_t removed_rx_burst(void *, struct rte_mbuf **, uint16_t);
> +int mlx5_rx_descriptor_status(struct rte_eth_dev *, uint16_t, uint16_t);
> +int mlx5_tx_descriptor_status(struct rte_eth_dev *, uint16_t, uint16_t);
>
> /* mlx5_mr.c */
>
> --
> 2.8.1
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
to inline. */
> + uint16_t inline_en:1; /* When set inline is enabled. */
> + uint16_t tso_en:1; /* When set hardware TSO is enabled. */
> uint32_t qp_num_8s; /* QP number shifted by 8. */
> volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
> volatile void *wqes; /* Work queue (use volatile to write into). */
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 949035b..995b763 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -342,6 +342,19 @@
>RTE_CACHE_LINE_SIZE);
> attr.init.cap.max_inline_data =
> tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
> + tmpl.txq.inline_en = 1;
> + }
> + if (priv->tso) {
> + uint16_t max_tso_inline = ((MLX5_MAX_TSO_HEADER +
> +(RTE_CACHE_LINE_SIZE - 1)) /
> + RTE_CACHE_LINE_SIZE);
> +
> + attr.init.max_tso_header =
> + max_tso_inline * RTE_CACHE_LINE_SIZE;
> + attr.init.comp_mask |= IBV_EXP_QP_INIT_ATTR_MAX_TSO_HEADER;
> + tmpl.txq.max_inline = RTE_MAX(tmpl.txq.max_inline,
> + max_tso_inline);
> + tmpl.txq.tso_en = 1;
> }
> tmpl.qp = ibv_exp_create_qp(priv->ctx, &attr.init);
> if (tmpl.qp == NULL) {
> --
> 1.8.3.1
>
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
ts are supported. */
> uint32_t qp_num_8s; /* QP number shifted by 8. */
> volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
> volatile void *wqes; /* Work queue (use volatile to write into). */
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 995b763..9d0c00f 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -356,6 +356,8 @@
> max_tso_inline);
> tmpl.txq.tso_en = 1;
> }
> + if (priv->tunnel_en)
> + tmpl.txq.tunnel_en = 1;
> tmpl.qp = ibv_exp_create_qp(priv->ctx, &attr.init);
> if (tmpl.qp == NULL) {
> ret = (errno ? errno : EINVAL);
> --
> 1.8.3.1
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
flags |= MLX5_ETH_WQE_L4_CSUM;
> }
> if (unlikely(tso_header_sz >
> MLX5_MAX_TSO_HEADER))
> --
> 1.8.3.1
>
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
).
max_inline_len is confusing and seems redundant with max_inline, but
they do not define the same thing. Please find a more appropriate name
to help future maintenance.
> uint32_t qp_num_8s; /* QP number shifted by 8. */
> volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
> volatile void *wqes; /* Work queue (use volatile to write into). */
> @@ -320,6 +324,7 @@ uint16_t mlx5_tx_burst_secondary_setup(void *, struct
> rte_mbuf **, uint16_t);
> uint16_t mlx5_tx_burst(void *, struct rte_mbuf **, uint16_t);
> uint16_t mlx5_tx_burst_mpw(void *, struct rte_mbuf **, uint16_t);
> uint16_t mlx5_tx_burst_mpw_inline(void *, struct rte_mbuf **, uint16_t);
> +uint16_t mlx5_tx_burst_empw(void *, struct rte_mbuf **, uint16_t);
> uint16_t mlx5_rx_burst(void *, struct rte_mbuf **, uint16_t);
> uint16_t removed_tx_burst(void *, struct rte_mbuf **, uint16_t);
> uint16_t removed_rx_burst(void *, struct rte_mbuf **, uint16_t);
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 949035bd4..ef8775382 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -276,6 +276,8 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl
> *txq_ctrl,
> (void)conf; /* Thresholds configuration (ignored). */
> assert(desc > MLX5_TX_COMP_THRESH);
> tmpl.txq.elts_n = log2above(desc);
> + if (priv->mps == MLX5_MPW_ENHANCED)
> + tmpl.txq.mpw_hdr_dseg = priv->mpw_hdr_dseg;
> /* MRs will be registered in mp2mr[] later. */
> attr.rd = (struct ibv_exp_res_domain_init_attr){
> .comp_mask = (IBV_EXP_RES_DOMAIN_THREAD_MODEL |
> @@ -340,8 +342,20 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl
> *txq_ctrl,
> tmpl.txq.max_inline =
> ((priv->txq_inline + (RTE_CACHE_LINE_SIZE - 1)) /
>RTE_CACHE_LINE_SIZE);
> - attr.init.cap.max_inline_data =
> - tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
> + if (priv->mps == MLX5_MPW_ENHANCED) {
> + tmpl.txq.max_inline_len = priv->txq_max_inline_len;
> + /* To minimize the size of data set, avoid requesting
> + * too large WQ
> + */
> + attr.init.cap.max_inline_data =
> + ((RTE_MIN(priv->txq_inline,
> + priv->txq_max_inline_len) +
> + (RTE_CACHE_LINE_SIZE - 1)) /
> + RTE_CACHE_LINE_SIZE) * RTE_CACHE_LINE_SIZE;
> + } else {
> + attr.init.cap.max_inline_data =
> + tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
> + }
> }
> tmpl.qp = ibv_exp_create_qp(priv->ctx, &attr.init);
> if (tmpl.qp == NULL) {
> --
> 2.11.0
Great job,
Thanks,
--
Nélio Laranjeiro
6WIND
-git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 9d0c00f6d..e774954ca 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -266,6 +266,7 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl
> *txq_ctrl,
> struct ibv_exp_cq_attr cq_attr;
> } attr;
> enum ibv_exp_query_intf_status status;
> + unsigned int cqe_n;
> int ret = 0;
>
> if (mlx5_getenv_int("MLX5_ENABLE_CQE_COMPRESSION")) {
> @@ -276,6 +277,8 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl
> *txq_ctrl,
> (void)conf; /* Thresholds configuration (ignored). */
> assert(desc > MLX5_TX_COMP_THRESH);
> tmpl.txq.elts_n = log2above(desc);
> + if (priv->mps == MLX5_MPW_ENHANCED)
> + tmpl.txq.mpw_hdr_dseg = priv->mpw_hdr_dseg;
> /* MRs will be registered in mp2mr[] later. */
> attr.rd = (struct ibv_exp_res_domain_init_attr){
> .comp_mask = (IBV_EXP_RES_DOMAIN_THREAD_MODEL |
> @@ -294,9 +297,12 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl
> *txq_ctrl,
> .comp_mask = IBV_EXP_CQ_INIT_ATTR_RES_DOMAIN,
> .res_domain = tmpl.rd,
> };
> + cqe_n = ((desc / MLX5_TX_COMP_THRESH) - 1) ?
> + ((desc / MLX5_TX_COMP_THRESH) - 1) : 1;
> + if (priv->mps == MLX5_MPW_ENHANCED)
> + cqe_n += MLX5_TX_COMP_THRESH_INLINE_DIV;
> tmpl.cq = ibv_exp_create_cq(priv->ctx,
> - (((desc / MLX5_TX_COMP_THRESH) - 1) ?
> - ((desc / MLX5_TX_COMP_THRESH) - 1) : 1),
> + cqe_n,
> NULL, NULL, 0, &attr.cq);
> if (tmpl.cq == NULL) {
> ret = ENOMEM;
> @@ -340,9 +346,23 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl
> *txq_ctrl,
> tmpl.txq.max_inline =
> ((priv->txq_inline + (RTE_CACHE_LINE_SIZE - 1)) /
>RTE_CACHE_LINE_SIZE);
> - attr.init.cap.max_inline_data =
> - tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
> tmpl.txq.inline_en = 1;
> + /* TSO and MPS can't be enabled concurrently. */
> + assert(!priv->tso || !priv->mps);
> + if (priv->mps == MLX5_MPW_ENHANCED) {
> + tmpl.txq.max_inline_len = priv->txq_max_inline_len;
> + /* To minimize the size of data set, avoid requesting
> + * too large WQ.
> + */
> + attr.init.cap.max_inline_data =
> + ((RTE_MIN(priv->txq_inline,
> + priv->txq_max_inline_len) +
> + (RTE_CACHE_LINE_SIZE - 1)) /
> + RTE_CACHE_LINE_SIZE) * RTE_CACHE_LINE_SIZE;
> + } else {
> + attr.init.cap.max_inline_data =
> + tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
> + }
> }
> if (priv->tso) {
> uint16_t max_tso_inline = ((MLX5_MAX_TSO_HEADER +
> --
> 2.11.0
Thanks,
--
Nélio Laranjeiro
6WIND
On Wed, Mar 15, 2017 at 11:39:07AM +0100, Nélio Laranjeiro wrote:
> Hi Yongseok,
>
> I did not see this v2; in the future please use "in-reply-to" in
> addition to the --thread option.
>
> Please see the comments below,
>
>[...]
> >
> > @@ -29
On Thu, Oct 13, 2016 at 02:35:05PM +, Oleg Kuporosov wrote:
>
> Hello DPDK Developers,
>
> Financial Services Industry which is pretty eager for several DPDK
> features especially low latency while high throughput. The major issue
> so far for increasing DPDK adoption there is requirement for
Hi all,
I am facing a compilation issue on Red Hat 6.5 with DPDK v16.11-rc2;
compilation fails with:
cc1: warnings being treated as errors
/root/dpdk/drivers/net/i40e/i40e_ethdev_vf.c: In function
‘i40evf_dev_interrupt_handler’:
/root/dpdk/drivers/net/i40e/i40e_ethdev_vf.c:1391: error: de
On Wed, Sep 14, 2016 at 11:43:35AM +0100, Ferruh Yigit wrote:
> Hi Nelio,
>
> On 9/7/2016 8:02 AM, Nelio Laranjeiro wrote:
> > To improve performance the NIC expects for large packets to have a pointer
> > to a cache aligned address, old inline code could break this assumption
> > which hurts perf
On Wed, Sep 14, 2016 at 01:53:47PM +0200, Nelio Laranjeiro wrote:
> - Flow director
> - Rx Capabilities
> - Inline
>
> Changes in V2:
>
> - Fix a compilation error.
>
> Adrien Mazarguil (1):
> net/mlx5: fix Rx VLAN offload capability report
>
> Nelio Laranjeiro (3):
> net/mlx5: force in
On Mon, Sep 19, 2016 at 05:14:26PM +0100, Bruce Richardson wrote:
> On Wed, Sep 14, 2016 at 02:18:02PM +0200, Nelio Laranjeiro wrote:
> > Rework Work Queue Element (aka WQE) structures to fit PMD needs.
> > A WQE is an aggregation of 16 bytes elements known as "data segments"
> > (aka dseg).
> >
>
On Mon, Sep 19, 2016 at 05:17:34PM +0100, Bruce Richardson wrote:
> On Wed, Sep 14, 2016 at 02:18:01PM +0200, Nelio Laranjeiro wrote:
> > - Rework structure elements to reduce their size.
> > - Removes a second useless loop in Tx burst function.
> >
> > This series should be applied on top of "n
Hi Bruce,
On Tue, Sep 27, 2016 at 03:11:10PM +0100, Bruce Richardson wrote:
> On Wed, Sep 14, 2016 at 10:16:05AM +0200, Nelio Laranjeiro wrote:
> > Signed-off-by: Nelio Laranjeiro
> > ---
> > drivers/net/mlx5/mlx5_rxq.c | 1 +
> > drivers/net/mlx5/mlx5_rxtx.c | 6 +-
> > drivers/net/mlx5/ml
On Tue, Sep 27, 2016 at 06:03:51PM +0100, Ferruh Yigit wrote:
> On 9/27/2016 3:53 PM, Nelio Laranjeiro wrote:
> > Signed-off-by: Nelio Laranjeiro
>
> <...>
>
> > @@ -1286,12 +1291,13 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf
> > **pkts, uint16_t pkts_n)
> > &(*rxq->cqes)[rxq-
on_drop);
> +#endif
> rte_flow->ibv_flow = ibv_exp_create_flow(rte_flow->qp,
> rte_flow->ibv_attr);
> if (!rte_flow->ibv_flow) {
>[...]
From what I see, just changing the value of MLX5_DROP_WQ_N when
HAVE_VERBS_IBV_EXP_FLOW_SPEC_ACTION_DROP is defined (in the same
source file) limits the patch to this point.
Am I missing something?
Regards,
--
Nélio Laranjeiro
6WIND
>> From: Nélio Laranjeiro [mailto:nelio.laranje...@6wind.com]
>> Sent: Monday, May 29, 2017 4:08 PM
>> To: Shachar Beiser
>> Cc: dev@dpdk.org; Adrien Mazarguil
>> Subject: Re: [PATCH] net/mlx5: implement drop action in hardware classifier
>>
>> On Sun,
mbuf *elt = (*elts)[elts_tail];
>
> assert(elt != NULL);
> - rte_pktmbuf_free(elt);
> + rte_pktmbuf_free_seg(elt);
> #ifndef NDEBUG
> /* Poisoning. */
> memset(&(*elts)[elts_tail],
> --
> 2.11.0
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
mlx5/mlx5_flow.c | 17 +
> 2 files changed, 22 insertions(+)
>
> --
> 1.8.3.1
For the series:
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
ions(+), 231 deletions(-)
>
> --
> 2.1.4
For the series:
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
;net/mlx5: use an RSS drop queue")
> >
> > Signed-off-by: Nelio Laranjeiro
> > Acked-by: Shahaf Shuler
>
> Applied to dpdk-next-net/master, thanks.
>
> This required solving merge conflict, although it looks simple, can you
> please confirm the final commit.
It is good for me.
Thanks Ferruh,
--
Nélio Laranjeiro
6WIND
; wqe->eseg = (rte_v128u32_t){
> 0,
> - cs_flags | (htons(buf->tso_segsz) << 16),
> + cs_flags | (htons(tso_segsz) << 16),
> 0,
> (ehdr << 16) | htons(tso_header_sz),
> };
> --
> 2.12.0
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
_flow = ibv_exp_create_flow(rte_flow->qp,
> rte_flow->ibv_attr);
> --
> 1.8.3.1
Acked-by: Nelio Laranjeiro
Ferruh, this patch fixes an issue in a patch only present in your
master-net branch, so the Fixes line SHA1 will be wrong.
Thanks,
--
Nélio Laranjeiro
6WIND
mlx5_flow_action {
> ++flow->ibv_attr->num_of_specs;
> flow->offset += sizeof(struct ibv_exp_flow_spec_action_drop);
> #endif
> + rte_flow->ibv_attr = flow->ibv_attr;
> + if (!priv->started)
> + return rte_flow;
> rte_flow->qp = priv->flow_drop_queue->qp;
> rte_flow->ibv_flow = ibv_exp_create_flow(rte_flow->qp,
>rte_flow->ibv_attr);
> --
> 1.8.3.1
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
On Mon, Jun 26, 2017 at 01:55:33PM +0100, Ferruh Yigit wrote:
> On 6/26/2017 1:28 PM, Nélio Laranjeiro wrote:
> > On Sun, Jun 25, 2017 at 07:55:01AM +, Shachar Beiser wrote:
> >> Missing room in flow allocation to store the drop specification.
> >> Changing flow wi
VXLAN and VNI value
to redirect it to another queue; it works perfectly for me on mlx5.
Are you facing some kind of issue?
Thanks,
[1] http://dpdk.org/ml/archives/dev/2016-November/050060.html
--
Nélio Laranjeiro
6WIND
priv_lock(priv);
> DEBUG("%p: adding MAC address at index %" PRIu32,
> (void *)dev, index);
> - if (index >= RTE_DIM(priv->mac))
> + if (index >= RTE_DIM(priv->mac)) {
> + re = -EINVAL;
> goto end
);
> - return;
> + return -EINVAL;
> }
>
> uc = alloca(VIRTIO_MAX_MAC_ADDRS * ETHER_ADDR_LEN +
> sizeof(uc->entries));
> @@ -1074,7 +1075,7 @@ virtio_mac_addr_add(struct rte_eth_dev *dev, struct
> ether_addr *mac_addr,
> memcpy(&tbl->macs[tbl->entries++], addr, ETHER_ADDR_LEN);
> }
>
> - virtio_mac_table_set(hw, uc, mc);
> + return virtio_mac_table_set(hw, uc, mc);
> }
>
> static void
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index fa6ae44..a6937bd 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -2167,6 +2167,7 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct
> ether_addr *addr,
> struct rte_eth_dev *dev;
> int index;
> uint64_t pool_mask;
> + int ret;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> dev = &rte_eth_devices[port_id];
> @@ -2199,15 +2200,17 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct
> ether_addr *addr,
> }
>
> /* Update NIC */
> - (*dev->dev_ops->mac_addr_add)(dev, addr, index, pool);
> + ret = (*dev->dev_ops->mac_addr_add)(dev, addr, index, pool);
>
> - /* Update address in NIC data structure */
> - ether_addr_copy(addr, &dev->data->mac_addrs[index]);
> + if (ret == 0) {
> + /* Update address in NIC data structure */
> + ether_addr_copy(addr, &dev->data->mac_addrs[index]);
>
> - /* Update pool bitmap in NIC data structure */
> - dev->data->mac_pool_sel[index] |= (1ULL << pool);
> + /* Update pool bitmap in NIC data structure */
> + dev->data->mac_pool_sel[index] |= (1ULL << pool);
> + }
>
> - return 0;
> + return ret;
> }
>
> int
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index d072538..08e6c13 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -1277,7 +1277,7 @@ typedef int (*eth_dev_led_off_t)(struct rte_eth_dev
> *dev);
> typedef void (*eth_mac_addr_remove_t)(struct rte_eth_dev *dev, uint32_t
> index);
> /**< @internal Remove MAC address from receive address register */
>
> -typedef void (*eth_mac_addr_add_t)(struct rte_eth_dev *dev,
> +typedef int (*eth_mac_addr_add_t)(struct rte_eth_dev *dev,
> struct ether_addr *mac_addr,
> uint32_t index,
> uint32_t vmdq);
> --
> 2.7.4
>
For mlx changes,
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
~(RTE_CACHE_LINE_SIZE - 1);
> unsigned int copy_b = (addr_end > addr) ?
> RTE_MIN((addr_end - addr), length) :
> --
> 2.11.0
Thanks,
--
Nélio Laranjeiro
6WIND
(pkt_inline_sz - 2);
> + uintptr_t addr_end = (addr + inline_room) &
>~(RTE_CACHE_LINE_SIZE - 1);
> unsigned int copy_b = (addr_end > addr) ?
> RTE_MIN((addr_end - addr), length) :
> --
> 2.11.0
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
; priv->tunnel_en = tunnel_en;
> - err = mlx5_args(priv, pci_dev->device.devargs);
> + err = mlx5_args(&args, pci_dev->device.devargs);
> if (err) {
> ERROR("failed to process device arguments: %s",
> strerror(err));
> goto port_error;
> }
> + mlx5_args_assign(priv, &args);
> if (ibv_exp_query_device(ctx, &exp_device_attr)) {
> ERROR("ibv_exp_query_device() failed");
> goto port_error;
> --
> 2.12.0
>
--
Nélio Laranjeiro
6WIND
t;cqe_comp = 1; /* Enable compression by default. */
> priv->tunnel_en = tunnel_en;
> - err = mlx5_args(priv, pci_dev->device.devargs);
> + err = mlx5_args(&args, pci_dev->device.devargs);
> if (err) {
> ERROR("failed to process device arguments: %s",
> strerror(err));
> goto port_error;
> }
> + mlx5_args_assign(priv, &args);
> if (ibv_exp_query_device(ctx, &exp_device_attr)) {
> ERROR("ibv_exp_query_device() failed");
> goto port_error;
> --
> 2.12.0
>
For the series,
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
if (fdq->wqs[i])
> + claim_zero(ibv_exp_destroy_wq(fdq->wqs[i]));
> }
> - claim_zero(ibv_destroy_cq(fdq->cq));
> + if (fdq->cq)
> + claim_zero(ibv_destroy_cq(fdq->cq));
> rte_free(fdq);
> priv->flow_drop_queue = NULL;
> }
> --
> 2.11.0
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
attr.init.max_tso_header =
> max_tso_inline * RTE_CACHE_LINE_SIZE;
> attr.init.comp_mask |= IBV_EXP_QP_INIT_ATTR_MAX_TSO_HEADER;
> --
> 2.12.0
>
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
: refactor Rx data path")
> Fixes: 1d88ba171942 ("net/mlx5: refactor Tx data path")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Shahaf Shuler
> Acked-by: Yongseok Koh
Acked-by: Nelio Laranjeiro
--
Nélio Laranjeiro
6WIND
Andrew Rybchenko
> Cc: Pascal Mazon
Acked-by: Nelio Laranjeiro
By the way, I have implemented it for the mlx5 driver [1]; your patch
needs to be updated on master-net, it does not apply like this.
Thanks,
[1] http://dpdk.org/dev/patchwork/patch/24487/
--
Nélio Laranjeiro
6WIND
annot allocate WQ for drop queue");
> + goto error;
> + }
From here...
> + fdq->ind_table = ibv_exp_create_rwq_ind_table(priv->ctx,
> + &(struct ibv_exp_rwq_ind_table_init_attr){
> + .pd = priv->pd,
> + .log_ind_tbl_size = 0,
> + .ind_tbl = fdq->wqs,
> + .comp_mask = 0,
> + });
> + if (!fdq->ind_table) {
> + WARN("cannot allocate indirection table for drop queue");
> + goto error;
> + }
> +#endif
>[...]
Up to this point, the code is a copy/paste of the block above; it should
not be present. Please keep the diff as small as possible.
Thanks,
--
Nélio Laranjeiro
6WIND
EGISTER_PCI_TABLE(net_mlx5, mlx5_pci_id_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_mlx5, "* ib_uverbs & mlx5_core & mlx5_ib");
> -
> -/** Initialize driver log type. */
> -RTE_INIT(vdev_netvsc_init_log)
> -{
> - mlx5_logtype = rte_log_register("pmd.net.mlx5");
> - if (mlx5_logtype >= 0)
> - rte_log_set_level(mlx5_logtype, RTE_LOG_NOTICE);
> -}
> --
> 2.17.1
Thanks,
--
Nélio Laranjeiro
6WIND
6_src and .ipv6_dst here ?
>[...]
Yes, indeed, the initialisation of IPv6 is missing.
> > +- ``nvgre_decap``: Performs a decapsulation action by stripping all
> > +headers of
> > + the VXLAN tunnel network overlay from the matched flow.
>
> VXLAN should be NVGRE.
>
>[...]
Here also.
I will update it in a v2.
Thanks for your review,
--
Nélio Laranjeiro
6WIND