Re: [PATCH] net/mlx5: do not poll CQEs when no available elts
Hi,

From: Gavin Hu
Sent: Friday, December 6, 2024 2:58 AM
To: dev@dpdk.org
Cc: sta...@dpdk.org; Dariusz Sosnowski; Slava Ovsiienko; Bing Zhao; Ori Kam; Suanming Mou; Matan Azrad; Alexander Kozyrev
Subject: [PATCH] net/mlx5: do not poll CQEs when no available elts

In certain situations, the receive queue (rxq) fails to replenish its internal ring with memory buffers (mbufs) from the pool. This can happen when the pool has a limited number of mbufs allocated, and the user application holds incoming packets for an extended period, resulting in a delayed release of mbufs. Consequently, the pool becomes depleted, preventing the rxq from replenishing from it.

There was a bug in the behavior of the vectorized rxq_cq_process_v routine, which handled completion queue entries (CQEs) in batches of four. This routine consistently accessed four mbufs from the internal queue ring, regardless of whether they had been replenished. As a result, it could access mbufs that no longer belonged to the poll mode driver (PMD).

The fix involves checking if there are four replenished mbufs available before allowing rxq_cq_process_v to handle the batch. Once replenishment succeeds during the polling process, the routine will resume its operation.

Fixes: 1ded26239aa0 ("net/mlx5: refactor vectorized Rx")
Cc: sta...@dpdk.org
Reported-by: Changqi Dingluo
Signed-off-by: Gavin Hu

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
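The guard described in this fix can be sketched as follows. This is a minimal illustration only, not the actual mlx5 code: the rxq_sketch structure, its fields and the replenish_ring()/process_cqe_batch() helpers are assumptions standing in for the driver's internals.

#include <stdint.h>

#include <rte_mbuf.h>

/* Illustrative stand-ins for the driver's Rx queue state; assumptions only. */
struct rxq_sketch {
	uint16_t rq_ci;	/* index up to which descriptors were replenished */
	uint16_t rq_pi;	/* index up to which descriptors were processed */
};

#define DESCS_PER_LOOP 4	/* the vectorized routine consumes CQEs in fours */

/* Assumed helpers standing in for the PMD internals. */
uint16_t replenish_ring(struct rxq_sketch *rxq);
uint16_t process_cqe_batch(struct rxq_sketch *rxq, struct rte_mbuf **pkts);

static uint16_t
rx_burst_vec_sketch(struct rxq_sketch *rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
{
	uint16_t done = 0;

	while (done + DESCS_PER_LOOP <= pkts_n) {
		/* Ring slots currently holding mbufs that still belong to the PMD. */
		uint16_t avail = (uint16_t)(rxq->rq_ci - rxq->rq_pi);

		if (avail < DESCS_PER_LOOP) {
			/* The pool may have been exhausted earlier; retry the refill. */
			replenish_ring(rxq);
			avail = (uint16_t)(rxq->rq_ci - rxq->rq_pi);
			if (avail < DESCS_PER_LOOP)
				break;	/* never touch mbufs the PMD no longer owns */
		}
		done += process_cqe_batch(rxq, &pkts[done]);
	}
	return done;
}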
Re: [PATCH v3] net/mlx5: fix RSS hash for non-RSS CQE zipping
Raslan, please revert this patch. I rejected it last week. This fix is incorrect without the FW changes.

Regards,
Alex

From: Raslan Darawsheh
Sent: Sunday, January 19, 2025 6:47:48 a.m.
To: Alexander Kozyrev; dev@dpdk.org
Cc: sta...@dpdk.org; Slava Ovsiienko; Matan Azrad; Dariusz Sosnowski; Bing Zhao; Suanming Mou
Subject: Re: [PATCH v3] net/mlx5: fix RSS hash for non-RSS CQE zipping

Hi,

From: Alexander Kozyrev
Sent: Saturday, November 30, 2024 2:39 AM
To: dev@dpdk.org
Cc: sta...@dpdk.org; Raslan Darawsheh; Slava Ovsiienko; Matan Azrad; Dariusz Sosnowski; Bing Zhao; Suanming Mou
Subject: [PATCH v3] net/mlx5: fix RSS hash for non-RSS CQE zipping

Take the RSS hash value from the title packet before it gets overwritten by the decompression routine. Set the RSS hash flag in the packet mbuf if RSS is enabled in case of non-RSS CQE zipping format.

Fixes: 54c2d46 ("net/mlx5: support flow tag and packet header miniCQEs")
Cc: sta...@dpdk.org
Signed-off-by: Alexander Kozyrev

Sending reply to the correct Patch version,

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
[PATCH 1/2] mempool: add rte_errno in rte_mempool_set_ops_byname
rte_errno is not set on error exits. In the scenario described in Bugzilla ID 1559, rte_mempool_create_empty() calls rte_mempool_set_ops_byname(), but the proper rte_errno is never set. rte_errno is now set in rte_mempool_set_ops_byname(); from there it propagates up to the caller.

Bugzilla ID: 1559
Signed-off-by: Ariel Otilibili
---
 lib/mempool/rte_mempool_ops.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index 1b33380259b3..b5c68ac61b67 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -169,8 +169,10 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 	unsigned i;
 
 	/* too late, the mempool is already populated. */
-	if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED)
-		return -EEXIST;
+	if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED) {
+		rte_errno = EEXIST;
+		return -rte_errno;
+	}
 
 	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
 		if (!strcmp(name,
@@ -180,8 +182,10 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 		}
 	}
 
-	if (ops == NULL)
-		return -EINVAL;
+	if (ops == NULL) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
 
 	mp->ops_index = i;
 	mp->pool_config = pool_config;
-- 
2.30.2
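For reference, a caller-side sketch of how the newly set rte_errno would typically be consumed; rte_mempool_create_empty(), rte_mempool_set_ops_byname(), rte_errno and rte_strerror() are the standard DPDK APIs, while the pool name and sizing below are arbitrary illustration values.

#include <stdio.h>

#include <rte_errno.h>
#include <rte_mempool.h>

/* The pool name and sizing here are arbitrary illustration values. */
static struct rte_mempool *
create_pool_or_report(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("example_pool", 4096, 2048,
				      256, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL) {
		printf("pool allocation failed: %s\n", rte_strerror(rte_errno));
		return NULL;
	}

	/* With this patch, a failure here also leaves rte_errno set. */
	if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0) {
		printf("cannot set mempool ops: %s\n", rte_strerror(rte_errno));
		rte_mempool_free(mp);
		return NULL;
	}
	return mp;
}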
[PATCH 0/2] mempool: add rte_errno, and turn functions into single-exit ones
Hello,

This series is for Bugzilla ID 1559. rte_mempool_set_ops_byname() did not set rte_errno on its error exits, and other functions did not set the variable consistently. To avoid that, they are turned into single-exit functions.

Thank you,

Ariel Otilibili (2):
  mempool: add rte_errno in rte_mempool_set_ops_byname
  mempool: turn functions into single-exit ones

 lib/mempool/rte_mempool_ops.c | 44 ++-
 1 file changed, 33 insertions(+), 11 deletions(-)

-- 
2.30.2
[PATCH 2/2] mempool: turn functions into single-exit ones
Some functions did not set rte_errno; for avoiding that, they are turned into single-exit ones. Bugzilla ID: 1559 Signed-off-by: Ariel Otilibili --- lib/mempool/rte_mempool_ops.c | 38 ++- 1 file changed, 28 insertions(+), 10 deletions(-) diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index b5c68ac61b67..bf4328645196 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -33,7 +33,9 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) rte_spinlock_unlock(&rte_mempool_ops_table.sl); RTE_MEMPOOL_LOG(ERR, "Maximum number of mempool ops structs exceeded"); - return -ENOSPC; + rte_errno = ENOSPC; + ops_index = -rte_errno; + goto out; } if (h->alloc == NULL || h->enqueue == NULL || @@ -41,7 +43,9 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) rte_spinlock_unlock(&rte_mempool_ops_table.sl); RTE_MEMPOOL_LOG(ERR, "Missing callback while registering mempool ops"); - return -EINVAL; + rte_errno = -EINVAL; + ops_index = -rte_errno; + goto out; } if (strlen(h->name) >= sizeof(ops->name) - 1) { @@ -49,7 +53,8 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) RTE_MEMPOOL_LOG(DEBUG, "%s(): mempool_ops <%s>: name too long", __func__, h->name); rte_errno = EEXIST; - return -EEXIST; + ops_index = -rte_errno; + goto out; } ops_index = rte_mempool_ops_table.num_ops++; @@ -67,6 +72,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) rte_spinlock_unlock(&rte_mempool_ops_table.sl); +out: return ops_index; } @@ -151,12 +157,19 @@ rte_mempool_ops_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info) { struct rte_mempool_ops *ops; + int ret; ops = rte_mempool_get_ops(mp->ops_index); - if (ops->get_info == NULL) - return -ENOTSUP; - return ops->get_info(mp, info); + if (ops->get_info == NULL) { + rte_errno = ENOTSUP; + ret = -rte_errno; + goto out; + } + ret = ops->get_info(mp, info); + +out: + return ret; } @@ -166,12 +179,14 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name, void *pool_config) { struct rte_mempool_ops *ops = NULL; + int ret = 0; unsigned i; /* too late, the mempool is already populated. */ if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED) { rte_errno = EEXIST; - return -rte_errno; + ret = -rte_errno; + goto out; } for (i = 0; i < rte_mempool_ops_table.num_ops; i++) { @@ -182,13 +197,16 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name, } } - if (ops == NULL) { + if (!ops) { rte_errno = EINVAL; - return -rte_errno; + ret = -rte_errno; + goto out; } mp->ops_index = i; mp->pool_config = pool_config; rte_mempool_trace_set_ops_byname(mp, name, pool_config); - return 0; + +out: + return ret; } -- 2.30.2
Re: [PATCH v3] net/mlx5: fix RSS hash for non-RSS CQE zipping
Hi,

From: Alexander Kozyrev
Sent: Saturday, November 30, 2024 2:39 AM
To: dev@dpdk.org
Cc: sta...@dpdk.org; Raslan Darawsheh; Slava Ovsiienko; Matan Azrad; Dariusz Sosnowski; Bing Zhao; Suanming Mou
Subject: [PATCH v3] net/mlx5: fix RSS hash for non-RSS CQE zipping

Take the RSS hash value from the title packet before it gets overwritten by the decompression routine. Set the RSS hash flag in the packet mbuf if RSS is enabled in case of non-RSS CQE zipping format.

Fixes: 54c2d46 ("net/mlx5: support flow tag and packet header miniCQEs")
Cc: sta...@dpdk.org
Signed-off-by: Alexander Kozyrev

Sending reply to the correct Patch version,

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
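The idea of saving the title packet's hash before decompression can be sketched roughly as follows. The CQE layout below is a stand-in, not the real mlx5 miniCQE definitions; only the mbuf fields and the RTE_MBUF_F_RX_RSS_HASH flag are standard DPDK.

#include <rte_byteorder.h>
#include <rte_mbuf.h>

/* Stand-in for the title (first) CQE of a compressed session; the real
 * mlx5 miniCQE layout is not reproduced here. */
struct title_cqe_sketch {
	rte_be32_t rx_hash_res;	/* RSS hash computed for the title packet */
};

static inline void
save_title_rss_hash(const struct title_cqe_sketch *cqe, struct rte_mbuf *pkt,
		    int rss_enabled)
{
	/* Copy the hash out of the title CQE before the decompression loop
	 * reuses that memory, and mark the mbuf so applications trust it. */
	if (rss_enabled) {
		pkt->hash.rss = rte_be_to_cpu_32(cqe->rx_hash_res);
		pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
	}
}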
Re: [PATCH v2] net/mlx5: fix RSS hash for non-RSS CQE zipping
Hi,

From: Alexander Kozyrev
Sent: Friday, November 29, 2024 10:44 PM
To: dev@dpdk.org
Cc: sta...@dpdk.org; Raslan Darawsheh; Slava Ovsiienko; Matan Azrad; Dariusz Sosnowski; Bing Zhao; Suanming Mou
Subject: [PATCH v2] net/mlx5: fix RSS hash for non-RSS CQE zipping

Take the RSS hash and flow tag values from the title packet before they get overwritten by the decompressing routine. Set the RSS hash flag in the packet mbuf if RSS is enabled in case of non-RSS CQE zipping format.

Signed-off-by: Alexander Kozyrev

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
RE: [PATCH 2/2] mempool: turn functions into single-exit ones
> Some functions did not set rte_errno; for avoiding that, they are turned > into single-exit ones. > > Bugzilla ID: 1559 > Signed-off-by: Ariel Otilibili But reading through public API comments none of these functions are expected to set rte_errno value. If rte_mempool_create_empty() forgets to set rte_errno, why it is not enough just to add missing one in rte_mempool_create_empty()? > --- > lib/mempool/rte_mempool_ops.c | 38 ++- > 1 file changed, 28 insertions(+), 10 deletions(-) > > diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c > index b5c68ac61b67..bf4328645196 100644 > --- a/lib/mempool/rte_mempool_ops.c > +++ b/lib/mempool/rte_mempool_ops.c > @@ -33,7 +33,9 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) > rte_spinlock_unlock(&rte_mempool_ops_table.sl); > RTE_MEMPOOL_LOG(ERR, > "Maximum number of mempool ops structs exceeded"); > - return -ENOSPC; > + rte_errno = ENOSPC; > + ops_index = -rte_errno; > + goto out; > } > > if (h->alloc == NULL || h->enqueue == NULL || > @@ -41,7 +43,9 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) > rte_spinlock_unlock(&rte_mempool_ops_table.sl); > RTE_MEMPOOL_LOG(ERR, > "Missing callback while registering mempool ops"); > - return -EINVAL; > + rte_errno = -EINVAL; > + ops_index = -rte_errno; > + goto out; > } > > if (strlen(h->name) >= sizeof(ops->name) - 1) { > @@ -49,7 +53,8 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) > RTE_MEMPOOL_LOG(DEBUG, "%s(): mempool_ops <%s>: name too long", > __func__, h->name); > rte_errno = EEXIST; > - return -EEXIST; > + ops_index = -rte_errno; > + goto out; > } > > ops_index = rte_mempool_ops_table.num_ops++; > @@ -67,6 +72,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) > > rte_spinlock_unlock(&rte_mempool_ops_table.sl); > > +out: > return ops_index; > } > > @@ -151,12 +157,19 @@ rte_mempool_ops_get_info(const struct rte_mempool *mp, >struct rte_mempool_info *info) > { > struct rte_mempool_ops *ops; > + int ret; > > ops = rte_mempool_get_ops(mp->ops_index); > > - if (ops->get_info == NULL) > - return -ENOTSUP; > - return ops->get_info(mp, info); > + if (ops->get_info == NULL) { > + rte_errno = ENOTSUP; > + ret = -rte_errno; > + goto out; > + } > + ret = ops->get_info(mp, info); > + > +out: > + return ret; > } > > > @@ -166,12 +179,14 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, > const char *name, > void *pool_config) > { > struct rte_mempool_ops *ops = NULL; > + int ret = 0; > unsigned i; > > /* too late, the mempool is already populated. */ > if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED) { > rte_errno = EEXIST; > - return -rte_errno; > + ret = -rte_errno; > + goto out; > } > > for (i = 0; i < rte_mempool_ops_table.num_ops; i++) { > @@ -182,13 +197,16 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, > const char *name, > } > } > > - if (ops == NULL) { > + if (!ops) { > rte_errno = EINVAL; > - return -rte_errno; > + ret = -rte_errno; > + goto out; > } > > mp->ops_index = i; > mp->pool_config = pool_config; > rte_mempool_trace_set_ops_byname(mp, name, pool_config); > - return 0; > + > +out: > + return ret; > } > -- > 2.30.2
Re: [PATCH v1] dts: fix attribute error in checksum offload suite
Reviewed-by: Patrick Robb

Applied to next-dts - thanks for finding this.
Re: [PATCH v1 1/2] dts: add fwd restart decorator to rx capabilities
On Fri, Jan 17, 2025 at 9:58 AM Nicholas Pratte wrote:

> -@requires_started_ports
> +@requires_forwarding_restart
> @requires_stopped_ports
> def set_port_mtu(self, port_id: int, mtu: int, verify: bool = True) -> None:
>     """Change the MTU of a port using testpmd.

Is the requires_stopped_ports decorator still required, or is the requires_forwarding_restart decorator alone sufficient?

Looks good otherwise, thanks!

Reviewed-by: Patrick Robb
Re: [PATCH v1 2/2] dts: add mtu update and jumbo frames test suite
On Fri, Jan 17, 2025 at 9:58 AM Nicholas Pratte wrote: > > +def assess_mtu_boundary(self, testpmd_shell: TestPmdShell, mtu: int) > -> None: > +"""Sets the new MTU and verifies packets at the set boundary. > + > +Ensure that packets smaller than or equal to a set MTU will be > received and packets larger > +will not. > + > +First, start testpmd and update the MTU. Then ensure the new > value appears > +on port info for all ports. > +Next, start packet capturing and send 3 different lengths of > packet and verify > +they are handled correctly. > +# 1. VENDOR_AGNOSTIC_PADDING units smaller than the MTU > specified. > +# 2. Equal to the MTU specified. > +# 3. VENDOR_AGNOSTIC_PADDING units larger than the MTU > specified (should be fragmented). > +Finally, stop packet capturing. > + > +Args: > +testpmd_shell: Active testpmd shell of a given test case. > +mtu: New Maximum Transmission Unit to be tested. > +""" > +# Send 3 packets of different sizes (accounting for vendor > inconsistencies). > +# 1. VENDOR_AGNOSTIC_PADDING units smaller than the MTU specified. > +# 2. Equal to the MTU specified. > +# 3. VENDOR_AGNOSTIC_PADDING units larger than the MTU specified. > +smaller_frame_size: int = mtu - VENDOR_AGNOSTIC_PADDING > +equal_frame_size: int = mtu > +larger_frame_size: int = mtu + VENDOR_AGNOSTIC_PADDING > + > +self.send_packet_and_verify(pkt_size=smaller_frame_size, > should_receive=True) > +self.send_packet_and_verify(pkt_size=equal_frame_size, > should_receive=True) > + > +current_mtu = testpmd_shell.show_port_info(0).mtu > +self.verify(current_mtu is not None, "Error grabbing testpmd MTU > value.") > +if current_mtu and ( > +current_mtu >= STANDARD_MTU + VENDOR_AGNOSTIC_PADDING and mtu > == STANDARD_MTU > +): > +self.send_packet_and_verify(pkt_size=larger_frame_size, > should_receive=True) > I don't understand when this condition may be true - can you explain? Thanks! > +else: > +self.send_packet_and_verify(pkt_size=larger_frame_size, > should_receive=False) > + > +@func_test > +def test_runtime_mtu_updating_and_forwarding(self) -> None: > +"""Verify runtime MTU adjustments and assess packet forwarding > behavior. > + > +Test: > +Start TestPMD in a paired topology. > +Set port MTU to 1500. > +Send packets of 1493, 1500 and 1509 bytes. > I think 1493 should be 1491. > -- > 2.47.1 > > Thanks, other than a couple questions here and in the associated patch this looks good. I can merge on Tuesday. Reviewed-by: Patrick Robb Tested-by: Patrick Robb
RE: [PATCH v6 0/8] [v6]drivers/net Add Support mucse N10 Pmd Driver
Hi Thomas,

I will continue to work on it this quarter; a more detailed patch series will be submitted after the Spring Festival.

Regards,
Wenbo

> -----Original Message-----
> From: Thomas Monjalon
> Sent: January 17, 2025 0:50
> To: yao...@mucse.com; Wenbo Cao
> Cc: dev@dpdk.org; ferruh.yi...@amd.com; andrew.rybche...@oktetlabs.ru; step...@networkplumber.org
> Subject: Re: [PATCH v6 0/8] [v6]drivers/net Add Support mucse N10 Pmd Driver
>
> Hello,
>
> Is there any plan to resume this work?
>
>
> 01/09/2023 04:30, Wenbo Cao:
> > This patchset only supports the basic chip init work, so the user can
> > find the eth_dev but cannot control much more.
> > For now only a 2*10G NIC is supported; the chip can support
> > 2*10G, 4*10G, 4*1G, 8*1G and 8*10G.
> > On the Rx side the supported features are rx-cksum-offload, rss, vlan-filter,
> > flow_clow, uncast_filter, mcast_filter, 1588 and jumbo-frame.
> > On the Tx side the supported features are tx-cksum-offload, tso and vxlan-tso;
> > flow director based on the ntuple pattern of tcp/udp/ip/eth_hdr->type
> > and sriov are also supported.
> >
> > Because of a chip design defect, in multiple-port mode one PCI BDF
> > can have multiple ports (at most four), so this code must take care
> > of initializing multiple ports for one BDF.
> >
[PATCH v8] app/testpmd: add attach and detach port for multiple process
The port information needs to be updated due to attaching and detaching port. Currently, it is done in the same thread as removing or probing device, which doesn't satisfy the operation of attaching and detaching device in multiple process. If this operation is performed in one process, the other process can receive 'new' or 'destroy' event. So we can move updating port information to event callback to support attaching and detaching port in primary and secondary process. Note: the reason for adding an alarm callback in 'destroy' event is that the ethdev state is changed from 'ATTACHED' to 'UNUSED' only after the event callback finished. But the remove_invalid_ports() function removes invalid port only if ethdev state is 'UNUSED'. If we don't add alarm callback, this detached port information can not be removed. Signed-off-by: Huisong Li Signed-off-by: Dongdong Liu Acked-by: Chengwen Feng --- -v8: #1 remove other patches because they have been clarified in another patchset[1][2]. #2 move the configuring and querying the port to start_port() because they are not approprate in new event callback. -v7: fix conflicts -v6: adjust rte_eth_dev_is_used position based on alphabetical order in version.map -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break. -v4: fix a misspelling. -v3: #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification for other bus type. #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve the probelm in patch 2/5. -v2: resend due to CI unexplained failure. [1] https://patches.dpdk.org/project/dpdk/cover/20250113025521.32703-1-lihuis...@huawei.com/ [2] https://patches.dpdk.org/project/dpdk/cover/20250116114034.9858-1-lihuis...@huawei.com/ --- app/test-pmd/testpmd.c | 68 +++--- 1 file changed, 51 insertions(+), 17 deletions(-) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index ac654048df..e47d480205 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -2895,6 +2895,9 @@ start_port(portid_t pid) at_least_one_port_exist = true; port = &ports[pi]; + if (port->need_setup) + setup_attached_port(pi); + if (port->port_status == RTE_PORT_STOPPED) { port->port_status = RTE_PORT_HANDLING; all_ports_already_started = false; @@ -3242,6 +3245,7 @@ remove_invalid_ports(void) remove_invalid_ports_in(ports_ids, &nb_ports); remove_invalid_ports_in(fwd_ports_ids, &nb_fwd_ports); nb_cfg_ports = nb_fwd_ports; + printf("Now total ports is %d\n", nb_ports); } static void @@ -3414,14 +3418,11 @@ attach_port(char *identifier) return; } - /* first attach mode: event */ + /* First attach mode: event +* New port flag is updated on RTE_ETH_EVENT_NEW event +*/ if (setup_on_probe_event) { - /* new ports are detected on RTE_ETH_EVENT_NEW event */ - for (pi = 0; pi < RTE_MAX_ETHPORTS; pi++) - if (ports[pi].port_status == RTE_PORT_HANDLING && - ports[pi].need_setup != 0) - setup_attached_port(pi); - return; + goto out; } /* second attach mode: iterator */ @@ -3431,6 +3432,9 @@ attach_port(char *identifier) continue; /* port was already attached before */ setup_attached_port(pi); } +out: + printf("Port %s is attached.\n", identifier); + printf("Done\n"); } static void @@ -3450,14 +3454,8 @@ setup_attached_port(portid_t pi) "Error during enabling promiscuous mode for port %u: %s - ignore\n", pi, rte_strerror(-ret)); - ports_ids[nb_ports++] = pi; - fwd_ports_ids[nb_fwd_ports++] = pi; - nb_cfg_ports = nb_fwd_ports; ports[pi].need_setup = 0; ports[pi].port_status = RTE_PORT_STOPPED; - - printf("Port %d is attached. 
Now total ports is %d\n", pi, nb_ports); - printf("Done\n"); } static void @@ -3487,10 +3485,8 @@ detach_device(struct rte_device *dev) TESTPMD_LOG(ERR, "Failed to detach device %s\n", rte_dev_name(dev)); return; } - remove_invalid_ports(); printf("Device is detached\n"); - printf("Now total ports is %d\n", nb_ports); printf("Done\n"); return; } @@ -3722,7 +3718,25 @@ rmv_port_callback(void *arg) struct rte_device *device = dev_info.device; close_port(port_id); detach_device(device); /* might be already removed or have more ports */ + remove_invalid_ports(); + } + if (need_to_start) + start_packet_forwarding(0); +} + +static void
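The event-driven flow this patch relies on looks roughly like this from an application's point of view. It is a simplified sketch, not testpmd's code: setup_new_port() and prune_detached_ports() are hypothetical placeholders, while rte_eth_dev_callback_register(), the RTE_ETH_EVENT_* values and rte_eal_alarm_set() are the standard DPDK APIs.

#include <rte_alarm.h>
#include <rte_ethdev.h>

/* Hypothetical application-side bookkeeping hooks. */
void setup_new_port(uint16_t port_id);
void prune_detached_ports(void *arg);

/* Invoked in every process that receives the event, which is what lets
 * primary and secondary processes keep their port lists in sync. */
static int
port_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	      void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	switch (type) {
	case RTE_ETH_EVENT_NEW:
		setup_new_port(port_id);
		break;
	case RTE_ETH_EVENT_DESTROY:
		/* The port only becomes UNUSED after this callback returns,
		 * hence the deferred cleanup via an alarm, as in the patch. */
		rte_eal_alarm_set(1000, prune_detached_ports, NULL);
		break;
	default:
		break;
	}
	return 0;
}

static void
register_port_events(void)
{
	rte_eth_dev_callback_register(RTE_ETH_ALL, RTE_ETH_EVENT_NEW,
				      port_event_cb, NULL);
	rte_eth_dev_callback_register(RTE_ETH_ALL, RTE_ETH_EVENT_DESTROY,
				      port_event_cb, NULL);
}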
[PATCH v8 06/15] net/zxdh: dev start/stop ops implementations
dev start/stop implementations, start/stop the rx/tx queues. Signed-off-by: Junlong Wang --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c| 71 drivers/net/zxdh/zxdh_pci.c | 21 +++ drivers/net/zxdh/zxdh_pci.h | 1 + drivers/net/zxdh/zxdh_queue.c | 91 +++ drivers/net/zxdh/zxdh_queue.h | 69 +++ drivers/net/zxdh/zxdh_rxtx.h | 17 +++--- 8 files changed, 266 insertions(+), 8 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 05c8091ed7..7b72be5f25 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -7,3 +7,5 @@ Linux= Y x86-64 = Y ARMv8= Y +SR-IOV = Y +Multiprocess aware = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 2144753d75..eb970a888f 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -18,6 +18,8 @@ Features Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. +- Multiple queues for TX and RX +- SR-IOV VF Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 521d7ed433..6e603b967e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -899,12 +899,40 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_stop(struct rte_eth_dev *dev) +{ + uint16_t i; + int ret; + + if (dev->data->dev_started == 0) + return 0; + + ret = zxdh_intr_disable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "intr disable failed"); + return ret; + } + for (i = 0; i < dev->data->nb_rx_queues; i++) + dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + for (i = 0; i < dev->data->nb_tx_queues; i++) + dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + + return 0; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_dev_stop(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, " stop port %s failed.", dev->device->name); + return -1; + } + ret = zxdh_tables_uninit(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); @@ -928,9 +956,52 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_start(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq; + int32_t ret; + uint16_t logic_qidx; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + ret = zxdh_dev_rx_queue_setup_finish(dev, logic_qidx); + if (ret < 0) + return ret; + } + ret = zxdh_intr_enable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + return -EINVAL; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + /* Flush the old packets */ + zxdh_queue_rxvq_flush(vq); + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_tx_queues; i++) { + logic_qidx = 2 * i + ZXDH_TQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) + dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; + for (i = 0; i < dev->data->nb_tx_queues; i++) + dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; + + return 0; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_start = zxdh_dev_start, + .dev_stop= zxdh_dev_stop, .dev_close = 
zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, .rx_queue_setup = zxdh_dev_rx_queue_setup, diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 250e67d560..6b2c4482b2 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -202,6 +202,26 @@ zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) rte_write16(0, &hw->common_cfg->queue_enable); } +static void +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virt
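For context, this is how an application reaches the new dev_start/dev_stop/dev_close ops through the generic ethdev API. A minimal sketch with arbitrary queue sizes and trimmed error handling; only the rte_eth_* calls are the standard API, the zxdh mapping follows the dev_ops table in the patch above.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Arbitrary single-queue setup; error handling trimmed for brevity. */
static int
bring_up_port(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;
	ret = rte_eth_rx_queue_setup(port_id, 0, 1024,
				     rte_eth_dev_socket_id(port_id), NULL, mp);
	if (ret != 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 1024,
				     rte_eth_dev_socket_id(port_id), NULL);
	if (ret != 0)
		return ret;
	return rte_eth_dev_start(port_id);	/* reaches zxdh_dev_start() */
}

static void
bring_down_port(uint16_t port_id)
{
	rte_eth_dev_stop(port_id);	/* reaches zxdh_dev_stop() */
	rte_eth_dev_close(port_id);	/* reaches zxdh_dev_close() */
}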
[PATCH v8 04/15] net/zxdh: port tables uninit implementations
delete port tables in host. Signed-off-by: Junlong Wang --- drivers/net/zxdh/zxdh_ethdev.c | 18 ++ drivers/net/zxdh/zxdh_msg.h| 1 + drivers/net/zxdh/zxdh_np.c | 103 + drivers/net/zxdh/zxdh_np.h | 9 +++ drivers/net/zxdh/zxdh_tables.c | 33 ++- drivers/net/zxdh/zxdh_tables.h | 1 + 6 files changed, 164 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ff44816384..717a1d2b0b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -887,12 +887,30 @@ zxdh_np_uninit(struct rte_eth_dev *dev) zxdh_np_dtb_data_res_free(hw); } +static int +zxdh_tables_uninit(struct rte_eth_dev *dev) +{ + int ret; + + ret = zxdh_port_attr_uninit(dev); + if (ret) + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + + return ret; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_tables_uninit(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); + return -1; + } + zxdh_intr_release(dev); zxdh_np_uninit(dev); zxdh_pci_reset(hw); diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 3387a339b4..5f7deb5e6a 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -167,6 +167,7 @@ enum pciebar_layout_type { enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, + ZXDH_VF_PORT_UNINIT = 2, ZXDH_MSG_TYPE_END, }; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index db536d96e3..99a7dc11b4 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -25,6 +25,7 @@ ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_SDT_MGR_PTR_GET()(&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -1454,3 +1455,105 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } + +static uint32_t +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) +{ + uint32_t rc = 0; + + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; + + return rc; +} + +int +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, +uint32_t queue_id, +uint32_t entrynum, +ZXDH_DTB_USER_ENTRY_T *delete_entries) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t tbl_type = 0; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(delete_entries); + + p_data_buff = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = 
delete_entries + entry_index; + + sdt_no = pentry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_DELETE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_s
[PATCH v8 02/15] net/zxdh: zxdh np uninit implementation
(np)network processor release resources in host. Signed-off-by: Junlong Wang --- drivers/net/zxdh/zxdh_ethdev.c | 48 drivers/net/zxdh/zxdh_np.c | 470 + drivers/net/zxdh/zxdh_np.h | 107 3 files changed, 625 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b8f4415e00..4e114d95da 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static void +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + int ret; + int i; + + if (g_dtb_data.init_done && g_dtb_data.bind_device == dev) { + ret = zxdh_np_online_uninit(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + + if (g_dtb_data.dtb_table_dump_mz) { + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 0; +} + +static void +zxdh_np_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!g_dtb_data.init_done && !g_dtb_data.dev_refcnt) + return; + + if (--g_dtb_data.dev_refcnt == 0) + zxdh_np_dtb_data_res_free(hw); +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { @@ -848,6 +893,7 @@ zxdh_dev_close(struct rte_eth_dev *dev) int ret = 0; zxdh_intr_release(dev); + zxdh_np_uninit(dev); zxdh_pci_reset(hw); zxdh_dev_free_mbufs(dev); @@ -1010,6 +1056,7 @@ zxdh_np_dtb_res_init(struct rte_eth_dev *dev) return 0; free_res: + zxdh_np_dtb_data_res_free(hw); rte_free(dpp_ctrl); return ret; } @@ -1177,6 +1224,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) err_zxdh_init: zxdh_intr_release(eth_dev); + zxdh_np_uninit(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index e44d7ff501..28728b0c68 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -18,10 +18,21 @@ static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_REG_T g_dpp_reg_info[4]; #define ZXDH_SDT_MGR_PTR_GET()(&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) +#define ZXDH_COMM_MASK_BIT(_bitnum_)\ + (0x1U << (_bitnum_)) + +#define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ + ((_inttype_)(((_bitqnt_) < 32))) + +#define ZXDH_REG_DATA_MAX (128) + #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ do {\ if (NULL == (point)) {\ @@ -338,3 +349,462 @@ zxdh_np_host_init(uint32_t dev_id, return 0; } + +static ZXDH_RISCV_DTB_MGR * +zxdh_np_riscv_dtb_queue_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_riscv_dtb_queue_mgr[dev_id]; +} + +static uint32_t +zxdh_np_riscv_dtb_mgr_queue_info_delete(uint32_t dev_id, uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR 
*p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + p_riscv_dtb_mgr->queue_alloc_count--; + p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag = 0; + p_riscv_dtb_mgr->queue_user_info[queue_id].queue_id = 0xFF; + p_riscv_dtb_mgr->queue_user_info[queue_id].vport = 0; + memset(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, 0, ZXDH_PORT_NAME_MAX); + + return 0; +} + +static uint32_t +zxdh_np_dev_get_dev_type(uint32_t dev_id) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[de
[PATCH v8 05/15] net/zxdh: rx/tx queue setup and intr enable
rx/tx queue setup and intr enable implementations. Signed-off-by: Junlong Wang --- drivers/net/zxdh/zxdh_ethdev.c | 4 + drivers/net/zxdh/zxdh_queue.c | 149 + drivers/net/zxdh/zxdh_queue.h | 33 3 files changed, 186 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 717a1d2b0b..521d7ed433 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -933,6 +933,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, + .rx_queue_setup = zxdh_dev_rx_queue_setup, + .tx_queue_setup = zxdh_dev_tx_queue_setup, + .rx_queue_intr_enable= zxdh_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index b4ef90ea36..af21f046ad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -12,6 +12,11 @@ #include "zxdh_common.h" #include "zxdh_msg.h" +#define ZXDH_MBUF_MIN_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_MBUF_SIZE_4K 4096 +#define ZXDH_RX_FREE_THRESH 32 +#define ZXDH_TX_FREE_THRESH 32 + struct rte_mbuf * zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { @@ -125,3 +130,147 @@ zxdh_free_queues(struct rte_eth_dev *dev) return 0; } + +static int +zxdh_check_mempool(struct rte_mempool *mp, uint16_t offset, uint16_t min_length) +{ + uint16_t data_room_size; + + if (mp == NULL) + return -EINVAL; + data_room_size = rte_pktmbuf_data_room_size(mp); + if (data_room_size < offset + min_length) { + PMD_RX_LOG(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", + mp->name, data_room_size, + offset + min_length, offset, min_length); + return -EINVAL; + } + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_RQ_QUEUE_IDX; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + int32_t ret = 0; + + if (rx_conf->rx_deferred_start) { + PMD_RX_LOG(ERR, "Rx deferred start is not supported"); + return -EINVAL; + } + uint16_t rx_free_thresh = rx_conf->rx_free_thresh; + + if (rx_free_thresh == 0) + rx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_RX_FREE_THRESH); + + /* rx_free_thresh must be multiples of four. */ + if (rx_free_thresh & 0x3) { + PMD_RX_LOG(ERR, "(rx_free_thresh=%u port=%u queue=%u)", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + /* rx_free_thresh must be less than the number of RX entries */ + if (rx_free_thresh >= vq->vq_nentries) { + PMD_RX_LOG(ERR, "RX entries (%u). 
(rx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries, rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + + rxvq->queue_id = vtpci_logic_qidx; + + int mbuf_min_size = ZXDH_MBUF_MIN_SIZE; + + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + mbuf_min_size = ZXDH_MBUF_SIZE_4K; + + ret = zxdh_check_mempool(mp, RTE_PKTMBUF_HEADROOM, mbuf_min_size); + if (ret != 0) { + PMD_RX_LOG(ERR, + "rxq setup but mpool size too small(<%d) failed", mbuf_min_size); + return -EINVAL; + } + rxvq->mpool = mp; + if (queue_idx < dev->data->nb_rx_queues) + dev->data->rx_queues[queue_idx] = rxvq; + + return 0; +} + +int32_t +zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_TQ_QUEUE_IDX; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[vtpc
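The zxdh_check_mempool() test above implies a matching requirement on the application's Rx mempool. A minimal sketch of sizing that pool accordingly, under the assumption that 4096 bytes mirrors the driver's LRO minimum shown above; the pool name and element counts are arbitrary.

#include <rte_ether.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Pool name and element counts are arbitrary; 4096 mirrors the driver's
 * LRO minimum shown above. */
static struct rte_mempool *
create_rx_pool(int socket_id, int lro_enabled)
{
	uint16_t data_room = RTE_PKTMBUF_HEADROOM +
			(lro_enabled ? 4096 : RTE_ETHER_MAX_LEN);

	return rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
				       data_room, socket_id);
}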
[PATCH v8 00/15] net/zxdh: updated net zxdh driver
V8:
 - using __rte_packed_begin/__rte_packed_end replace __rte_packed.

V7:
 - resolved warning '-Waddress-of-packed-member' in function 'zxdh_dev_rss_reta_update'.

V6:
 - Remove unnecessary __rte_packed in the virtqueue structure and others.
 - Remove Some blank before or after log message, and remove some end with period in log message.

V5:
 - Simplify the notify_data part in the zxdh_notify_queue function.
 - Replace rte_zmalloc with rte_calloc in the rss_reta_update function.
 - Remove unnecessary check in mtu_set function.

V4:
 - resolved ci compile issues.

V3:
 - use rte_zmalloc and rte_calloc to avoid memset.
 - remove unnecessary initialization, which first usage will set.
 - adjust some function which is always return 0, changed to void and skip the ASSERTION later.
 - resolved some WARNING:MACRO_ARG_UNUSED issues.
 - resolved some other issues.

V2:
 - resolve code style and github-robot build issue.

V1:
 - updated net zxdh driver provided insert/delete/get table code funcs.
   provided link/mac/vlan/promiscuous/rss/mtu ops.

Junlong Wang (15):
  net/zxdh: zxdh np init implementation
  net/zxdh: zxdh np uninit implementation
  net/zxdh: port tables init implementations
  net/zxdh: port tables uninit implementations
  net/zxdh: rx/tx queue setup and intr enable
  net/zxdh: dev start/stop ops implementations
  net/zxdh: provided dev simple tx implementations
  net/zxdh: provided dev simple rx implementations
  net/zxdh: link info update, set link up/down
  net/zxdh: mac set/add/remove ops implementations
  net/zxdh: promisc/allmulti ops implementations
  net/zxdh: vlan filter/ offload ops implementations
  net/zxdh: rss hash config/update, reta update/get
  net/zxdh: basic stats ops implementations
  net/zxdh: mtu update ops implementations

 doc/guides/nics/features/zxdh.ini  | 18 +
 doc/guides/nics/zxdh.rst           | 17 +
 drivers/net/zxdh/meson.build       | 4 +
 drivers/net/zxdh/zxdh_common.c     | 28 +-
 drivers/net/zxdh/zxdh_common.h     | 1 +
 drivers/net/zxdh/zxdh_ethdev.c     | 602 +++-
 drivers/net/zxdh/zxdh_ethdev.h     | 48 +-
 drivers/net/zxdh/zxdh_ethdev_ops.c | 1573 +
 drivers/net/zxdh/zxdh_ethdev_ops.h | 80 ++
 drivers/net/zxdh/zxdh_msg.c        | 205 ++-
 drivers/net/zxdh/zxdh_msg.h        | 232
 drivers/net/zxdh/zxdh_np.c         | 2060
 drivers/net/zxdh/zxdh_np.h         | 579
 drivers/net/zxdh/zxdh_pci.c        | 27 +-
 drivers/net/zxdh/zxdh_pci.h        | 9 +-
 drivers/net/zxdh/zxdh_queue.c      | 242 +++-
 drivers/net/zxdh/zxdh_queue.h      | 189 ++-
 drivers/net/zxdh/zxdh_rxtx.c       | 804 +++
 drivers/net/zxdh/zxdh_rxtx.h       | 23 +-
 drivers/net/zxdh/zxdh_tables.c     | 794 +++
 drivers/net/zxdh/zxdh_tables.h     | 231
 21 files changed, 7671 insertions(+), 95 deletions(-)
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h
 create mode 100644 drivers/net/zxdh/zxdh_np.c
 create mode 100644 drivers/net/zxdh/zxdh_np.h
 create mode 100644 drivers/net/zxdh/zxdh_rxtx.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.h

-- 
2.27.0
[PATCH v8 12/15] net/zxdh: vlan filter/ offload ops implementations
provided vlan filter, vlan offload ops. Signed-off-by: Junlong Wang --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/zxdh_ethdev.c | 40 +- drivers/net/zxdh/zxdh_ethdev_ops.c | 223 + drivers/net/zxdh/zxdh_ethdev_ops.h | 2 + drivers/net/zxdh/zxdh_msg.h| 22 +++ drivers/net/zxdh/zxdh_rxtx.c | 18 +++ drivers/net/zxdh/zxdh_tables.c | 99 + drivers/net/zxdh/zxdh_tables.h | 10 +- 9 files changed, 417 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index e9b237e102..6fb006c2da 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -16,3 +16,6 @@ Unicast MAC filter = Y Multicast MAC filter = Y Promiscuous mode = Y Allmulticast mode= Y +VLAN filter = Y +VLAN offload = Y +QinQ offload = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 0399df1302..3a7585d123 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -28,6 +28,9 @@ Features of the ZXDH PMD are: - Multicast MAC filter - Promiscuous mode - Multicast mode +- VLAN filter and VLAN offload +- VLAN stripping and inserting +- QINQ stripping and inserting Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1dd6624e30..468f2165cb 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -759,6 +759,34 @@ zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) return 0; } +static int +zxdh_vlan_offload_configure(struct rte_eth_dev *dev) +{ + int ret; + int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | RTE_ETH_QINQ_STRIP_MASK; + + ret = zxdh_dev_vlan_offload_set(dev, mask); + if (ret) { + PMD_DRV_LOG(ERR, "vlan offload set error"); + return -1; + } + + return 0; +} + +static int +zxdh_dev_conf_offload(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_vlan_offload_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + return ret; + } + + return 0; +} static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) @@ -816,7 +844,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) - return 0; + goto end; PMD_DRV_LOG(DEBUG, "queue changed need reset "); /* Reset the device although not necessary at startup */ @@ -848,6 +876,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) zxdh_pci_reinit_complete(hw); +end: + zxdh_dev_conf_offload(dev); return ret; } @@ -1088,6 +1118,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .promiscuous_disable = zxdh_dev_promiscuous_disable, .allmulticast_enable = zxdh_dev_allmulticast_enable, .allmulticast_disable= zxdh_dev_allmulticast_disable, + .vlan_filter_set = zxdh_dev_vlan_filter_set, + .vlan_offload_set= zxdh_dev_vlan_offload_set, }; static int32_t @@ -1346,6 +1378,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) return ret; } + ret = zxdh_vlan_filter_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " vlan filter table init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index ad3d10258c..c4a1521723 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(c) 2024 ZTE Corporation */ +#include + #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_msg.h" @@ -9,6 +11,8 @@ #include "zxdh_tables.h" #include "zxdh_logs.h" +#define ZXDH_VLAN_FILTER_GROUPS 64 + static 
int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -523,3 +527,222 @@ int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) hw->allmulti_status = 0; return ret; } + +int +zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t idx = 0; + uint16_t bit_idx = 0; + uint8_t msg_type = 0; + int ret = 0; + + vlan_id &= RTE_VLAN_ID_MASK; + if (vlan_id == 0 || vlan_id == RTE_ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "vlan id (%d) is reserved", vlan_id); + return -EINVAL; + } + + if (dev->data->dev_started == 0) { + PMD_DRV_LOG(ERR, "vlan_filter dev not start"); +
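From an application, the new ops are exercised through the standard ethdev VLAN calls; a minimal sketch in which the port and VLAN ID values are arbitrary examples, and the zxdh mapping follows the dev_ops entries added above.

#include <rte_ethdev.h>

/* Port 0 and VLAN 100 are arbitrary example values. */
static int
enable_vlan_100(uint16_t port_id)
{
	int mask;
	int ret;

	/* Lands in zxdh_dev_vlan_filter_set() through the dev_ops above. */
	ret = rte_eth_dev_vlan_filter(port_id, 100, 1);
	if (ret != 0)
		return ret;

	/* Turn on stripping on top of whatever is already configured;
	 * lands in zxdh_dev_vlan_offload_set(). */
	mask = rte_eth_dev_get_vlan_offload(port_id);
	if (mask < 0)
		return mask;
	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}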
[PATCH v8 13/15] net/zxdh: rss hash config/update, reta update/get
provided rss hash config/update, reta update/get ops. Signed-off-by: Junlong Wang --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 52 drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 407 + drivers/net/zxdh/zxdh_ethdev_ops.h | 26 ++ drivers/net/zxdh/zxdh_msg.h| 22 ++ drivers/net/zxdh/zxdh_tables.c | 82 ++ drivers/net/zxdh/zxdh_tables.h | 7 + 9 files changed, 603 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 6fb006c2da..415ca547d0 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -19,3 +19,6 @@ Allmulticast mode= Y VLAN filter = Y VLAN offload = Y QinQ offload = Y +RSS hash = Y +RSS reta update = Y +Inner RSS= Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3a7585d123..3cc6a1d348 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -31,6 +31,7 @@ Features of the ZXDH PMD are: - VLAN filter and VLAN offload - VLAN stripping and inserting - QINQ stripping and inserting +- Receive Side Scaling (RSS) Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 468f2165cb..e504b239c6 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -61,6 +61,9 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; + dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -785,9 +788,48 @@ zxdh_dev_conf_offload(struct rte_eth_dev *dev) return ret; } + ret = zxdh_rss_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "rss configure failed"); + return ret; + } + return 0; } +static int +zxdh_rss_qid_config(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff; + + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed", hw->vfid); + return ret; + } + } else { + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_BASE_QID_FLAG; + attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); + return ret; + } + } + return ret; +} + static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) { @@ -874,6 +916,12 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return -1; } + ret = zxdh_rss_qid_config(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to configure base qid!"); + return -1; + } + zxdh_pci_reinit_complete(hw); end: @@ -1120,6 +1168,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .allmulticast_disable= zxdh_dev_allmulticast_disable, .vlan_filter_set = zxdh_dev_vlan_filter_set, .vlan_offload_set= zxdh_dev_vlan_offload_set, + .reta_update = zxdh_dev_rss_reta_update, + .reta_query 
= zxdh_dev_rss_reta_query, + .rss_hash_update = zxdh_rss_hash_update, + .rss_hash_conf_get = zxdh_rss_hash_conf_get, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 3cdac5de73..2934fa264a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -82,6 +82,7 @@ struct zxdh_hw { uint16_t queue_num; uint16_t mc_num; uint16_t uc_num; + uint16_t *rss_reta; uint8_t *isr; uint8_t wea
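An application would populate the 256-entry redirection table advertised above through rte_eth_dev_rss_reta_update(); a minimal round-robin sketch using only the standard ethdev API, with the queue distribution chosen arbitrarily.

#include <stdint.h>

#include <rte_ethdev.h>

/* Distribute the table over nb_rx_queues in round-robin order. */
static int
spread_reta(uint16_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[256 / RTE_ETH_RETA_GROUP_SIZE] = {0};
	struct rte_eth_dev_info info;
	uint16_t i;
	int ret;

	if (nb_rx_queues == 0)
		return -1;
	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0 || info.reta_size == 0 || info.reta_size > 256)
		return -1;

	for (i = 0; i < info.reta_size; i++) {
		uint16_t group = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t slot = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[group].mask |= UINT64_C(1) << slot;
		reta_conf[group].reta[slot] = i % nb_rx_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, info.reta_size);
}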
[PATCH v8 09/15] net/zxdh: link info update, set link up/down
provided link info update, set link up /down, and link intr. Signed-off-by: Junlong Wang --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 21 drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 166 drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ drivers/net/zxdh/zxdh_msg.c| 60 ++ drivers/net/zxdh/zxdh_msg.h| 40 +++ drivers/net/zxdh/zxdh_np.c | 172 - drivers/net/zxdh/zxdh_np.h | 20 drivers/net/zxdh/zxdh_tables.c | 15 +++ drivers/net/zxdh/zxdh_tables.h | 6 +- 13 files changed, 514 insertions(+), 8 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index bb44e93fad..7da3aaced1 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -10,3 +10,5 @@ ARMv8= Y SR-IOV = Y Multiprocess aware = Y Scattered Rx = Y +Link status = Y +Link status event= Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index f42db9c1f1..fdbc3b3923 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: - Multiple queues for TX and RX - SR-IOV VF - Scattered and gather for TX and RX +- Link Auto-negotiation +- Link state information +- Set Link down or up Driver compilation and testing diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 20b2cf484a..48f8f5e1ee 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -22,4 +22,5 @@ sources = files( 'zxdh_np.c', 'zxdh_tables.c', 'zxdh_rxtx.c', +'zxdh_ethdev_ops.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index bc4d2a937b..4fe5d8c23b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -16,6 +16,7 @@ #include "zxdh_np.h" #include "zxdh_tables.h" #include "zxdh_rxtx.h" +#include "zxdh_ethdev_ops.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -105,9 +106,16 @@ static void zxdh_devconf_intr_handler(void *param) { struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + + uint8_t isr = zxdh_pci_isr(hw); if (zxdh_intr_unmask(dev) < 0) PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + if (zxdh_dev_link_update(dev, 0) == 0) + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + } } @@ -914,6 +922,13 @@ zxdh_dev_stop(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "intr disable failed"); return ret; } + + ret = zxdh_dev_set_link_down(dev); + if (ret) { + PMD_DRV_LOG(ERR, "set port %s link down failed!", dev->device->name); + return ret; + } + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1012,6 +1027,9 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } + + zxdh_dev_set_link_up(dev); + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1031,6 +1049,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .tx_queue_setup = zxdh_dev_tx_queue_setup, .rx_queue_intr_enable= zxdh_dev_rx_queue_intr_enable, .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, + .link_update = zxdh_dev_link_update, + .dev_set_link_up = 
zxdh_dev_set_link_up, + .dev_set_link_down = zxdh_dev_set_link_down, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index b1f398b28e..c0b719062c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -72,6 +72,7 @@ struct zxdh_hw { uint64_t guest_features; uint32_t max_queue_pairs; uint32_t speed; + uint32_t speed_mode; uint32_t notify_off_multiplier; uint16_t *notify_base; uint16_t pcie_id; @@ -93,6 +94,7 @@ struct zxdh_hw { uint8_t panel_id; uint8_t has_tx_offload; uint8_t has_rx_offload; + uint8_t admin_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zx
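On the application side, the LSC interrupt wired up here is typically consumed as follows. A minimal sketch using the standard ethdev API; it assumes dev_conf.intr_conf.lsc was set to 1 when the port was configured.

#include <stdio.h>

#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	struct rte_eth_link link;

	if (type == RTE_ETH_EVENT_INTR_LSC &&
	    rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u link %s, %u Mbps\n", port_id,
		       link.link_status ? "up" : "down", link.link_speed);
	return 0;
}

static void
watch_link(uint16_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      lsc_event_cb, NULL);
}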
[PATCH v8 10/15] net/zxdh: mac set/add/remove ops implementations
provided mac set/add/remove ops. Signed-off-by: Junlong Wang --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 24 +++ drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 33 - drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 231 + drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h| 12 ++ drivers/net/zxdh/zxdh_np.h | 5 + drivers/net/zxdh/zxdh_tables.c | 197 drivers/net/zxdh/zxdh_tables.h | 36 + 12 files changed, 548 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7da3aaced1..dc09fe3453 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -12,3 +12,5 @@ Multiprocess aware = Y Scattered Rx = Y Link status = Y Link status event= Y +Unicast MAC filter = Y +Multicast MAC filter = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index fdbc3b3923..e0b0776aca 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -24,6 +24,8 @@ Features of the ZXDH PMD are: - Link Auto-negotiation - Link state information - Set Link down or up +- Unicast MAC filter +- Multicast MAC filter Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 72c0ed65cc..f70c615d2f 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -256,6 +256,30 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) return ret; } +static int +zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *hash_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); + + return ret; +} + uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) { diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index 72c29e1522..826f1fb95d 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -22,6 +22,7 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx); uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); void zxdh_release_lock(struct zxdh_hw *hw); diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 4fe5d8c23b..3da51cda14 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -992,6 +992,23 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_mac_config(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_set_mac_table(hw->vport.vport, + ð_dev->data->mac_addrs[0], hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport); + return ret; + } + } + return ret; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -1030,6 +1047,10 @@ zxdh_dev_start(struct rte_eth_dev *dev) 
zxdh_dev_set_link_up(dev); + ret = zxdh_mac_config(hw->eth_dev); + if (ret) + PMD_DRV_LOG(ERR, " mac config failed"); + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1052,6 +1073,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .link_update = zxdh_dev_link_update, .dev_set_link_up = zxdh_dev_set_link_up, .dev_set_link_down = zxdh_dev_set_link_down, + .mac_addr_add= zxdh_dev_mac_addr_add, + .mac_addr_remove = zxdh_dev_mac_addr_remove, + .mac_addr_set= zxdh_dev_mac_addr_set, }; static int32_t @@ -1093,15 +1117,20 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) PMD_DRV_LOG(ERR, "Failed to get phyport"); return -1; } -
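A minimal application-side sketch of exercising the new MAC ops through the standard ethdev API; the address is a locally administered example, not taken from the patch, and the zxdh mapping follows the dev_ops entries added above.

#include <rte_ethdev.h>
#include <rte_ether.h>

/* The address below is a locally administered example. */
static int
add_and_remove_secondary_mac(uint16_t port_id)
{
	struct rte_ether_addr mac;
	int ret;

	ret = rte_ether_unformat_addr("02:00:00:00:00:01", &mac);
	if (ret != 0)
		return ret;

	/* Reaches zxdh_dev_mac_addr_add() via the dev_ops table above. */
	ret = rte_eth_dev_mac_addr_add(port_id, &mac, 0);
	if (ret != 0)
		return ret;

	/* ...and zxdh_dev_mac_addr_remove() on the way back out. */
	return rte_eth_dev_mac_addr_remove(port_id, &mac);
}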
[PATCH v8 07/15] net/zxdh: provided dev simple tx implementations
provided dev simple tx implementations. Signed-off-by: Junlong Wang --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 22 ++ drivers/net/zxdh/zxdh_queue.h | 26 ++- drivers/net/zxdh/zxdh_rxtx.c | 396 + drivers/net/zxdh/zxdh_rxtx.h | 4 + 5 files changed, 448 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_rxtx.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 5b3af87c5b..20b2cf484a 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -21,4 +21,5 @@ sources = files( 'zxdh_queue.c', 'zxdh_np.c', 'zxdh_tables.c', +'zxdh_rxtx.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 6e603b967e..aef77e86a0 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -15,6 +15,7 @@ #include "zxdh_queue.h" #include "zxdh_np.h" #include "zxdh_tables.h" +#include "zxdh_rxtx.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -956,6 +957,25 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!zxdh_pci_packed_queue(hw)) { + PMD_DRV_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + return -1; + } + eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + + return 0; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -971,6 +991,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) if (ret < 0) return ret; } + + zxdh_set_rxtx_funcs(dev); ret = zxdh_intr_enable(dev); if (ret) { PMD_DRV_LOG(ERR, "interrupt enable failed"); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 698062ad62..daabb3530c 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -21,8 +21,15 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_TQ_QUEUE_IDX 1 #define ZXDH_MAX_TX_INDIRECT 8 +/* This marks a buffer as continuing via the next field. */ +#define ZXDH_VRING_DESC_F_NEXT 1 + /* This marks a buffer as write-only (otherwise read-only). */ -#define ZXDH_VRING_DESC_F_WRITE 2 +#define ZXDH_VRING_DESC_F_WRITE2 + +/* This means the buffer contains a list of buffer descriptors. */ +#define ZXDH_VRING_DESC_F_INDIRECT 4 + /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) #define ZXDH_VRING_PACKED_DESC_F_USED(1 << (15)) @@ -35,11 +42,17 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 #define ZXDH_RING_EVENT_FLAGS_DESC0x2 +#define ZXDH_RING_F_INDIRECT_DESC 28 + #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 #define ZXDH_RQ_QUEUE_IDX 0 #define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_TYPE_HDR_SIZEsizeof(struct zxdh_type_hdr) +#define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) +#define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) /* * ring descriptors: 16 bytes. 
@@ -355,6 +368,17 @@ static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); } +static inline int32_t +zxdh_queue_kick_prepare_packed(struct zxdh_virtqueue *vq) +{ + uint16_t flags = 0; + + zxdh_mb(vq->hw->weak_barriers); + flags = vq->vq_packed.ring.device->desc_event_flags; + + return (flags != ZXDH_RING_EVENT_FLAGS_DISABLE); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c new file mode 100644 index 00..10034a0e98 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include +#include + +#include + +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_queue.h" + +#define ZXDH_PKT_FORM_CPU 0x20/* 1-cpu 0-np */ +#define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ +#define ZXDH_NO_IPID_UPDATE
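zxdh_set_rxtx_funcs() installs zxdh_xmit_pkts_prepare and zxdh_xmit_pkts_packed as the port's tx_pkt_prepare/tx_pkt_burst callbacks, so they are invoked through the usual ethdev burst calls. A minimal application-side sketch of that path; the retry/free policy for unsent packets is an illustrative assumption, not something the patch prescribes:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
example_send_burst(uint16_t port_id, uint16_t queue_id,
                   struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint16_t nb_prep, nb_sent, i;

        /* Dispatches to tx_pkt_prepare, i.e. zxdh_xmit_pkts_prepare(). */
        nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);

        /* Dispatches to tx_pkt_burst, i.e. zxdh_xmit_pkts_packed(). */
        nb_sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);

        /* Packets the ring did not accept must be retried or freed;
         * freeing them keeps the sketch simple.
         */
        for (i = nb_sent; i < nb_pkts; i++)
                rte_pktmbuf_free(pkts[i]);

        return nb_sent;
}

The ZXDH_VRING_DESC_F_* and ZXDH_VRING_PACKED_DESC_F_AVAIL/USED values added to zxdh_queue.h match the standard virtio packed-ring descriptor flags, which is consistent with zxdh_set_rxtx_funcs() refusing to start a port whose queues were not negotiated as packed.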
[PATCH v8 08/15] net/zxdh: provided dev simple rx implementations
provided dev simple rx implementations. Signed-off-by: Junlong Wang --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c| 1 + drivers/net/zxdh/zxdh_rxtx.c | 313 ++ drivers/net/zxdh/zxdh_rxtx.h | 2 + 5 files changed, 318 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7b72be5f25..bb44e93fad 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,3 +9,4 @@ x86-64 = Y ARMv8= Y SR-IOV = Y Multiprocess aware = Y +Scattered Rx = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index eb970a888f..f42db9c1f1 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -20,6 +20,7 @@ Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. - Multiple queues for TX and RX - SR-IOV VF +- Scattered and gather for TX and RX Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index aef77e86a0..bc4d2a937b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -972,6 +972,7 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + eth_dev->rx_pkt_burst = &zxdh_recv_pkts_packed; return 0; } diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 10034a0e98..06290d48bb 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -31,6 +31,93 @@ #define ZXDH_TX_MAX_SEGS 31 #define ZXDH_RX_MAX_SEGS 31 +uint32_t zxdh_outer_l2_type[16] = { + 0, + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER_TIMESYNC, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L2_ETHER_LLDP, + RTE_PTYPE_L2_ETHER_NSH, + RTE_PTYPE_L2_ETHER_VLAN, + RTE_PTYPE_L2_ETHER_QINQ, + RTE_PTYPE_L2_ETHER_PPPOE, + RTE_PTYPE_L2_ETHER_FCOE, + RTE_PTYPE_L2_ETHER_MPLS, +}; + +uint32_t zxdh_outer_l3_type[16] = { + 0, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_outer_l4_type[16] = { + 0, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_IGMP, +}; + +uint32_t zxdh_tunnel_type[16] = { + 0, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_TUNNEL_GRE, + RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_NVGRE, + RTE_PTYPE_TUNNEL_GENEVE, + RTE_PTYPE_TUNNEL_GRENAT, + RTE_PTYPE_TUNNEL_GTPC, + RTE_PTYPE_TUNNEL_GTPU, + RTE_PTYPE_TUNNEL_ESP, + RTE_PTYPE_TUNNEL_L2TP, + RTE_PTYPE_TUNNEL_VXLAN_GPE, + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, +}; + +uint32_t zxdh_inner_l2_type[16] = { + 0, + RTE_PTYPE_INNER_L2_ETHER, + 0, + 0, + 0, + 0, + RTE_PTYPE_INNER_L2_ETHER_VLAN, + RTE_PTYPE_INNER_L2_ETHER_QINQ, + 0, + 0, + 0, +}; + +uint32_t zxdh_inner_l3_type[16] = { + 0, + RTE_PTYPE_INNER_L3_IPV4, + RTE_PTYPE_INNER_L3_IPV4_EXT, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_inner_l4_type[16] = { + 0, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_INNER_L4_FRAG, + RTE_PTYPE_INNER_L4_SCTP, + RTE_PTYPE_INNER_L4_ICMP, + 0, + 0, +}; + static void zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) { @@ -394,3 +481,229 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t } return nb_tx; } + +static uint16_t 
zxdh_dequeue_burst_rx_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **rx_pkts, + uint32_t *len, + uint16_t num) +{ + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct rte_mbuf *cookie = NULL; + uint16_t i, used_idx; + uint16_t id; + + for (i = 0; i < num; i++) { + used_idx = vq->vq_used_cons_idx; + /** +* desc_is_used has a load-acquire or rte_io_rmb inside +* and wait for used desc in virtqueue. +*/ + if (!zxdh_desc_used(&desc[used_idx], vq)) + return i; + len[i] = desc[used_idx].len; + id
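The zxdh_*_type arrays above map 4-bit type codes reported by hardware onto complete RTE_PTYPE_* constants, so the layers can simply be OR-ed into mbuf->packet_type. A minimal sketch of that composition, using local copies of two of the tables (values taken from the patch); the l3_idx/l4_idx parameters stand in for whatever fields the real RX descriptor or UL net header carries, which this excerpt does not show:

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* Local copies of two of the driver's lookup tables; unused slots stay 0,
 * i.e. RTE_PTYPE_UNKNOWN.
 */
static const uint32_t example_outer_l3_type[16] = {
        0,
        RTE_PTYPE_L3_IPV4,
        RTE_PTYPE_L3_IPV4_EXT,
        RTE_PTYPE_L3_IPV6,
        RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
        RTE_PTYPE_L3_IPV6_EXT,
        RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
};

static const uint32_t example_outer_l4_type[16] = {
        0,
        RTE_PTYPE_L4_TCP,
        RTE_PTYPE_L4_UDP,
        RTE_PTYPE_L4_FRAG,
        RTE_PTYPE_L4_SCTP,
        RTE_PTYPE_L4_ICMP,
        RTE_PTYPE_L4_NONFRAG,
        RTE_PTYPE_L4_IGMP,
};

/* Hypothetical helper: fold the hardware type codes into the mbuf. */
static void
example_set_packet_type(struct rte_mbuf *m, uint8_t l3_idx, uint8_t l4_idx)
{
        m->packet_type = RTE_PTYPE_L2_ETHER |
                         example_outer_l3_type[l3_idx & 0xF] |
                         example_outer_l4_type[l4_idx & 0xF];
}

Masking the indices to four bits keeps out-of-range codes inside the 16-entry tables, where the zero-initialized slots resolve to "unknown" rather than a bogus packet type.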
[PATCH v8 11/15] net/zxdh: promisc/allmulti ops implementations
provided promiscuous/allmulticast ops. Signed-off-by: Junlong Wang --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 21 ++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 128 + drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h| 10 ++ drivers/net/zxdh/zxdh_tables.c | 223 + drivers/net/zxdh/zxdh_tables.h | 22 +++ 9 files changed, 413 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index dc09fe3453..e9b237e102 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -14,3 +14,5 @@ Link status = Y Link status event= Y Unicast MAC filter = Y Multicast MAC filter = Y +Promiscuous mode = Y +Allmulticast mode= Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index e0b0776aca..0399df1302 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -26,6 +26,8 @@ Features of the ZXDH PMD are: - Set Link down or up - Unicast MAC filter - Multicast MAC filter +- Promiscuous mode +- Multicast mode Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 3da51cda14..1dd6624e30 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -902,8 +902,16 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) int ret; ret = zxdh_port_attr_uninit(dev); - if (ret) + if (ret) { PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + return ret; + } + + ret = zxdh_promisc_table_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "uninit promisc_table failed"); + return ret; + } return ret; } @@ -1076,6 +1084,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .mac_addr_add= zxdh_dev_mac_addr_add, .mac_addr_remove = zxdh_dev_mac_addr_remove, .mac_addr_set= zxdh_dev_mac_addr_set, + .promiscuous_enable = zxdh_dev_promiscuous_enable, + .promiscuous_disable = zxdh_dev_promiscuous_disable, + .allmulticast_enable = zxdh_dev_allmulticast_enable, + .allmulticast_disable= zxdh_dev_allmulticast_disable, }; static int32_t @@ -1327,6 +1339,13 @@ zxdh_tables_init(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, " panel table init failed"); return ret; } + + ret = zxdh_promisc_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "promisc_table_init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 5b95cb1c2a..3cdac5de73 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -98,6 +98,8 @@ struct zxdh_hw { uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; + uint8_t promisc_status; + uint8_t allmulti_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 35e37483e3..ad3d10258c 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -395,3 +395,131 @@ void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t ind } memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); } + +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + int16_t ret = 0; + + if (hw->promisc_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, true); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + + } else { + struct 
zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = true; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; +
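Although the excerpt is cut short here, the promiscuous/allmulticast callbacks this patch registers are reached through the standard ethdev calls. A minimal application-side sketch, assuming a hypothetical port id 0:

#include <rte_ethdev.h>

#define ZXDH_EXAMPLE_PORT 0   /* hypothetical port id */

static int
example_enable_receive_all(void)
{
        int ret;

        /* Dispatches to .promiscuous_enable (zxdh_dev_promiscuous_enable):
         * a PF programs the unicast/multicast tables directly, while a VF
         * builds a ZXDH_PORT_PROMISC_SET message and sends it to its PF.
         */
        ret = rte_eth_promiscuous_enable(ZXDH_EXAMPLE_PORT);
        if (ret != 0)
                return ret;

        /* Dispatches to .allmulticast_enable (zxdh_dev_allmulticast_enable). */
        return rte_eth_allmulticast_enable(ZXDH_EXAMPLE_PORT);
}

The hw->promisc_status and hw->allmulti_status fields added to struct zxdh_hw let the driver skip redundant table updates and keep the multicast filter consistent when promiscuous mode is toggled while allmulticast is already active.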