Re: release schedule change proposal
> Opinions?

Atomic Rules has been releasing our Arkville product in lockstep with DPDK for the past 19 quarters. Our FPGA solution has the added burden of testing with asynchronous releases of FPGA vendor CAD tools. Although we have gotten used to the quarterly cadence, for the reasons given by Thomas and others, Atomic Rules supports the move to a three-release-per-year schedule.

Shepard Siegel, CTO and Founder
atomicrules.com
[PATCH] doc: update ark guide
Include introduced FX2 PCIe ID and description.

Signed-off-by: Shepard Siegel
---
 doc/guides/nics/ark.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
index ba00f14e80..39cd75064d 100644
--- a/doc/guides/nics/ark.rst
+++ b/doc/guides/nics/ark.rst
@@ -52,6 +52,10 @@ board. While specific capabilities such as number of physical
 hardware queue-pairs are negotiated; the driver is designed to
 remain constant over a broad and extendable feature set.
 
+* FPGA Vendors Supported: AMD/Xilinx and Intel
+* Number of RX/TX Queue-Pairs: up to 128
+* PCIe Endpoint Technology: Gen3, Gen4, Gen5
+
 Intentionally, Arkville by itself DOES NOT provide common NIC
 capabilities such as offload or receive-side scaling (RSS).
 These capabilities would be viewed as a gate-level "tax" on
@@ -302,6 +306,20 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
 * ``1d6c:101c`` - AR-ARK-SRIOV-VF [Arkville Virtual Function]
 * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
 * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
+* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex]
+
+Arkville RTL Core Configurations
+--------------------------------
+
+Arkville's RTL core may be configured by the user for three different
+datapath widths to balance throughput against FPGA logic area. The ARK PMD
+has introspection on the RTL core configuration and acts accordingly.
+All three configurations present identical RTL user-facing AXI stream
+interfaces for both AMD/Xilinx and Intel FPGAs.
+
+* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
+* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
+* ARK-FX2 - 1024-bit 128B datapath (PCIe Gen5x16 Only)
 
 DPDK and Arkville Firmware Versioning
 -------------------------------------
@@ -334,6 +352,8 @@ Supported Features
 ------------------
 
 * Dynamic ARK PMD extensions
+* Dynamic per-queue MBUF (re)sizing up to 32KB
+* SR-IOV, VF-based queue-segregation
 * Multiple receive and transmit queues
 * Jumbo frames up to 9K
 * Hardware Statistics
-- 
2.25.1
Re: [PATCH] doc: update ark guide
Hi Ferruh,

The new FX2 device *is* supported by the ark driver as of DPDK 23.03.
These changes bring the ark doc up to date for the upcoming release.

-Shep

On Fri, Feb 10, 2023 at 3:34 PM Ferruh Yigit wrote:
> On 2/10/2023 7:38 PM, Shepard Siegel wrote:
> > Include introduced FX2 PCIe ID and description.
> >
> > Signed-off-by: Shepard Siegel
> > ---
> >  doc/guides/nics/ark.rst | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> [...]
> > @@ -302,6 +306,20 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
> > * ``1d6c:101c`` - AR-ARK-SRIOV-VF [Arkville Virtual Function]
> > * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
> > * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
> > +* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex]
>
> Hi Shepard, Ed,
>
> This device is not supported by ark driver, am I missing something?
>
> [...]
Re: [PATCH] doc: update ark guide
Yes, exactly, the 1d6c:1022 device is the new FX2 device that will be
supported in the 22.03 release.

-Shep

On Fri, Feb 10, 2023 at 4:11 PM Ferruh Yigit wrote:
> On 2/10/2023 9:03 PM, Shepard Siegel wrote:
> > Hi Ferruh,
> > The new FX2 device *is* supported by the ark driver as of DPDK 23.03.
> > These changes bring the ark doc up to date for the upcoming release.
>
> I don't know what exactly 'FX2' device is, but I was referring to
> '1d6c:1022' device id.
> Following is the device table from latest code [1], is '1d6c:1022'
> supported?
>
> static const struct rte_pci_id pci_id_ark_map[] = {
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x100d)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x100e)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x100f)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1010)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1017)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1018)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1019)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101a)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101b)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101c)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101e)},
> 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101f)},
> 	{.vendor_id = 0, /* sentinel */ },
> };
>
> [1]
> https://elixir.bootlin.com/dpdk/v22.11.1/source/drivers/net/ark/ark_ethdev.c#L89
>
> [...]
Re: [PATCH] doc: update ark guide
I meant to say "supported in the 23.03 release." just now in my email.
Sorry for the typo!

On Fri, Feb 10, 2023 at 4:43 PM Shepard Siegel <shepard.sie...@atomicrules.com> wrote:
> Yes, exactly, the 1d6c:1022 device is the new FX2 device that will be
> supported in the 22.03 release.
>
> -Shep
>
> [...]
[PATCH v2] doc: update ark guide
Add ark PCIe device 1d6c:1022 FX2 to pci_id_ark_map.
Include introduced FX2 PCIe ID and description.

Signed-off-by: Shepard Siegel
---
 doc/guides/nics/ark.rst      | 20 ++++++++++++++++++++
 drivers/net/ark/ark_ethdev.c |  1 +
 2 files changed, 21 insertions(+)

diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
index ba00f14e80..39cd75064d 100644
--- a/doc/guides/nics/ark.rst
+++ b/doc/guides/nics/ark.rst
@@ -52,6 +52,10 @@ board. While specific capabilities such as number of physical
 hardware queue-pairs are negotiated; the driver is designed to
 remain constant over a broad and extendable feature set.
 
+* FPGA Vendors Supported: AMD/Xilinx and Intel
+* Number of RX/TX Queue-Pairs: up to 128
+* PCIe Endpoint Technology: Gen3, Gen4, Gen5
+
 Intentionally, Arkville by itself DOES NOT provide common NIC
 capabilities such as offload or receive-side scaling (RSS).
 These capabilities would be viewed as a gate-level "tax" on
@@ -302,6 +306,20 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
 * ``1d6c:101c`` - AR-ARK-SRIOV-VF [Arkville Virtual Function]
 * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
 * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
+* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex]
+
+Arkville RTL Core Configurations
+--------------------------------
+
+Arkville's RTL core may be configured by the user for three different
+datapath widths to balance throughput against FPGA logic area. The ARK PMD
+has introspection on the RTL core configuration and acts accordingly.
+All three configurations present identical RTL user-facing AXI stream
+interfaces for both AMD/Xilinx and Intel FPGAs.
+
+* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
+* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
+* ARK-FX2 - 1024-bit 128B datapath (PCIe Gen5x16 Only)
 
 DPDK and Arkville Firmware Versioning
 -------------------------------------
@@ -334,6 +352,8 @@ Supported Features
 ------------------
 
 * Dynamic ARK PMD extensions
+* Dynamic per-queue MBUF (re)sizing up to 32KB
+* SR-IOV, VF-based queue-segregation
 * Multiple receive and transmit queues
 * Jumbo frames up to 9K
 * Hardware Statistics
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index c654a229f7..b2995427c8 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -99,6 +99,7 @@ static const struct rte_pci_id pci_id_ark_map[] = {
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101c)},
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101e)},
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101f)},
+	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1022)},
 	{.vendor_id = 0, /* sentinel */ },
 };
-- 
2.25.1
Re: [PATCH v2] doc: update ark guide
Thank you Ferruh, I understand now. We will abandon this patch and do
exactly as you suggest with two separate patches. Thanks for your
guidance. Look for the two patches ETA tomorrow.

-Shep

On Fri, Feb 10, 2023 at 5:56 PM Ferruh Yigit wrote:
> On 2/10/2023 10:35 PM, Shepard Siegel wrote:
> > Add ark PCIe device 1d6c:1022 FX2 to pci_id_ark_map.
> > Include introduced FX2 PCIe ID and description.
> >
>
> Thanks for v2, but this is no more just a documentation patch,
> can you please split the patch into two,
> first one adds new device support and update documentation related to
> new device, with a 'net/ark:' title,
> second one updates the document for extended information which is most
> of this patch
>
> [...]
[PATCH v3 1/2] net/ark: introduce Arkville FX2 PCIe device
This commit introduces the Arkville FX2 PCIe device.
Arkville FX2 provides support for PCIe Gen5x16 endpoints.

FX2 is backwards-compatible with the net/ark PMD so that no changes
other than the whitelist addition in this patch are needed for the
basic operation of this device. Additional details are in a doc: patch
with this submission.

Signed-off-by: Shepard Siegel
---
 drivers/net/ark/ark_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index c654a229f7..b2995427c8 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -99,6 +99,7 @@ static const struct rte_pci_id pci_id_ark_map[] = {
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101c)},
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101e)},
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101f)},
+	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1022)},
 	{.vendor_id = 0, /* sentinel */ },
 };
-- 
2.25.1
[PATCH v3 2/2] doc: add Arkville FX2 PCIe device description
Update net/ark guide for clarity.
Include list of FX0, FX1 and FX2 PCIe devices.

Signed-off-by: Shepard Siegel
---
 doc/guides/nics/ark.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
index ba00f14e80..39cd75064d 100644
--- a/doc/guides/nics/ark.rst
+++ b/doc/guides/nics/ark.rst
@@ -52,6 +52,10 @@ board. While specific capabilities such as number of physical
 hardware queue-pairs are negotiated; the driver is designed to
 remain constant over a broad and extendable feature set.
 
+* FPGA Vendors Supported: AMD/Xilinx and Intel
+* Number of RX/TX Queue-Pairs: up to 128
+* PCIe Endpoint Technology: Gen3, Gen4, Gen5
+
 Intentionally, Arkville by itself DOES NOT provide common NIC
 capabilities such as offload or receive-side scaling (RSS).
 These capabilities would be viewed as a gate-level "tax" on
@@ -302,6 +306,20 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
 * ``1d6c:101c`` - AR-ARK-SRIOV-VF [Arkville Virtual Function]
 * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
 * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
+* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex]
+
+Arkville RTL Core Configurations
+--------------------------------
+
+Arkville's RTL core may be configured by the user for three different
+datapath widths to balance throughput against FPGA logic area. The ARK PMD
+has introspection on the RTL core configuration and acts accordingly.
+All three configurations present identical RTL user-facing AXI stream
+interfaces for both AMD/Xilinx and Intel FPGAs.
+
+* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
+* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
+* ARK-FX2 - 1024-bit 128B datapath (PCIe Gen5x16 Only)
 
 DPDK and Arkville Firmware Versioning
 -------------------------------------
@@ -334,6 +352,8 @@ Supported Features
 ------------------
 
 * Dynamic ARK PMD extensions
+* Dynamic per-queue MBUF (re)sizing up to 32KB
+* SR-IOV, VF-based queue-segregation
 * Multiple receive and transmit queues
 * Jumbo frames up to 9K
 * Hardware Statistics
-- 
2.25.1
Re: [PATCH v3 2/2] doc: add Arkville FX2 PCIe device description
Hi Ferruh,

I don't think the submission system will allow your request of a single
patch with both a code change and a document change, since they are in
different directories.

I must say I'm confused by your request. In the current v3, which is not
your preference, we split up the new device into 0001 and the doc on the
new device into 0002. Your current ask of "Can you please add new device
related documentation to patch that adds new device?" was what we did
back in v2. Guessing you don't like the mention of any other devices in
the FX2 addition? It feels crazy to need three different patches to
essentially add a one-liner to our PCIe device allowlist! Am I making
this way more complicated than it needs to be?

We will produce the sequence of three patches in the morning. Would a
cover letter help explain what we are doing with this patch? If there
are elements that are not clear, let me know and we will work to
clarify them.

Thank you for your efforts in this and countless other DPDK pushes!

best, Shep

On Fri, Feb 10, 2023 at 9:13 PM Ferruh Yigit wrote:
> On 2/11/2023 1:26 AM, Shepard Siegel wrote:
> > Update net/ark guide for clarity.
> > Include list of FX0, FX1 and FX2 PCIe devices.
> [...]
> > * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
> > +* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex]
>
> Can you please add new device related documentation to patch that adds
> new device?
> At least above line is related to it, but if any other updates in this
> document is related to this new device that part also can go to other
> patch.
>
> Or if it make more sense you can first introduce the document update
> patch for old devices, and later add new device and new device related
> documentation, like:
>
> First patch adds:
> +* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
> +* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
>
> Second patch adds new device and appends following:
> +* ARK-FX2 - 1024-bit 128B datapath (PCIe Gen5x16 Only)
>
> [...]
[PATCH] doc: clarify the existing net/ark guide
Add detail for the existing Arkville configurations FX0 and FX1.
Corrected minor errors of omission.

Signed-off-by: Shepard Siegel
---
 doc/guides/nics/ark.rst | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
index ba00f14e80..edaa02dc96 100644
--- a/doc/guides/nics/ark.rst
+++ b/doc/guides/nics/ark.rst
@@ -52,6 +52,10 @@ board. While specific capabilities such as number of physical
 hardware queue-pairs are negotiated; the driver is designed to
 remain constant over a broad and extendable feature set.
 
+* FPGA Vendors Supported: AMD/Xilinx and Intel
+* Number of RX/TX Queue-Pairs: up to 128
+* PCIe Endpoint Technology: Gen3, Gen4, Gen5
+
 Intentionally, Arkville by itself DOES NOT provide common NIC
 capabilities such as offload or receive-side scaling (RSS).
 These capabilities would be viewed as a gate-level "tax" on
@@ -303,6 +307,18 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
 * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
 * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
 
+Arkville RTL Core Configurations
+--------------------------------
+
+Arkville's RTL core may be configured by the user with different
+datapath widths to balance throughput against FPGA logic area. The ARK PMD
+has introspection on the RTL core configuration and acts accordingly.
+All Arkville configurations present identical RTL user-facing AXI stream
+interfaces for both AMD/Xilinx and Intel FPGAs.
+
+* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
+* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
+
 DPDK and Arkville Firmware Versioning
 -------------------------------------
 
@@ -334,6 +350,8 @@ Supported Features
 ------------------
 
 * Dynamic ARK PMD extensions
+* Dynamic per-queue MBUF (re)sizing up to 32KB
+* SR-IOV, VF-based queue-segregation
 * Multiple receive and transmit queues
 * Jumbo frames up to 9K
 * Hardware Statistics
-- 
2.25.1
[PATCH 1/2] net/ark: add new device to PCIe allowlist
This patch adds the Arkville FX2 device to the PCIe allowlist.

Signed-off-by: Shepard Siegel
---
 drivers/net/ark/ark_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index c654a229f7..b2995427c8 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -99,6 +99,7 @@ static const struct rte_pci_id pci_id_ark_map[] = {
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101c)},
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101e)},
 	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101f)},
+	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1022)},
 	{.vendor_id = 0, /* sentinel */ },
 };
-- 
2.25.1
[PATCH 2/2] doc: update ark guide to include new PCIe device
Add descriptions of the new FX2 device to the existing devices.

Signed-off-by: Shepard Siegel
---
 doc/guides/nics/ark.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
index edaa02dc96..bbd7419d99 100644
--- a/doc/guides/nics/ark.rst
+++ b/doc/guides/nics/ark.rst
@@ -306,6 +306,7 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
 * ``1d6c:101c`` - AR-ARK-SRIOV-VF [Arkville Virtual Function]
 * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
 * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
+* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex]
 
 Arkville RTL Core Configurations
 --------------------------------
@@ -318,6 +319,7 @@ interfaces for both AMD/Xilinx and Intel FPGAs.
 
 * ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
 * ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
+* ARK-FX2 - 1024-bit 128B datapath (PCIe Gen5x16 Only)
 
 DPDK and Arkville Firmware Versioning
 -------------------------------------
-- 
2.25.1
[PATCH 1/4] doc: clarify the existing net/ark guide
Add detail for the existing Arkville configurations FX0 and FX1.
Corrected minor errors of omission.

Signed-off-by: Shepard Siegel
---
 doc/guides/nics/ark.rst | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
index ba00f14e80..edaa02dc96 100644
--- a/doc/guides/nics/ark.rst
+++ b/doc/guides/nics/ark.rst
@@ -52,6 +52,10 @@ board. While specific capabilities such as number of physical
 hardware queue-pairs are negotiated; the driver is designed to
 remain constant over a broad and extendable feature set.
 
+* FPGA Vendors Supported: AMD/Xilinx and Intel
+* Number of RX/TX Queue-Pairs: up to 128
+* PCIe Endpoint Technology: Gen3, Gen4, Gen5
+
 Intentionally, Arkville by itself DOES NOT provide common NIC
 capabilities such as offload or receive-side scaling (RSS).
 These capabilities would be viewed as a gate-level "tax" on
@@ -303,6 +307,18 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
 * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
 * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
 
+Arkville RTL Core Configurations
+--------------------------------
+
+Arkville's RTL core may be configured by the user with different
+datapath widths to balance throughput against FPGA logic area. The ARK PMD
+has introspection on the RTL core configuration and acts accordingly.
+All Arkville configurations present identical RTL user-facing AXI stream
+interfaces for both AMD/Xilinx and Intel FPGAs.
+
+* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
+* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
+
 DPDK and Arkville Firmware Versioning
 -------------------------------------
 
@@ -334,6 +350,8 @@ Supported Features
 ------------------
 
 * Dynamic ARK PMD extensions
+* Dynamic per-queue MBUF (re)sizing up to 32KB
+* SR-IOV, VF-based queue-segregation
 * Multiple receive and transmit queues
 * Jumbo frames up to 9K
 * Hardware Statistics
-- 
2.25.1
[PATCH 2/4] net/ark: add new device to PCIe allowlist
This patch adds the Arkville FX2 device to the PCIe allowlist. Signed-off-by: Shepard Siegel --- drivers/net/ark/ark_ethdev.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c index c654a229f7..b2995427c8 100644 --- a/drivers/net/ark/ark_ethdev.c +++ b/drivers/net/ark/ark_ethdev.c @@ -99,6 +99,7 @@ static const struct rte_pci_id pci_id_ark_map[] = { {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101c)}, {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101e)}, {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101f)}, + {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1022)}, {.vendor_id = 0, /* sentinel */ }, }; -- 2.25.1
[PATCH 3/4] doc: update ark guide to include new PCIe device
Add descriptions of the new FX2 device to the existing devices. Signed-off-by: Shepard Siegel --- doc/guides/nics/ark.rst | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst index edaa02dc96..bbd7419d99 100644 --- a/doc/guides/nics/ark.rst +++ b/doc/guides/nics/ark.rst @@ -306,6 +306,7 @@ ARK PMD supports the following Arkville RTL PCIe instances including: * ``1d6c:101c`` - AR-ARK-SRIOV-VF [Arkville Virtual Function] * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile] * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device] +* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex] Arkville RTL Core Configurations - @@ -318,6 +319,7 @@ interfaces for both AMD/Xilinx and Intel FPGAs. * ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4) * ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5) +* ARK-FX2 - 1024-bit 128B datapath (PCIe Gen5x16 Only) DPDK and Arkville Firmware Versioning - -- 2.25.1
[PATCH 4/4] doc: update Release Notes
New Arkville FX2 device for PCIe Gen5x16 supported with net/ark Signed-off-by: Shepard Siegel --- doc/guides/rel_notes/release_23_03.rst | 4 1 file changed, 4 insertions(+) diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst index 7527c6d57f..7941b22c8a 100644 --- a/doc/guides/rel_notes/release_23_03.rst +++ b/doc/guides/rel_notes/release_23_03.rst @@ -72,6 +72,10 @@ New Features * Added multi-process support. +* **Updated Atomic Rules ark driver.** + + * Added Arkville FX2 device supporting PCIe Gen5x16 + * **Updated Corigine nfp driver.** * Added support for meter options. -- 2.25.1
Re: [PATCH 1/2] net/ark: add new device to PCIe allowlist
Thank you Ferruh, We have rolled all changes, including your suggestion on the Release Note, into the four distinct patches in this patchset which collectively bring the ark PMD up to date for this new capability. best, Shep On Mon, Feb 13, 2023 at 8:46 AM Ferruh Yigit wrote: > On 2/11/2023 2:14 PM, Shepard Siegel wrote: > > This patch adds the Arkville FX2 device to the PCIe allowlist. > > > > Signed-off-by: Shepard Siegel > > Hi Shepard, Ed, > > The patchset looks good, I can squash them into single patch while merging. > > I don't know how major this new device is, if it is an important one you > may want to introduce it in the release notes too, what do think? > > And if you will send a new version, can you please put previous doc > patch [1] and this to the same patchset since this one depends previous > doc patch to apply cleanly. > > Thanks. > > > [1] > doc: clarify the existing net/ark guide > 
Re: [PATCH 1/4] doc: clarify the existing net/ark guide
Hi Ferruh, Yes, there will probably be next versions in the future. If you don't mind making the marker length adjustment, that would be great. Regarding MBUF (re)sizing - Arkville supports the ability to configure or reconfigure the MBUF size used on a per-queue basis. This feature is useful when there are conflicting motivations for using smaller/larger MBUF sizes. For example, a user can switch a queue to use a size best for that queue's application workload. -Shep On Mon, Feb 13, 2023 at 10:46 AM Ferruh Yigit wrote: > On 2/13/2023 2:58 PM, Shepard Siegel wrote: > > Add detail for the existing Arkville configurations FX0 and FX1. > > Corrected minor errors of omission. > > > > Signed-off-by: Shepard Siegel > > --- > > doc/guides/nics/ark.rst | 18 ++ > > 1 file changed, 18 insertions(+) > > > > diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst > > index ba00f14e80..edaa02dc96 100644 > > --- a/doc/guides/nics/ark.rst > > +++ b/doc/guides/nics/ark.rst > > @@ -52,6 +52,10 @@ board. While specific capabilities such as number of > physical > > hardware queue-pairs are negotiated; the driver is designed to > > remain constant over a broad and extendable feature set. > > > > +* FPGA Vendors Supported: AMD/Xilinx and Intel > > +* Number of RX/TX Queue-Pairs: up to 128 > > +* PCIe Endpoint Technology: Gen3, Gen4, Gen5 > > + > > Intentionally, Arkville by itself DOES NOT provide common NIC > > capabilities such as offload or receive-side scaling (RSS). > > These capabilities would be viewed as a gate-level "tax" on > > @@ -303,6 +307,18 @@ ARK PMD supports the following Arkville RTL PCIe > instances including: > > * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex > R-Tile] > > * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device] > > > > +Arkville RTL Core Configurations > > +- > > + > > The title marker length (-) should be same as title length, can you > please fix if there will be next version, if not I can fix while merging. 
> > > > +Arkville's RTL core may be configured by the user with different > > +datapath widths to balance throughput against FPGA logic area. The ARK > PMD > > +has introspection on the RTL core configuration and acts accordingly. > > +All Arkville configurations present identical RTL user-facing AXI stream > > +interfaces for both AMD/Xilinx and Intel FPGAs. > > + > > +* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4) > > +* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5) > > + > > DPDK and Arkville Firmware Versioning > > - > > > > @@ -334,6 +350,8 @@ Supported Features > > -- > > > > * Dynamic ARK PMD extensions > > +* Dynamic per-queue MBUF (re)sizing up to 32KB > > What is this feature? What does it mean to size/resize mbuf dynamically? > > > +* SR-IOV, VF-based queue-segregation > > * Multiple receive and transmit queues > > * Jumbo frames up to 9K > > * Hardware Statistics > >
Re: [PATCH 2/4] net/ark: add new device to PCIe allowlist
Hi Ferruh, If you don't mind squashing 2/4 3/4 4/4 together, that would be great. Thank you for the correction on the allowlist; it should be "supported PCIe ids list" instead. best, Shep On Mon, Feb 13, 2023 at 10:51 AM Ferruh Yigit wrote: > On 2/13/2023 2:58 PM, Shepard Siegel wrote: > > This patch adds the Arkville FX2 device to the PCIe allowlist. > > > > Signed-off-by: Shepard Siegel > > --- > > drivers/net/ark/ark_ethdev.c | 1 + > > 1 file changed, 1 insertion(+) > > > > diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c > > index c654a229f7..b2995427c8 100644 > > --- a/drivers/net/ark/ark_ethdev.c > > +++ b/drivers/net/ark/ark_ethdev.c > > @@ -99,6 +99,7 @@ static const struct rte_pci_id pci_id_ark_map[] = { > > {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101c)}, > > {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101e)}, > > {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101f)}, > > + {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1022)}, > > {.vendor_id = 0, /* sentinel */ }, > > }; > > > > It is possible to squash 2/4, 3/4 & 4/4 into single patch, I can do it > while merging. > > In the patch title it mentions 'PCIe allowlist', this is not a kind of > allow list, this is list of PCI ids supported by driver, so I can update > patch title as following while merging if there won't be a new version: > "net/ark: support Arkville FX2 device" >
Re: [PATCH 1/4] doc: clarify the existing net/ark guide
Yes, what is different here is that the MBUF size is communicated from the PMD to the hardware which *changes its behavior* of data motion to optimize throughput and latency as a function of that setting. And it does that per-queue. And can be done at runtime (that's the dynamic part). ... To the best of our knowledge, other PMDs use this as a host-software setting only - and their DPDK-naive DMA engines just use the same fixed settings (respecting PCIe, of course). Hope that helps. If it is contentious in any way, we are fine with removing that line. We added it as users have remarked it is a unique capability they think we should point out. -Shep On Mon, Feb 13, 2023 at 12:23 PM Ferruh Yigit wrote: > On 2/13/2023 5:09 PM, Shepard Siegel wrote: > > Hi Ferruh, > > > > Yes, there will probably be next versions in the future. If you don't > > mind making the marker length adjustment, that would be great. > > > > Regarding MBUF (re)sizing - Arkville supports the ability to configure > > or reconfigure the MBUF size used on a per-queue basis. This feature is > > useful when the are conflicting motivations for using smaller/larger > > MBUF sizes. For example, user can switch a queue to use a size best for > > that queue's application workload. > > > > Application can allocate multiple mempool with different sizes and set > these to specific queues, this is same for all PMDs, is ark PMD doing > something specific here? Or are you referring to something else? > > And what does 'dynamic' emphasis means here? > > > > -Shep > > > > > > On Mon, Feb 13, 2023 at 10:46 AM Ferruh Yigit > <mailto:ferruh.yi...@amd.com>> wrote: > > > > On 2/13/2023 2:58 PM, Shepard Siegel wrote: > > > Add detail for the existing Arkville configurations FX0 and FX1. > > > Corrected minor errors of omission. 
> > > > > > Signed-off-by: Shepard Siegel > <mailto:shepard.sie...@atomicrules.com>> > > > --- > > > doc/guides/nics/ark.rst | 18 ++ > > > 1 file changed, 18 insertions(+) > > > > > > diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst > > > index ba00f14e80..edaa02dc96 100644 > > > --- a/doc/guides/nics/ark.rst > > > +++ b/doc/guides/nics/ark.rst > > > @@ -52,6 +52,10 @@ board. While specific capabilities such as > > number of physical > > > hardware queue-pairs are negotiated; the driver is designed to > > > remain constant over a broad and extendable feature set. > > > > > > +* FPGA Vendors Supported: AMD/Xilinx and Intel > > > +* Number of RX/TX Queue-Pairs: up to 128 > > > +* PCIe Endpoint Technology: Gen3, Gen4, Gen5 > > > + > > > Intentionally, Arkville by itself DOES NOT provide common NIC > > > capabilities such as offload or receive-side scaling (RSS). > > > These capabilities would be viewed as a gate-level "tax" on > > > @@ -303,6 +307,18 @@ ARK PMD supports the following Arkville RTL > > PCIe instances including: > > > * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for > > Agilex R-Tile] > > > * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device] > > > > > > +Arkville RTL Core Configurations > > > +- > > > + > > > > The title marker length (-) should be same as title length, can you > > please fix if there will be next version, if not I can fix while > > merging. > > > > > > > +Arkville's RTL core may be configured by the user with different > > > +datapath widths to balance throughput against FPGA logic area. > > The ARK PMD > > > +has introspection on the RTL core configuration and acts > accordingly. > > > +All Arkville configurations present identical RTL user-facing AXI > > stream > > > +interfaces for both AMD/Xilinx and Intel FPGAs. 
> > > + > > > +* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4) > > > +* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5) > > > + > > > DPDK and Arkville Firmware Versioning > > > - > > > > > > @@ -334,6 +350,8 @@ Supported Features > > > -- > > > > > > * Dynamic ARK PMD extensions > > > +* Dynamic per-queue MBUF (re)sizing up to 32KB > > > > What is this feature? What does it mean to size/resize mbuf > dynamically? > > > > > +* SR-IOV, VF-based queue-segregation > > > * Multiple receive and transmit queues > > > * Jumbo frames up to 9K > > > * Hardware Statistics > > > >
[PATCH 2/2] net/ark: add new ark PCIe device
This patch adds the Arkville FX2 to known PCIe device list Update documentation and release notes. Signed-off-by: Shepard Siegel --- doc/guides/nics/ark.rst| 3 ++- doc/guides/rel_notes/release_23_03.rst | 4 drivers/net/ark/ark_ethdev.c | 1 + 3 files changed, 7 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst index 14fedbf252..81a37c161f 100644 --- a/doc/guides/nics/ark.rst +++ b/doc/guides/nics/ark.rst @@ -306,6 +306,7 @@ ARK PMD supports the following Arkville RTL PCIe instances including: * ``1d6c:101c`` - AR-ARK-SRIOV-VF [Arkville Virtual Function] * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile] * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device] +* ``1d6c:1022`` - AR-ARKA-FX2 [Arkville 128B DPDK Data Mover for Agilex] Arkville RTL Core Configurations @@ -318,6 +319,7 @@ interfaces for both AMD/Xilinx and Intel FPGAs. * ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4) * ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5) +* ARK-FX2 - 1024-bit 128B datapath (PCIe Gen5x16 Only) DPDK and Arkville Firmware Versioning - @@ -350,7 +352,6 @@ Supported Features -- * Dynamic ARK PMD extensions -* Dynamic per-queue MBUF (re)sizing up to 32KB * SR-IOV, VF-based queue-segregation * Multiple receive and transmit queues * Jumbo frames up to 9K diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst index 7527c6d57f..7941b22c8a 100644 --- a/doc/guides/rel_notes/release_23_03.rst +++ b/doc/guides/rel_notes/release_23_03.rst @@ -72,6 +72,10 @@ New Features * Added multi-process support. +* **Updated Atomic Rules ark driver.** + + * Added Arkville FX2 device supporting PCIe Gen5x16 + * **Updated Corigine nfp driver.** * Added support for meter options. 
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c index c654a229f7..b2995427c8 100644 --- a/drivers/net/ark/ark_ethdev.c +++ b/drivers/net/ark/ark_ethdev.c @@ -99,6 +99,7 @@ static const struct rte_pci_id pci_id_ark_map[] = { {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101c)}, {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101e)}, {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x101f)}, + {RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1022)}, {.vendor_id = 0, /* sentinel */ }, }; -- 2.25.1
[PATCH 1/2] doc: clarify the existing net/ark guide
Add detail for the existing Arkville configurations FX0 and FX1. Corrected minor errors of omission. Signed-off-by: Shepard Siegel --- doc/guides/nics/ark.rst | 20 +++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst index ba00f14e80..14fedbf252 100644 --- a/doc/guides/nics/ark.rst +++ b/doc/guides/nics/ark.rst @@ -52,6 +52,10 @@ board. While specific capabilities such as number of physical hardware queue-pairs are negotiated; the driver is designed to remain constant over a broad and extendable feature set. +* FPGA Vendors Supported: AMD/Xilinx and Intel +* Number of RX/TX Queue-Pairs: up to 128 +* PCIe Endpoint Technology: Gen3, Gen4, Gen5 + Intentionally, Arkville by itself DOES NOT provide common NIC capabilities such as offload or receive-side scaling (RSS). These capabilities would be viewed as a gate-level "tax" on @@ -73,7 +77,7 @@ between the user's FPGA application and the existing DPDK features via the PMD. Device Parameters +- The ARK PMD supports device parameters that are used for packet routing and for internal packet generation and packet checking. This @@ -303,6 +307,18 @@ ARK PMD supports the following Arkville RTL PCIe instances including: * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile] * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device] +Arkville RTL Core Configurations + + +Arkville's RTL core may be configured by the user with different +datapath widths to balance throughput against FPGA logic area. The ARK PMD +has introspection on the RTL core configuration and acts accordingly. +All Arkville configurations present identical RTL user-facing AXI stream +interfaces for both AMD/Xilinx and Intel FPGAs. 
+ +* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4) +* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5) + DPDK and Arkville Firmware Versioning - @@ -334,6 +350,8 @@ Supported Features -- * Dynamic ARK PMD extensions +* Dynamic per-queue MBUF (re)sizing up to 32KB +* SR-IOV, VF-based queue-segregation * Multiple receive and transmit queues * Jumbo frames up to 9K * Hardware Statistics -- 2.25.1
[dpdk-dev] Intent to upstream Atomic Rules net/ark "Arkville" in DPDK 17.05
Atomic Rules would like to include our Arkville DPDK PMD net/ark in the DPDK 17.05 release. We have been watching the recent process of Solarflare’s net/sfc upstreaming and we decided it would be too aggressive for us to get in on 17.02. Rather than be the last in queue for 17.02, we would prefer to be one of the first in the queue for 17.05. This post is our statement of that intent. Arkville is a product from Atomic Rules which is a combination of hardware and software. In the DPDK community, the easy way to describe Arkville is that it is a line-rate agnostic FPGA-based NIC that does not include any specific MAC. Arkville is unique in that the design process worked backward from the DPDK API/ABI to allow us to design RTL DPDK-aware data movers. Arkville’s customers are the small and brave set of users that demand an FPGA exist between their MAC ports and their host. A link to a slide deck and product preview shown last month at SC16 is at the end of this post. Although we’ve done substantial testing, we are just now setting up a proper DTS environment. Our first order of business is to add two 10 GbE ports and make Arkville look like a Fortville X710-DA2. This is strange for us because we started out with four 100 GbE ports, and not much else to talk to! We are eager to work with merchant 100 GbE ASIC NICs to help bring DTS into the 100 GbE realm. But 100 GbE aside, as soon as we see our net/ark PMD playing nice in DTS with a Fortville, and the 17.05 aperture opens, we will commence the patch submission process. Thanks to all who have helped us get this far so soon. Anyone needing additional details that aren’t DPDK community wide, please contact me directly. 
Shep for AR Team Shepard Siegel, CTO atomicrules.com Links: https://dl.dropboxusercontent.com/u/5548901/share/AtomicRules_Arkville_SC16.pdf <https://dl.dropboxusercontent.com/u/5548901/share/AtomicRules_Arkville_SC16.pdf> https://forums.xilinx.com/t5/Xcell-Daily-Blog/BittWare-s-UltraScale-XUPP3R-board-and-Atomic-Rules-IP-run-Intel/ba-p/734110
[dpdk-dev] DPDK v16.07 Build Failure with Unsupported OS CentOS 6.8
A client of ours shipped us an IBM x3650 with CentOS 6.8 installed. Our software team is off for the holidays. We use DPDK 16.07 and have had no issues with supported OSs. We understand that CentOS 6.8 is not supported for DPDK v16.07. Still, since IBM doesn't support CentOS 7 on the x3650, I figured I'd give it a try before risking a server OS change. We do this little dance to setup DPDK on our systems... git clone git://dpdk.org/dpdk cd dpdk git branch develop git checkout develop git reset --hard v16.07 make config T=x86_64-native-linuxapp-gcc sed -ri 's,(PMD_PCAP=).*,\1y,' build/.config sed -ri 's/CONFIG_RTE_BUILD_SHARED_LIB=n/CONFIG_RTE_BUILD_SHARED_LIB=y/' build/.config sudo yum install libpcap-devel make Then the make fails some ways in as below... [labuser@bw-x3650 dpdk]$ git rev-parse HEAD 20e2b6eba13d9eb61b23ea75f09f2aa966fa6325 [labuser@bw-x3650 dpdk]$ uname -r 2.6.32-642.11.1.el6.x86_64 [labuser@bw-x3650 dpdk]$ cat /etc/redhat-release CentOS release 6.8 (Final) [labuser@bw-x3650 dpdk]$ make == Build lib == Build lib/librte_compat == Build lib/librte_eal == Build lib/librte_eal/common == Build lib/librte_eal/linuxapp == Build lib/librte_eal/linuxapp/eal LD librte_eal.so.2.1 INSTALL-LIB librte_eal.so.2.1 == Build lib/librte_eal/linuxapp/igb_uio (cat /dev/null; echo kernel//home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko;) > /home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/modules.order Building modules, stage 2. 
MODPOST 1 modules == Build lib/librte_eal/linuxapp/kni CC [M] /home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/kni/ixgbe_main.o In file included from /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h:41, from /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_type.h:31, from /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h:32, from /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h:52, from /home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/kni/ixgbe_main.c:56: /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h: In function ‘__kc_vlan_get_protocol’: /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h:2836: error: implicit declaration of function ‘vlan_tx_tag_present’ make[8]: *** [/home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/kni/ixgbe_main.o] Error 1 make[7]: *** [_module_/home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/kni] Error 2 make[6]: *** [sub-make] Error 2 make[5]: *** [rte_kni.ko] Error 2 make[4]: *** [kni] Error 2 make[3]: *** [linuxapp] Error 2 make[2]: *** [librte_eal] Error 2 make[1]: *** [lib] Error 2 make: *** [all] Error 2 Google suggested some hacks; but I'm out of my element in driver source. Is this a lost cause and I should try to get the server up to CentOS 7? Or is there a relatively simple (if untested) patch that we could try? Our objective here is to get through the DPDK and our net/ark PMD build process and see if we can run some of our most-basic packet ingress/egress ops on our Arkville IP in this system. Thanks in advance for any constructive feedback. Happy Holidays All. -Shep Shepard Siegel, CTO atomicrules.com
Re: [dpdk-dev] DPDK v16.07 Build Failure with Unsupported OS CentOS 6.8
Follow-Up: We're all set, thanks John Miller; and my apologies for incorrectly posting this to "dev" as opposed to "users" mailing list. -Shep On Wed, Dec 28, 2016 at 9:13 AM, Shepard Siegel < shepard.sie...@atomicrules.com> wrote: > A client of ours shipped us an IBM x3650 with CentOS 6.8 installed. Our > software team is off for the holidays. We use DPDK 16.07 and have had no > issues with supported OSs. We understand that CentOS 6.8 is not supported > for DPDK v16.07. Still, since IBM doesn't support CentOS 7 on the x3650, I > figured I'd give it a try before risking a server OS change. > > We do this little dance to setup DPDK on our systems... > > git clone git://dpdk.org/dpdk > cd dpdk > git branch develop > git checkout develop > git reset --hard v16.07 > make config T=x86_64-native-linuxapp-gcc > sed -ri 's,(PMD_PCAP=).*,\1y,' build/.config > sed -ri 's/CONFIG_RTE_BUILD_SHARED_LIB=n/CONFIG_RTE_BUILD_SHARED_LIB=y/' > build/.config > sudo yum install libpcap-devel > make > > Then the make fails some ways in as below... > > [labuser@bw-x3650 dpdk]$ git rev-parse HEAD > 20e2b6eba13d9eb61b23ea75f09f2aa966fa6325 > > [labuser@bw-x3650 dpdk]$ uname -r > 2.6.32-642.11.1.el6.x86_64 > > [labuser@bw-x3650 dpdk]$ cat /etc/redhat-release > CentOS release 6.8 (Final) > > [labuser@bw-x3650 dpdk]$ make > == Build lib > == Build lib/librte_compat > == Build lib/librte_eal > == Build lib/librte_eal/common > == Build lib/librte_eal/linuxapp > == Build lib/librte_eal/linuxapp/eal > LD librte_eal.so.2.1 > INSTALL-LIB librte_eal.so.2.1 > == Build lib/librte_eal/linuxapp/igb_uio > (cat /dev/null; echo kernel//home/labuser/projects/ > dpdk/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko;) > > /home/labuser/projects/dpdk/build/build/lib/librte_eal/ > linuxapp/igb_uio/modules.order > Building modules, stage 2. 
> MODPOST 1 modules > == Build lib/librte_eal/linuxapp/kni > CC [M] /home/labuser/projects/dpdk/build/build/lib/librte_eal/ > linuxapp/kni/ixgbe_main.o > In file included from /home/labuser/projects/dpdk/ > lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h:41, > from /home/labuser/projects/dpdk/ > lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_type.h:31, > from /home/labuser/projects/dpdk/ > lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h:32, > from /home/labuser/projects/dpdk/ > lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h:52, > from /home/labuser/projects/dpdk/ > build/build/lib/librte_eal/linuxapp/kni/ixgbe_main.c:56: > /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h: > In function ‘__kc_vlan_get_protocol’: > /home/labuser/projects/dpdk/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h:2836: > error: implicit declaration of function ‘vlan_tx_tag_present’ > make[8]: *** > [/home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/kni/ixgbe_main.o] > Error 1 > make[7]: *** > [_module_/home/labuser/projects/dpdk/build/build/lib/librte_eal/linuxapp/kni] > Error 2 > make[6]: *** [sub-make] Error 2 > make[5]: *** [rte_kni.ko] Error 2 > make[4]: *** [kni] Error 2 > make[3]: *** [linuxapp] Error 2 > make[2]: *** [librte_eal] Error 2 > make[1]: *** [lib] Error 2 > make: *** [all] Error 2 > > Google suggested some hacks; but I'm out of my element in driver source. > Is this a lost cause and I should try to get the server up to CentOS 7? Or > is there a relatively simple (if untested) patch that we could try? Our > objective here is to get through the DPDK and our net/ark PMD build process > and see if we can run some of our most-basic packet ingress/egress ops on > our Arkville IP in this system. > > Thanks in advance for any constructive feedback. Happy Holidays All. -Shep > > Shepard Siegel, CTO > atomicrules.com > > > > > >
Re: [dpdk-dev] [PATCH] net/ark: poll-mode driver for AtomicRules Arkville
Thanks Ferruh, Our team spent the weekend clearing the issues with last week's v1 submission, knocking down as many checkpatch issues as we could, cross-compiling to both 32-bit and ARM targets, and of course, testing on DTS and other hardware. We expect to follow Thomas' suggestion and re-send the whole patch as a v2, using -v2 and --in-reply-to options. We feel it would complicate things to pull the net/ark PMD into still-smaller pieces. Watch for the v2 patch as early as today; but certainly within a few days. We understand the integration deadline for 17.05 closes 3/30. best, -Shep for AR team On Mon, Mar 20, 2017 at 8:36 AM, Ferruh Yigit wrote: > On 3/17/2017 9:15 PM, Ed Czeck wrote: > > This is the PMD for Atomic Rules's Arkville ARK family of devices. > > See doc/guides/nics/ark.rst for detailed description. > > > > > > Signed-off-by: Shepard Siegel > > Signed-off-by: John Miller > > Signed-off-by: Ed Czeck > > > Hi Shepard, John, Ed, > > Thank you for the new PMD, it is great to see that DPDK coverage is > expanding. > > Intention to share this PMD was already communicated in previous release > [1]. I hope PMD gets reviewed enough to be in this release. > > And as a first thing, instead of one big patch, it helps a lot to get > reviewed if the patch split into smaller functional patches. There are > some good samples of it in the repo, or among the waiting PMDs.. > > Thanks, > ferruh > > > [1] > http://dpdk.org/ml/archives/dev/2016-December/051304.html > > >
Re: [dpdk-dev] [PATCH v7 1/7] net/ark: stub PMD for Atomic Rules Arkville
Hi Ferruh, Thank you for the terrific news. Ed is traveling today; we agree with you that pushing to RC2 will allow more chance for reviews with little or no other impact. We will update the release notes [1] to announce ARK PMD on Tuesday by 5PM EDT. As an aside, I want to mention that Atomic Rules has joined LF/DPDK as a Silver Member. I've been in contact with Tim and Thomas on this. Atomic Rules will be included in some of the press around next week's ONS event. We are honored to be part of this group! -Shep for the Atomic Rules team Shepard Siegel, CTO atomicrules.com On Fri, Mar 31, 2017 at 10:51 AM, Ferruh Yigit wrote: > On 3/29/2017 10:32 PM, Ed Czeck wrote: > > Enable Arkville on supported configurations > > Add overview documentation > > Minimum driver support for valid compile > > Arkville PMD is not supported on ARM or PowerPC at this time > > > > v6: > > * Address review comments > > * Unify messaging, logging and debug macros to ark_logs.h > > > > v5: > > * Address comments from Ferruh Yigit > > * Added documentation on driver args > > * Makefile fixes > > * Safe argument processing > > * vdev args to dev args > > > > v4: > > * Address issues report from review > > * Add internal comments on driver arg > > * provide a bare-bones dev init to avoid compiler warnings > > > > v3: > > * Split large patch into several smaller ones > > > > Signed-off-by: Ed Czeck > > Signed-off-by: John Miller > > Hi Ed, > > Can you please update release notes [1] to announce the ARK PMD? > > > Apart from that, overall PMD looks good to me. > > Still I suggest pushing the PMD integration to RC2 phase, to give more > chance for reviews. > > If PMD gets acked before that time, we can integrate earlier. If there > is no objection till RC2, it gets merged in RC2. > > This means less time for testing after integration, is merging on RC2 > will leave you enough time for testing? > > Thanks, > ferruh > > > [1] > doc/guides/rel_notes/release_17_05.rst > > >
Re: [dpdk-dev] [PATCH 1/2] net/ark: allow unique user data for each port in extension calls
Hi Ferruh, I was absent from the code review John Miller and Ed Czeck held earlier this week for this patch set. John and Ed are both on a long weekend this weekend. If my reply below is correct and sufficient, please ack this patch set. Otherwise, no reply is needed and please hold this open until next Monday when John will respond. The Arkville net/ark PMD has the unique challenge of supporting a plurality of MACs through singleton ingress and egress AXI interfaces in hardware. Arkville is in the DPDK business of mbuf structures and data-moving, and does not itself have a concept of line-rate, MTU, and other MAC-centric properties. The extension calls, and the unique user data for each port in the extension calls, provide users of Arkville and the net/ark PMD a DPDK-normalized way of accessing their MAC parameters, regardless of how many MACs they have instanced in their Arkville DPDK device. -Shep Shepard Siegel, CTO atomicrules.com On Fri, Jun 23, 2017 at 5:21 AM, Ferruh Yigit wrote: > On 6/22/2017 1:30 PM, John Miller wrote: > > Provide unique user data pointer in the extension calls for > > each port. > > Hi John, > > Can you please give more details about why this is done. I can see in > adapter data user_data pointer kept per port now, but why? What was > observed with previous code? If this is to fix something, what we should > expect to be fixed? > > And there is mtu_set implementation, is it related modification? Can be > separated to another patch? > > Thanks, > ferruh > > > > > Signed-off-by: John Miller > > <...> > >
[dpdk-dev] Proposal - ARM Support for Arkville PMD in DPDK 17.08
This email is notification ahead of the May 28 proposal deadline for 17.08 that Atomic Rules proposes to implement, test and support our Arkville net/ark PMD for ARM architecture in the DPDK 17.08 release. Our intent is for ARM architecture support with Arkville to stand equal with the current deep level of testing that is done with x86. Informally, we are reaching out to other DPDK developers who are currently supporting ARM so that we may share their best-practices and avoid their pitfalls. Shep for AR team Shepard Siegel, CTO atomicrules.com
[dpdk-dev] Best Practices for PMD Verification before Upstream Requests
Hi, Atomic Rules is new to the DPDK community. We attended the DPDK Summit last week and received terrific advice and encouragement. We are developing a DPDK PMD for our Arkville product which is a DPDK-aware data mover, capable of marshaling packets between FPGA/ASIC gates with AXI interfaces on one side, and the DPDK API/ABI on the other. Arkville plus a MAC looks like a line-rate-agnostic bare-bones L2 NIC. We have testpmd and our first DPDK applications running using our early-alpha Arkville PMD. This post is to ask of the DPDK community what tests, regressions, check-lists or similar verification assets we might work through before starting the process to upstream our code? We know device-specific PMDs are rather cloistered and unlikely to interfere; but still, others must have managed to find a way to fail with even an L2 baseline NIC. We don't want to needlessly repeat those mistakes. Any DPDK-specific collateral that we can use to verify and validate our codes before attempting to upstream them would be greatly appreciated. To the DPDK PMD developers, what can you share so that we are more aligned with your regressions? To the DPDK application developers, what's your top gripe we might try to avoid in our Arkville L2 baseline PMD? Thanks in advance. We won't have anyone at the Dublin DPDK Summit, but we will be at FPL2016 in two weeks. Any constructive feedback is greatly appreciated! Shepard Siegel, CTO atomicrules.com <http://atomicrules.com>
[dpdk-dev] Best Practices for PMD Verification before Upstream Requests
Thomas and DPDK devs, Almost a year into our DPDK development, we have shipped an alpha version of our "Arkville" product. We're thankful for all the support from this group. Most everyone has suggested "get your code upstream ASAP"; but our team is cut from the "if it isn't tested, it doesn't work" cloth. We now have some solid miles on our Arkville PMD driver "ark" with 16.07. Mostly testpmd and a suite of user apps; dts not so much, only because our use case is a little different. We expect almost all of our contribution would land under $dpdk/drivers/net/ark . We are looking past 16.11 to possibly jump on board when the 17.02 window opens in December. One question that came up is "Should we do a thorough port and regression against 16.11 as a precursor to upstreaming at 17.02?". Constructive feedback always welcome! -Shep Shepard Siegel, CTO atomicrules.com On Mon, Aug 22, 2016 at 9:07 AM, Thomas Monjalon wrote: > 2016-08-17 08:34, Shepard Siegel: > > Atomic Rules is new to the DPDK community. We attended the DPDK Summit > last > > week and received terrific advice and encouragement. We are developing a > > DPDK PMD for our Arkville product which is a DPDK-aware data mover, > capable > > of marshaling packets between FPGA/ASIC gates with AXI interfaces on one > > side, and the DPDK API/ABI on the other. Arkville plus a MAC looks like a > > line-rate-agnostic bare-bones L2 NIC. We have testpmd and our first DPDK > > applications running using our early-alpha Arkville PMD. > > Welcome :) > > Any release targeted for upstream support? > >