Re: Reply: [PATCH] lib/telemetry: fix telemetry conns leak in case of socket write fail
On Sat, Jan 20, 2024 at 12:18:38PM +0800, 1819846787 wrote:
>I have modified my commitlog and resubmitted the patch, but I seem to
>have forgotten to add a [v2] flag to the patch. Do I need to resubmit
>the patch again?
>
It's better if the v2 is added, but it's probably not necessary in this case. However, I see now you have resubmitted as a v2 anyway, which is fine.

Please mark older versions of the patch as "superseded" in patchwork (patches.dpdk.org site - you'll need an account created with the email address used to submit your patches. That will allow you to update the status of your own patches yourself).

Also, one other tip: when submitting a new version of a previously reviewed patch, unless there are major changes, you can keep any previously added acks from reviewers. For your v2, for example, after your Signed-off-by you could have also added the "Acked-by: Ciara Power ..." line.

Hope these tips help in the future! Thanks for the contribution.

/Bruce

>-- Original Message --
>From: "David Marchand";
>Sent: 2024/1/19 (Fri) 11:54
>To: <1819846...@qq.com>;
>Cc: "dev"; "ciara.power"; "Bruce Richardson";
>Subject: Re: [PATCH] lib/telemetry: fix telemetry conns leak in case of socket write fail
>
>Hello,
>On Fri, Jan 19, 2024 at 12:40 PM sunshaowei01 <[1]1819846...@qq.com> wrote:
>>
>> Telemetry can only create 10 conns by default, each of which is processed
>> by a thread.
>>
>> When a thread fails to write using socket, the thread will end directly
>> without reducing the total number of conns.
>>
>> This will result in the machine running for a long time, and if there are
>> 10 failures, the telemetry will be unavailable.
>>
>Thanks for the patch, do you know which commit first triggered the issue?
>This is needed so we add a Fixes: tag in the commitlog for backporting
>the fix in stable releases.
>See
>[2]https://doc.dpdk.org/guides/contributing/patches.html#patch-fix-related-issues
>
>> Signed-off-by: sunshaowei01 <[3]1819846...@qq.com>
>We need your full name in the SoB tag.
>Like, for example in my case, it would be David Marchand
><[4]david.march...@redhat.com>.
>
>--
>David Marchand
>
> References
>
>1. mailto:1819846...@qq.com
>2. https://doc.dpdk.org/guides/contributing/patches.html#patch-fix-related-issues
>3. mailto:1819846...@qq.com
>4. mailto:david.march...@redhat.com
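The leak described in the quoted patch follows a common pattern: each per-connection thread owns one slot of a fixed-size connection budget (10 by default) and must release that slot on every exit path, including the socket write failure path. A minimal sketch of the idea, using hypothetical names rather than the actual telemetry internals:

#include <stdatomic.h>
#include <unistd.h>

static atomic_int n_clients;	/* hypothetical counter of open telemetry connections */

/* Hypothetical per-connection thread body: the slot must also be released
 * on the write-failure path, otherwise ten failed writes exhaust the budget
 * and telemetry becomes unavailable. */
static void *
client_handler(void *arg)
{
	int fd = *(int *)arg;
	const char reply[] = "{\"version\":\"...\"}";

	if (write(fd, reply, sizeof(reply) - 1) < 0) {
		close(fd);
		atomic_fetch_sub(&n_clients, 1);	/* the decrement the fix adds */
		return NULL;
	}

	/* ... serve further requests, then the same cleanup ... */
	close(fd);
	atomic_fetch_sub(&n_clients, 1);
	return NULL;
}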
RE: [PATCH v4 1/1] ethdev: parsing multiple representor devargs string
Hi Chaoyong, Please see responses inline

> > return ret; > > } > > > ... > > a/drivers/net/nfp/flower/nfp_flower_representor.c > > b/drivers/net/nfp/flower/nfp_flower_representor.c > > index 5f7c1fa737..63fe37c8d7 100644 > > --- a/drivers/net/nfp/flower/nfp_flower_representor.c > > +++ b/drivers/net/nfp/flower/nfp_flower_representor.c > > @@ -792,8 +792,8 @@ nfp_flower_repr_create(struct nfp_app_fw_flower > > *app_fw_flower) > > > > /* Now parse PCI device args passed for representor info */ > > if (pci_dev->device.devargs != NULL) { > > - ret = rte_eth_devargs_parse(pci_dev->device.devargs->args, > > &eth_da); > > - if (ret != 0) { > > + ret = > > + rte_eth_devargs_parse(pci_dev->device.devargs->args, > > &eth_da, 1); > > + if (ret < 0) { > > PMD_INIT_LOG(ERR, "devarg parse failed"); > > return -EINVAL; > > } > >

> Sorry, I don't quite understand why change 'ret != 0' to 'ret < 0'? > Compare return value with 0 or NULL is the specification for our PMD, we > prefer not to change it if we don't have a good reason. > Thanks.

To support multiple representors under one PCI BDF, the "eth_devargs args" parameter passed to rte_eth_devargs_parse() is now an array which gets populated with multiple "struct rte_eth_devargs" elements, viz. the different pfvf representor devargs.

We are proposing a change to the return value of this API, i.e. a negative value means an error condition and a positive value is the number of "struct rte_eth_devargs" elements populated. So it can't be zero, else we won't know how many elements were populated in the array.

Thanks
Harman

> > > -- > > 2.18.0
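For readers following the thread, the calling convention being proposed looks roughly as follows. This is only a sketch: the three-argument prototype is the one proposed in this series, and the array bound and loop body are illustrative, not taken from the nfp patch.

#include <rte_common.h>
#include <ethdev_driver.h>	/* rte_eth_devargs_parse(), struct rte_eth_devargs */

static int
parse_representor_devargs(const char *args, struct rte_eth_dev *dev __rte_unused)
{
	struct rte_eth_devargs eth_da[8];	/* arbitrary bound for the sketch */
	int i, ret;

	/* Negative return: parse error. Positive return: number of populated
	 * rte_eth_devargs entries, one per pf/vf representor devargs block. */
	ret = rte_eth_devargs_parse(args, eth_da, RTE_DIM(eth_da));
	if (ret < 0)
		return ret;

	for (i = 0; i < ret; i++) {
		/* instantiate the representors described by eth_da[i] */
	}

	return 0;
}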
Re: [PATCH v5 1/2] drivers/net: fix buffer overflow for ptypes list
On 1/19/2024 5:10 PM, Power, Ciara wrote: > Hi Ferruh, > >> -Original Message- >> From: Ferruh Yigit >> Sent: Friday, January 19, 2024 2:59 PM >> To: Sivaramakrishnan, VenkatX ; >> Hemant Agrawal ; Sachin Saxena >> ; Zyta Szpak ; Liron Himi >> ; Chaoyong He ; >> Gagandeep Singh ; Jerin Jacob ; >> Maciej Czekaj >> Cc: dev@dpdk.org; Power, Ciara ; >> pascal.ma...@6wind.com; shreyansh.j...@nxp.com; t...@semihalf.com; >> qin...@corigine.com; jerin.ja...@caviumnetworks.com; sta...@dpdk.org >> Subject: Re: [PATCH v5 1/2] drivers/net: fix buffer overflow for ptypes list >> >> On 1/18/2024 12:07 PM, Sivaramakrishnan Venkat wrote: >>> Address Sanitizer detects a buffer overflow caused by an incorrect >>> ptypes list. Missing "RTE_PTYPE_UNKNOWN" ptype causes buffer overflow. >>> Fix the ptypes list for drivers. >>> >>> Fixes: 0849ac3b6122 ("net/tap: add packet type management") >>> Fixes: a7bdc3bd4244 ("net/dpaa: support packet type parsing") >>> Fixes: 4ccc8d770d3b ("net/mvneta: add PMD skeleton") >>> Fixes: f3f0d77db6b0 ("net/mrvl: support packet type parsing") >>> Fixes: 71e8bb65046e ("net/nfp: update supported list of packet types") >>> Fixes: 659b494d3d88 ("net/pfe: add packet types and basic statistics") >>> Fixes: 398a1be14168 ("net/thunderx: remove generic passX references") >>> Cc: pascal.ma...@6wind.com >>> Cc: shreyansh.j...@nxp.com >>> Cc: z...@semihalf.com >>> Cc: t...@semihalf.com >>> Cc: qin...@corigine.com >>> Cc: g.si...@nxp.com >>> Cc: jerin.ja...@caviumnetworks.com >>> Cc: sta...@dpdk.org >>> >>> Signed-off-by: Sivaramakrishnan Venkat >>> >>> >> >> Reviewed-by: Ferruh Yigit >> >> >> ("Cc: " shouldn't be in the commit message, but not big deal I can >> remove them while merging. >> If you want to help, in next version please put them after '---', as you are >> already doing with changelog) > > Thanks for the review on this one. > > The DPDK docs suggest adding the Fixes line + CC in the commit message body - > has the guidelines changed for this? > Can you please point me the mentioned doc, let me check it?
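For readers outside this thread: the ethdev layer and applications walk the array returned by the driver's supported-ptypes callback until they hit RTE_PTYPE_UNKNOWN, so a list missing that sentinel is read past its end, which is the overflow ASan reports. A hedged sketch of a correctly terminated callback (prototype as used by the releases this fix targets; the driver name and ptypes here are arbitrary):

#include <rte_mbuf_ptype.h>
#include <ethdev_driver.h>

static const uint32_t *
example_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
{
	static const uint32_t ptypes[] = {
		RTE_PTYPE_L2_ETHER,
		RTE_PTYPE_L3_IPV4,
		RTE_PTYPE_L3_IPV6,
		RTE_PTYPE_L4_TCP,
		RTE_PTYPE_L4_UDP,
		RTE_PTYPE_UNKNOWN	/* mandatory terminator; its absence is the bug */
	};

	return ptypes;
}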
RE: [PATCH v5 1/2] drivers/net: fix buffer overflow for ptypes list
Hi Ferruh, > -Original Message- > From: Ferruh Yigit > Sent: Monday, January 22, 2024 9:43 AM > To: Power, Ciara ; Sivaramakrishnan, VenkatX > ; Hemant Agrawal > ; Sachin Saxena ; Zyta > Szpak ; Liron Himi ; Chaoyong He > ; Gagandeep Singh ; Jerin > Jacob ; Maciej Czekaj > Cc: dev@dpdk.org; pascal.ma...@6wind.com; shreyansh.j...@nxp.com; > t...@semihalf.com; qin...@corigine.com; jerin.ja...@caviumnetworks.com; > sta...@dpdk.org > Subject: Re: [PATCH v5 1/2] drivers/net: fix buffer overflow for ptypes list > > On 1/19/2024 5:10 PM, Power, Ciara wrote: > > Hi Ferruh, > > > >> -Original Message- > >> From: Ferruh Yigit > >> Sent: Friday, January 19, 2024 2:59 PM > >> To: Sivaramakrishnan, VenkatX ; > >> Hemant Agrawal ; Sachin Saxena > >> ; Zyta Szpak ; Liron Himi > >> ; Chaoyong He ; > >> Gagandeep Singh ; Jerin Jacob ; > >> Maciej Czekaj > >> Cc: dev@dpdk.org; Power, Ciara ; > >> pascal.ma...@6wind.com; shreyansh.j...@nxp.com; t...@semihalf.com; > >> qin...@corigine.com; jerin.ja...@caviumnetworks.com; sta...@dpdk.org > >> Subject: Re: [PATCH v5 1/2] drivers/net: fix buffer overflow for > >> ptypes list > >> > >> On 1/18/2024 12:07 PM, Sivaramakrishnan Venkat wrote: > >>> Address Sanitizer detects a buffer overflow caused by an incorrect > >>> ptypes list. Missing "RTE_PTYPE_UNKNOWN" ptype causes buffer > overflow. > >>> Fix the ptypes list for drivers. > >>> > >>> Fixes: 0849ac3b6122 ("net/tap: add packet type management") > >>> Fixes: a7bdc3bd4244 ("net/dpaa: support packet type parsing") > >>> Fixes: 4ccc8d770d3b ("net/mvneta: add PMD skeleton") > >>> Fixes: f3f0d77db6b0 ("net/mrvl: support packet type parsing") > >>> Fixes: 71e8bb65046e ("net/nfp: update supported list of packet > >>> types") > >>> Fixes: 659b494d3d88 ("net/pfe: add packet types and basic > >>> statistics") > >>> Fixes: 398a1be14168 ("net/thunderx: remove generic passX > >>> references") > >>> Cc: pascal.ma...@6wind.com > >>> Cc: shreyansh.j...@nxp.com > >>> Cc: z...@semihalf.com > >>> Cc: t...@semihalf.com > >>> Cc: qin...@corigine.com > >>> Cc: g.si...@nxp.com > >>> Cc: jerin.ja...@caviumnetworks.com > >>> Cc: sta...@dpdk.org > >>> > >>> Signed-off-by: Sivaramakrishnan Venkat > >>> > >>> > >> > >> Reviewed-by: Ferruh Yigit > >> > >> > >> ("Cc: " shouldn't be in the commit message, but not big deal I > >> can remove them while merging. > >> If you want to help, in next version please put them after '---', as > >> you are already doing with changelog) > > > > Thanks for the review on this one. > > > > The DPDK docs suggest adding the Fixes line + CC in the commit message > body - has the guidelines changed for this? > > > > Can you please point me the mentioned doc, let me check it? Sure, it's in section 8.7 here: https://doc.dpdk.org/guides/contributing/patches.html Thanks, Ciara
Re: [PATCH v5 1/2] drivers/net: fix buffer overflow for ptypes list
On 1/22/2024 9:46 AM, Power, Ciara wrote: > Hi Ferruh, > >> -Original Message- >> From: Ferruh Yigit >> Sent: Monday, January 22, 2024 9:43 AM >> To: Power, Ciara ; Sivaramakrishnan, VenkatX >> ; Hemant Agrawal >> ; Sachin Saxena ; Zyta >> Szpak ; Liron Himi ; Chaoyong He >> ; Gagandeep Singh ; Jerin >> Jacob ; Maciej Czekaj >> Cc: dev@dpdk.org; pascal.ma...@6wind.com; shreyansh.j...@nxp.com; >> t...@semihalf.com; qin...@corigine.com; jerin.ja...@caviumnetworks.com; >> sta...@dpdk.org >> Subject: Re: [PATCH v5 1/2] drivers/net: fix buffer overflow for ptypes list >> >> On 1/19/2024 5:10 PM, Power, Ciara wrote: >>> Hi Ferruh, >>> -Original Message- From: Ferruh Yigit Sent: Friday, January 19, 2024 2:59 PM To: Sivaramakrishnan, VenkatX ; Hemant Agrawal ; Sachin Saxena ; Zyta Szpak ; Liron Himi ; Chaoyong He ; Gagandeep Singh ; Jerin Jacob ; Maciej Czekaj Cc: dev@dpdk.org; Power, Ciara ; pascal.ma...@6wind.com; shreyansh.j...@nxp.com; t...@semihalf.com; qin...@corigine.com; jerin.ja...@caviumnetworks.com; sta...@dpdk.org Subject: Re: [PATCH v5 1/2] drivers/net: fix buffer overflow for ptypes list On 1/18/2024 12:07 PM, Sivaramakrishnan Venkat wrote: > Address Sanitizer detects a buffer overflow caused by an incorrect > ptypes list. Missing "RTE_PTYPE_UNKNOWN" ptype causes buffer >> overflow. > Fix the ptypes list for drivers. > > Fixes: 0849ac3b6122 ("net/tap: add packet type management") > Fixes: a7bdc3bd4244 ("net/dpaa: support packet type parsing") > Fixes: 4ccc8d770d3b ("net/mvneta: add PMD skeleton") > Fixes: f3f0d77db6b0 ("net/mrvl: support packet type parsing") > Fixes: 71e8bb65046e ("net/nfp: update supported list of packet > types") > Fixes: 659b494d3d88 ("net/pfe: add packet types and basic > statistics") > Fixes: 398a1be14168 ("net/thunderx: remove generic passX > references") > Cc: pascal.ma...@6wind.com > Cc: shreyansh.j...@nxp.com > Cc: z...@semihalf.com > Cc: t...@semihalf.com > Cc: qin...@corigine.com > Cc: g.si...@nxp.com > Cc: jerin.ja...@caviumnetworks.com > Cc: sta...@dpdk.org > > Signed-off-by: Sivaramakrishnan Venkat > > Reviewed-by: Ferruh Yigit ("Cc: " shouldn't be in the commit message, but not big deal I can remove them while merging. If you want to help, in next version please put them after '---', as you are already doing with changelog) >>> >>> Thanks for the review on this one. >>> >>> The DPDK docs suggest adding the Fixes line + CC in the commit message >> body - has the guidelines changed for this? >>> >> >> Can you please point me the mentioned doc, let me check it? > > Sure, it's in section 8.7 here: > https://doc.dpdk.org/guides/contributing/patches.html > > Right, Cc the author of the fix added later to the 'git fixline' alias, this is useful because 'git send-email' picks "Cc: ..." and cc the author to the patch. But this info is redundant in the commit log, so maintainers are removing it from commit log while merging. If the "Cc: ...", is after the '---', it is still picked by 'git send-email' and removed automatically while merging, so this is better option. It is possible to update documentation but I am concerned to make adding fixline more complex for new contributors. Perhaps can add a new line as a suggestion. But briefly, if you move after '---' it helps.
RE: [PATCH v4 1/1] ethdev: parsing multiple representor devargs string
> > Hi Chaoyong, > > Please see responses inline > > > > > > > return ret; > > > } > > > > > ... > > > a/drivers/net/nfp/flower/nfp_flower_representor.c > > > b/drivers/net/nfp/flower/nfp_flower_representor.c > > > index 5f7c1fa737..63fe37c8d7 100644 > > > --- a/drivers/net/nfp/flower/nfp_flower_representor.c > > > +++ b/drivers/net/nfp/flower/nfp_flower_representor.c > > > @@ -792,8 +792,8 @@ nfp_flower_repr_create(struct > nfp_app_fw_flower > > > *app_fw_flower) > > > > > > /* Now parse PCI device args passed for representor info */ > > > if (pci_dev->device.devargs != NULL) { > > > - ret = rte_eth_devargs_parse(pci_dev->device.devargs->args, > > > &eth_da); > > > - if (ret != 0) { > > > + ret = > > > + rte_eth_devargs_parse(pci_dev->device.devargs->args, > > > &eth_da, 1); > > > + if (ret < 0) { > > > PMD_INIT_LOG(ERR, "devarg parse failed"); > > > return -EINVAL; > > > } > > > > Sorry, I don't quite understand why change 'ret != 0' to 'ret < 0'? > > Compare return value with 0 or NULL is the specification for our PMD, > > we prefer not to change it if don't have a good reason. > > Thanks. > > To support multiple representors under one PCI BDF, "eth_devargs args" > parameter passed to rte_eth_devargs_parse() is now an array which gets > populated with multiple "struct rte_eth_devargs" elements viz different pfvf > representor devargs. > > We are proposing a change to the return value of this API, I.e. negative means > error condition and positive value refers to no of "struct rte_eth_devargs" > elements populated. So it can't be zero, else we wont know how many > elements were populated in the array. > > Thanks > Harman >

Got it, thanks for making it clear. Then it looks good to me.

> > > > > -- > > > 2.18.0
Re: [PATCH 1/2] config/arm: avoid mcpu and march conflicts
On Sun, Jan 21, 2024 at 10:37 AM wrote: > > From: Pavan Nikhilesh > > The compiler options march and mtune are a subset > of mcpu and will lead to conflicts if improper march > is chosen for a given mcpu. > To avoid conflicts, force part number march when > mcpu is available and is supported by the compiler. > > Example: > march = armv9-a > mcpu = neoverse-n2 > > mcpu supported, march supported > machine_args = ['-mcpu=neoverse-n2', '-march=armv9-a'] > > mcpu supported, march not supported > machine_args = ['-mcpu=neoverse-n2'] > > mcpu not supported, march supported > machine_args = ['-march=armv9-a'] > > mcpu not supported, march not supported > machine_args = ['-march=armv8.6-a'] > > Signed-off-by: Pavan Nikhilesh > --- > config/arm/meson.build | 109 + > 1 file changed, 67 insertions(+), 42 deletions(-) > > diff --git a/config/arm/meson.build b/config/arm/meson.build > index 36f21d2259..8c8cfccca0 100644 > --- a/config/arm/meson.build > +++ b/config/arm/meson.build > @@ -127,21 +128,22 @@ implementer_cavium = { > ], > 'part_number_config': { > '0xa1': { > -'compiler_options': ['-mcpu=thunderxt88'], > +'mcpu': 'thunderxt88', > 'flags': flags_part_number_thunderx > }, > '0xa2': { > -'compiler_options': ['-mcpu=thunderxt81'], > +'mcpu': 'thunderxt81', > 'flags': flags_part_number_thunderx > }, > '0xa3': { > -'compiler_options': ['-march=armv8-a+crc', '-mcpu=thunderxt83'], > +'mcpu': 'thunderxt83', > +'compiler_options': ['-march=armv8-a+crc'], Let's unify this with the rest and specify 'march': 'armv8-a+crc' instead of having it under compiler_options. > 'flags': flags_part_number_thunderx > }, > '0xaf': { > 'march': 'armv8.1-a', > 'march_features': ['crc', 'crypto'], > -'compiler_options': ['-mcpu=thunderx2t99'], > +'mcpu': 'thunderx2t99', > 'flags': [ > ['RTE_MACHINE', '"thunderx2"'], > ['RTE_ARM_FEATURE_ATOMICS', true], > @@ -153,7 +155,7 @@ implementer_cavium = { > '0xb2': { > 'march': 'armv8.2-a', > 'march_features': ['crc', 'crypto', 'lse'], > -'compiler_options': ['-mcpu=octeontx2'], > +'mcpu': 'octeontx2', > 'flags': [ > ['RTE_MACHINE', '"cn9k"'], > ['RTE_ARM_FEATURE_ATOMICS', true], > @@ -176,7 +178,7 @@ implementer_ampere = { > '0x0': { > 'march': 'armv8-a', > 'march_features': ['crc', 'crypto'], > -'compiler_options': ['-mtune=emag'], > +'mcpu': 'emag', We're changing mtune to mcpu, is this equivalent? 
> 'flags': [ > ['RTE_MACHINE', '"eMAG"'], > ['RTE_MAX_LCORE', 32], > @@ -186,7 +188,7 @@ implementer_ampere = { > '0xac3': { > 'march': 'armv8.6-a', > 'march_features': ['crc', 'crypto'], > -'compiler_options': ['-mcpu=ampere1'], > +'mcpu': 'ampere1', > 'flags': [ > ['RTE_MACHINE', '"AmpereOne"'], > ['RTE_MAX_LCORE', 320], > @@ -206,7 +208,7 @@ implementer_hisilicon = { > '0xd01': { > 'march': 'armv8.2-a', > 'march_features': ['crypto'], > -'compiler_options': ['-mtune=tsv110'], > +'mcpu': 'tsv110', > 'flags': [ > ['RTE_MACHINE', '"Kunpeng 920"'], > ['RTE_ARM_FEATURE_ATOMICS', true], > @@ -695,11 +697,23 @@ if update_flags > > machine_args = [] # Clear previous machine args > > +candidate_mcpu = '' > +support_mcpu = false > +if part_number_config.has_key('mcpu') > +mcpu = part_number_config['mcpu'] > +if (cc.has_argument('-mcpu=' + mcpu)) > +candidate_mcpu = mcpu > +support_mcpu = true > +endif > +endif > + > # probe supported archs and their features > candidate_march = '' > if part_number_config.has_key('march') > -if part_number_config.get('force_march', false) > -candidate_march = part_number_config['march'] > +if part_number_config.get('force_march', false) or support_mcpu Instead of using the extra "support_mcpu" variable, we could do the same check as with candidate march (if candidate_mcpu != '', which we actually do below in the last lines of the patch). If I understand the logic correctly, we don't want to do the march fallback if mcpu is specified - either the march works with the given mcpu or we do without it (because we don't actually need it with mcpu). Is that correct? > +if cc.has_argument('-march=' + part_number_config['m
Re: [v4] net/gve: enable imissed stats for GQ format
On 1/19/2024 5:09 PM, Rushil Gupta wrote: > Read from shared region to retrieve imissed statistics for GQ from device. > > Signed-off-by: Rushil Gupta > Reviewed-by: Joshua Washington > Acked-by: Ferruh Yigit Applied to dpdk-next-net/main, thanks.
Re: [PATCH 2/2] config/arm: add support for fallback march
On Sun, Jan 21, 2024 at 10:37 AM wrote: > > From: Pavan Nikhilesh > > Some ARM CPUs have specific march requirements and > are not compatible with the supported march list. > Add fallback march in case the mcpu and the march > advertised in the part_number_config are not supported > by the compiler. > It's not clear to me what this patch adds. We already have a fallback mechanism and this basically does the same thing, but there's some extra logic that's not clear to me. Looks like there are some extra conditions around mcpu. In that case, all of the mcpu/march processing should be done first and then we should do a common fallback. > Example > mcpu = neoverse-n2 > march = armv9-a > fallback_march = armv8.5-a > > mcpu, march not supported > machine_args = ['-march=armv8.5-a'] > > mcpu, march, fallback_march not supported > least march supported = armv8-a > > machine_args = ['-march=armv8-a'] > > Signed-off-by: Pavan Nikhilesh > --- > config/arm/meson.build | 15 +-- > 1 file changed, 13 insertions(+), 2 deletions(-) > > diff --git a/config/arm/meson.build b/config/arm/meson.build > index 8c8cfccca0..2aaf78a81a 100644 > --- a/config/arm/meson.build > +++ b/config/arm/meson.build > @@ -94,6 +94,7 @@ part_number_config_arm = { > '0xd49': { > 'march': 'armv9-a', > 'march_features': ['sve2'], > +'fallback_march': 'armv8.5-a', > 'mcpu': 'neoverse-n2', > 'flags': [ > ['RTE_MACHINE', '"neoverse-n2"'], > @@ -709,14 +710,14 @@ if update_flags > > # probe supported archs and their features > candidate_march = '' > +supported_marchs = ['armv9-a', 'armv8.6-a', 'armv8.5-a', 'armv8.4-a', > +'armv8.3-a', 'armv8.2-a', 'armv8.1-a', 'armv8-a'] > if part_number_config.has_key('march') > if part_number_config.get('force_march', false) or support_mcpu > if cc.has_argument('-march=' + part_number_config['march']) > candidate_march = part_number_config['march'] > endif > else > -supported_marchs = ['armv8.6-a', 'armv8.5-a', 'armv8.4-a', > 'armv8.3-a', > -'armv8.2-a', 'armv8.1-a', 'armv8-a'] > check_compiler_support = false > foreach supported_march: supported_marchs > if supported_march == part_number_config['march'] > @@ -733,6 +734,16 @@ if update_flags > endif > > if candidate_march != part_number_config['march'] > +if part_number_config.has_key('fallback_march') and not > support_mcpu > +fallback_march = part_number_config['fallback_march'] > +foreach supported_march: supported_marchs > +if (supported_march == fallback_march > +and cc.has_argument('-march=' + supported_march)) > +candidate_march = supported_march > +break > +endif > +endforeach > +endif > warning('Configuration march version is @0@, not supported.' > .format(part_number_config['march'])) > if candidate_march != '' > -- > 2.25.1 >
Re: [PATCH v2] net/ice: fix tso tunnel setting to not take effect
> > The Tx offload capabilities of ICE ethdev don't include tso tunnel, which > > will > > result in the tso tunnel setting not taking effect. > > > > The patch adds tunnel tso offload to ICE_TX_NO_VECTOR_FLAGS. > > > > This commit will add tso tunnel capabilities in ice_dev_info_get(). > > > > Bugzilla ID: 1327 > > Fixes: d852fec1be63 ("net/ice: fix Tx offload path choice") > > Fixes: 295968d17407 ("ethdev: add namespace") > > Cc: sta...@dpdk.org > > > > Signed-off-by: Kaiwen Deng > > Acked-by: Qi Zhang

Applied to dpdk-next-net-intel.

As said by David in v1, 295968d17407 ("ethdev: add namespace") is not a cause. Removing it while pulling.
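The user-visible side of the missing capability is that an application checking the advertised Tx offloads would refuse to enable tunnel TSO even though the hardware supports it. A small sketch of such a check from the application side, using VXLAN tunnel TSO as the example offload (the helper name and error policy are illustrative):

#include <errno.h>
#include <rte_ethdev.h>

static int
enable_vxlan_tnl_tso(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;

	/* Before this fix the ICE PMD did not advertise the capability,
	 * so a check like this made the setting ineffective. */
	if ((info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) == 0)
		return -ENOTSUP;

	conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
	return 0;
}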
RE: [EXT] Re: [PATCH 1/2] config/arm: avoid mcpu and march conflicts
> On Sun, Jan 21, 2024 at 10:37 AM wrote: > > > > From: Pavan Nikhilesh > > > > The compiler options march and mtune are a subset > > of mcpu and will lead to conflicts if improper march > > is chosen for a given mcpu. > > To avoid conflicts, force part number march when > > mcpu is available and is supported by the compiler. > > > > Example: > > march = armv9-a > > mcpu = neoverse-n2 > > > > mcpu supported, march supported > > machine_args = ['-mcpu=neoverse-n2', '-march=armv9-a'] > > > > mcpu supported, march not supported > > machine_args = ['-mcpu=neoverse-n2'] > > > > mcpu not supported, march supported > > machine_args = ['-march=armv9-a'] > > > > mcpu not supported, march not supported > > machine_args = ['-march=armv8.6-a'] > > > > Signed-off-by: Pavan Nikhilesh > > --- > > config/arm/meson.build | 109 +-- > -- > > 1 file changed, 67 insertions(+), 42 deletions(-) > > > > diff --git a/config/arm/meson.build b/config/arm/meson.build > > index 36f21d2259..8c8cfccca0 100644 > > --- a/config/arm/meson.build > > +++ b/config/arm/meson.build > > > @@ -127,21 +128,22 @@ implementer_cavium = { > > ], > > 'part_number_config': { > > '0xa1': { > > -'compiler_options': ['-mcpu=thunderxt88'], > > +'mcpu': 'thunderxt88', > > 'flags': flags_part_number_thunderx > > }, > > '0xa2': { > > -'compiler_options': ['-mcpu=thunderxt81'], > > +'mcpu': 'thunderxt81', > > 'flags': flags_part_number_thunderx > > }, > > '0xa3': { > > -'compiler_options': ['-march=armv8-a+crc', > > '-mcpu=thunderxt83'], > > +'mcpu': 'thunderxt83', > > +'compiler_options': ['-march=armv8-a+crc'], > > Let's unify this with the rest and specify 'march': 'armv8-a+crc' > instead of having it under compiler_options. Ack. > > > 'flags': flags_part_number_thunderx > > }, > > '0xaf': { > > 'march': 'armv8.1-a', > > 'march_features': ['crc', 'crypto'], > > -'compiler_options': ['-mcpu=thunderx2t99'], > > +'mcpu': 'thunderx2t99', > > 'flags': [ > > ['RTE_MACHINE', '"thunderx2"'], > > ['RTE_ARM_FEATURE_ATOMICS', true], > > @@ -153,7 +155,7 @@ implementer_cavium = { > > '0xb2': { > > 'march': 'armv8.2-a', > > 'march_features': ['crc', 'crypto', 'lse'], > > -'compiler_options': ['-mcpu=octeontx2'], > > +'mcpu': 'octeontx2', > > 'flags': [ > > ['RTE_MACHINE', '"cn9k"'], > > ['RTE_ARM_FEATURE_ATOMICS', true], > > @@ -176,7 +178,7 @@ implementer_ampere = { > > '0x0': { > > 'march': 'armv8-a', > > 'march_features': ['crc', 'crypto'], > > -'compiler_options': ['-mtune=emag'], > > +'mcpu': 'emag', > > We're changing mtune to mcpu, is this equivalent? > Both march and mtune are a subset of mcpu. 
> > 'flags': [ > > ['RTE_MACHINE', '"eMAG"'], > > ['RTE_MAX_LCORE', 32], > > @@ -186,7 +188,7 @@ implementer_ampere = { > > '0xac3': { > > 'march': 'armv8.6-a', > > 'march_features': ['crc', 'crypto'], > > -'compiler_options': ['-mcpu=ampere1'], > > +'mcpu': 'ampere1', > > 'flags': [ > > ['RTE_MACHINE', '"AmpereOne"'], > > ['RTE_MAX_LCORE', 320], > > @@ -206,7 +208,7 @@ implementer_hisilicon = { > > '0xd01': { > > 'march': 'armv8.2-a', > > 'march_features': ['crypto'], > > -'compiler_options': ['-mtune=tsv110'], > > +'mcpu': 'tsv110', > > 'flags': [ > > ['RTE_MACHINE', '"Kunpeng 920"'], > > ['RTE_ARM_FEATURE_ATOMICS', true], > > @@ -695,11 +697,23 @@ if update_flags > > > > machine_args = [] # Clear previous machine args > > > > +candidate_mcpu = '' > > +support_mcpu = false > > +if part_number_config.has_key('mcpu') > > +mcpu = part_number_config['mcpu'] > > +if (cc.has_argument('-mcpu=' + mcpu)) > > +candidate_mcpu = mcpu > > +support_mcpu = true > > +endif > > +endif > > + > > # probe supported archs and their features > > candidate_march = '' > > if part_number_config.has_key('march') > > -if part_number_config.get('force_march', false) > > -candidate_march = part_number_config['march'] > > +if part_number_config.get('force_march', false) or support_mcpu > > Instead of using the extra "support_mcpu" variable, we could do the > same check as with candidate march (if candidate_mcpu != '', which we > actually do below in the la
[PATCH v2 0/3] dts: API docs generation
The generation is done with Sphinx, which DPDK already uses, with slightly modified configuration of the sidebar present in an if block. Dependencies are installed using Poetry from the dts directory: poetry install --with docs After installing, enter the Poetry shell: poetry shell And then run the build: ninja -C dts-doc The patchset contains the .rst sources which Sphinx uses to generate the html pages. These were first generated with the sphinx-apidoc utility and modified to provide a better look. The documentation just doesn't look that good without the modifications and there isn't enough configuration options to achieve that without manual changes to the .rst files. This introduces extra maintenance which involves adding new .rst files when a new Python module is added or changing the .rst structure if the Python directory/file structure is changed (moved, renamed files). This small maintenance burden is outweighed by the flexibility afforded by the ability to make manual changes to the .rst files. v2: Removed the use of sphinx-apidoc from meson in favor of adding the files generated by it directly to the repository (and modifying them). Juraj Linkeš (3): dts: add doc generation dependencies dts: add API doc sources dts: add API doc generation buildtools/call-sphinx-build.py | 33 +- doc/api/doxy-api-index.md | 3 + doc/api/doxy-api.conf.in | 2 + doc/api/meson.build | 11 +- doc/guides/conf.py| 39 +- doc/guides/meson.build| 1 + doc/guides/tools/dts.rst | 34 +- dts/doc/conf_yaml_schema.json | 1 + dts/doc/framework.config.rst | 12 + dts/doc/framework.config.types.rst| 6 + dts/doc/framework.dts.rst | 6 + dts/doc/framework.exception.rst | 6 + dts/doc/framework.logger.rst | 6 + ...ote_session.interactive_remote_session.rst | 6 + ...ework.remote_session.interactive_shell.rst | 6 + .../framework.remote_session.python_shell.rst | 6 + ...ramework.remote_session.remote_session.rst | 6 + dts/doc/framework.remote_session.rst | 17 + .../framework.remote_session.ssh_session.rst | 6 + ...framework.remote_session.testpmd_shell.rst | 6 + dts/doc/framework.rst | 30 ++ dts/doc/framework.settings.rst| 6 + dts/doc/framework.test_result.rst | 6 + dts/doc/framework.test_suite.rst | 6 + dts/doc/framework.testbed_model.cpu.rst | 6 + .../framework.testbed_model.linux_session.rst | 6 + dts/doc/framework.testbed_model.node.rst | 6 + .../framework.testbed_model.os_session.rst| 6 + dts/doc/framework.testbed_model.port.rst | 6 + .../framework.testbed_model.posix_session.rst | 6 + dts/doc/framework.testbed_model.rst | 26 + dts/doc/framework.testbed_model.sut_node.rst | 6 + dts/doc/framework.testbed_model.tg_node.rst | 6 + ..._generator.capturing_traffic_generator.rst | 6 + ...mework.testbed_model.traffic_generator.rst | 14 + testbed_model.traffic_generator.scapy.rst | 6 + ...el.traffic_generator.traffic_generator.rst | 6 + ...framework.testbed_model.virtual_device.rst | 6 + dts/doc/framework.utils.rst | 6 + dts/doc/index.rst | 41 ++ dts/doc/meson.build | 27 + dts/meson.build | 16 + dts/poetry.lock | 499 +- dts/pyproject.toml| 7 + meson.build | 1 + 45 files changed, 950 insertions(+), 20 deletions(-) create mode 12 dts/doc/conf_yaml_schema.json create mode 100644 dts/doc/framework.config.rst create mode 100644 dts/doc/framework.config.types.rst create mode 100644 dts/doc/framework.dts.rst create mode 100644 dts/doc/framework.exception.rst create mode 100644 dts/doc/framework.logger.rst create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst create mode 100644 
dts/doc/framework.remote_session.interactive_shell.rst create mode 100644 dts/doc/framework.remote_session.python_shell.rst create mode 100644 dts/doc/framework.remote_session.remote_session.rst create mode 100644 dts/doc/framework.remote_session.rst create mode 100644 dts/doc/framework.remote_session.ssh_session.rst create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst create mode 100644 dts/doc/framework.rst create mode 100644 dts/doc/framework.settings.rst create mode 100644 dts/doc/framework.test_result.rst create mode 100644 dts/doc/framework.test_suite.rst create mode 100644 dts/doc/framework.testbed_model.cpu.rst create mode 100644 dts/doc/framework.testbed_model.linux_session.rst create mode 100644 dts/doc/fra
[PATCH v2 1/3] dts: add doc generation dependencies
Sphinx imports every Python module when generating documentation from docstrings, meaning all dts dependencies, including Python version, must be satisfied. By adding Sphinx to dts dependencies we make sure that the proper Python version and dependencies are used when Sphinx is executed. Signed-off-by: Juraj Linkeš --- dts/poetry.lock| 499 - dts/pyproject.toml | 7 + 2 files changed, 505 insertions(+), 1 deletion(-) diff --git a/dts/poetry.lock b/dts/poetry.lock index a734fa71f0..8b27b0d751 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -1,5 +1,16 @@ # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand. +[[package]] +name = "alabaster" +version = "0.7.13" +description = "A configurable sidebar-enabled Sphinx theme" +optional = false +python-versions = ">=3.6" +files = [ +{file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"}, +{file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"}, +] + [[package]] name = "attrs" version = "23.1.0" @@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib- tests = ["attrs[tests-no-zope]", "zope-interface"] tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +[[package]] +name = "babel" +version = "2.13.1" +description = "Internationalization utilities" +optional = false +python-versions = ">=3.7" +files = [ +{file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"}, +{file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"}, +] + +[package.dependencies] +setuptools = {version = "*", markers = "python_version >= \"3.12\""} + +[package.extras] +dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"] + [[package]] name = "bcrypt" version = "4.0.1" @@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"] jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"] uvloop = ["uvloop (>=0.15.2)"] +[[package]] +name = "certifi" +version = "2023.7.22" +description = "Python package for providing Mozilla's CA Bundle." +optional = false +python-versions = ">=3.6" +files = [ +{file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, +{file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, +] + [[package]] name = "cffi" version = "1.15.1" @@ -162,6 +201,105 @@ files = [ [package.dependencies] pycparser = "*" +[[package]] +name = "charset-normalizer" +version = "3.3.2" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
+optional = false +python-versions = ">=3.7.0" +files = [ +{file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"}, +{file = "ch
[PATCH v2 2/3] dts: add API doc sources
These sources could be generated with the sphinx-apidoc utility, but that doesn't give us enough flexibility. The sources included in this patch were in fact generated by said utility, but modified to improve the look of the documentation. The improvements are mainly in toctree definitions and the titles of the modules/packages. These were made with specific config options in mind. Signed-off-by: Juraj Linkeš --- dts/doc/conf_yaml_schema.json | 1 + dts/doc/framework.config.rst | 12 ++ dts/doc/framework.config.types.rst| 6 +++ dts/doc/framework.dts.rst | 6 +++ dts/doc/framework.exception.rst | 6 +++ dts/doc/framework.logger.rst | 6 +++ ...ote_session.interactive_remote_session.rst | 6 +++ ...ework.remote_session.interactive_shell.rst | 6 +++ .../framework.remote_session.python_shell.rst | 6 +++ ...ramework.remote_session.remote_session.rst | 6 +++ dts/doc/framework.remote_session.rst | 17 .../framework.remote_session.ssh_session.rst | 6 +++ ...framework.remote_session.testpmd_shell.rst | 6 +++ dts/doc/framework.rst | 30 ++ dts/doc/framework.settings.rst| 6 +++ dts/doc/framework.test_result.rst | 6 +++ dts/doc/framework.test_suite.rst | 6 +++ dts/doc/framework.testbed_model.cpu.rst | 6 +++ .../framework.testbed_model.linux_session.rst | 6 +++ dts/doc/framework.testbed_model.node.rst | 6 +++ .../framework.testbed_model.os_session.rst| 6 +++ dts/doc/framework.testbed_model.port.rst | 6 +++ .../framework.testbed_model.posix_session.rst | 6 +++ dts/doc/framework.testbed_model.rst | 26 dts/doc/framework.testbed_model.sut_node.rst | 6 +++ dts/doc/framework.testbed_model.tg_node.rst | 6 +++ ..._generator.capturing_traffic_generator.rst | 6 +++ ...mework.testbed_model.traffic_generator.rst | 14 +++ testbed_model.traffic_generator.scapy.rst | 6 +++ ...el.traffic_generator.traffic_generator.rst | 6 +++ ...framework.testbed_model.virtual_device.rst | 6 +++ dts/doc/framework.utils.rst | 6 +++ dts/doc/index.rst | 41 +++ 33 files changed, 297 insertions(+) create mode 12 dts/doc/conf_yaml_schema.json create mode 100644 dts/doc/framework.config.rst create mode 100644 dts/doc/framework.config.types.rst create mode 100644 dts/doc/framework.dts.rst create mode 100644 dts/doc/framework.exception.rst create mode 100644 dts/doc/framework.logger.rst create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst create mode 100644 dts/doc/framework.remote_session.python_shell.rst create mode 100644 dts/doc/framework.remote_session.remote_session.rst create mode 100644 dts/doc/framework.remote_session.rst create mode 100644 dts/doc/framework.remote_session.ssh_session.rst create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst create mode 100644 dts/doc/framework.rst create mode 100644 dts/doc/framework.settings.rst create mode 100644 dts/doc/framework.test_result.rst create mode 100644 dts/doc/framework.test_suite.rst create mode 100644 dts/doc/framework.testbed_model.cpu.rst create mode 100644 dts/doc/framework.testbed_model.linux_session.rst create mode 100644 dts/doc/framework.testbed_model.node.rst create mode 100644 dts/doc/framework.testbed_model.os_session.rst create mode 100644 dts/doc/framework.testbed_model.port.rst create mode 100644 dts/doc/framework.testbed_model.posix_session.rst create mode 100644 dts/doc/framework.testbed_model.rst create mode 100644 dts/doc/framework.testbed_model.sut_node.rst create mode 100644 dts/doc/framework.testbed_model.tg_node.rst create mode 100644 
dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst create mode 100644 dts/doc/framework.utils.rst create mode 100644 dts/doc/index.rst diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json new file mode 12 index 00..d89eb81b72 --- /dev/null +++ b/dts/doc/conf_yaml_schema.json @@ -0,0 +1 @@ +../framework/config/conf_yaml_schema.json \ No newline at end of file diff --git a/dts/doc/framework.config.rst b/dts/doc/framework.config.rst new file mode 100644 index 00..f765ef0e32 --- /dev/null +++ b/dts/doc/framework.config.rst @@ -0,0 +1,12 @@ +config - Configuration Package +== + +.. automodule:: framework.config + :members: +
[PATCH v2 3/3] dts: add API doc generation
The tool used to generate developer docs is Sphinx, which is already in use in DPDK. The same configuration is used to preserve style, but it's been augmented with doc-generating configuration. There's a change that modifies how the sidebar displays the content hierarchy that's been put into an if block to not interfere with regular docs. Sphinx generates the documentation from Python docstrings. The docstring format is the Google format [0] which requires the sphinx.ext.napoleon extension. The other extension, sphinx.ext.intersphinx, enables linking to object in external documentations, such as the Python documentation. There are two requirements for building DTS docs: * The same Python version as DTS or higher, because Sphinx imports the code. * Also the same Python packages as DTS, for the same reason. [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings Signed-off-by: Juraj Linkeš --- buildtools/call-sphinx-build.py | 33 +++- doc/api/doxy-api-index.md | 3 +++ doc/api/doxy-api.conf.in| 2 ++ doc/api/meson.build | 11 +++--- doc/guides/conf.py | 39 - doc/guides/meson.build | 1 + doc/guides/tools/dts.rst| 34 +++- dts/doc/meson.build | 27 +++ dts/meson.build | 16 ++ meson.build | 1 + 10 files changed, 148 insertions(+), 19 deletions(-) create mode 100644 dts/doc/meson.build create mode 100644 dts/meson.build diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py index 39a60d09fa..aea771a64e 100755 --- a/buildtools/call-sphinx-build.py +++ b/buildtools/call-sphinx-build.py @@ -3,37 +3,50 @@ # Copyright(c) 2019 Intel Corporation # +import argparse import sys import os from os.path import join from subprocess import run, PIPE, STDOUT from packaging.version import Version -# assign parameters to variables -(sphinx, version, src, dst, *extra_args) = sys.argv[1:] +parser = argparse.ArgumentParser() +parser.add_argument('sphinx') +parser.add_argument('version') +parser.add_argument('src') +parser.add_argument('dst') +parser.add_argument('--dts-root', default=None) +args, extra_args = parser.parse_known_args() # set the version in environment for sphinx to pick up -os.environ['DPDK_VERSION'] = version +os.environ['DPDK_VERSION'] = args.version +if args.dts_root: +os.environ['DTS_ROOT'] = args.dts_root # for sphinx version >= 1.7 add parallelism using "-j auto" -ver = run([sphinx, '--version'], stdout=PIPE, +ver = run([args.sphinx, '--version'], stdout=PIPE, stderr=STDOUT).stdout.decode().split()[-1] -sphinx_cmd = [sphinx] + extra_args +sphinx_cmd = [args.sphinx] + extra_args if Version(ver) >= Version('1.7'): sphinx_cmd += ['-j', 'auto'] # find all the files sphinx will process so we can write them as dependencies srcfiles = [] -for root, dirs, files in os.walk(src): +for root, dirs, files in os.walk(args.src): srcfiles.extend([join(root, f) for f in files]) +if not os.path.exists(args.dst): +os.makedirs(args.dst) + # run sphinx, putting the html output in a "html" directory -with open(join(dst, 'sphinx_html.out'), 'w') as out: -process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')], - stdout=out) +with open(join(args.dst, 'sphinx_html.out'), 'w') as out: +process = run( +sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')], +stdout=out +) # create a gcc format .d file giving all the dependencies of this doc build -with open(join(dst, '.html.d'), 'w') as d: +with open(join(args.dst, '.html.d'), 'w') as d: d.write('html: ' + ' '.join(srcfiles) + '\n') sys.exit(process.returncode) diff --git a/doc/api/doxy-api-index.md 
b/doc/api/doxy-api-index.md index a6a768bd7c..b49b24acce 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -241,3 +241,6 @@ The public API headers are grouped by topics: [experimental APIs](@ref rte_compat.h), [ABI versioning](@ref rte_function_versioning.h), [version](@ref rte_version.h) + +- **tests**: + [**DTS**](@dts_api_main_page) diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index e94c9e4e46..d53edeba57 100644 --- a/doc/api/doxy-api.conf.in +++ b/doc/api/doxy-api.conf.in @@ -121,6 +121,8 @@ SEARCHENGINE= YES SORT_MEMBER_DOCS= NO SOURCE_BROWSER = YES +ALIASES = "dts_api_main_page=@DTS_API_MAIN_PAGE@" + EXAMPLE_PATH= @TOPDIR@/examples EXAMPLE_PATTERNS= *.c EXAMPLE_RECURSIVE = YES diff --git a/doc/api/meson.build b/doc/api/meson.build index 5b50692df9..ffc75d7b5a 100644 --- a/doc/api/meson.build +++ b/doc/api/meson.build @@ -1,6 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Luca Boccassi +doc_api_build_dir = meson.current_build
RE: [EXT] Re: [PATCH 2/2] config/arm: add support for fallback march
> On Sun, Jan 21, 2024 at 10:37 AM wrote: > > > > From: Pavan Nikhilesh > > > > Some ARM CPUs have specific march requirements and > > are not compatible with the supported march list. > > Add fallback march in case the mcpu and the march > > advertised in the part_number_config are not supported > > by the compiler. > > > > It's not clear to me what this patch adds. We already have a fallback > mechanism and this basically does the same thing, but there's some > extra logic that's not clear to me. Looks like there are some extra > conditions around mcpu. In that case, all of the mcpu/march processing > should be done first and then we should do a common fallback. > The current fallback does a simple reverse compatibility check with the compiler when force march is not enabled. But this is not true for neoverse-n2 case, as it is based on armv9-a which is a super set of armv8.5-a and other features[1] In the current fallback path if both march neoverse-n2 and mcpu armv9-a are not supported then it would fallback to armv8.6-a but this not correct as neoverse-n2 is not based on armv8.5-a The fallback march armv8.5-a kicks in (if supported) when neoverse-n2 and armv9-a are not supported. [1] https://github.com/gcc-mirror/gcc/blob/615e25c82de97acc17ab438f88d6788cf7ffe1d6/gcc/config/arm/arm-cpus.in#L306 > > Example > > mcpu = neoverse-n2 > > march = armv9-a > > fallback_march = armv8.5-a > > > > mcpu, march not supported > > machine_args = ['-march=armv8.5-a'] > > > > mcpu, march, fallback_march not supported > > least march supported = armv8-a > > > > machine_args = ['-march=armv8-a'] > > > > Signed-off-by: Pavan Nikhilesh > > --- > > config/arm/meson.build | 15 +-- > > 1 file changed, 13 insertions(+), 2 deletions(-) > > > > diff --git a/config/arm/meson.build b/config/arm/meson.build > > index 8c8cfccca0..2aaf78a81a 100644 > > --- a/config/arm/meson.build > > +++ b/config/arm/meson.build > > @@ -94,6 +94,7 @@ part_number_config_arm = { > > '0xd49': { > > 'march': 'armv9-a', > > 'march_features': ['sve2'], > > +'fallback_march': 'armv8.5-a', > > 'mcpu': 'neoverse-n2', > > 'flags': [ > > ['RTE_MACHINE', '"neoverse-n2"'], > > @@ -709,14 +710,14 @@ if update_flags > > > > # probe supported archs and their features > > candidate_march = '' > > +supported_marchs = ['armv9-a', 'armv8.6-a', 'armv8.5-a', 'armv8.4-a', > > +'armv8.3-a', 'armv8.2-a', 'armv8.1-a', 'armv8-a'] > > if part_number_config.has_key('march') > > if part_number_config.get('force_march', false) or support_mcpu > > if cc.has_argument('-march=' + part_number_config['march']) > > candidate_march = part_number_config['march'] > > endif > > else > > -supported_marchs = ['armv8.6-a', 'armv8.5-a', 'armv8.4-a', > > 'armv8.3- > a', > > -'armv8.2-a', 'armv8.1-a', 'armv8-a'] > > check_compiler_support = false > > foreach supported_march: supported_marchs > > if supported_march == part_number_config['march'] > > @@ -733,6 +734,16 @@ if update_flags > > endif > > > > if candidate_march != part_number_config['march'] > > +if part_number_config.has_key('fallback_march') and not > support_mcpu > > +fallback_march = part_number_config['fallback_march'] > > +foreach supported_march: supported_marchs > > +if (supported_march == fallback_march > > +and cc.has_argument('-march=' + supported_march)) > > +candidate_march = supported_march > > +break > > +endif > > +endforeach > > +endif > > warning('Configuration march version is @0@, not supported.' > > .format(part_number_config['march'])) > > if candidate_march != '' > > -- > > 2.25.1 > >
Re: [PATCH] net/mana: rename mana_find_pmd_mr() to mana_alloc_pmd_mr()
On 1/20/2024 12:55 AM, lon...@linuxonhyperv.com wrote: > From: Long Li > > The function name mana_find_pmd_mr() is misleading as there might be > allocations to get a MR. Change function name to mana_alloc_pmd_mr(). > > Signed-off-by: Long Li > Applied to dpdk-next-net/main, thanks.
RE: [EXT] Re: [PATCH 1/2] net/virtio-user: improve kick performance with notification area mapping
> Hi Srujana, > > Thanks for your contribution! > Is it possible to provide information on which hardware it can be tested on? It can be tested on Marvell's Octeon DPU. > > On 12/8/23 06:31, Srujana Challa wrote: > > This patch introduces new virtio-user callback to map the vq > > notification area and implements it for the vhost-vDPA backend. > > This is simply done by using mmap()/munmap() for the vhost-vDPA fd. > > > > This patch also adds a parameter for configuring feature bit > > VIRTIO_NET_F_NOTIFICATION_DATA. If feature is disabled, also > > VIRTIO_F_NOTIFICATION_DATA* > > > update corresponding unsupported feature bit. And also adds code to > > write to queue notify address in notify callback. > > This will help in increasing the kick performance. > > Do you have any number about the performance improvement? We don't have exact comparison with the performance data, we are mainly supporting notification data with notify area which enables direct data path notifications and avoids intermediate checks in-between. > > > Signed-off-by: Srujana Challa > > --- > > doc/guides/nics/virtio.rst| 5 ++ > > drivers/net/virtio/virtio_user/vhost.h| 1 + > > drivers/net/virtio/virtio_user/vhost_vdpa.c | 56 ++ > > .../net/virtio/virtio_user/virtio_user_dev.c | 52 +++-- > > .../net/virtio/virtio_user/virtio_user_dev.h | 5 +- > > drivers/net/virtio/virtio_user_ethdev.c | 57 --- > > 6 files changed, 162 insertions(+), 14 deletions(-) > > > > diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst > > index c22ce56a02..11dd6359e5 100644 > > --- a/doc/guides/nics/virtio.rst > > +++ b/doc/guides/nics/virtio.rst > > @@ -349,6 +349,11 @@ Below devargs are supported by the virtio-user > vdev: > > election. > > (Default: 0 (disabled)) > > > > +#. ``notification_data``: > > + > > +It is used to enable virtio device notification data feature. > > +(Default: 1 (enabled)) > > Is there any reason currently to make it configurable? > > As it is enabled by default, I guess we want it to be negociated if the device > supports it. > > Let's remove the devarg for now, and we can revisit if the need arise? Ack. 
> > > + > > Virtio paths Selection and Usage > > > > > > diff --git a/drivers/net/virtio/virtio_user/vhost.h > > b/drivers/net/virtio/virtio_user/vhost.h > > index f817cab77a..1bce00c7ac 100644 > > --- a/drivers/net/virtio/virtio_user/vhost.h > > +++ b/drivers/net/virtio/virtio_user/vhost.h > > @@ -90,6 +90,7 @@ struct virtio_user_backend_ops { > > int (*server_disconnect)(struct virtio_user_dev *dev); > > int (*server_reconnect)(struct virtio_user_dev *dev); > > int (*get_intr_fd)(struct virtio_user_dev *dev); > > + int (*map_notification_area)(struct virtio_user_dev *dev, bool > map); > > }; > > > > extern struct virtio_user_backend_ops virtio_ops_user; diff --git > > a/drivers/net/virtio/virtio_user/vhost_vdpa.c > > b/drivers/net/virtio/virtio_user/vhost_vdpa.c > > index 2c36b26224..1eb0f9ec48 100644 > > --- a/drivers/net/virtio/virtio_user/vhost_vdpa.c > > +++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c > > @@ -5,6 +5,7 @@ > > #include > > #include > > #include > > +#include > > #include > > #include > > #include > > @@ -622,6 +623,60 @@ vhost_vdpa_get_intr_fd(struct virtio_user_dev > *dev __rte_unused) > > return -1; > > } > > > > +static int > > +unmap_notification_area(struct virtio_user_dev *dev, int nr_vrings) { > > + int i; > > + > > + for (i = 0; i < nr_vrings; i++) { > > + if (dev->notify_area[i]) > > + munmap(dev->notify_area[i], getpagesize()); > > + } > > + free(dev->notify_area); > > + > > + return 0; > > +} > > + > > +static int > > +vhost_vdpa_map_notification_area(struct virtio_user_dev *dev, bool > > +map) { > > + struct vhost_vdpa_data *data = dev->backend_data; > > + int nr_vrings, i, page_size = getpagesize(); > > + uint16_t **notify_area; > > + > > + nr_vrings = dev->max_queue_pairs * 2; > > + if (dev->device_features & (1ull << VIRTIO_NET_F_CTRL_VQ)) > > + nr_vrings++; > > + > > + if (!map) > > + return unmap_notification_area(dev, nr_vrings); > > + > > + notify_area = malloc(nr_vrings * sizeof(*notify_area)); > > + if (!notify_area) { > > + PMD_DRV_LOG(ERR, "(%s) Failed to allocate notify area > array", dev->path); > > + return -1; > > + } > > Add new line here. > > > + for (i = 0; i < nr_vrings; i++) { > > + notify_area[i] = mmap(NULL, page_size, PROT_WRITE, > MAP_SHARED | MAP_FILE, > > + data->vhostfd, i * page_size); > > + if (notify_area[i] == MAP_FAILED) { > > + PMD_DRV_LOG(ERR, "(%s) Map failed for notify > address of queue %d\n", > > + dev->path, i); > > +
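To make the notify path concrete: once each queue's notification page has been mmap()ed from the vhost-vDPA fd, kicking the queue is a single store to that page. A hedged sketch of the doorbell write follows; the struct and names are hypothetical, not the driver's actual types, and the 32-bit layout follows the VIRTIO notification-data definition.

#include <stdbool.h>
#include <stdint.h>
#include <rte_io.h>

struct vq_sketch {
	uint16_t queue_index;
	uint16_t avail_idx;	/* next available ring index (split ring) */
	void *notify_addr;	/* per-queue page mapped from the vhost-vDPA fd */
};

static void
notify_queue(struct vq_sketch *vq, bool notification_data)
{
	if (notification_data) {
		/* VIRTIO_F_NOTIFICATION_DATA negotiated: low 16 bits carry the
		 * queue index, the upper bits the next avail index (a packed
		 * ring would also put the wrap counter in the top bit). */
		uint32_t data = ((uint32_t)vq->avail_idx << 16) | vq->queue_index;
		rte_write32(data, vq->notify_addr);
	} else {
		/* Without the feature, only the queue index is written. */
		rte_write16(vq->queue_index, vq->notify_addr);
	}
}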
RE: [EXT] Re: [PATCH 2/2] net/virtio-user: add VIRTIO_NET_F_RSS to supported features
> Hi, > > On 12/8/23 06:31, Srujana Challa wrote: > > This patch introduces new function to get rss device config and adds > > code to forward the RSS control command to backend through hw control > > queue if RSS feature is negotiated. > > This patch will help to negotiate VIRTIO_NET_F_RSS feature if > > vhost-vdpa backend supports RSS in HW. > > > > Signed-off-by: Srujana Challa > > --- > > .../net/virtio/virtio_user/virtio_user_dev.c | 31 ++- > > .../net/virtio/virtio_user/virtio_user_dev.h | 2 ++ > > drivers/net/virtio/virtio_user_ethdev.c | 3 ++ > > 3 files changed, 35 insertions(+), 1 deletion(-) > > > > Nice! Same question as on patch 1, I would be interested in knowing which > hardware supports this feature. Marvell's Octeon DPU supports this feature. > > Reviewed-by: Maxime Coquelin > > Thanks, > Maxime
RE: [dpdk-dev] [v2] ethdev: support Tx queue used count
> From: Jerin Jacob > > Introduce a new API to retrieve the number of used descriptors > in a Tx queue. Applications can leverage this API in the fast path to > inspect the Tx queue occupancy and take appropriate actions based on the > available free descriptors. > > A notable use case could be implementing Random Early Discard (RED) > in software based on Tx queue occupancy. > > Signed-off-by: Jerin Jacob > Reviewed-by: Andrew Rybchenko > Acked-by: Morten Brørup > --- > devtools/libabigail.abignore | 3 + > doc/guides/nics/features.rst | 10 > doc/guides/nics/features/default.ini | 1 + > doc/guides/rel_notes/release_24_03.rst | 5 ++ > lib/ethdev/ethdev_driver.h | 2 + > lib/ethdev/ethdev_private.c| 1 + > lib/ethdev/ethdev_trace_points.c | 3 + > lib/ethdev/rte_ethdev.h| 80 ++ > lib/ethdev/rte_ethdev_core.h | 7 ++- > lib/ethdev/rte_ethdev_trace_fp.h | 8 +++ > lib/ethdev/version.map | 1 + > 11 files changed, 120 insertions(+), 1 deletion(-) > > v2: > - Rename _nic_features_tx_queue_used_count to _nic_features_tx_queue_count > - Fix trace emission of case fops->tx_queue_count == NULL > - Rename tx_queue_id to queue_id in implementation symbols and prints > - Added "goto out" for better error handling > - Add release note > - Added libabigail suppression rule for the reserved2 field update > - Fix all ordering and grouping, empty line comment from Ferruh > - Added following notes in doxygen documentation for better clarity on API > usage > * @note There is no requirement to call this function before > rte_eth_tx_burst() invocation. > * @note Utilize this function exclusively when the caller needs to determine > the used queue count > * across all descriptors of a Tx queue. If the use case only involves > checking the status of a > * specific descriptor slot, opt for rte_eth_tx_descriptor_status() instead. > > rfc..v1: > - Updated API similar to rte_eth_rx_queue_count() where it returns > "used" count instead of "free" count > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore > index 21b8cd6113..d6e98c6f52 100644 > --- a/devtools/libabigail.abignore > +++ b/devtools/libabigail.abignore > @@ -33,3 +33,6 @@ > > ; Temporary exceptions till next major ABI version ; > > +[suppress_type] > + name = rte_eth_fp_ops > + has_data_member_inserted_between = {offset_of(reserved2), end} > diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst > index f7d9980849..f38941c719 100644 > --- a/doc/guides/nics/features.rst > +++ b/doc/guides/nics/features.rst > @@ -697,6 +697,16 @@ or "Unavailable." > * **[related]API**: ``rte_eth_tx_descriptor_status()``. > > > +.. _nic_features_tx_queue_count: > + > +Tx queue count > +-- > + > +Supports to get the number of used descriptors of a Tx queue. > + > +* **[implements] eth_dev_ops**: ``tx_queue_count``. > +* **[related] API**: ``rte_eth_tx_queue_count()``. > + > .. 
_nic_features_basic_stats: > > Basic stats > diff --git a/doc/guides/nics/features/default.ini > b/doc/guides/nics/features/default.ini > index 6d50236292..5115963136 100644 > --- a/doc/guides/nics/features/default.ini > +++ b/doc/guides/nics/features/default.ini > @@ -59,6 +59,7 @@ Packet type parsing = > Timesync = > Rx descriptor status = > Tx descriptor status = > +Tx queue count = > Basic stats = > Extended stats = > Stats per queue = > diff --git a/doc/guides/rel_notes/release_24_03.rst > b/doc/guides/rel_notes/release_24_03.rst > index c4fc8ad583..16dd367178 100644 > --- a/doc/guides/rel_notes/release_24_03.rst > +++ b/doc/guides/rel_notes/release_24_03.rst > @@ -65,6 +65,11 @@ New Features >* Added ``RTE_FLOW_ITEM_TYPE_RANDOM`` to match random value. >* Added ``RTE_FLOW_FIELD_RANDOM`` to represent it in field ID struct. > > +* ** Support for getting the number of used descriptors of a Tx queue. ** > + > + * Added a fath path function ``rte_eth_tx_queue_count`` to get the number > of used > +descriptors of a Tx queue. > + > > Removed Items > - > diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h > index b482cd12bb..f05f68a67c 100644 > --- a/lib/ethdev/ethdev_driver.h > +++ b/lib/ethdev/ethdev_driver.h > @@ -58,6 +58,8 @@ struct rte_eth_dev { > eth_rx_queue_count_t rx_queue_count; > /** Check the status of a Rx descriptor */ > eth_rx_descriptor_status_t rx_descriptor_status; > + /** Get the number of used Tx descriptors */ > + eth_tx_queue_count_t tx_queue_count; > /** Check the status of a Tx descriptor */ > eth_tx_descriptor_status_t tx_descriptor_status; > /** Pointer to PMD transmit mbufs reuse function */ > diff --git a/lib/ethdev/ethde
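For illustration, a minimal sketch of the RED-style use case mentioned in the commit message above is shown here. It assumes the new rte_eth_tx_queue_count() follows the same calling convention as rte_eth_rx_queue_count() (used descriptor count on success, negative value on error); the 90% threshold, the drop probability and the helper name are made-up values for the sketch, not part of the patch.

/* Sketch only: drop early when the Tx queue is nearly full, otherwise transmit.
 * nb_txd is the configured Tx ring size; threshold and drop rate are arbitrary. */
#include <stdlib.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
tx_burst_with_red(uint16_t port_id, uint16_t queue_id, uint16_t nb_txd,
		  struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	int used = rte_eth_tx_queue_count(port_id, queue_id);

	/* A negative value means the count is not available for this device. */
	if (used >= 0 && used > (nb_txd * 9) / 10 && (rand() & 3) == 0) {
		rte_pktmbuf_free_bulk(pkts, nb_pkts);
		return 0; /* packets dropped before reaching the ring */
	}
	return rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);
}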
RE: [PATCH] examples/ipsec-secgw: fix cryptodev to SA mapping
Hi Radu, > -Original Message- > From: Nicolau, Radu > Sent: Monday, December 11, 2023 9:54 AM > To: Nicolau, Radu ; Akhil Goyal > > Cc: dev@dpdk.org; Power, Ciara ; Ku, Ting-Kai kai...@intel.com>; sta...@dpdk.org; vfia...@marvell.com > Subject: [PATCH] examples/ipsec-secgw: fix cryptodev to SA mapping > > There are use cases where a SA should be able to use different cryptodevs on > different lcores, for example there can be cryptodevs with just 1 qp per VF. > For this purpose this patch relaxes the check in create lookaside session > function. > Also add a check to verify that a CQP is available for the current lcore. > > Fixes: a8ade12123c3 ("examples/ipsec-secgw: create lookaside sessions at > init") > Cc: sta...@dpdk.org > Cc: vfia...@marvell.com > > Signed-off-by: Radu Nicolau > --- > examples/ipsec-secgw/ipsec.c | 13 - > 1 file changed, 8 insertions(+), 5 deletions(-) Acked-by: Ciara Power
Re: [PATCH] raw/cnxk_bphy: extend link state capabilities
On Tue, Jan 16, 2024 at 8:59 PM Tomasz Duszynski wrote: > > A recent version of the firmware extended the capabilities of setting link state > by adding two extra parameters, i.e. a timeout and a flag disabling auto > enable of rx/tx during linkup. > > This change adds support for both. > > Signed-off-by: Tomasz Duszynski Updated the git commit as follows and applied to dpdk-next-net-mrvl/for-main. Thanks raw/cnxk_bphy: extend link state capabilities Update the driver to leverage the firmware capabilities of setting link state by adding two extra parameters, i.e. a timeout and a flag disabling auto enable of rx/tx during linkup. Signed-off-by: Tomasz Duszynski
Re: [PATCH] examples/ipsec-secgw: fix cryptodev to SA mapping
Acked-by: Kai Ji From: Radu Nicolau Sent: 11 December 2023 09:53 To: Nicolau, Radu ; Akhil Goyal Cc: dev@dpdk.org ; Power, Ciara ; Ku, Ting-Kai ; sta...@dpdk.org ; vfia...@marvell.com Subject: [PATCH] examples/ipsec-secgw: fix cryptodev to SA mapping There are use cases where a SA should be able to use different cryptodevs on different lcores, for example there can be cryptodevs with just 1 qp per VF. For this purpose this patch relaxes the check in create lookaside session function. Also add a check to verify that a CQP is available for the current lcore. Fixes: a8ade12123c3 ("examples/ipsec-secgw: create lookaside sessions at init") Cc: sta...@dpdk.org Cc: vfia...@marvell.com Signed-off-by: Radu Nicolau --- -- 2.25.1
Re: [PATCH] net/mana: prevent values overflow returned from RDMA layer
On 1/18/2024 6:05 PM, lon...@linuxonhyperv.com wrote: > From: Long Li > > The device capabilities reported from the RDMA layer are in int. Those values can > overflow the data types defined in dev_info_get(). > > Fix this by applying an upper bound before returning those values. > > Fixes: 517ed6e2d590 ("net/mana: add basic driver with build environment") > Cc: sta...@dpdk.org > > Signed-off-by: Long Li > Applied to dpdk-next-net/main, thanks. '%lu' format specifiers replaced with 'PRIu64' for 'priv->max_mr_size' while merging.
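As a side note, the kind of bounding described in this commit message could look like the minimal sketch below; the function and parameter names are illustrative assumptions, not the actual mana driver code.

/* Sketch: cap an int capability from the RDMA layer before storing it in the
 * narrower 16-bit fields of struct rte_eth_dev_info. */
#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static void
report_queue_caps(struct rte_eth_dev_info *dev_info, int rdma_max_queues)
{
	/* Guard against negative or oversized values overflowing uint16_t. */
	if (rdma_max_queues < 0)
		rdma_max_queues = 0;
	dev_info->max_rx_queues = RTE_MIN(rdma_max_queues, UINT16_MAX);
	dev_info->max_tx_queues = dev_info->max_rx_queues;
}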
[PATCH v2] mempool: test performance with larger bursts
Bursts of up to 64 or 128 packets are not uncommon, so increase the maximum tested get and put burst sizes from 32 to 128. Some applications keep more than 512 objects, so increase the maximum number of kept objects from 512 to 8192, still in jumps of factor four. This exceeds the typical mempool cache size of 512 objects, so the test also exercises the mempool driver. Signed-off-by: Morten Brørup --- v2: Addressed feedback by Chengwen Feng * Added get and put burst sizes of 64 packets, which is probably also not uncommon. * Fixed list of number of kept objects so list remains in jumps of factor four. * Added three derivative test cases, for faster testing. --- app/test/test_mempool_perf.c | 107 --- 1 file changed, 62 insertions(+), 45 deletions(-) diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c index 96de347f04..a5a7d43608 100644 --- a/app/test/test_mempool_perf.c +++ b/app/test/test_mempool_perf.c @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2010-2014 Intel Corporation - * Copyright(c) 2022 SmartShare Systems + * Copyright(c) 2022-2024 SmartShare Systems */ #include @@ -54,22 +54,24 @@ * *- Bulk size (*n_get_bulk*, *n_put_bulk*) * - * - Bulk get from 1 to 32 - * - Bulk put from 1 to 32 - * - Bulk get and put from 1 to 32, compile time constant + * - Bulk get from 1 to 128 + * - Bulk put from 1 to 128 + * - Bulk get and put from 1 to 128, compile time constant * *- Number of kept objects (*n_keep*) * * - 32 * - 128 * - 512 + * - 2048 + * - 8192 */ #define N 65536 #define TIME_S 5 #define MEMPOOL_ELT_SIZE 2048 -#define MAX_KEEP 512 -#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1) +#define MAX_KEEP 8192 +#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE*2))-1) /* Number of pointers fitting into one cache line. 
*/ #define CACHE_LINE_BURST (RTE_CACHE_LINE_SIZE / sizeof(uintptr_t)) @@ -204,6 +206,10 @@ per_lcore_mempool_test(void *arg) CACHE_LINE_BURST, CACHE_LINE_BURST); else if (n_get_bulk == 32) ret = test_loop(mp, cache, n_keep, 32, 32); + else if (n_get_bulk == 64) + ret = test_loop(mp, cache, n_keep, 64, 64); + else if (n_get_bulk == 128) + ret = test_loop(mp, cache, n_keep, 128, 128); else ret = -1; @@ -289,9 +295,9 @@ launch_cores(struct rte_mempool *mp, unsigned int cores) static int do_one_mempool_test(struct rte_mempool *mp, unsigned int cores) { - unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 0 }; - unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 0 }; - unsigned int keep_tab[] = { 32, 128, 512, 0 }; + unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128, 0 }; + unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128, 0 }; + unsigned int keep_tab[] = { 32, 128, 512, 2048, 8192, 0 }; unsigned *get_bulk_ptr; unsigned *put_bulk_ptr; unsigned *keep_ptr; @@ -301,6 +307,9 @@ do_one_mempool_test(struct rte_mempool *mp, unsigned int cores) for (put_bulk_ptr = bulk_tab_put; *put_bulk_ptr; put_bulk_ptr++) { for (keep_ptr = keep_tab; *keep_ptr; keep_ptr++) { + if (*keep_ptr < *get_bulk_ptr || *keep_ptr < *put_bulk_ptr) + continue; + use_constant_values = 0; n_get_bulk = *get_bulk_ptr; n_put_bulk = *put_bulk_ptr; @@ -323,7 +332,7 @@ do_one_mempool_test(struct rte_mempool *mp, unsigned int cores) } static int -test_mempool_perf(void) +do_all_mempool_perf_tests(unsigned int cores) { struct rte_mempool *mp_cache = NULL; struct rte_mempool *mp_nocache = NULL; @@ -376,65 +385,73 @@ test_mempool_perf(void) rte_mempool_obj_iter(default_pool, my_obj_init, NULL); - /* performance test with 1, 2 and max cores */ printf("start performance test (without cache)\n"); - - if (do_one_mempool_test(mp_nocache, 1) < 0) + if (do_one_mempool_test(mp_nocache, cores) < 0) goto err; - if (do_one_mempool_test(mp_nocache, 2) < 0) - goto err; - - if (do_one_mempool_test(mp_nocache, rte_lcore_count()) < 0) - goto err; - - /* performance test with 1, 2 and max cores */ printf("start performance test for %s (without cache)\n", default_pool_ops); - - if (do_one_mempool_test(default_pool, 1) < 0) + if (do_one_mempool_test(default_pool, cores) < 0) goto err; - if (do_one_mempool_test(default_pool
RE: [PATCH v2] mempool: test performance with larger bursts
Replied on wrong thread. Sorry. Resubmitted with correct in-reply-to. -Morten
Re: [PATCH 0/8] optimize the firmware loading process
On 1/15/2024 2:54 AM, Chaoyong He wrote: > This patch series aims to speed up the DPDK application start by optimizing > the firmware loading process in several places. > We also simplify the port name in the multiple PF firmware case to make the > customer happy. > <...> > net/nfp: add the elf module > net/nfp: reload the firmware only when firmware changed The first commit above adds the ELF parser capability and the second one reloads the firmware only when the build time is different. I can see this is an optimization effort, to understand the FW status before loading the FW, but relying on the build time seems fragile. Would it help to add a new section to store version information and evaluate based on that information?
Re: [EXT] Re: [PATCH 1/2] config/arm: avoid mcpu and march conflicts
On Mon, Jan 22, 2024 at 12:54 PM Pavan Nikhilesh Bhagavatula wrote: > > > On Sun, Jan 21, 2024 at 10:37 AM wrote: > > > > > > From: Pavan Nikhilesh > > > > > > The compiler options march and mtune are a subset > > > of mcpu and will lead to conflicts if improper march > > > is chosen for a given mcpu. > > > To avoid conflicts, force part number march when > > > mcpu is available and is supported by the compiler. > > > > > > Example: > > > march = armv9-a > > > mcpu = neoverse-n2 > > > > > > mcpu supported, march supported > > > machine_args = ['-mcpu=neoverse-n2', '-march=armv9-a'] > > > > > > mcpu supported, march not supported > > > machine_args = ['-mcpu=neoverse-n2'] > > > > > > mcpu not supported, march supported > > > machine_args = ['-march=armv9-a'] > > > > > > mcpu not supported, march not supported > > > machine_args = ['-march=armv8.6-a'] > > > > > > Signed-off-by: Pavan Nikhilesh > > > --- > > > config/arm/meson.build | 109 +-- > > -- > > > 1 file changed, 67 insertions(+), 42 deletions(-) > > > > > > diff --git a/config/arm/meson.build b/config/arm/meson.build > > > index 36f21d2259..8c8cfccca0 100644 > > > --- a/config/arm/meson.build > > > +++ b/config/arm/meson.build > > > > > @@ -127,21 +128,22 @@ implementer_cavium = { > > > ], > > > 'part_number_config': { > > > '0xa1': { > > > -'compiler_options': ['-mcpu=thunderxt88'], > > > +'mcpu': 'thunderxt88', > > > 'flags': flags_part_number_thunderx > > > }, > > > '0xa2': { > > > -'compiler_options': ['-mcpu=thunderxt81'], > > > +'mcpu': 'thunderxt81', > > > 'flags': flags_part_number_thunderx > > > }, > > > '0xa3': { > > > -'compiler_options': ['-march=armv8-a+crc', > > > '-mcpu=thunderxt83'], > > > +'mcpu': 'thunderxt83', > > > +'compiler_options': ['-march=armv8-a+crc'], > > > > Let's unify this with the rest and specify 'march': 'armv8-a+crc' > > instead of having it under compiler_options. > > Ack. > > > > > > 'flags': flags_part_number_thunderx > > > }, > > > '0xaf': { > > > 'march': 'armv8.1-a', > > > 'march_features': ['crc', 'crypto'], > > > -'compiler_options': ['-mcpu=thunderx2t99'], > > > +'mcpu': 'thunderx2t99', > > > 'flags': [ > > > ['RTE_MACHINE', '"thunderx2"'], > > > ['RTE_ARM_FEATURE_ATOMICS', true], > > > @@ -153,7 +155,7 @@ implementer_cavium = { > > > '0xb2': { > > > 'march': 'armv8.2-a', > > > 'march_features': ['crc', 'crypto', 'lse'], > > > -'compiler_options': ['-mcpu=octeontx2'], > > > +'mcpu': 'octeontx2', > > > 'flags': [ > > > ['RTE_MACHINE', '"cn9k"'], > > > ['RTE_ARM_FEATURE_ATOMICS', true], > > > @@ -176,7 +178,7 @@ implementer_ampere = { > > > '0x0': { > > > 'march': 'armv8-a', > > > 'march_features': ['crc', 'crypto'], > > > -'compiler_options': ['-mtune=emag'], > > > +'mcpu': 'emag', > > > > We're changing mtune to mcpu, is this equivalent? > > > > Both march and mtune are a subset of mcpu. > Sure, but we replaced '-mtune=emag' with '-mcpu=emag'. Are these two builds going to be different or the same? 
> > > 'flags': [ > > > ['RTE_MACHINE', '"eMAG"'], > > > ['RTE_MAX_LCORE', 32], > > > @@ -186,7 +188,7 @@ implementer_ampere = { > > > '0xac3': { > > > 'march': 'armv8.6-a', > > > 'march_features': ['crc', 'crypto'], > > > -'compiler_options': ['-mcpu=ampere1'], > > > +'mcpu': 'ampere1', > > > 'flags': [ > > > ['RTE_MACHINE', '"AmpereOne"'], > > > ['RTE_MAX_LCORE', 320], > > > @@ -206,7 +208,7 @@ implementer_hisilicon = { > > > '0xd01': { > > > 'march': 'armv8.2-a', > > > 'march_features': ['crypto'], > > > -'compiler_options': ['-mtune=tsv110'], > > > +'mcpu': 'tsv110', > > > 'flags': [ > > > ['RTE_MACHINE', '"Kunpeng 920"'], > > > ['RTE_ARM_FEATURE_ATOMICS', true], > > > @@ -695,11 +697,23 @@ if update_flags > > > > > > machine_args = [] # Clear previous machine args > > > > > > +candidate_mcpu = '' > > > +support_mcpu = false > > > +if part_number_config.has_key('mcpu') > > > +mcpu = part_number_config['mcpu'] > > > +if (cc.has_argument('-mcpu=' + mcpu)) > > > +candidate_mcpu = mcpu > > > +support_mcpu = true > > > +endif > > > +endif > > > + > > > # probe supported archs and their features > > > candidat
Re: [EXT] Re: [PATCH 2/2] config/arm: add support for fallback march
On Mon, Jan 22, 2024 at 1:16 PM Pavan Nikhilesh Bhagavatula wrote: > > > On Sun, Jan 21, 2024 at 10:37 AM wrote: > > > > > > From: Pavan Nikhilesh > > > > > > Some ARM CPUs have specific march requirements and > > > are not compatible with the supported march list. > > > Add fallback march in case the mcpu and the march > > > advertised in the part_number_config are not supported > > > by the compiler. > > > > > > > It's not clear to me what this patch adds. We already have a fallback > > mechanism and this basically does the same thing, but there's some > > extra logic that's not clear to me. Looks like there are some extra > > conditions around mcpu. In that case, all of the mcpu/march processing > > should be done first and then we should do a common fallback. > > > > The current fallback does a simple reverse compatibility check with the > compiler > when force march is not enabled. > But this is not true for neoverse-n2 case, as it is based on armv9-a which is > a super set of > armv8.5-a and other features[1] > In the current fallback path if both march neoverse-n2 and mcpu armv9-a are > not supported > then it would fallback to armv8.6-a but this not correct as neoverse-n2 is > not based on armv8.5-a > > The fallback march armv8.5-a kicks in (if supported) when neoverse-n2 and > armv9-a are not supported. > Can the two fallback mechanisms be combined? They seem very similar. > > [1] > https://github.com/gcc-mirror/gcc/blob/615e25c82de97acc17ab438f88d6788cf7ffe1d6/gcc/config/arm/arm-cpus.in#L306 > > > > > Example > > > mcpu = neoverse-n2 > > > march = armv9-a > > > fallback_march = armv8.5-a > > > > > > mcpu, march not supported > > > machine_args = ['-march=armv8.5-a'] > > > > > > mcpu, march, fallback_march not supported > > > least march supported = armv8-a > > > > > > machine_args = ['-march=armv8-a'] > > > > > > Signed-off-by: Pavan Nikhilesh > > > --- > > > config/arm/meson.build | 15 +-- > > > 1 file changed, 13 insertions(+), 2 deletions(-) > > > > > > diff --git a/config/arm/meson.build b/config/arm/meson.build > > > index 8c8cfccca0..2aaf78a81a 100644 > > > --- a/config/arm/meson.build > > > +++ b/config/arm/meson.build > > > @@ -94,6 +94,7 @@ part_number_config_arm = { > > > '0xd49': { > > > 'march': 'armv9-a', > > > 'march_features': ['sve2'], > > > +'fallback_march': 'armv8.5-a', > > > 'mcpu': 'neoverse-n2', > > > 'flags': [ > > > ['RTE_MACHINE', '"neoverse-n2"'], > > > @@ -709,14 +710,14 @@ if update_flags > > > > > > # probe supported archs and their features > > > candidate_march = '' > > > +supported_marchs = ['armv9-a', 'armv8.6-a', 'armv8.5-a', 'armv8.4-a', > > > +'armv8.3-a', 'armv8.2-a', 'armv8.1-a', 'armv8-a'] > > > if part_number_config.has_key('march') > > > if part_number_config.get('force_march', false) or support_mcpu > > > if cc.has_argument('-march=' + part_number_config['march']) > > > candidate_march = part_number_config['march'] > > > endif > > > else > > > -supported_marchs = ['armv8.6-a', 'armv8.5-a', 'armv8.4-a', > > > 'armv8.3- > > a', > > > -'armv8.2-a', 'armv8.1-a', 'armv8-a'] > > > check_compiler_support = false > > > foreach supported_march: supported_marchs > > > if supported_march == part_number_config['march'] > > > @@ -733,6 +734,16 @@ if update_flags > > > endif > > > > > > if candidate_march != part_number_config['march'] > > > +if part_number_config.has_key('fallback_march') and not > > support_mcpu > > > +fallback_march = part_number_config['fallback_march'] > > > +foreach supported_march: supported_marchs > > > +if (supported_march == 
fallback_march > > > +and cc.has_argument('-march=' + supported_march)) > > > +candidate_march = supported_march > > > +break > > > +endif > > > +endforeach > > > +endif > > > warning('Configuration march version is @0@, not supported.' > > > .format(part_number_config['march'])) > > > if candidate_march != '' > > > -- > > > 2.25.1 > > >
[PATCH v3 0/3] dts: API docs generation
The generation is done with Sphinx, which DPDK already uses, with slightly modified configuration of the sidebar present in an if block. Dependencies are installed using Poetry from the dts directory: poetry install --with docs After installing, enter the Poetry shell: poetry shell And then run the build: ninja -C dts-doc The patchset contains the .rst sources which Sphinx uses to generate the html pages. These were first generated with the sphinx-apidoc utility and modified to provide a better look. The documentation just doesn't look that good without the modifications and there isn't enough configuration options to achieve that without manual changes to the .rst files. This introduces extra maintenance which involves adding new .rst files when a new Python module is added or changing the .rst structure if the Python directory/file structure is changed (moved, renamed files). This small maintenance burden is outweighed by the flexibility afforded by the ability to make manual changes to the .rst files. v2: Removed the use of sphinx-apidoc from meson in favor of adding the files generated by it directly to the repository (and modifying them). v3: Rebase. Juraj Linkeš (3): dts: add doc generation dependencies dts: add API doc sources dts: add API doc generation buildtools/call-sphinx-build.py | 33 +- doc/api/doxy-api-index.md | 3 + doc/api/doxy-api.conf.in | 2 + doc/api/meson.build | 11 +- doc/guides/conf.py| 39 +- doc/guides/meson.build| 1 + doc/guides/tools/dts.rst | 34 +- dts/doc/conf_yaml_schema.json | 1 + dts/doc/framework.config.rst | 12 + dts/doc/framework.config.types.rst| 6 + dts/doc/framework.dts.rst | 6 + dts/doc/framework.exception.rst | 6 + dts/doc/framework.logger.rst | 6 + ...ote_session.interactive_remote_session.rst | 6 + ...ework.remote_session.interactive_shell.rst | 6 + .../framework.remote_session.python_shell.rst | 6 + ...ramework.remote_session.remote_session.rst | 6 + dts/doc/framework.remote_session.rst | 17 + .../framework.remote_session.ssh_session.rst | 6 + ...framework.remote_session.testpmd_shell.rst | 6 + dts/doc/framework.rst | 30 ++ dts/doc/framework.settings.rst| 6 + dts/doc/framework.test_result.rst | 6 + dts/doc/framework.test_suite.rst | 6 + dts/doc/framework.testbed_model.cpu.rst | 6 + .../framework.testbed_model.linux_session.rst | 6 + dts/doc/framework.testbed_model.node.rst | 6 + .../framework.testbed_model.os_session.rst| 6 + dts/doc/framework.testbed_model.port.rst | 6 + .../framework.testbed_model.posix_session.rst | 6 + dts/doc/framework.testbed_model.rst | 26 + dts/doc/framework.testbed_model.sut_node.rst | 6 + dts/doc/framework.testbed_model.tg_node.rst | 6 + ..._generator.capturing_traffic_generator.rst | 6 + ...mework.testbed_model.traffic_generator.rst | 14 + testbed_model.traffic_generator.scapy.rst | 6 + ...el.traffic_generator.traffic_generator.rst | 6 + ...framework.testbed_model.virtual_device.rst | 6 + dts/doc/framework.utils.rst | 6 + dts/doc/index.rst | 41 ++ dts/doc/meson.build | 27 + dts/meson.build | 16 + dts/poetry.lock | 499 +- dts/pyproject.toml| 7 + meson.build | 1 + 45 files changed, 950 insertions(+), 20 deletions(-) create mode 12 dts/doc/conf_yaml_schema.json create mode 100644 dts/doc/framework.config.rst create mode 100644 dts/doc/framework.config.types.rst create mode 100644 dts/doc/framework.dts.rst create mode 100644 dts/doc/framework.exception.rst create mode 100644 dts/doc/framework.logger.rst create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst create mode 100644 
dts/doc/framework.remote_session.interactive_shell.rst create mode 100644 dts/doc/framework.remote_session.python_shell.rst create mode 100644 dts/doc/framework.remote_session.remote_session.rst create mode 100644 dts/doc/framework.remote_session.rst create mode 100644 dts/doc/framework.remote_session.ssh_session.rst create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst create mode 100644 dts/doc/framework.rst create mode 100644 dts/doc/framework.settings.rst create mode 100644 dts/doc/framework.test_result.rst create mode 100644 dts/doc/framework.test_suite.rst create mode 100644 dts/doc/framework.testbed_model.cpu.rst create mode 100644 dts/doc/framework.testbed_model.linux_session.rst create mode 10064
[PATCH v3 1/3] dts: add doc generation dependencies
Sphinx imports every Python module when generating documentation from docstrings, meaning all dts dependencies, including Python version, must be satisfied. By adding Sphinx to dts dependencies we make sure that the proper Python version and dependencies are used when Sphinx is executed. Signed-off-by: Juraj Linkeš --- dts/poetry.lock| 499 - dts/pyproject.toml | 7 + 2 files changed, 505 insertions(+), 1 deletion(-) diff --git a/dts/poetry.lock b/dts/poetry.lock index a734fa71f0..8b27b0d751 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -1,5 +1,16 @@ # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand. +[[package]] +name = "alabaster" +version = "0.7.13" +description = "A configurable sidebar-enabled Sphinx theme" +optional = false +python-versions = ">=3.6" +files = [ +{file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"}, +{file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"}, +] + [[package]] name = "attrs" version = "23.1.0" @@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib- tests = ["attrs[tests-no-zope]", "zope-interface"] tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] +[[package]] +name = "babel" +version = "2.13.1" +description = "Internationalization utilities" +optional = false +python-versions = ">=3.7" +files = [ +{file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"}, +{file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"}, +] + +[package.dependencies] +setuptools = {version = "*", markers = "python_version >= \"3.12\""} + +[package.extras] +dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"] + [[package]] name = "bcrypt" version = "4.0.1" @@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"] jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"] uvloop = ["uvloop (>=0.15.2)"] +[[package]] +name = "certifi" +version = "2023.7.22" +description = "Python package for providing Mozilla's CA Bundle." +optional = false +python-versions = ">=3.6" +files = [ +{file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, +{file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, +] + [[package]] name = "cffi" version = "1.15.1" @@ -162,6 +201,105 @@ files = [ [package.dependencies] pycparser = "*" +[[package]] +name = "charset-normalizer" +version = "3.3.2" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
+optional = false +python-versions = ">=3.7.0" +files = [ +{file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"}, +{file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"}, +{file = "ch
[PATCH v3 2/3] dts: add API doc sources
These sources could be generated with the sphinx-apidoc utility, but that doesn't give us enough flexibility. The sources included in this patch were in fact generated by said utility, but modified to improve the look of the documentation. The improvements are mainly in toctree definitions and the titles of the modules/packages. These were made with specific config options in mind. Signed-off-by: Juraj Linkeš --- dts/doc/conf_yaml_schema.json | 1 + dts/doc/framework.config.rst | 12 ++ dts/doc/framework.config.types.rst| 6 +++ dts/doc/framework.dts.rst | 6 +++ dts/doc/framework.exception.rst | 6 +++ dts/doc/framework.logger.rst | 6 +++ ...ote_session.interactive_remote_session.rst | 6 +++ ...ework.remote_session.interactive_shell.rst | 6 +++ .../framework.remote_session.python_shell.rst | 6 +++ ...ramework.remote_session.remote_session.rst | 6 +++ dts/doc/framework.remote_session.rst | 17 .../framework.remote_session.ssh_session.rst | 6 +++ ...framework.remote_session.testpmd_shell.rst | 6 +++ dts/doc/framework.rst | 30 ++ dts/doc/framework.settings.rst| 6 +++ dts/doc/framework.test_result.rst | 6 +++ dts/doc/framework.test_suite.rst | 6 +++ dts/doc/framework.testbed_model.cpu.rst | 6 +++ .../framework.testbed_model.linux_session.rst | 6 +++ dts/doc/framework.testbed_model.node.rst | 6 +++ .../framework.testbed_model.os_session.rst| 6 +++ dts/doc/framework.testbed_model.port.rst | 6 +++ .../framework.testbed_model.posix_session.rst | 6 +++ dts/doc/framework.testbed_model.rst | 26 dts/doc/framework.testbed_model.sut_node.rst | 6 +++ dts/doc/framework.testbed_model.tg_node.rst | 6 +++ ..._generator.capturing_traffic_generator.rst | 6 +++ ...mework.testbed_model.traffic_generator.rst | 14 +++ testbed_model.traffic_generator.scapy.rst | 6 +++ ...el.traffic_generator.traffic_generator.rst | 6 +++ ...framework.testbed_model.virtual_device.rst | 6 +++ dts/doc/framework.utils.rst | 6 +++ dts/doc/index.rst | 41 +++ 33 files changed, 297 insertions(+) create mode 12 dts/doc/conf_yaml_schema.json create mode 100644 dts/doc/framework.config.rst create mode 100644 dts/doc/framework.config.types.rst create mode 100644 dts/doc/framework.dts.rst create mode 100644 dts/doc/framework.exception.rst create mode 100644 dts/doc/framework.logger.rst create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst create mode 100644 dts/doc/framework.remote_session.python_shell.rst create mode 100644 dts/doc/framework.remote_session.remote_session.rst create mode 100644 dts/doc/framework.remote_session.rst create mode 100644 dts/doc/framework.remote_session.ssh_session.rst create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst create mode 100644 dts/doc/framework.rst create mode 100644 dts/doc/framework.settings.rst create mode 100644 dts/doc/framework.test_result.rst create mode 100644 dts/doc/framework.test_suite.rst create mode 100644 dts/doc/framework.testbed_model.cpu.rst create mode 100644 dts/doc/framework.testbed_model.linux_session.rst create mode 100644 dts/doc/framework.testbed_model.node.rst create mode 100644 dts/doc/framework.testbed_model.os_session.rst create mode 100644 dts/doc/framework.testbed_model.port.rst create mode 100644 dts/doc/framework.testbed_model.posix_session.rst create mode 100644 dts/doc/framework.testbed_model.rst create mode 100644 dts/doc/framework.testbed_model.sut_node.rst create mode 100644 dts/doc/framework.testbed_model.tg_node.rst create mode 100644 
dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst create mode 100644 dts/doc/framework.utils.rst create mode 100644 dts/doc/index.rst diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json new file mode 12 index 00..d89eb81b72 --- /dev/null +++ b/dts/doc/conf_yaml_schema.json @@ -0,0 +1 @@ +../framework/config/conf_yaml_schema.json \ No newline at end of file diff --git a/dts/doc/framework.config.rst b/dts/doc/framework.config.rst new file mode 100644 index 00..f765ef0e32 --- /dev/null +++ b/dts/doc/framework.config.rst @@ -0,0 +1,12 @@ +config - Configuration Package +== + +.. automodule:: framework.config + :members: +
[PATCH v3 3/3] dts: add API doc generation
The tool used to generate developer docs is Sphinx, which is already in use in DPDK. The same configuration is used to preserve style, but it's been augmented with doc-generating configuration. There's a change that modifies how the sidebar displays the content hierarchy that's been put into an if block to not interfere with regular docs. Sphinx generates the documentation from Python docstrings. The docstring format is the Google format [0] which requires the sphinx.ext.napoleon extension. The other extension, sphinx.ext.intersphinx, enables linking to object in external documentations, such as the Python documentation. There are two requirements for building DTS docs: * The same Python version as DTS or higher, because Sphinx imports the code. * Also the same Python packages as DTS, for the same reason. [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings Signed-off-by: Juraj Linkeš --- buildtools/call-sphinx-build.py | 33 +++- doc/api/doxy-api-index.md | 3 +++ doc/api/doxy-api.conf.in| 2 ++ doc/api/meson.build | 11 +++--- doc/guides/conf.py | 39 - doc/guides/meson.build | 1 + doc/guides/tools/dts.rst| 34 +++- dts/doc/meson.build | 27 +++ dts/meson.build | 16 ++ meson.build | 1 + 10 files changed, 148 insertions(+), 19 deletions(-) create mode 100644 dts/doc/meson.build create mode 100644 dts/meson.build diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py index 39a60d09fa..aea771a64e 100755 --- a/buildtools/call-sphinx-build.py +++ b/buildtools/call-sphinx-build.py @@ -3,37 +3,50 @@ # Copyright(c) 2019 Intel Corporation # +import argparse import sys import os from os.path import join from subprocess import run, PIPE, STDOUT from packaging.version import Version -# assign parameters to variables -(sphinx, version, src, dst, *extra_args) = sys.argv[1:] +parser = argparse.ArgumentParser() +parser.add_argument('sphinx') +parser.add_argument('version') +parser.add_argument('src') +parser.add_argument('dst') +parser.add_argument('--dts-root', default=None) +args, extra_args = parser.parse_known_args() # set the version in environment for sphinx to pick up -os.environ['DPDK_VERSION'] = version +os.environ['DPDK_VERSION'] = args.version +if args.dts_root: +os.environ['DTS_ROOT'] = args.dts_root # for sphinx version >= 1.7 add parallelism using "-j auto" -ver = run([sphinx, '--version'], stdout=PIPE, +ver = run([args.sphinx, '--version'], stdout=PIPE, stderr=STDOUT).stdout.decode().split()[-1] -sphinx_cmd = [sphinx] + extra_args +sphinx_cmd = [args.sphinx] + extra_args if Version(ver) >= Version('1.7'): sphinx_cmd += ['-j', 'auto'] # find all the files sphinx will process so we can write them as dependencies srcfiles = [] -for root, dirs, files in os.walk(src): +for root, dirs, files in os.walk(args.src): srcfiles.extend([join(root, f) for f in files]) +if not os.path.exists(args.dst): +os.makedirs(args.dst) + # run sphinx, putting the html output in a "html" directory -with open(join(dst, 'sphinx_html.out'), 'w') as out: -process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')], - stdout=out) +with open(join(args.dst, 'sphinx_html.out'), 'w') as out: +process = run( +sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')], +stdout=out +) # create a gcc format .d file giving all the dependencies of this doc build -with open(join(dst, '.html.d'), 'w') as d: +with open(join(args.dst, '.html.d'), 'w') as d: d.write('html: ' + ' '.join(srcfiles) + '\n') sys.exit(process.returncode) diff --git a/doc/api/doxy-api-index.md 
b/doc/api/doxy-api-index.md index a6a768bd7c..b49b24acce 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -241,3 +241,6 @@ The public API headers are grouped by topics: [experimental APIs](@ref rte_compat.h), [ABI versioning](@ref rte_function_versioning.h), [version](@ref rte_version.h) + +- **tests**: + [**DTS**](@dts_api_main_page) diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index e94c9e4e46..d53edeba57 100644 --- a/doc/api/doxy-api.conf.in +++ b/doc/api/doxy-api.conf.in @@ -121,6 +121,8 @@ SEARCHENGINE= YES SORT_MEMBER_DOCS= NO SOURCE_BROWSER = YES +ALIASES = "dts_api_main_page=@DTS_API_MAIN_PAGE@" + EXAMPLE_PATH= @TOPDIR@/examples EXAMPLE_PATTERNS= *.c EXAMPLE_RECURSIVE = YES diff --git a/doc/api/meson.build b/doc/api/meson.build index 5b50692df9..ffc75d7b5a 100644 --- a/doc/api/meson.build +++ b/doc/api/meson.build @@ -1,6 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Luca Boccassi +doc_api_build_dir = meson.current_build
Re: DTS testpmd and SCAPY integration
08/01/2024 13:10, Luca Vizzarro: > Your proposal sounds rather interesting. Certainly enabling DTS to > accept YAML-written tests sounds more developer-friendly and should > enable quicker test-writing. As this is an extra feature though – and a > nice-to-have, it should definitely be discussed in the DTS meetings as > Honnappa suggested already. I would not classify this idea as "nice-to-have". I would split this proposal into 2 parts: 1/ YAML is an implementation alternative. 2/ Being able to write a test with a small number of lines, reusing some commands from existing tools, should be our "must-have" common goal. Others have mentioned that YAML may not be suitable in complex cases, and that it would be an additional language for test writing. I personally think we should focus on a single path which is easy to read and maintain. For the configuration side, YAML is already used in DTS. For the test suite logic, do you think we can achieve the same simplicity with some Python code? We discussed how to progress with this proposal during the CI meeting last week. We need to check how it could look and what we can improve to reach this goal. Patrick proposes a meeting this Wednesday at 2pm UTC.
[PATCH 0/4] dts: error and usage improvements
As mentioned in my previous DTS docs improvement patch series, here are some usage improvements to DTS. The main purpose is to give the first-time user of DTS some more meaningful messages of its usage. Secondly, report back stderr to the user when remote commands fail. For example, if DTS tries to run any program which is not installed on the target node, it will just say that it failed with its return code. The only way to see the actual error message is through the DEBUG level of verbosity. Rightfully though, errors should be logged as ERROR. Best, Luca Luca Vizzarro (4): dts: constrain DPDK source flag dts: customise argparse error message dts: show help when DTS is ran without args dts: log stderr with failed remote commands doc/guides/tools/dts.rst | 8 +- dts/framework/exception.py| 10 ++- .../remote_session/remote_session.py | 2 +- dts/framework/settings.py | 83 ++- dts/framework/utils.py| 43 ++ 5 files changed, 104 insertions(+), 42 deletions(-) -- 2.34.1
[PATCH 1/4] dts: constrain DPDK source flag
DTS needs an input to gather the DPDK source code from. This is then built on the remote target. This commit makes sure that this input is more constrained, separating the Git revision ID – used to create a tarball using Git – and providing tarballed source code directly, while retaining mutual exclusion. This makes the code more readable and easier to handle for input validation, of which this commit introduces a basic one based on the pre-existing code. Moreover it ensures that these flags are explicitly required to be set by the user, dropping a default value. It also aids the user understand how to use the DTS in the scenario it is ran without any arguments set. Reviewed-by: Paul Szczepanek Signed-off-by: Luca Vizzarro --- doc/guides/tools/dts.rst | 8 +++-- dts/framework/settings.py | 64 --- dts/framework/utils.py| 43 -- 3 files changed, 79 insertions(+), 36 deletions(-) diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index 846696e14e..6e2da317e8 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -215,12 +215,16 @@ DTS is run with ``main.py`` located in the ``dts`` directory after entering Poet .. code-block:: console (dts-py3.10) $ ./main.py --help - usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT] [-v] [-s] [--tarball TARBALL] [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES] [--re-run RE_RUN] + usage: main.py [-h] (--tarball FILEPATH | --revision ID) [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT] [-v] [-s] [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES] [--re-run RE_RUN] Run DPDK test suites. All options may be specified with the environment variables provided in brackets. Command line arguments have higher priority. options: -h, --helpshow this help message and exit + --tarball FILEPATH, --snapshot FILEPATH + Path to DPDK source code tarball to test. (default: None) + --revision ID, --rev ID, --git-ref ID + Git revision ID to test. Could be commit, tag, tree ID and vice versa. To test local changes, first commit them, then use their commit ID (default: None) --config-file CONFIG_FILE [DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets. (default: ./conf.yaml) --output-dir OUTPUT_DIR, --output OUTPUT_DIR @@ -229,8 +233,6 @@ DTS is run with ``main.py`` located in the ``dts`` directory after entering Poet [DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK. (default: 15) -v, --verbose [DTS_VERBOSE] Specify to enable verbose output, logging all messages to the console. (default: False) -s, --skip-setup [DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes. (default: None) - --tarball TARBALL, --snapshot TARBALL, --git-ref TARBALL - [DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, tag ID or tree ID to test. To test local changes, first commit them, then use the commit ID with this option. (default: dpdk.tar.xz) --compile-timeout COMPILE_TIMEOUT [DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK. 
(default: 1200) --test-cases TEST_CASES diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 609c8d0e62..2d0365e763 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -76,7 +76,8 @@ from pathlib import Path from typing import Any, TypeVar -from .utils import DPDKGitTarball +from .exception import ConfigurationError +from .utils import DPDKGitTarball, get_commit_id _T = TypeVar("_T") @@ -149,6 +150,26 @@ def __call__( return _EnvironmentArgument +def _parse_tarball(filepath: str) -> Path: +"""Validate if the filepath is valid and return a Path object.""" +path = Path(filepath) +if not path.exists() or not path.is_file(): +raise argparse.ArgumentTypeError( +"the file path provided is not a valid file") +return path + + +def _parse_revision_id(rev_id: str) -> str: +"""Retrieve effective commit ID from a revision ID. While validating it.""" + +try: +return get_commit_id(rev_id) +except ConfigurationError: +raise argparse.ArgumentTypeError( +"the Git revision ID supplied is invalid or ambiguous" +) + + @dataclass(slots=True) class Settings: """Default framework-wide user settings. @@ -167,7 +188,7 @@ class Settings: #: skip_setup: bool = False #: -dpdk_tarball_path: Path | str = "dpdk.tar.xz" +dpdk_tarball_path: Path | str = "" #: compile_timeout: float = 1200 #: @@ -186,6 +207,28 @@ def _get_parser() -> argparse.ArgumentParser: formatter_class=argparse.ArgumentDefaultsHelpFormatter,
[PATCH 3/4] dts: show help when DTS is ran without args
This commit changes the default behaviour of DTS, making it so that the user automatically sees the help and usage page when running it without any arguments set. Instead of being welcomed by an error message. Reviewed-by: Paul Szczepanek Signed-off-by: Luca Vizzarro --- dts/framework/settings.py | 6 ++ 1 file changed, 6 insertions(+) diff --git a/dts/framework/settings.py b/dts/framework/settings.py index acfe5cad44..5809fd4e91 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -71,6 +71,7 @@ import argparse import os +import sys from collections.abc import Callable, Iterable, Sequence from dataclasses import dataclass, field from pathlib import Path @@ -315,6 +316,11 @@ def get_settings() -> Settings: The inputs are taken from the command line and from environment variables. """ + +if len(sys.argv) == 1: +_get_parser().print_help() +sys.exit(1) + parsed_args = _get_parser().parse_args() return Settings( config_file_path=parsed_args.config_file, -- 2.34.1
[PATCH 2/4] dts: customise argparse error message
This commit customises the arguments parsing class' error message, making it so the confusing usage is not displayed in these occurrences, but the user is redirected to use the help argument instead. Reviewed-by: Paul Szczepanek Signed-off-by: Luca Vizzarro --- dts/framework/settings.py | 13 +++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 2d0365e763..acfe5cad44 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -170,6 +170,15 @@ def _parse_revision_id(rev_id: str) -> str: ) +class ArgumentParser(argparse.ArgumentParser): +"""ArgumentParser with a custom error message.""" +def error(self, message): +print(f"{self.prog}: error: {message}\n", file=sys.stderr) +self.exit(2, + "For help and usage, " + "run the command with the --help flag.\n") + + @dataclass(slots=True) class Settings: """Default framework-wide user settings. @@ -200,8 +209,8 @@ class Settings: SETTINGS: Settings = Settings() -def _get_parser() -> argparse.ArgumentParser: -parser = argparse.ArgumentParser( +def _get_parser() -> ArgumentParser: +parser = ArgumentParser( description="Run DPDK test suites. All options may be specified with the environment " "variables provided in brackets. Command line arguments have higher priority.", formatter_class=argparse.ArgumentDefaultsHelpFormatter, -- 2.34.1
[PATCH 4/4] dts: log stderr with failed remote commands
Add the executed command stderr to RemoteCommandExecutionError. So that it is logged as an error, instead of just as debug. Reviewed-by: Paul Szczepanek Signed-off-by: Luca Vizzarro --- dts/framework/exception.py | 10 +++--- dts/framework/remote_session/remote_session.py | 2 +- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 658eee2c38..9fc3fa096a 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -130,20 +130,24 @@ class RemoteCommandExecutionError(DTSError): #: The executed command. command: str _command_return_code: int +_command_stderr: str -def __init__(self, command: str, command_return_code: int): +def __init__(self, command: str, command_return_code: int, command_stderr: str): """Define the meaning of the first two arguments. Args: command: The executed command. command_return_code: The return code of the executed command. +command_stderr: The stderr of the executed command. """ self.command = command self._command_return_code = command_return_code +self._command_stderr = command_stderr def __str__(self) -> str: -"""Include both the command and return code in the string representation.""" -return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}" +"""Include the command, its return code and stderr in the string representation.""" +return (f"Command '{self.command}' returned a non-zero exit code: " +f"{self._command_return_code}\n{self._command_stderr}") class RemoteDirectoryExistsError(DTSError): diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py index 2059f9a981..345439f2de 100644 --- a/dts/framework/remote_session/remote_session.py +++ b/dts/framework/remote_session/remote_session.py @@ -157,7 +157,7 @@ def send_command( ) self._logger.debug(f"stdout: '{result.stdout}'") self._logger.debug(f"stderr: '{result.stderr}'") -raise RemoteCommandExecutionError(command, result.return_code) +raise RemoteCommandExecutionError(command, result.return_code, result.stderr) self._logger.debug(f"Received from '{command}':\n{result}") self.history.append(result) return result -- 2.34.1
[PATCH 0/7] net/gve: RSS Support for GVE Driver
This patch series introduces RSS support for the GVE poll-mode driver. This series includes implementations of the following eth_dev_ops: 1) rss_hash_update 2) rss_hash_conf_get 3) reta_query 4) reta_update In rss_hash_update, the GVE driver supports the following RSS hash types: * RTE_ETH_RSS_IPV4 * RTE_ETH_RSS_NONFRAG_IPV4_TCP * RTE_ETH_RSS_NONFRAG_IPV4_UDP * RTE_ETH_RSS_IPV6 * RTE_ETH_RSS_IPV6_EX * RTE_ETH_RSS_NONFRAG_IPV6_TCP * RTE_ETH_RSS_NONFRAG_IPV6_UDP * RTE_ETH_RSS_IPV6_TCP_EX * RTE_ETH_RSS_IPV6_UDP_EX The hash key is 40B, and the lookup table has 128 entries. These values are not configurable in this implementation. In general, the DPDK driver expects the RSS hash configuration to be set with a key before the redirection table is set up. When the RSS hash is configured, a default redirection table is generated based on the number of queues. When the device is re-configured, the redirection table is reset to the default value based on the queue count. An important note is that the gVNIC device expects 32 bit integers for RSS redirection table entries, while the RTE API uses 16 bit integers. However, this is unlikely to be an issue, as these values represent receive queues, and the gVNIC device does not support anywhere near 64K queues. This series also updates the corresponding feature matrix entries and documentation as it pertains to RSS support in the GVE driver. Joshua Washington (7): net/gve: fully expose RSS offload support in dev_info net/gve: RSS adminq command changes net/gve: add gve_rss library for handling RSS-related behaviors net/gve: RSS configuration update support net/gve: RSS redirection table update support net/gve: update gve.ini with RSS capabilities net/gve: update GVE documentation with RSS support doc/guides/nics/features/gve.ini | 3 + doc/guides/nics/gve.rst | 16 ++- drivers/net/gve/base/gve.h| 15 ++ drivers/net/gve/base/gve_adminq.c | 59 drivers/net/gve/base/gve_adminq.h | 21 +++ drivers/net/gve/gve_ethdev.c | 231 +- drivers/net/gve/gve_ethdev.h | 17 +++ drivers/net/gve/gve_rss.c | 206 ++ drivers/net/gve/gve_rss.h | 107 ++ drivers/net/gve/meson.build | 1 + 10 files changed, 667 insertions(+), 9 deletions(-) create mode 100644 drivers/net/gve/gve_rss.c create mode 100644 drivers/net/gve/gve_rss.h -- 2.43.0.429.g432eaa2c6b-goog
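As an aside, the 16-bit vs 32-bit mismatch mentioned in the cover letter boils down to a copy such as the sketch below when translating the RTE redirection table into the device format; this is an illustrative fragment built on the standard ethdev RETA structures, not the driver code from this series.

/* Sketch: expand 16-bit RETA entries from the rte_eth_rss_reta_entry64 groups
 * into a 32-bit indirection table as expected by the device. */
#include <stdint.h>
#include <rte_ethdev.h>

static void
reta_to_device(const struct rte_eth_rss_reta_entry64 *reta_conf,
	       uint32_t *dev_indir, uint16_t reta_size)
{
	uint16_t i;

	for (i = 0; i < reta_size; i++) {
		uint16_t group = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

		/* Only entries whose mask bit is set are meant to be updated. */
		if (reta_conf[group].mask & (1ULL << pos))
			dev_indir[i] = reta_conf[group].reta[pos];
	}
}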
[PATCH 1/7] net/gve: fully expose RSS offload support in dev_info
--- drivers/net/gve/gve_ethdev.c | 4 +++- drivers/net/gve/gve_ethdev.h | 8 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index d162fd3864..6acdb4e13b 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -405,7 +405,7 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_mtu = priv->max_mtu; dev_info->min_mtu = RTE_ETHER_MIN_MTU; - dev_info->rx_offload_capa = 0; + dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_RSS_HASH; dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS | RTE_ETH_TX_OFFLOAD_UDP_CKSUM| @@ -442,6 +442,8 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) .nb_align = 1, }; + dev_info->flow_type_rss_offloads = GVE_RSS_OFFLOAD_ALL; + return 0; } diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 9893fcfee6..14c72ec91a 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -33,6 +33,14 @@ RTE_MBUF_F_TX_L4_MASK |\ RTE_MBUF_F_TX_TCP_SEG) +#define GVE_RSS_OFFLOAD_ALL ( \ + RTE_ETH_RSS_IPV4 | \ + RTE_ETH_RSS_NONFRAG_IPV4_TCP | \ + RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_IPV6_EX | \ + RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ + RTE_ETH_RSS_IPV6_TCP_EX) + /* A list of pages registered with the device during setup and used by a queue * as buffers */ -- 2.43.0.429.g432eaa2c6b-goog
[PATCH 2/7] net/gve: RSS adminq command changes
This change introduces admin queue changes that enable the configuration of RSS parameters for the GVE driver. --- drivers/net/gve/base/gve.h| 15 drivers/net/gve/base/gve_adminq.c | 59 +++ drivers/net/gve/base/gve_adminq.h | 29 +++ drivers/net/gve/gve_ethdev.h | 6 +++- 4 files changed, 108 insertions(+), 1 deletion(-) diff --git a/drivers/net/gve/base/gve.h b/drivers/net/gve/base/gve.h index f7b297e759..9c58fc4238 100644 --- a/drivers/net/gve/base/gve.h +++ b/drivers/net/gve/base/gve.h @@ -51,4 +51,19 @@ enum gve_state_flags_bit { GVE_PRIV_FLAGS_NAPI_ENABLED = 4, }; +enum gve_rss_hash_algorithm { + GVE_RSS_HASH_UNDEFINED = 0, + GVE_RSS_HASH_TOEPLITZ = 1, +}; + +struct gve_rss_config { + uint16_t hash_types; + enum gve_rss_hash_algorithm alg; + uint16_t key_size; + uint16_t indir_size; + uint8_t *key; + uint32_t *indir; +}; + + #endif /* _GVE_H_ */ diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c index 343bd13d67..897c3dce03 100644 --- a/drivers/net/gve/base/gve_adminq.c +++ b/drivers/net/gve/base/gve_adminq.c @@ -389,6 +389,9 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv, case GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES: priv->adminq_dcfg_device_resources_cnt++; break; + case GVE_ADMINQ_CONFIGURE_RSS: + priv->adminq_cfg_rss_cnt++; + break; case GVE_ADMINQ_SET_DRIVER_PARAMETER: priv->adminq_set_driver_parameter_cnt++; break; @@ -938,3 +941,59 @@ int gve_adminq_get_ptype_map_dqo(struct gve_priv *priv, gve_free_dma_mem(&ptype_map_dma_mem); return err; } + +int gve_adminq_configure_rss(struct gve_priv *priv, +struct gve_rss_config *rss_config) +{ + struct gve_dma_mem indirection_table_dma_mem; + struct gve_dma_mem rss_key_dma_mem; + union gve_adminq_command cmd; + __be32 *indir = NULL; + u8 *key = NULL; + int err = 0; + int i; + + if (rss_config->indir_size) { + indir = gve_alloc_dma_mem(&indirection_table_dma_mem, + rss_config->indir_size * + sizeof(*rss_config->indir)); + if (!indir) { + err = -ENOMEM; + goto out; + } + for (i = 0; i < rss_config->indir_size; i++) + indir[i] = cpu_to_be32(rss_config->indir[i]); + } + + if (rss_config->key_size) { + key = gve_alloc_dma_mem(&rss_key_dma_mem, + rss_config->key_size * + sizeof(*rss_config->key)); + if (!key) { + err = -ENOMEM; + goto out; + } + memcpy(key, rss_config->key, rss_config->key_size); + } + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = cpu_to_be32(GVE_ADMINQ_CONFIGURE_RSS); + cmd.configure_rss = (struct gve_adminq_configure_rss) { + .hash_types = cpu_to_be16(rss_config->hash_types), + .halg = rss_config->alg, + .hkey_len = cpu_to_be16(rss_config->key_size), + .indir_len = cpu_to_be16(rss_config->indir_size), + .hkey_addr = cpu_to_be64(rss_key_dma_mem.pa), + .indir_addr = cpu_to_be64(indirection_table_dma_mem.pa), + }; + + err = gve_adminq_execute_cmd(priv, &cmd); + +out: + if (indir) + gve_free_dma_mem(&indirection_table_dma_mem); + if (key) + gve_free_dma_mem(&rss_key_dma_mem); + return err; +} + diff --git a/drivers/net/gve/base/gve_adminq.h b/drivers/net/gve/base/gve_adminq.h index f05362f85f..95f4960561 100644 --- a/drivers/net/gve/base/gve_adminq.h +++ b/drivers/net/gve/base/gve_adminq.h @@ -19,6 +19,7 @@ enum gve_adminq_opcodes { GVE_ADMINQ_DESTROY_TX_QUEUE = 0x7, GVE_ADMINQ_DESTROY_RX_QUEUE = 0x8, GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES = 0x9, + GVE_ADMINQ_CONFIGURE_RSS= 0xA, GVE_ADMINQ_SET_DRIVER_PARAMETER = 0xB, GVE_ADMINQ_REPORT_STATS = 0xC, GVE_ADMINQ_REPORT_LINK_SPEED= 0xD, @@ -377,6 +378,27 @@ struct gve_adminq_get_ptype_map { __be64 ptype_map_addr; }; +#define GVE_RSS_HASH_IPV4 
BIT(0) +#define GVE_RSS_HASH_TCPV4 BIT(1) +#define GVE_RSS_HASH_IPV6 BIT(2) +#define GVE_RSS_HASH_IPV6_EX BIT(3) +#define GVE_RSS_HASH_TCPV6 BIT(4) +#define GVE_RSS_HASH_TCPV6_EX BIT(5) +#define GVE_RSS_HASH_UDPV4 BIT(6) +#define GVE_RSS_HASH_UDPV6 BIT(7) +#define GVE_RSS_HASH_UDPV6_EX
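For orientation, here is a minimal caller-side sketch of how the new admin queue command could be driven. It is illustrative only and not part of the patch: it assumes the 40-byte key and 128-entry indirection table used later in this series, and relies only on the structures and the gve_adminq_configure_rss() signature shown above.

/* Sketch: build a gve_rss_config and submit it on the admin queue.
 * The key and indirection table are assumed to be caller-owned buffers
 * of 40 bytes and 128 entries respectively.
 */
static int
gve_rss_adminq_example(struct gve_priv *priv, uint8_t *key, uint32_t *indir)
{
	struct gve_rss_config cfg = {
		.hash_types = GVE_RSS_HASH_IPV4 | GVE_RSS_HASH_TCPV4 |
			      GVE_RSS_HASH_UDPV4,
		.alg = GVE_RSS_HASH_TOEPLITZ,
		.key_size = 40,
		.indir_size = 128,
		.key = key,
		.indir = indir,
	};

	/* gve_adminq_configure_rss() DMA-maps the key and table, issues
	 * GVE_ADMINQ_CONFIGURE_RSS and frees the DMA buffers on exit. */
	return gve_adminq_configure_rss(priv, &cfg);
}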
[PATCH 3/7] net/gve: add gve_rss library for handling RSS-related behaviors
This change includes a number of helper functions to facilitate RSS configuration on the GVE DPDK driver. These methods are declared in gve_rss.h. --- drivers/net/gve/base/gve_adminq.h | 10 +- drivers/net/gve/gve_ethdev.c | 2 +- drivers/net/gve/gve_ethdev.h | 4 +- drivers/net/gve/gve_rss.c | 206 ++ drivers/net/gve/gve_rss.h | 107 drivers/net/gve/meson.build | 1 + 6 files changed, 319 insertions(+), 11 deletions(-) create mode 100644 drivers/net/gve/gve_rss.c create mode 100644 drivers/net/gve/gve_rss.h diff --git a/drivers/net/gve/base/gve_adminq.h b/drivers/net/gve/base/gve_adminq.h index 95f4960561..24abd945cc 100644 --- a/drivers/net/gve/base/gve_adminq.h +++ b/drivers/net/gve/base/gve_adminq.h @@ -378,15 +378,6 @@ struct gve_adminq_get_ptype_map { __be64 ptype_map_addr; }; -#define GVE_RSS_HASH_IPV4 BIT(0) -#define GVE_RSS_HASH_TCPV4 BIT(1) -#define GVE_RSS_HASH_IPV6 BIT(2) -#define GVE_RSS_HASH_IPV6_EX BIT(3) -#define GVE_RSS_HASH_TCPV6 BIT(4) -#define GVE_RSS_HASH_TCPV6_EX BIT(5) -#define GVE_RSS_HASH_UDPV4 BIT(6) -#define GVE_RSS_HASH_UDPV6 BIT(7) -#define GVE_RSS_HASH_UDPV6_EX BIT(8) /* RSS configuration command */ struct gve_adminq_configure_rss { @@ -428,6 +419,7 @@ union gve_adminq_command { GVE_CHECK_UNION_LEN(64, gve_adminq_command); struct gve_priv; +struct gve_rss_config; struct gve_queue_page_list; int gve_adminq_alloc(struct gve_priv *priv); void gve_adminq_free(struct gve_priv *priv); diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 6acdb4e13b..936ca22cb9 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -442,7 +442,7 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) .nb_align = 1, }; - dev_info->flow_type_rss_offloads = GVE_RSS_OFFLOAD_ALL; + dev_info->flow_type_rss_offloads = GVE_RTE_RSS_OFFLOAD_ALL; return 0; } diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index aa8291f235..b5118e737a 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -33,7 +33,7 @@ RTE_MBUF_F_TX_L4_MASK |\ RTE_MBUF_F_TX_TCP_SEG) -#define GVE_RSS_OFFLOAD_ALL ( \ +#define GVE_RTE_RSS_OFFLOAD_ALL ( \ RTE_ETH_RSS_IPV4 | \ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \ RTE_ETH_RSS_IPV6 | \ @@ -290,6 +290,8 @@ struct gve_priv { const struct rte_memzone *stats_report_mem; uint16_t stats_start_idx; /* start index of array of stats written by NIC */ uint16_t stats_end_idx; /* end index of array of stats written by NIC */ + + struct gve_rss_config rss_config; }; static inline bool diff --git a/drivers/net/gve/gve_rss.c b/drivers/net/gve/gve_rss.c new file mode 100644 index 00..b1cd0e16f2 --- /dev/null +++ b/drivers/net/gve/gve_rss.c @@ -0,0 +1,206 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Google LLC + */ + +#include "gve_rss.h" + +int +gve_generate_rss_reta(struct rte_eth_dev *dev, struct gve_rss_config* config) +{ + int i; + if (!config || !config->indir) { + return -EINVAL; + } + + for (i = 0; i < config->indir_size; i++) + config->indir[i] = i % dev->data->nb_rx_queues; + + return 0; +} + + +int +gve_init_rss_config(struct gve_rss_config *gve_rss_conf, + uint16_t key_size, uint16_t indir_size) +{ + int err; + + gve_rss_conf->alg = GVE_RSS_HASH_TOEPLITZ; + + gve_rss_conf->key_size = key_size; + gve_rss_conf->key = rte_zmalloc("rss key", + key_size * sizeof(*gve_rss_conf->key), + RTE_CACHE_LINE_SIZE); + if (!gve_rss_conf->key) { + return -ENOMEM; + } + + gve_rss_conf->indir_size = indir_size; + gve_rss_conf->indir = rte_zmalloc("rss reta", + indir_size * 
sizeof(*gve_rss_conf->indir), + RTE_CACHE_LINE_SIZE); + if (!gve_rss_conf->indir) { + err = -ENOMEM; + goto err_with_key; + } + + return 0; +err_with_key: + rte_free(gve_rss_conf->key); + return err; +} + +int +gve_init_rss_config_from_priv(struct gve_priv *priv, + struct gve_rss_config *gve_rss_conf) +{ + int err = gve_init_rss_config(gve_rss_conf, priv->rss_config.key_size, + priv->rss_config.indir_size); + if (err) + return err; + + gve_rss_conf->hash_types = priv->rss_config.hash_types; + gve_rss_conf->alg = priv->rss_config.alg; + memcpy(gve_rss_conf->key, priv->rss_config.key, + gve_rss_conf->key_size * sizeof(*gve_rss_conf->key)); + memcpy(gve_rss_conf->indir, priv->rss_config.
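The helpers are easiest to follow end to end. Below is a short sketch of the intended flow; it is illustrative only, it assumes the hash key has already been stored in priv->rss_config, and gve_update_priv_rss_config()/gve_free_rss_config() are the gve_rss.h helpers used by the later patches in this series.

/* Sketch: refresh the device RETA with a round-robin default and, on
 * success, mirror the new table into the driver's private copy.
 */
static int
gve_apply_default_reta(struct rte_eth_dev *dev, struct gve_priv *priv)
{
	struct gve_rss_config cfg;
	int err;

	err = gve_init_rss_config_from_priv(priv, &cfg);
	if (err != 0)
		return err;

	err = gve_generate_rss_reta(dev, &cfg);
	if (err == 0)
		err = gve_adminq_configure_rss(priv, &cfg);
	if (err == 0)
		gve_update_priv_rss_config(priv, &cfg);

	gve_free_rss_config(&cfg);
	return err;
}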
[PATCH 4/7] net/gve: RSS configuration update support
This patch adds support for updating the RSS hash key and hash fields in the GVE PMD through the implementation of rss_hash_update and rss_hash_conf_get. The RSS hash key for gVNIC is required to be 40 bytes. On initial configuration of the RSS hash key, the RSS redirection table will be set to a static default, using a round-robin approach for all queues. Note, however, that this patch does not include support for setting the redirection table explicitly. In dev_configure, if the static redirection table has been set, it will be updated to reflect the new queue count, if it has changed. The RSS key must be set before any other RSS configuration can happen. As such, an attempt to set the hash types before the key is configured will fail. --- drivers/net/gve/gve_ethdev.c | 132 ++- drivers/net/gve/gve_ethdev.h | 5 +- 2 files changed, 134 insertions(+), 3 deletions(-) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 936ca22cb9..76405120b2 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2022 Intel Corporation + * Copyright(C) 2022-2023 Intel Corporation + * Copyright(C) 2023 Google LLC */ #include "gve_ethdev.h" @@ -8,6 +9,7 @@ #include "base/gve_osdep.h" #include "gve_version.h" #include "rte_ether.h" +#include "gve_rss.h" static void gve_write_version(uint8_t *driver_version_register) @@ -88,12 +90,31 @@ gve_dev_configure(struct rte_eth_dev *dev) { struct gve_priv *priv = dev->data->dev_private; - if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) { dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + priv->rss_config.alg = GVE_RSS_HASH_TOEPLITZ; + } if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) priv->enable_rsc = 1; + // Reset RSS RETA in case number of queues changed. + if (priv->rss_config.indir) { + struct gve_rss_config update_reta_config; + gve_init_rss_config_from_priv(priv, &update_reta_config); + gve_generate_rss_reta(dev, &update_reta_config); + + int err = gve_adminq_configure_rss(priv, &update_reta_config); + if (err) + PMD_DRV_LOG(ERR, + "Could not reconfigure RSS redirection table."); + else + gve_update_priv_rss_config(priv, &update_reta_config); + + gve_free_rss_config(&update_reta_config); + return err; + } + return 0; } @@ -443,6 +464,8 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) }; dev_info->flow_type_rss_offloads = GVE_RTE_RSS_OFFLOAD_ALL; + dev_info->hash_key_size = GVE_RSS_HASH_KEY_SIZE; + dev_info->reta_size = GVE_RSS_INDIR_SIZE; return 0; } @@ -646,6 +669,107 @@ gve_xstats_get_names(struct rte_eth_dev *dev, return count; } + +static int +gve_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct gve_priv *priv = dev->data->dev_private; + struct gve_rss_config gve_rss_conf; + int rss_reta_size; + int err; + + if (gve_validate_rss_hf(rss_conf->rss_hf)) { + PMD_DRV_LOG(ERR, "Unsupported hash function."); + return -EINVAL; + } + + if (rss_conf->algorithm != RTE_ETH_HASH_FUNCTION_TOEPLITZ && + rss_conf->algorithm != RTE_ETH_HASH_FUNCTION_DEFAULT) { + PMD_DRV_LOG(ERR, "Device only supports Toeplitz algorithm."); + return -EINVAL; + } + + if (rss_conf->rss_key_len) { + if (rss_conf->rss_key_len != GVE_RSS_HASH_KEY_SIZE) { + PMD_DRV_LOG(ERR, + "Invalid hash key size. 
Only RSS hash key size " + "of %u supported", GVE_RSS_HASH_KEY_SIZE); + return -EINVAL; + } + + if (!rss_conf->rss_key) { + PMD_DRV_LOG(ERR, "RSS key must be non-null."); + return -EINVAL; + } + } else { + if (!priv->rss_config.key_size) { + PMD_DRV_LOG(ERR, "RSS key must be initialized before " + "any other configuration."); + return -EINVAL; + } + rss_conf->rss_key_len = priv->rss_config.key_size; + } + + rss_reta_size = priv->rss_config.indir ? + priv->rss_config.indir_size : + GVE_RSS_INDIR_SIZE; + err = gve_init_rss_config(&gve_rss_conf, rss_conf->rss_key_len, + rss_
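From the application side, the new callback is reached through the generic ethdev API. A hedged example of programming the 40-byte key follows; the key bytes are placeholders, and the .algorithm field assumes an ethdev version that carries the RSS algorithm in rte_eth_rss_conf, as this patch expects.

#include <rte_ethdev.h>

/* Sketch: push a 40-byte Toeplitz key and IPv4 hash types to a gVNIC port. */
static int
configure_gve_rss_key(uint16_t port_id)
{
	static uint8_t rss_key[40] = { 0x6d }; /* fill with 40 real key bytes */
	struct rte_eth_rss_conf conf = {
		.rss_key = rss_key,
		.rss_key_len = sizeof(rss_key),
		.rss_hf = RTE_ETH_RSS_IPV4 |
			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
		.algorithm = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
	};

	return rte_eth_dev_rss_hash_update(port_id, &conf);
}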
[PATCH 5/7] net/gve: RSS redirection table update support
This patch introduces support for updating the RSS redirection table in the GVE PMD through the implementation of rss_reta_update and rss_reta_query. Due to an infrastructure limitation, the RSS hash key must be manually configured before the redirection table can be updated or queried. The redirection table is expected to be exactly 128 bytes. --- drivers/net/gve/gve_ethdev.c | 95 1 file changed, 95 insertions(+) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 76405120b2..aa9b3d8372 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -770,6 +770,97 @@ gve_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } +static int +gve_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) +{ + struct gve_priv *priv = dev->data->dev_private; + struct gve_rss_config gve_rss_conf; + int table_id; + int err; + int i; + + // RSS key must be set before the redirection table can be set. + if (!priv->rss_config.key || priv->rss_config.key_size == 0) { + PMD_DRV_LOG(ERR, "RSS hash key msut be set before the " + "redirection table can be updated."); + return -ENOTSUP; + } + + if (reta_size != GVE_RSS_INDIR_SIZE) { + PMD_DRV_LOG(ERR, "Redirection table must have %hu elements", + GVE_RSS_INDIR_SIZE); + return -EINVAL; + } + + err = gve_init_rss_config_from_priv(priv, &gve_rss_conf); + if (err) { + PMD_DRV_LOG(ERR, "Error allocating new RSS config."); + return err; + } + + table_id = 0; + for (i = 0; i < priv->rss_config.indir_size; i++) { + int table_entry = i % RTE_ETH_RETA_GROUP_SIZE; + if (reta_conf[table_id].mask & (1ULL << table_entry)) + gve_rss_conf.indir[i] = + reta_conf[table_id].reta[table_entry]; + + if (table_entry == RTE_ETH_RETA_GROUP_SIZE - 1) + table_id ++; + } + + err = gve_adminq_configure_rss(priv, &gve_rss_conf); + if (err) + PMD_DRV_LOG(ERR, "Problem configuring RSS with device."); + else + gve_update_priv_rss_config(priv, &gve_rss_conf); + + gve_free_rss_config(&gve_rss_conf); + return err; +} + +static int +gve_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) +{ + struct gve_priv *priv = dev->data->dev_private; + int table_id; + int i; + + if (!(dev->data->dev_conf.rxmode.offloads & + RTE_ETH_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, "RSS not configured."); + return -ENOTSUP; + } + + // RSS key must be set before the redirection table can be queried. 
+ if (!priv->rss_config.key) { + PMD_DRV_LOG(ERR, "RSS hash key must be set before the " + "redirection table can be initialized."); + return -ENOTSUP; + } + + if (reta_size != priv->rss_config.indir_size) { + PMD_DRV_LOG(ERR, "RSS redirection table must have %d entries.", + priv->rss_config.indir_size); + return -EINVAL; + } + + table_id = 0; + for (i = 0; i < priv->rss_config.indir_size; i++) { + int table_entry = i % RTE_ETH_RETA_GROUP_SIZE; + if (reta_conf[table_id].mask & (1ULL << table_entry)) + reta_conf[table_id].reta[table_entry] + = priv->rss_config.indir[i]; + + if (table_entry == RTE_ETH_RETA_GROUP_SIZE - 1) + table_id ++; + } + + return 0; +} + static const struct eth_dev_ops gve_eth_dev_ops = { .dev_configure= gve_dev_configure, .dev_start= gve_dev_start, @@ -792,6 +883,8 @@ static const struct eth_dev_ops gve_eth_dev_ops = { .xstats_get_names = gve_xstats_get_names, .rss_hash_update = gve_rss_hash_update, .rss_hash_conf_get= gve_rss_hash_conf_get, + .reta_update = gve_rss_reta_update, + .reta_query = gve_rss_reta_query, }; static const struct eth_dev_ops gve_eth_dev_ops_dqo = { @@ -816,6 +909,8 @@ static const struct eth_dev_ops gve_eth_dev_ops_dqo = { .xstats_get_names = gve_xstats_get_names, .rss_hash_update = gve_rss_hash_update, .rss_hash_conf_get= gve_rss_hash_conf_get, + .reta_update = gve_rss_reta_update, + .reta_query = gve_rss_reta_query, }; static void -- 2.43.0.429.g432eaa2c6b-goog
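Again from the application side, a hedged sketch of filling the 128-entry table through the generic ethdev call. The hash key must already have been programmed, per the note above; the queue assignment here is plain round-robin for illustration.

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

/* Sketch: 128 RETA entries span two rte_eth_rss_reta_entry64 groups of 64. */
static int
configure_gve_reta(uint16_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_rss_reta_entry64 reta[128 / RTE_ETH_RETA_GROUP_SIZE];
	int i;

	if (nb_rx_queues == 0)
		return -EINVAL;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < 128; i++) {
		int grp = i / RTE_ETH_RETA_GROUP_SIZE;
		int idx = i % RTE_ETH_RETA_GROUP_SIZE;

		reta[grp].mask |= 1ULL << idx;
		reta[grp].reta[idx] = i % nb_rx_queues;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta, 128);
}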
[PATCH 6/7] net/gve: update gve.ini with RSS capabilities
--- doc/guides/nics/features/gve.ini | 3 +++ 1 file changed, 3 insertions(+) diff --git a/doc/guides/nics/features/gve.ini b/doc/guides/nics/features/gve.ini index 838edd456a..bc08648dbc 100644 --- a/doc/guides/nics/features/gve.ini +++ b/doc/guides/nics/features/gve.ini @@ -15,3 +15,6 @@ Linux= Y x86-32 = Y x86-64 = Y Usage doc= Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y -- 2.43.0.429.g432eaa2c6b-goog
[PATCH 7/7] net/gve: update GVE documentation with RSS support
--- doc/guides/nics/gve.rst | 16 ++-- 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/doc/guides/nics/gve.rst b/doc/guides/nics/gve.rst index 1c3eaf03ef..80991e70cf 100644 --- a/doc/guides/nics/gve.rst +++ b/doc/guides/nics/gve.rst @@ -70,6 +70,8 @@ Supported features of the GVE PMD are: - Link state information - Tx multi-segments (Scatter Tx) - Tx UDP/TCP/SCTP Checksum +- RSS hash configuration +- RSS redirection table update and query Currently, only GQI_QPL and GQI_RDA queue format are supported in PMD. Jumbo Frame is not supported in PMD for now. @@ -77,10 +79,12 @@ It'll be added in a future DPDK release. Also, only GQI_QPL queue format is in use on GCP since GQI_RDA hasn't been released in production. -Currently, setting MTU with value larger than 1460 is not supported. +RSS +^^^ -Currently, only "RSS hash" is force enabled -so that the backend hardware device calculated hash values -could be shared with applications. -But for RSS, there is no such API to config RSS hash function or RETA table. -So, limited RSS is supported only with default config/setting. +GVE RSS can be enabled and configured using the standard interfaces. The driver +does not support querying the initial RSS configuration. + +The RSS hash key must be exactly 40 bytes, and the redirection table must have +128 entries. The RSS hash key must be configured before the redirection table +can be set up. -- 2.43.0.429.g432eaa2c6b-goog
RE: [PATCH 0/8] optimize the firmware loading process
> -----Original Message-----
> From: Ferruh Yigit
> Sent: Monday, January 22, 2024 11:09 PM
> To: Chaoyong He ; dev@dpdk.org
> Cc: oss-drivers
> Subject: Re: [PATCH 0/8] optimize the firmware loading process
>
> On 1/15/2024 2:54 AM, Chaoyong He wrote:
> > This patch series aims to speed up DPDK application start by
> > optimizing the firmware loading process in several places.
> > We also simplify the port name in the multiple PF firmware case to make
> > the customer happy.
> >
> > <...>
> >
> > net/nfp: add the elf module
> > net/nfp: reload the firmware only when firmware changed
>
> The first commit above adds an ELF parser capability and the second one
> loads the firmware only when the build time is different.
>
> I can see this is an optimization effort, to understand FW status before
> loading FW, but relying on build time seems fragile. Does it help to add a
> new section to store version information and evaluate based on this
> information?
>
We have a branch of firmware images (several app types combined with NFD3/NFDk)
that share the same version information (published monthly), so the version
information cannot help us because we cannot distinguish among them. But the
build time is different for every firmware image, and that is why we chose it.
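To make the trade-off concrete, the check the series relies on boils down to something like the following. The names are purely illustrative and not the actual nfp driver code; the real patches parse the firmware ELF to locate the recorded build time.

#include <stdbool.h>
#include <string.h>

/* Sketch: reload only when the build time embedded in the new image
 * differs from what is already running on the NIC. Version strings can
 * collide across app types and NFD3/NFDk builds; build times do not. */
static bool
nfp_fw_reload_needed(const char *running_build_time, const char *new_build_time)
{
	return strcmp(running_build_time, new_build_time) != 0;
}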
Re: [PATCH v6 00/20] Remove uses of PMD logtype
On Fri, 19 Jan 2024 14:59:58 +0100 David Marchand wrote: > $ git grep 'RTE_LOG_DP(.*fmt' drivers/ | grep -v '\\n' | cut -d : -f 1 > | xargs grep -B1 -w RTE_LOG_DP > drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h-#define > rte_bbdev_dp_log(level, fmt, args...) \ > drivers/baseband/la12xx/bbdev_la12xx_pmd_logs.h:RTE_LOG_DP(level, > PMD, fmt, ## args) > -- > drivers/bus/cdx/cdx_logs.h-#define CDX_BUS_DP_LOG(level, fmt, args...) \ > drivers/bus/cdx/cdx_logs.h:RTE_LOG_DP(level, PMD, fmt, ## args) > -- > drivers/bus/fslmc/fslmc_logs.h-#define DPAA2_BUS_DP_LOG(level, fmt, args...) \ > drivers/bus/fslmc/fslmc_logs.h:RTE_LOG_DP(level, PMD, fmt, ## args) > -- > drivers/common/dpaax/dpaax_logs.h-#define DPAAX_DP_LOG(level, fmt, args...) \ > drivers/common/dpaax/dpaax_logs.h:RTE_LOG_DP(level, PMD, fmt, ## args) > -- > drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h-#define > DPAA2_SEC_DP_LOG(level, fmt, args...) \ > drivers/crypto/dpaa2_sec/dpaa2_sec_logs.h:RTE_LOG_DP(level, PMD, > fmt, ## args) > -- > drivers/crypto/dpaa_sec/dpaa_sec_log.h-#define DPAA_SEC_DP_LOG(level, > fmt, args...) \ > drivers/crypto/dpaa_sec/dpaa_sec_log.h:RTE_LOG_DP(level, PMD, fmt, ## > args) > -- > drivers/event/dlb2/dlb2_log.h-#define DLB2_LOG_DBG(fmt, args...) \ > drivers/event/dlb2/dlb2_log.h:RTE_LOG_DP(DEBUG, PMD, fmt, ## args) > -- > drivers/event/dpaa2/dpaa2_eventdev_logs.h-#define > DPAA2_EVENTDEV_DP_LOG(level, fmt, args...) \ > drivers/event/dpaa2/dpaa2_eventdev_logs.h:RTE_LOG_DP(level, PMD, > fmt, ## args) > -- > drivers/event/dsw/dsw_evdev.h-#define DSW_LOG_DP(level, fmt, args...) > \ > drivers/event/dsw/dsw_evdev.h:RTE_LOG_DP(level, EVENTDEV, "[%s] > %s() line %u: " fmt,\ > -- > drivers/mempool/dpaa/dpaa_mempool.h-#define DPAA_MEMPOOL_DPDEBUG(fmt, > args...) \ > drivers/mempool/dpaa/dpaa_mempool.h:RTE_LOG_DP(DEBUG, PMD, fmt, ## args) > -- > drivers/mempool/dpaa2/dpaa2_hw_mempool_logs.h-#define > DPAA2_MEMPOOL_DP_LOG(level, fmt, args...) \ > drivers/mempool/dpaa2/dpaa2_hw_mempool_logs.h:RTE_LOG_DP(level, > PMD, fmt, ## args) > -- > drivers/net/dpaa/dpaa_ethdev.h-#define DPAA_DP_LOG(level, fmt, args...) \ > drivers/net/dpaa/dpaa_ethdev.h:RTE_LOG_DP(level, PMD, fmt, ## args) > -- > drivers/net/dpaa2/dpaa2_pmd_logs.h-#define DPAA2_PMD_DP_LOG(level, > fmt, args...) \ > drivers/net/dpaa2/dpaa2_pmd_logs.h:RTE_LOG_DP(level, PMD, fmt, ## args) > -- > drivers/net/enetc/enetc_logs.h-#define ENETC_PMD_DP_LOG(level, fmt, args...) \ > drivers/net/enetc/enetc_logs.h:RTE_LOG_DP(level, PMD, fmt, ## args) > -- > drivers/net/enetfec/enet_pmd_logs.h-#define ENETFEC_DP_LOG(level, fmt, > args...) \ > drivers/net/enetfec/enet_pmd_logs.h:RTE_LOG_DP(level, PMD, fmt, ## args) > -- > drivers/net/pfe/pfe_logs.h-#define PFE_DP_LOG(level, fmt, args...) \ > drivers/net/pfe/pfe_logs.h:RTE_LOG_DP(level, PMD, fmt, ## args) Most of these are from the first patch (yours). Shall I fix those as well?
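For context, the conversion those hits call for follows the usual pattern of giving each component its own registered logtype instead of the generic PMD one. A hedged sketch with a hypothetical "foo" driver follows; the macro and variable names vary per driver, and whether the flagged wrappers should also be fixed in this series is the question raised above.

#include <rte_log.h>

/* Register a component logtype and route the data-path log macro to it. */
RTE_LOG_REGISTER_DEFAULT(foo_logtype, NOTICE);
#define RTE_LOGTYPE_FOO foo_logtype

#define FOO_DP_LOG(level, fmt, args...) \
	RTE_LOG_DP(level, FOO, fmt, ## args)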
RE: DTS testpmd and SCAPY integration
> > 08/01/2024 13:10, Luca Vizzarro:
> > Your proposal sounds rather interesting. Certainly enabling DTS to
> > accept YAML-written tests sounds more developer-friendly and should
> > enable quicker test-writing. As this is an extra feature though – and
> > a nice-to-have, it should definitely be discussed in the DTS meetings
> > as Honnappa suggested already.
>
> I would not classify this idea as "nice-to-have".
> I would split this proposal in 2 parts:
> 1/ YAML is an implementation alternative.
> 2/ Being able to write a test with a small number of lines,
> reusing some commands from existing tools,
> should be our "must-have" common goal.
>
> Others have mentioned that YAML may not be suitable in complex cases, and
> that it would be an additional language for test writing.
> I personally think we should focus on a single path which is easy to read and
> maintain.

I think we are digressing from the plan we had put forward if we have to go
down this path. We should understand what adopting the YAML format would
actually entail. Also, what happens if there is another innovation in 3 months?
We already have the scatter-gather test suite ported to the DPDK repo, and it
has undergone review in the community. In the last meeting we went through a
simple test case. Is it possible to write the scatter-gather test case in YAML
so we can see how the two compare?

> For the configuration side, YAML is already used in DTS.
> For the test suite logic, do you think we can achieve the same simplicity with
> some Python code?
>
> We discussed how to progress with this proposal during the CI meeting last
> week.
> We need to check how it could look and what we can improve to reach this
> goal.
> Patrick proposes a meeting this Wednesday at 2pm UTC.
>
RE: [PATCH v2] doc: update LTS maintenance to 3 years
Hi,

> -----Original Message-----
> From: Kevin Traynor
> Sent: Wednesday, January 17, 2024 6:24 PM
> To: dev@dpdk.org
> Cc: bl...@debian.org; NBU-Contact-Thomas Monjalon (EXTERNAL)
> ; david.march...@redhat.com;
> christian.ehrha...@canonical.com; Xueming(Steven) Li
> ; ferruh.yi...@amd.com; john.mcnam...@intel.com;
> techbo...@dpdk.org; Kevin Traynor
> Subject: [PATCH v2] doc: update LTS maintenance to 3 years
>
> The existing official policy was to maintain LTS releases for 2 years.
>
> 19.11 and 20.11 LTS releases were maintained for 3 years and there were
> no significant issues caused by code divergence from main, etc.
>
> Update the policy to indicate 3 years of maintenance for LTS releases, but
> note that it depends on community support.
>
> Signed-off-by: Kevin Traynor
>
Acked-by: Raslan Darawsheh

Kindest regards
Raslan Darawsheh