Re: [dpdk-dev] [PATCH v2] kni: use kni_ethtool_ops only with unknown drivers
Hi Stephen,

I also do not see the point of the current implementation of ethtool
support. That's why I sent this patch – it enables ethtool_ops for all
devices, independent of the underlying driver. Right now only .get_link is
supported, but I am thinking about implementing a larger set of functions,
using a req/resp queue, the same way the netdev_ops functions work.

Regarding KNI itself, we use it as a Linux mirror of a physical port for:
1. Port configuration from Linux – functions such as set_mac, change_mtu,
   etc. The ethtool_ops will be used the same way.
2. Passing control-plane packets to Linux.

Can virtio user be used the same way, as a mirror of a physical port?

Best regards,
Igor

On Sat, Dec 1, 2018 at 2:38 AM Stephen Hemminger wrote:
> On Fri, 30 Nov 2018 22:47:50 +0300
> Igor Ryzhov wrote:
>
> > The current implementation of kni_ethtool_ops just uses the
> > corresponding ethtool_ops function of the underlying driver for all
> > functions except .get_link. This commit sets kni->net_dev->ethtool_ops
> > directly to the ethtool_ops of the corresponding driver.
> >
> > For unknown drivers (all but ixgbe and i40e) we still use
> > kni_ethtool_ops with an implemented .get_link function.
> >
> > Signed-off-by: Igor Ryzhov
>
> Why does KNI still support ethtool, which:
> 1. Only works on a subset of devices
> 2. Requires a 3rd implementation of the HW device (Linux, DPDK, and KNI)
>
> Then again, why does KNI exist at all? What is missing from virtio user,
> which is faster anyway?
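As a sketch of the direction described above, a driver-independent
ethtool_ops for the KNI kernel module could start with just a carrier-based
.get_link (the kni_net_get_link name is an assumption; only the standard
kernel ethtool interfaces are real here):

#include <linux/netdevice.h>
#include <linux/ethtool.h>

/* Report link state from the net_device carrier flag, so no knowledge
 * of the underlying DPDK driver is required. */
static u32 kni_net_get_link(struct net_device *dev)
{
	return netif_carrier_ok(dev) ? 1 : 0;
}

static const struct ethtool_ops kni_ethtool_ops = {
	.get_link = kni_net_get_link,
};

/* At KNI device creation: net_dev->ethtool_ops = &kni_ethtool_ops;
 * further ops could later be backed by a req/resp queue to the DPDK
 * application, as suggested above. */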
Re: [dpdk-dev] [dpdk-stable] [PATCH] eventdev: fix call to strerror in eth Rx adapter
-----Original Message-----
> Date: Thu, 29 Nov 2018 08:53:30 +
> From: Ferruh Yigit
> To: Nikhil Rao , jerin.ja...@caviumnetworks.com
> CC: dev@dpdk.org, sta...@dpdk.org
> Subject: Re: [dpdk-stable] [PATCH] eventdev: fix call to strerror in eth
>  Rx adapter
> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101
>  Thunderbird/60.3.1
>
> On 11/29/2018 8:00 AM, Nikhil Rao wrote:
> > strerror() input parameter should be > 0.
> >
> > Coverity issue: 302864
> > Fixes: 3810ae435783 ("eventdev: add interrupt driven queues to Rx adapter")
> > CC: sta...@dpdk.org
> >
> > Signed-off-by: Nikhil Rao
>
> Reviewed-by: Ferruh Yigit

Applied to dpdk-next-eventdev/master. Thanks.
Re: [dpdk-dev] [PATCH] mbuf: implement generic format for sched field
-----Original Message-----
> Date: Fri, 23 Nov 2018 16:54:23 +
> From: Jasvinder Singh
> To: dev@dpdk.org
> CC: cristian.dumitre...@intel.com, Reshma Pattan
> Subject: [dpdk-dev] [PATCH] mbuf: implement generic format for sched field
> X-Mailer: git-send-email 2.17.1
>
> This patch implements the changes proposed in the deprecation
> notes [1][2].
>
> The opaque mbuf->hash.sched field is updated to support a generic
> definition in line with the ethdev TM and MTR APIs. The new generic
> format contains: queue ID, traffic class, color.
>
> In addition, the following API functions of the sched library have
> been modified with an additional parameter of type struct
> rte_sched_port to accommodate the changes made to the mbuf sched field:
> (i) rte_sched_port_pkt_write()
> (ii) rte_sched_port_pkt_read()
>
> The other libraries, sample applications and tests which use the mbuf
> sched field have been updated as well.
>
> [1] http://mails.dpdk.org/archives/dev/2018-February/090651.html
> [2] https://mails.dpdk.org/archives/dev/2018-November/119051.html
>
> Signed-off-by: Jasvinder Singh
> Signed-off-by: Reshma Pattan
> ---
> @@ -575,12 +575,10 @@ struct rte_mbuf {
>  			 */
>  			} fdir;	/**< Filter identifier if FDIR enabled */
>  			struct {
> -				uint32_t lo;
> -				uint32_t hi;
> -				/**< The event eth Tx adapter uses this field
> -				 * to store Tx queue id.
> -				 * @see rte_event_eth_tx_adapter_txq_set()
> -				 */
> +				uint32_t queue_id; /**< Queue ID. */
> +				uint8_t traffic_class; /**< Traffic class ID. */
> +				uint8_t color; /**< Color. */
> +				uint16_t reserved; /**< Reserved. */
>  			} sched; /**< Hierarchical scheduler */

+Nikhil.

Currently rte_event_eth_tx_adapter_txq_set() and
rte_event_eth_tx_adapter_txq_get() are implemented using
hash.sched.queue_id. How about moving them out from "sched" to "txadapter"?
Something like below:

$ git diff
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 3dbc6695e..b73bbef93 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -575,13 +575,20 @@ struct rte_mbuf {
 			 */
 			} fdir;	/**< Filter identifier if FDIR enabled */
 			struct {
-				uint32_t lo;
-				uint32_t hi;
+				uint32_t queue_id; /**< Queue ID. */
+				uint8_t traffic_class; /**< Traffic class ID. */
+				uint8_t color; /**< Color. */
+				uint16_t reserved; /**< Reserved. */
+			} sched; /**< Hierarchical scheduler */
+			struct {
+				uint32_t reserved1;
+				uint16_t reserved2;
+				uint16_t txq;
 				/**< The event eth Tx adapter uses this field
 				 * to store Tx queue id.
 				 * @see rte_event_eth_tx_adapter_txq_set()
 				 */
-			} sched; /**< Hierarchical scheduler */
+			} txadapter; /**< Eventdev ethdev Tx adapter */
 			/**< User defined tags. See rte_distributor_process() */
 			uint32_t usr;
 		} hash; /**< hash information */

> rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
> {
> -	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
> +	uint16_t *p = (uint16_t *)&pkt->hash.sched.queue_id;
>  	p[1] = queue;
> }
>
> @@ -320,7 +320,7 @@ rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt,
> uint16_t queue)
>  static __rte_always_inline uint16_t __rte_experimental
>  rte_event_eth_tx_adapter_txq_get(struct rte_mbuf *pkt)
>  {
> -	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
> +	uint16_t *p = (uint16_t *)&pkt->hash.sched.queue_id;
>  	return p[1];
>  }
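For illustration, application code would touch the proposed generic field
roughly like this (a sketch assuming the layout in the diff above is
applied; the helper names are made up):

#include <stdint.h>
#include <rte_mbuf.h>

/* Fill the generic sched field proposed above. */
static inline void
pkt_sched_write(struct rte_mbuf *m, uint32_t queue_id,
		uint8_t traffic_class, uint8_t color)
{
	m->hash.sched.queue_id = queue_id;
	m->hash.sched.traffic_class = traffic_class;
	m->hash.sched.color = color;
}

/* Read it back, e.g. on the dequeue side of a scheduler. */
static inline uint32_t
pkt_sched_queue(const struct rte_mbuf *m)
{
	return m->hash.sched.queue_id;
}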
Re: [dpdk-dev] [PATCH] eventdev: fix eth Tx adapter queue count checks
-----Original Message-----
> Date: Thu, 29 Nov 2018 16:41:48 +0530
> From: Nikhil Rao
> To: jerin.ja...@caviumnetworks.com
> CC: dev@dpdk.org, Nikhil Rao , sta...@dpdk.org
> Subject: [PATCH] eventdev: fix eth Tx adapter queue count checks
> X-Mailer: git-send-email 1.8.3.1
>
> rte_event_eth_tx_adapter_queue_add() - add a check
> that returns an error if the ethdev has zero Tx queues
> configured.
>
> rte_event_eth_tx_adapter_queue_del() - remove the
> checks for the ethdev queue count; instead check for
> queues added to the adapter, which may be different
> from the current ethdev queue count.
>
> Fixes: a3bbf2e09756 ("eventdev: add eth Tx adapter implementation")
> Cc: sta...@dpdk.org
> Signed-off-by: Nikhil Rao
> ---
>  lib/librte_eventdev/rte_event_eth_tx_adapter.c | 53 +-
>  1 file changed, 36 insertions(+), 17 deletions(-)
>
> diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> index ccf8a75..8431656 100644
> --- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> +++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> @@ -59,6 +59,19 @@
>  	return -EINVAL; \
>  } while (0)
>
> +#define TXA_CHECK_TXQ(dev, queue) \
> +do {\
> +	if ((dev)->data->nb_tx_queues == 0) { \
> +		RTE_EDEV_LOG_ERR("No tx queues configured"); \
> +		return -EINVAL; \
> +	} \
> +	if (queue != -1 && (uint16_t)queue >= (dev)->data->nb_tx_queues) { \

The queue should be bracketed, i.e. ((queue) != -1), to avoid any side
effects.

> +		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16, \
> +			(uint16_t)queue); \
> +		return -EINVAL; \
> +	} \
> +} while (0)
> +
>  	txa = txa_service_id_to_data(id);
> -	port_id = dev->data->port_id;
>
>  	tqi = txa_service_queue(txa, port_id, tx_queue_id);
>  	if (tqi == NULL || !tqi->added)
> @@ -999,11 +1027,7 @@ static int txa_service_queue_del(uint8_t id,
>  	TXA_CHECK_OR_ERR_RET(id);
>
>  	eth_dev = &rte_eth_devices[eth_dev_id];
> -	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
> -		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
> -			(uint16_t)queue);
> -		return -EINVAL;
> -	}
> +	TXA_CHECK_TXQ(eth_dev, queue);
>
>  	caps = 0;
>  	if (txa_dev_caps_get(id))
> @@ -1034,11 +1058,6 @@ static int txa_service_queue_del(uint8_t id,
>  	TXA_CHECK_OR_ERR_RET(id);
>
>  	eth_dev = &rte_eth_devices[eth_dev_id];
> -	if (queue != -1 && (uint16_t)queue >= eth_dev->data->nb_tx_queues) {
> -		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16,
> -			(uint16_t)queue);
> -		return -EINVAL;
> -	}

Shouldn't we need TXA_CHECK_TXQ here too? And if it is only needed in one
place, do we need the macro at all?

>
>  	caps = 0;
>
> --
> 1.8.3.1
>
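For clarity, a version of the macro with the bracketing the review asks for
might look like this (a sketch against the same file context, not the final
patch):

/* Sketch: every use of the macro arguments is parenthesized so that
 * expressions such as "q + 1" expand safely. Intended for the same
 * file context as the diff above (RTE_EDEV_LOG_ERR, <inttypes.h>). */
#define TXA_CHECK_TXQ(dev, queue) \
do { \
	if ((dev)->data->nb_tx_queues == 0) { \
		RTE_EDEV_LOG_ERR("No tx queues configured"); \
		return -EINVAL; \
	} \
	if ((queue) != -1 && \
	    (uint16_t)(queue) >= (dev)->data->nb_tx_queues) { \
		RTE_EDEV_LOG_ERR("Invalid tx queue_id %" PRIu16, \
			(uint16_t)(queue)); \
		return -EINVAL; \
	} \
} while (0)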
Re: [dpdk-dev] [PATCH] eventdev: remove redundant timer adapter function prototypes
-----Original Message-----
> Date: Thu, 29 Nov 2018 13:45:26 -0600
> From: Erik Gabriel Carrillo
> To: jerin.ja...@caviumnetworks.com
> CC: dev@dpdk.org, sta...@dpdk.org
> Subject: [PATCH] eventdev: remove redundant timer adapter function
>  prototypes
> X-Mailer: git-send-email 1.7.10
>
> Fixes: a6562f6d6f8e ("eventdev: introduce event timer adapter")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Erik Gabriel Carrillo

Acked-by: Jerin Jacob

Applied to dpdk-next-eventdev/master. Thanks.
Re: [dpdk-dev] [PATCH 1/1] app/eventdev: detect deadlock for timer event producer
-----Original Message-----
> Date: Thu, 29 Nov 2018 13:18:51 -0600
> From: Erik Gabriel Carrillo
> To: pbhagavat...@caviumnetworks.com
> CC: jerin.ja...@caviumnetworks.com, dev@dpdk.org
> Subject: [PATCH 1/1] app/eventdev: detect deadlock for timer event producer
> X-Mailer: git-send-email 1.7.10
>
> If timer events get dropped for some reason, the thread that launched
> the producer and worker cores will never exit, because the deadlock check
> doesn't currently apply to the event timer adapter case. This commit
> fixes that.

Please add a Fixes: tag. With the above change,

Acked-by: Jerin Jacob

>
> Signed-off-by: Erik Gabriel Carrillo
> ---
>  app/test-eventdev/test_perf_common.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/app/test-eventdev/test_perf_common.c
> b/app/test-eventdev/test_perf_common.c
> index 8618775..f99a6a6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -327,7 +327,8 @@ perf_launch_lcores(struct evt_test *test, struct
> evt_options *opt,
>  	}
>
>  	if (new_cycles - dead_lock_cycles > dead_lock_sample &&
> -			opt->prod_type == EVT_PROD_TYPE_SYNT) {
> +			(opt->prod_type == EVT_PROD_TYPE_SYNT ||
> +			 opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR)) {
>  		remaining = t->outstand_pkts - processed_pkts(t);
>  		if (dead_lock_remaining == remaining) {
>  			rte_event_dev_dump(opt->dev_id, stdout);
> --
> 2.6.4
>
Re: [dpdk-dev] [PATCH v2] kni: use kni_ethtool_ops only with unknown drivers
On Sat, 1 Dec 2018 14:12:54 +0300
Igor Ryzhov wrote:

> Hi Stephen,
>
> I also do not see the point of the current implementation of ethtool
> support.
> That's why I sent this patch – it enables ethtool_ops for all devices,
> independent of the underlying driver.
> Right now only .get_link is supported, but I am thinking about
> implementing a larger set of functions, using a req/resp queue, the same
> way the netdev_ops functions work.
>
> Regarding KNI itself, we use it as a Linux mirror of a physical port for:
> 1. Port configuration from Linux – functions such as set_mac, change_mtu,
> etc. The ethtool_ops will be used the same way.
> 2. Passing control-plane packets to Linux.
>
> Can virtio user be used the same way, as a mirror of a physical port?
>
> Best regards,
> Igor

In Linux, if a device does not supply get_link, the base code does the
right thing:

u32 ethtool_op_get_link(struct net_device *dev)
{
	return netif_carrier_ok(dev) ? 1 : 0;
}

Doing set_mac, change_mtu and ethtool_ops in virtio_user should be
possible, but is probably not implemented.
Re: [dpdk-dev] [RFC 0/3] tqs: add thread quiescent state library
>
> On Fri, 30 Nov 2018 21:56:30 +0100
> Mattias Rönnblom wrote:
>
> > On 2018-11-30 03:13, Honnappa Nagarahalli wrote:
> > >>
> > >> Reinventing RCU is not helping anyone.
> > > IMO, this depends on what rte_tqs has to offer and what the
> > > requirements are. Before starting this patch, I looked at the liburcu
> > > APIs. I have to say, fairly quickly (no offense) I concluded that this
> > > does not address DPDK's needs. I took a deeper look at the APIs/code
> > > in the past day and I still concluded the same. My partial analysis
> > > (analysis of more APIs can be done, I do not have cycles at this
> > > point) is as follows:
> > >
> > > The reader threads' information is maintained in a linked list [1].
> > > This linked list is protected by a mutex lock [2]. Any
> > > additions/deletions/traversals of this list are blocking and cannot
> > > happen in parallel.
> > >
> > > The API 'synchronize_rcu' [3] (similar functionality to the
> > > rte_tqs_check call) is a blocking call. There is no option provided
> > > to make it non-blocking. The writer spins cycles while waiting for
> > > the grace period to get over.
> > >
> >
> > Wouldn't the options be call_rcu, which rarely blocks, or defer_rcu(),
> > which never does?

call_rcu (I do not know about defer_rcu, have you looked at the
implementation to verify your claim?) requires a separate thread that does
garbage collection (this forces a programming model; the thread is even
launched by the library). call_rcu() allows you to batch and defer the work
to the garbage collector thread. In the garbage collector thread, when
'synchronize_rcu' is called, it spins for at least one grace period.
Deferring and batching also have the side effect that memory is held up
for a longer time.

> > Why would the average application want to wait for the grace period to
> > be over anyway?

I assume when you say 'average application', you mean the writer(s) are on
the control plane. It has been agreed (in the context of rte_hash) that
writer(s) can be on the data plane. In this case, 'synchronize_rcu' cannot
be called from the data plane. If call_rcu has to be called, it adds
additional cycles to push the pointers (or any data) from the data plane to
the garbage collector thread. I kindly suggest you take a look at the
liburcu code and the rte_tqs code for more details. Additionally, the
call_rcu function is more than 10 lines.

> > > 'synchronize_rcu' also has a grace period lock [4]. If I have
> > > multiple writers running on data plane threads, I cannot call this
> > > API to reclaim the memory in the worker threads as it will block
> > > other worker threads. This means there is an extra thread required
> > > (on the control plane?) which does garbage collection, and a method
> > > to push the pointers from worker threads to the garbage collection
> > > thread. This also means the time duration from delete to free
> > > increases, putting pressure on the amount of memory held up.
> > > Since this API cannot be called concurrently by multiple writers,
> > > each writer has to wait for the other writers' grace periods to get
> > > over (i.e. multiple writer threads cannot overlap their grace
> > > periods).
> >
> > "Real" DPDK applications typically have to interact with the outside
> > world using interfaces beyond DPDK packet I/O, and this is best done
> > via an intermediate "control plane" thread running in the DPDK
> > application. Typically, this thread would also be the RCU writer and
> > "garbage collector", I would say.

Agree, that is one way to do it, and it comes with its own issues as I
described above.
> > > This API also has to traverse the linked list, which is not very
> > > well suited for calling on the data plane.
> > >
> > > I have not gone too much into the rcu_thread_offline [5] API. This
> > > again needs to be used in worker cores and does not look to be very
> > > optimal.
> > >
> > > I have glanced at rcu_quiescent_state [6]; it wakes up the thread
> > > calling 'synchronize_rcu', which seems like a good amount of code
> > > for the data plane.
> > >
> >
> > Wouldn't the typical DPDK lcore worker call rcu_quiescent_state()
> > after processing a burst of packets? If so, I would lean more toward
> > "negligible overhead" than "a good amount of code".

DPDK is used in embedded and real-time applications as well. There,
processing a burst of packets is not possible due to low latency
requirements. Hence it is not possible to amortize the cost.

> > I must admit I didn't look at your library in detail, but I must still
> > ask: if TQS is basically RCU, why isn't it called RCU? And why aren't
> > the API calls named in a similar manner?

I kindly request you to take a look at the patch. More than that, if you
have not done so already, please take a look at the liburcu implementation
as well. TQS is not RCU (Read-Copy-Update). TQS helps implement RCU. TQS
helps to understand when the threads have passed through the quiescent
state. I am also not sure why the name liburcu has RCU in it; it does not
do any Read-Copy-Update.

> > We used liburcu at Brocade
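To ground the discussion, this is roughly the worker-side pattern being
debated, sketched with an illustrative report_quiescent_state() call
standing in for the rte_tqs API (the names and the burst-based amortization
are assumptions, not the RFC's final interface):

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Illustrative application state. */
extern volatile bool quit;
extern uint16_t port_id, queue_id;
void handle_packet(struct rte_mbuf *m);          /* reads shared structures */
void report_quiescent_state(unsigned int lcore); /* hypothetical TQS call */

static int
worker_loop(void *arg)
{
	unsigned int lcore = rte_lcore_id();
	struct rte_mbuf *bufs[BURST_SIZE];

	while (!quit) {
		uint16_t n = rte_eth_rx_burst(port_id, queue_id,
					      bufs, BURST_SIZE);
		for (uint16_t i = 0; i < n; i++)
			handle_packet(bufs[i]);

		/* After the burst, this thread holds no references to
		 * shared data, so it reports a quiescent state; a writer
		 * waiting on the grace period can then make progress.
		 * The point of contention above: on low-latency systems
		 * a "burst" may be a single packet, so this per-iteration
		 * cost is not amortized. */
		report_quiescent_state(lcore);
	}
	return 0;
}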
Re: [dpdk-dev] 19.02 Intel Roadmap
Hi,

23/11/2018 12:11, O'Driscoll, Tim:
> As discussed at yesterday's Release Status Meeting, we need to update our
> Roadmap page (http://core.dpdk.org/roadmap/) for 19.02. The features that
> we plan to contribute are described below. We'll submit a patch to update
> the roadmap page with this info.
>
> vDPA Live Migration Software Fallback: The current vDPA library provides
> the vDPA driver an interface to implement hardware-based dirty page
> logging. A software fallback option will be added to handle situations
> where the hardware does not log dirty pages.
>
> IPsec Library: An initial version of a DPDK IPsec library will be
> provided. This will define data structures and APIs to prepare IPsec data
> for crypto processing, APIs to handle ESP header encap/decap for tunnel
> and transport modes, and capabilities such as initialising Security
> Associations. The existing IPsec Security Gateway sample application
> (ipsec-secgw) will be updated to use the new library.
>
> Compression Performance Test Tool: A performance test application will be
> created which will allow performance testing of compression PMDs.
>
> I40E Queue Request: A new operation (VIRTCHNL_OP_REQUEST_QUEUES) will be
> implemented to allow an I40E VF to request a specific number of queues
> from the PF.

What about the new PMD ice for Intel E810?
https://patches.dpdk.org/cover/48285/

It is not planned to be part of DPDK 19.02?
Re: [dpdk-dev] 19.02 Intel Roadmap
On Sun, 02 Dec 2018 01:17:09 +0100
Thomas Monjalon wrote:

> Hi,
>
> 23/11/2018 12:11, O'Driscoll, Tim:
> > As discussed at yesterday's Release Status Meeting, we need to update
> > our Roadmap page (http://core.dpdk.org/roadmap/) for 19.02. The
> > features that we plan to contribute are described below. We'll submit a
> > patch to update the roadmap page with this info.
> >
> > vDPA Live Migration Software Fallback: The current vDPA library
> > provides the vDPA driver an interface to implement hardware-based dirty
> > page logging. A software fallback option will be added to handle
> > situations where the hardware does not log dirty pages.
> >
> > IPsec Library: An initial version of a DPDK IPsec library will be
> > provided. This will define data structures and APIs to prepare IPsec
> > data for crypto processing, APIs to handle ESP header encap/decap for
> > tunnel and transport modes, and capabilities such as initialising
> > Security Associations. The existing IPsec Security Gateway sample
> > application (ipsec-secgw) will be updated to use the new library.
> >
> > Compression Performance Test Tool: A performance test application will
> > be created which will allow performance testing of compression PMDs.
> >
> > I40E Queue Request: A new operation (VIRTCHNL_OP_REQUEST_QUEUES) will
> > be implemented to allow an I40E VF to request a specific number of
> > queues from the PF.
>
> What about the new PMD ice for Intel E810?
> https://patches.dpdk.org/cover/48285/
>
> It is not planned to be part of DPDK 19.02?

As far as I know, the hardware for ICE has not been released yet.
Re: [dpdk-dev] [PATCH 0/4] Allow using external memory without malloc
Hi Anatoly,

Thursday, November 29, 2018 3:49 PM, Anatoly Burakov:
> Subject: [PATCH 0/4] Allow using external memory without malloc
>
> Currently, the only way to use externally allocated memory is through
> rte_malloc API's. While this is fine for a lot of use cases, it may not
> be suitable for certain other use cases like manual memory management,
> etc.
>
> This patchset adds another API to register memory segments with DPDK (so
> that API's like ``rte_mem_virt2memseg`` could be relied on by PMD's and
> such), but not create a malloc heap out of them.
>
> Aside from the obvious (not adding memory to a heap), the other major
> difference between this API and the ``rte_malloc_heap_*`` external
> memory functions is the fact that no DMA mapping is performed
> automatically.
>
> This really draws a line in the sand, and there are now two ways of
> doing things - do everything automatically (using the
> ``rte_malloc_heap_*`` API's), or do everything manually
> (``rte_extmem_*`` and the future DMA mapping API [1] that would replace
> ``rte_vfio_dma_map``). This way, the consistency of the API is kept, and
> flexibility is also allowed.

As you know, I like the idea. One question though: do you see a use case
for an application to have externally allocated memory which needs to be
registered to the DPDK subsystem but is not used for DMA? My only guess
would be some helper libraries which require the memory allocation from
the user (however, that doesn't seem like a good API). If there is no use
case, maybe it is better to merge the two (rte_extmem_* and rte_dma_map)
to have a single call for the app to register and DMA map the memory. The
rte_mem_virt2memseg is not something the application needs to understand;
it is used internally by PMDs or other libs.

> [1] https://mails.dpdk.org/archives/dev/2018-November/118175.html
>
> Note: at the time of this writing, there's no release notes
> template, so no release notes updates in this patchset.
> They will be added in later revisions.
>
> Anatoly Burakov (4):
>   malloc: separate creating memseg list and malloc heap
>   malloc: separate destroying memseg list and heap data
>   mem: allow registering external memory areas
>   mem: allow usage of non-heap external memory in multiprocess
>
>  .../prog_guide/env_abstraction_layer.rst |  63 +++--
>  lib/librte_eal/common/eal_common_memory.c | 116 +
>  lib/librte_eal/common/include/rte_memory.h | 122 ++
>  lib/librte_eal/common/malloc_heap.c | 104 +++
>  lib/librte_eal/common/malloc_heap.h |  15 ++-
>  lib/librte_eal/common/rte_malloc.c | 115 +++--
>  lib/librte_eal/rte_eal_version.map |   4 +
>  7 files changed, 434 insertions(+), 105 deletions(-)
>
> --
> 2.17.1
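For illustration, the manual flow being discussed might look roughly like
this (a sketch based on the proposed ``rte_extmem_*`` API; the separate
DMA mapping API is still under discussion [1], so ``rte_vfio_dma_map``
stands in for it, and the NULL IOVA table / IOVA-as-VA handling is an
assumption):

#include <stdint.h>
#include <sys/mman.h>
#include <rte_memory.h>
#include <rte_vfio.h>

#define EXT_MEM_SZ (16 * 1024 * 1024) /* 16 MB, assumed 4K pages */

int
register_external_area(void)
{
	void *addr = mmap(NULL, EXT_MEM_SZ, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return -1;

	/* Step 1: make the area known to DPDK (rte_mem_virt2memseg etc.
	 * will work), without creating a malloc heap. No IOVA table is
	 * passed here, i.e. IOVA-as-VA is assumed for this sketch. */
	if (rte_extmem_register(addr, EXT_MEM_SZ, NULL, 0, 4096) != 0)
		return -1;

	/* Step 2: DMA-map it explicitly - nothing is mapped automatically. */
	if (rte_vfio_dma_map((uint64_t)(uintptr_t)addr,
			     (uint64_t)(uintptr_t)addr, EXT_MEM_SZ) != 0)
		return -1;

	return 0;
}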
Re: [dpdk-dev] [PATCH 2/3] app/compress-perf: add performance measurement
Ok. Then to keep it simple, can we keep input sz and max-num-sgl-segs as
cmd line inputs? I don't think segsz is required as an input then?

Thanks
Shally

>-----Original Message-----
>From: Jozwiak, TomaszX
>Sent: 30 November 2018 20:13
>To: Verma, Shally ; Trahe, Fiona ; Daly, Lee
>Cc: dev@dpdk.org; akhil.go...@nxp.com
>Subject: RE: [dpdk-dev] [PATCH 2/3] app/compress-perf: add performance
>measurement
>
>External Email
>
>Hi Shally,
>
>I'm about to send v5 of the compression-perf tool.
>
>Our performance testing shows that the number of sgls in a chain can be a
>factor in the performance.
>So we want to keep this on the cmd line for the performance tool.
>There are alternatives, like setting the input size and segment size to
>get the number of segments desired, but I prefer to have the option to
>specify the number of segments explicitly.
>We'll document that if max-num-sgl-segs x seg_sz > input size, then the
>number of segments in the chain will be lower (to store all the data).
>As regards adding the max_nb_segments_per_sgl into the
>rte_compressdev_info struct, we're investigating another workaround to
>this limitation in QAT, so will leave this off the API unless some other
>PMD needs it. In the meantime we'll document the limitation in QAT.
>
>Please let me know your thoughts.
>
>--
>Tomek
>
>> -----Original Message-----
>> From: Verma, Shally [mailto:shally.ve...@cavium.com]
>> Sent: Wednesday, October 17, 2018 6:48 PM
>> To: Trahe, Fiona ; Daly, Lee
>> Cc: Jozwiak, TomaszX ; dev@dpdk.org;
>> akhil.go...@nxp.com
>> Subject: RE: [dpdk-dev] [PATCH 2/3] app/compress-perf: add performance
>> measurement
>>
>> >-----Original Message-----
>> >From: Trahe, Fiona
>> >Sent: 17 October 2018 22:15
>> >To: Verma, Shally ; Daly, Lee
>> >Cc: Jozwiak, TomaszX ; dev@dpdk.org;
>> >akhil.go...@nxp.com; Trahe, Fiona
>> >Subject: RE: [dpdk-dev] [PATCH 2/3] app/compress-perf: add performance
>> >measurement
>> >
>> >External Email
>> >
>> >> -----Original Message-----
>> >> From: Verma, Shally [mailto:shally.ve...@cavium.com]
>> >> Sent: Wednesday, October 17, 2018 8:43 AM
>> >> To: Trahe, Fiona ; Daly, Lee
>> >> Cc: Jozwiak, TomaszX ; dev@dpdk.org;
>> >> akhil.go...@nxp.com
>> >> Subject: RE: [dpdk-dev] [PATCH 2/3] app/compress-perf: add
>> >> performance measurement
>> >>
>> >> >-----Original Message-----
>> >> >From: Trahe, Fiona
>> >> >Sent: 17 October 2018 20:04
>> >> >To: Daly, Lee ; Verma, Shally
>> >> >Cc: Jozwiak, TomaszX ; dev@dpdk.org;
>> >> >akhil.go...@nxp.com; Trahe, Fiona
>> >> >Subject: RE: [dpdk-dev] [PATCH 2/3] app/compress-perf: add
>> >> >performance measurement
>> >> >
>> >> >External Email
>> >> >
>> >> >Hi Shally, Lee,
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: Daly, Lee
>> >> >> Sent: Monday, October 15, 2018 8:10 AM
>> >> >> To: Verma, Shally
>> >> >> Cc: Jozwiak, TomaszX ; dev@dpdk.org;
>> >> >> Trahe, Fiona ; akhil.go...@nxp.com
>> >> >> Subject: RE: [dpdk-dev] [PATCH 2/3] app/compress-perf: add
>> >> >> performance measurement
>> >> >>
>> >> >> Thanks for your input Shally, see comments below.
>> >> >>
>> >> >> I will be reviewing these changes while Tomasz is out this week.
>> >> >>
>> >> >> > -----Original Message-----
>> >> >> > From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Verma,
>> >> >> > Shally
>> >> >> > Sent: Friday, October 12, 2018 11:16 AM
>> >> >> > To: Jozwiak, TomaszX ; dev@dpdk.org;
>> >> >> > Trahe, Fiona ; akhil.go...@nxp.com; De
>> >> >> > Lara Guarch, Pablo
>> >> >> > Cc: d...@dpdk.org; l...@dpdk.org; gua...@dpdk.org
>> >> >> > Subject: Re: [dpdk-dev] [PATCH 2/3] app/compress-perf: add
>> >> >> > performance measurement
>> >> >> >
>> >> >///
>> >> >
>> >> >> >Also, why do we need --max-num-sgl-segs as an input option from
>> >> >> >user? Shouldn't input_sz and seg_sz internally decide on
>> >> >> >num-segs? Or is it added to serve some other different purpose?
>> >> >> Will have to get back to you on this one, seems illogical to get
>> >> >> this input from user, but I will have to do further investigation
>> >> >> to find if there was a different purpose.
>> >> >
>> >> >[Fiona] Some PMDs have a limit on how many links can be in an sgl
>> >> >chain, e.g. in the QAT case the PMD allocates a pool of internal
>> >> >structures of a suitable size during device initialisation; this is
>> >> >not a hard limit but can be configured in .config to give the user
>> >> >control over the memory resources allocated.
>> >> >This perf-tool max-num-sgl-segs is so the user can pick a value <=
>> >> >whatever the PMD's max is.
>> >>
>> >> Then also, I believe this could be taken care of internally by an
>> >> app. The app can choose a convenient number of sgl segs as per the
>> >> PMD capability and the input sz and chunk sz selected by the user.
>> >> Just my thoughts.
>> >[Fiona] Then we'd need to add this capability to the API, e.g. add
>> >uint16_t max_nb_segments_per_sgl into the rte_compressdev_info struct.
>> >Speci
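To make the relationship between the options concrete, a small sketch of
how a tool could derive the actual chain length from the input size,
segment size, and the user's cap (variable names are illustrative, not the
tool's actual code):

#include <stdint.h>

/* Sketch: number of segments actually needed to hold input_sz bytes in
 * seg_sz-byte segments, capped by the user-supplied max-num-sgl-segs.
 * If the cap is too small to hold all the data, the caller must either
 * grow seg_sz or reject the parameters. */
static uint16_t
sgl_segments_needed(uint32_t input_sz, uint32_t seg_sz, uint16_t max_segs)
{
	uint32_t needed = (input_sz + seg_sz - 1) / seg_sz; /* ceil division */

	return needed < max_segs ? (uint16_t)needed : max_segs;
}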
Re: [dpdk-dev] 19.02 Intel Roadmap
Hi all,

> As far as I know the hardware for ICE has not been released yet.

BTW, I see submission and acceptance of ICE patches to the Linux kernel
netdev mailing list and repo.

See: https://www.spinics.net/lists/netdev/msg488187.html
Acceptance:
https://elixir.bootlin.com/linux/latest/source/drivers/net/ethernet/intel/ice

Regards,
Rami Rosen