-----Original Message-----
> Date: Mon, 19 Jun 2017 15:35:22 +0530
> From: Jerin Jacob <jerin.ja...@caviumnetworks.com>
> To: "Rao, Nikhil" <nikhil....@intel.com>
> CC: Gage Eads <gage.e...@intel.com>, dev@dpdk.org, tho...@monjalon.net,
>  bruce.richard...@intel.com, harry.van.haa...@intel.com,
>  hemant.agra...@nxp.com, nipun.gu...@nxp.com, narender.vang...@intel.com
> Subject: Re: [dpdk-dev] [RFC] eventdev: add event adapter for ethernet Rx
>  queues
> User-Agent: Mutt/1.8.3 (2017-05-23)
>
> -----Original Message-----
> > Date: Wed, 24 May 2017 10:00:22 +0530
> > From: "Rao, Nikhil" <nikhil....@intel.com>
> > To: Jerin Jacob <jerin.ja...@caviumnetworks.com>, Gage Eads
> >  <gage.e...@intel.com>
> > CC: dev@dpdk.org, tho...@monjalon.net, bruce.richard...@intel.com,
> >  harry.van.haa...@intel.com, hemant.agra...@nxp.com, nipun.gu...@nxp.com,
> >  narender.vang...@intel.com, nikhil....@intel.com
> > Subject: Re: [RFC] eventdev: add event adapter for ethernet Rx queues
> > User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101
> >  Thunderbird/38.7.2
> >
> > Hi Jerin,
>
> Hi Nikhil,
>
> > Comments inline.
> >
> > Also, another function needed is
> > bool rte_event_eth_rx_adapter_multithread_capable(void).
> >
> > This would be used to set the "multithread_capable" service core
> > configuration parameter.
>
> OK.
>
> I was thinking that, in order to use the adapter scheme effectively, it
> should use an ops scheme like rte_flow or rte_tm[1], where the same API
> can be used for both HW and SW. If we look at both RFCs[2], we have a lot
> of similarities. I think we can base the eth_rx_adapter model on your SW
> requirement RFC and introduce capabilities wherever something is not
> applicable to HW, or vice versa.
>
> See below for an example[3]. Can you take care of this in v1 of the
> series? If you don't have the bandwidth then I can try. Let me know.
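>
> To illustrate the ops idea, a rough sketch of the dispatch (none of these
> names are a real or final API; the ops struct and the helpers are purely
> stand-ins): the public call uses a PMD-provided op when HW support exists
> and falls back to the SW service-based implementation otherwise.
>
> #include <stdint.h>
> #include <stddef.h>
>
> struct eth_rx_adapter_ops {
>         /* hypothetical per-PMD ops, filled in by capable drivers */
>         int (*queue_add)(uint8_t id, int32_t rx_queue_id, const void *conf);
> };
>
> /* SW fallback used when the eventdev/ethdev pair has no HW assist */
> static int
> sw_rx_adapter_queue_add(uint8_t id, int32_t rx_queue_id, const void *conf)
> {
>         (void)id; (void)rx_queue_id; (void)conf;
>         return 0; /* set up service-core based polling here */
> }
>
> static int
> adapter_queue_add(const struct eth_rx_adapter_ops *ops, uint8_t id,
>                   int32_t rx_queue_id, const void *conf)
> {
>         if (ops != NULL && ops->queue_add != NULL) /* HW backed path */
>                 return ops->queue_add(id, rx_queue_id, conf);
>         return sw_rx_adapter_queue_add(id, rx_queue_id, conf);
> }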
Nikhil,

Now that Harry has sent the first version of the service core patch, are
you planning a new version with the above comments?

> Thoughts?
>
> [1]
> http://dpdk.org/dev/patchwork/patch/25275/
> http://dpdk.org/dev/patchwork/patch/25276/
>
> [2]
> http://dpdk.org/ml/archives/dev/2017-May/065341.html
>
> [3]
> /* adapter has inbuilt port, no need to create producer port */
> #define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT    (1ULL << 0)
> /* adapter does not need service function */
> #define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)
>
> struct rte_event_eth_rx_adap_info {
>         char name[32];
>         uint32_t adapter_cap;
>         /**< Ethdev RX adapter capabilities (RTE_EVENT_ETHDEV_CAP_*) */
> };
>
> struct rte_event_eth_rx_adap_cfg {
>         uint8_t rx_event_port_id;
>         /**< Event port identifier, the adapter enqueues mbuf events to
>          * this port. Ignored when RTE_EVENT_ETHDEV_CAP_INBUILT_PORT is set.
>          */
> };
>
> struct rte_eth_rx_event_adapter_queue_config {
>         uint32_t rx_queue_flags;
>         /**< Flags for handling received packets */
>         uint16_t servicing_weight;
>         /**< Relative polling frequency of the ethernet receive queue; if
>          * this is set to zero, the Rx queue is interrupt driven.
>          * Ignored if RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC is set.
>          */
>         struct rte_event ev;
>         /**<
>          * The values from the following event fields will be used when
>          * enqueuing mbuf events:
>          *  - event_queue_id: Targeted event queue ID for received packets.
>          *  - event_priority: Event priority of packets from this Rx queue
>          *                    in the event queue relative to other events.
>          *  - sched_type: Scheduling type for packets from this Rx queue.
>          *  - flow_id: If the RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID
>          *             bit is set in rx_queue_flags, this flow_id is used
>          *             for all packets received from this queue. Otherwise
>          *             the flow ID is set to the RSS hash.
>          */
> };
>
> int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
>                                     uint8_t eth_port_id);
> int rte_event_eth_rx_adapter_get_info(uint8_t id,
>                                       struct rte_event_eth_rx_adap_info *info);
> int rte_event_eth_rx_adapter_configure(uint8_t id,
>                                        struct rte_event_eth_rx_adap_cfg *cfg);
> int rte_event_eth_rx_adapter_queue_add(uint8_t id, int32_t rx_queue_id,
>                 const struct rte_eth_rx_event_adapter_queue_config *config);
> int rte_event_eth_rx_adapter_queue_del(uint8_t id, int32_t rx_queue_id);
> int rte_event_eth_rx_adapter_run(void);
> int rte_event_eth_rx_adapter_free(uint8_t id);
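>
> As a usage illustration only (a sketch against the names above: the
> adapter/device/port ids are placeholders, error handling is dropped, and
> the rte_event field names and RTE_SCHED_TYPE_ATOMIC come from
> rte_eventdev), the application checks the capabilities to decide whether
> a producer event port and a service function are needed at all:
>
> static void
> app_rx_adapter_setup(void)
> {
>         struct rte_event_eth_rx_adap_info info;
>         struct rte_event_eth_rx_adap_cfg cfg;
>         struct rte_eth_rx_event_adapter_queue_config qconf = {0};
>         uint8_t adap_id = 0, dev_id = 0, eth_port_id = 0;
>         uint8_t rx_event_port_id = 0;
>
>         rte_event_eth_rx_adapter_create(adap_id, dev_id, eth_port_id);
>         rte_event_eth_rx_adapter_get_info(adap_id, &info);
>
>         if (!(info.adapter_cap & RTE_EVENT_ETHDEV_CAP_INBUILT_PORT)) {
>                 /* no inbuilt port: give the adapter an event port to use */
>                 cfg.rx_event_port_id = rx_event_port_id;
>                 rte_event_eth_rx_adapter_configure(adap_id, &cfg);
>         }
>
>         qconf.servicing_weight = 1; /* 0 would mean interrupt driven */
>         qconf.ev.queue_id = 0;      /* target event queue */
>         qconf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
>         rte_event_eth_rx_adapter_queue_add(adap_id, -1 /* all Rx queues */,
>                                            &qconf);
>
>         if (!(info.adapter_cap & RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC))
>                 rte_event_eth_rx_adapter_run(); /* e.g. via a service core */
> }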
> >
> > Thanks,
> > Nikhil
> >
> > On 5/11/2017 10:08 PM, Jerin Jacob wrote:
> > > -----Original Message-----
> > >> Date: Tue, 9 May 2017 15:38:46 -0500
> > >> From: Gage Eads <gage.e...@intel.com>
> > >> To: dev@dpdk.org
> > >> CC: nikhil....@intel.com, jerin.ja...@caviumnetworks.com,
> > >>  tho...@monjalon.net, bruce.richard...@intel.com,
> > >>  harry.van.haa...@intel.com, hemant.agra...@nxp.com,
> > >>  nipun.gu...@nxp.com, narender.vang...@intel.com
> > >> Subject: [RFC] eventdev: add event adapter for ethernet Rx queues
> > >> X-Mailer: git-send-email 2.7.4
> > >>
> > >> From: Nikhil Rao <nikhil....@intel.com>
> > >
> > > Hi Nikhil and Gage,
> > >
> > > Thanks for the RFC. A few questions and comments below.
> > > It looks like SW has more constraints on the event producer side.
> > > After we finalize this RFC (I guess only a few minor changes are
> > > required), I will align the other RFC[1] with _your_ RFC, as we need
> > > to converge on the namespace and we can't duplicate configs like
> > > struct rte_event_dev_producer_conf etc.
> > >
> > > [1]
> > > http://dpdk.org/ml/archives/dev/2017-May/065341.html
> > >
> > >> + */
> > >> +
> > >> +#ifdef __cplusplus
> > >> +extern "C" {
> > >> +#endif
> > >> +
> > >> +#include <stdint.h>
> > >> +#include <rte_mbuf.h>
> > >> +#include <rte_eventdev.h>
> > >> +
> > >> +/* struct rte_eth_rx_event_adapter_queue_config flags definitions */
> > >> +#define RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID 0x1
> > >> +/*< This flag indicates the flow identifier is valid */
> > >> +
> > >> +struct rte_eth_rx_event_adapter_config {
> > >
> > > Since this code is going to live in lib/librte_eventdev, we must start
> > > all public symbols and file names with rte_event_*.
> > >
> > > Example: maybe this structure can be changed to
> > > rte_event_eth_rx_adapter_config.
> >
> > OK.
> >
> > >> +        uint8_t event_dev_id;
> > >> +        /**< Event device identifier */
> > >> +        uint8_t rx_event_port_id;
> > >> +        /**< Event port identifier, the adapter enqueues mbuf events
> > >> +         * to this port
> > >> +         */
> > >> +};
> > >> +
> > >> +struct rte_eth_rx_event_adapter_queue_config {
> > >> +        uint32_t rx_queue_flags;
> > >> +        /**< Flags for handling received packets */
> > >
> > > Better to add references with @see.
> > > Example:
> > > @see RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID
> >
> > OK.
> >
> > >> +        uint16_t servicing_weight;
> > >> +        /**< Relative polling frequency of ethernet receive queue, if
> > >> +         * this is set to zero, the Rx queue is interrupt driven
> > >> +         */
> > >> +        struct rte_event ev;
> > >> +        /**<
> > >> +         * The values from the following event fields will be used
> > >> +         * when enqueuing mbuf events:
> > >> +         *  - event_queue_id: Targeted event queue ID for received
> > >> +         *             packets.
> > >> +         *  - event_priority: Event priority of packets from this Rx
> > >> +         *             queue in the event queue relative to other
> > >> +         *             events.
> > >> +         *  - sched_type: Scheduling type for packets from this Rx
> > >> +         *             queue.
> > >> +         *  - flow_id: If the
> > >> +         *             RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID
> > >> +         *             bit is set in rx_queue_flags, this flow_id is
> > >> +         *             used for all packets received from this queue.
> > >> +         *             Otherwise the flow ID is set to the RSS hash.
> > >
> > > This scheme is good. I was duplicating the elements in "struct
> > > rte_event_dev_producer_conf".
> > >
> > > IMO, we need to set ev.event_type == RTE_EVENT_TYPE_ETHDEV implicitly
> > > in the library. You can mention that here as info.
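> > >
> > > For example (sketch only, not a final implementation; the helper name
> > > is just for illustration, the rte_event field names are from
> > > rte_eventdev), the adapter could build each event roughly like this,
> > > with the RSS hash as the default flow_id:
> > >
> > > static inline void
> > > rx_fill_event(const struct rte_eth_rx_event_adapter_queue_config *qconf,
> > >               struct rte_mbuf *m, struct rte_event *ev)
> > > {
> > >         *ev = qconf->ev;     /* queue_id, priority, sched_type template */
> > >         ev->event_type = RTE_EVENT_TYPE_ETHDEV; /* set by the library */
> > >         ev->op = RTE_EVENT_OP_NEW;
> > >         if (!(qconf->rx_queue_flags &
> > >               RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID))
> > >                 ev->flow_id = m->hash.rss;
> > >         ev->mbuf = m;
> > > }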
> >
> > OK.
> >
> > >> +         */
> > >> +};
> > >> +
> > >> +struct rte_eth_rx_event_adapter_run_args {
> > >> +        uint8_t id;
> > >> +        /**< Adapter identifier */
> > >> +        unsigned int max_nb_rx;
> > >> +        /**< The adapter can return early if it has processed at least
> > >> +         * max_nb_rx mbufs. This isn't treated as a requirement;
> > >> +         * batching may cause the adapter to process more than
> > >> +         * max_nb_rx mbufs.
> > >> +         */
> > >> +};
> > >> +
> > >> +struct rte_eth_rx_event_adapter_stats {
> > >> +        uint64_t rx_poll_count;
> > >> +        /**< Receive queue poll count across both polled and interrupt
> > >> +         * mode queues
> > >> +         */
> > >> +        uint64_t rx_packets;
> > >> +        /**< Received packet count */
> > >> +        uint64_t rx_enq_fail;
> > >> +        /**< Eventdev enqueue failed count */
> > >> +        uint64_t rx_enq_retry;
> > >> +        /**< Eventdev enqueue retry count */
> > >> +};
> > >> +
> > >> +/**
> > >> + * Create a new ethernet Rx event adapter with the specified
> > >> + * identifier.
> > >> + *
> > >> + * @param adapter_id
> > >> + *   Event adapter identifier.
> > >> + * @param config
> > >> + *   Event adapter config parameters.
> > >> + * @return
> > >> + *   - 0: Success
> > >> + *   - <0: Error code on failure
> > >> + */
> > >> +int rte_eth_rx_event_adapter_create(
> > >> +        uint8_t id,
> > >> +        const struct rte_eth_rx_event_adapter_config *config);
> > >> +
> > >
> > > One adapter creates one service function, right?
> > > It is good to mention the mapping; it is missing in the doc.
> >
> > Yes. In this case the application creates a service per adapter, and it
> > may create multiple Rx event adapters, with each adapter handling a
> > subset of the Rx queues. As per Harry's patch, only DPDK internal
> > components are expected to request service cores; once Harry posts an
> > updated patch, I will make any necessary changes and post the next
> > version of this patch.
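> >
> > For example (a sketch using this RFC's current names; the queue config
> > contents and the service core registration are omitted, and the
> > id/port/queue numbers are placeholders), two adapters could split eight
> > Rx queues of ethdev port 0:
> >
> > static void
> > rx_adapters_setup(void)
> > {
> >         struct rte_eth_rx_event_adapter_config c0 = {
> >                 .event_dev_id = 0, .rx_event_port_id = 0 };
> >         struct rte_eth_rx_event_adapter_config c1 = {
> >                 .event_dev_id = 0, .rx_event_port_id = 1 };
> >         struct rte_eth_rx_event_adapter_queue_config qconf = {0};
> >         uint16_t q;
> >
> >         /* qconf.ev and the other fields are filled in as described above */
> >         rte_eth_rx_event_adapter_create(0, &c0);
> >         rte_eth_rx_event_adapter_create(1, &c1);
> >         rte_eth_rx_event_adapter_dev_add(0, 0);
> >         rte_eth_rx_event_adapter_dev_add(1, 0);
> >         for (q = 0; q < 4; q++)
> >                 rte_eth_event_rx_queue_add(0, 0, q, &qconf);
> >         for (q = 4; q < 8; q++)
> >                 rte_eth_event_rx_queue_add(1, 0, q, &qconf);
> >         /* each adapter is then driven by its own service function/core */
> > }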
> >
> > >> +/**
> > >> + * Free an event adapter
> > >> + *
> > >> + * @param id
> > >> + *   Adapter identifier.
> > >> + * @return
> > >> + *   - 0: Success
> > >> + *   - <0: Error code on failure
> > >> + */
> > >> +int rte_eth_rx_event_adapter_free(uint8_t id);
> > >> +
> > >> +/**
> > >> + * Add eth device to the event adapter
> > >> + *
> > >> + * @param id
> > >> + *   Adapter identifier.
> > >> + * @param eth_dev_id
> > >> + *   Port identifier of the Ethernet device.
> > >> + * @return
> > >> + *   - 0: Success
> > >> + *   - <0: Error code on failure
> > >> + */
> > >> +int rte_eth_rx_event_adapter_dev_add(uint8_t id, uint8_t eth_dev_id);
> > >
> > > rte_eth_event_rx_queue_add() also has eth_dev_id. What is the
> > > significance of eth_dev_id here? It looks like eth_dev_id is duplicate
> > > info.
> > >
> > > If it is duplicate or can be avoided, then I propose to reduce the
> > > number of APIs for ease of application programming (i.e. removing
> > > rte_eth_rx_event_adapter_dev_add and rte_eth_rx_event_adapter_dev_del).
> >
> > OK.
> >
> > > You can also mention the following for better clarity, if the
> > > following is true. If not, what do you think about co-existence of
> > > poll and event mode?
> >
> > Yes, it's true.
> >
> > > The rte_eth_rx_burst() result is undefined if the application invokes
> > > it on a bound ethdev_port and rx_queue_id.
> >
> > >> +
> > >> +/**
> > >> + * Delete eth device from an event adapter
> > >> + *
> > >> + * @param id
> > >> + *   Adapter identifier.
> > >> + * @param eth_dev_id
> > >> + *   Port identifier of the Ethernet device.
> > >> + * @return
> > >> + *   - 0: Success
> > >> + *   - <0: Error code on failure
> > >> + */
> > >> +int rte_eth_rx_event_adapter_dev_del(uint8_t id, uint8_t eth_dev_id);
> > >> +
> > >> +/**
> > >> + * Add receive queue to event adapter
> > >> + *
> > >> + * @param id
> > >> + *   Adapter identifier.
> > >> + * @param eth_dev_id
> > >> + *   Port identifier of Ethernet device.
> > >> + * @param rx_queue_id
> > >> + *   Ethernet device receive queue index.
> > >> + * @param config
> > >> + *   Additional configuration structure.
> > >> + * @return
> > >> + *   - 0: Success, receive queue added correctly.
> > >> + *   - <0: Error code on failure.
> > >> + */
> > >> +int rte_eth_event_rx_queue_add(
> > >> +        uint8_t id,
> > >> +        uint8_t eth_dev_id,
> > >> +        uint16_t rx_queue_id,
> > >
> > > How about changing it to int32_t rx_queue_id, with -1 denoting that all
> > > Rx queues configured for the given eth_dev_id are added? This will
> > > avoid the case where the application needs to call this API one by one
> > > when it is interested in all the queues.
> >
> > Sounds good.
> >
> > >> +        const struct rte_eth_rx_event_adapter_queue_config *config);
> > >> +
> > >
> > > Don't we need rte_eth_event_rx_queue_del() for tear down?
> >
> > Yes.
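> >
> > So the add/del pair in the next version would look something along
> > these lines (a sketch of the agreed direction only; final names and
> > types are still open):
> >
> > /* rx_queue_id == -1: apply to all Rx queues configured for eth_dev_id */
> > int rte_eth_event_rx_queue_add(uint8_t id, uint8_t eth_dev_id,
> >                 int32_t rx_queue_id,
> >                 const struct rte_eth_rx_event_adapter_queue_config *config);
> > int rte_eth_event_rx_queue_del(uint8_t id, uint8_t eth_dev_id,
> >                 int32_t rx_queue_id);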