On 6/4/2018 10:41 AM, Jerin Jacob wrote:
-----Original Message-----
Date: Fri, 1 Jun 2018 23:47:00 +0530
From: "Rao, Nikhil" <nikhil....@intel.com>
To: Jerin Jacob <jerin.ja...@caviumnetworks.com>
CC: hemant.agra...@nxp.com, dev@dpdk.org, narender.vang...@intel.com,
  abhinandan.guj...@intel.com, gage.e...@intel.com, nikhil....@intel.com
Subject: Re: [RFC] eventdev: event tx adapter APIs


Hi Jerin,


The workers invoke rte_event_enqueue_burst() on their local ports, not on the
extra port as you described. The queue ID specified when enqueuing is linked
to the adapter's port; the adapter reads these events and transmits the mbufs
on the ethernet port and queue specified in each mbuf. The diagram below
illustrates what I just described.

+------+
|      |   +----+
|Worker+-->+port+--+
|      |   +----+  |                                         +----+
+------+           |                                     +-->+eth0|
                    |  +---------+            +-------+   |   +----+
                    +--+         |   +----+   |       +---+   +----+
                       |  Queue  +-->+port+-->+Adapter|------>+eth1|
                    +--+         |   +----+   |       +---+   +----+
+------+           |  +---------+            +-------+   |   +----+
|      |   +----+  |                                     +-->+eth2|
|Worker+-->+port+--+                                         +----+
|      |   +----+
+------+
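
To put that in code, the worker's last stage would look roughly like this
(a minimal sketch; the dev/port/queue identifiers are placeholders for
whatever the application configured):

#include <rte_eventdev.h>
#include <rte_pause.h>

/* Worker final stage: forward the event to the queue that is linked to
 * the adapter's event port. The adapter dequeues it and transmits the
 * mbuf on the ethernet port/queue recorded in the mbuf.
 */
static inline void
worker_send_to_tx_adapter(uint8_t dev_id, uint8_t worker_port_id,
			  uint8_t adapter_queue_id, struct rte_event *ev)
{
	ev->queue_id = adapter_queue_id; /* queue linked to adapter port */
	ev->op = RTE_EVENT_OP_FORWARD;

	while (rte_event_enqueue_burst(dev_id, worker_port_id, ev, 1) != 1)
		rte_pause();
}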


Makes sense. One suggestion here: since we have ALL type queues and
normal queues, can we move the queue change or sched_type change code
out of the application and down into the function pointer abstraction
(the adapter knows which queues to enqueue to anyway)? That way we can
have the same final-stage code for ALL type queues and normal queues.
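
To be explicit, the rewrite I am referring to is roughly the following
(illustrative only, modelled on the pipeline_worker_tx.c style; the
atomic sched_type and the helper name are just for the sketch):

#include <rte_eventdev.h>

/* Final-stage rewrite the application does today before its last
 * enqueue. If this moves below the adapter's function pointer, the
 * worker's last stage is identical for ALL type and normal queues.
 */
static inline void
app_fwd_to_tx_stage(struct rte_event *ev, uint8_t tx_queue_id)
{
	ev->queue_id = tx_queue_id;             /* queue change */
	ev->sched_type = RTE_SCHED_TYPE_ATOMIC; /* sched_type change */
	ev->op = RTE_EVENT_OP_FORWARD;
}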

Yes, I see the queue/sched_type change approach followed in
pipeline_worker_tx.c; a queue ID can be provided in
rte_event_eth_tx_adapter_conf:

+struct rte_event_eth_tx_adapter_conf {
+       uint8_t event_port_id;
+       /**< Event port identifier, the adapter dequeues mbuf events from this
+        * port.
+        */
+       uint16_t tx_metadata_off;
+       /**<  Offset of struct rte_event_eth_tx_adapter_meta in the private
+        * area of the mbuf
+        */
+       uint32_t max_nb_tx;
+       /**< The adapter can return early if it has processed at least
+        * max_nb_tx mbufs. This isn't treated as a requirement; batching may
+        * cause the adapter to process more than max_nb_tx mbufs.
+        */
+};
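
As an example, an application could fill this in along the following
lines (a sketch; the variable names are placeholders, and a default Tx
queue ID field could be added here per the point above):

/* Sketch of configuring the proposed adapter. tx_metadata_off points at
 * whatever space the application reserved in the mbuf private area for
 * the Tx metadata; adapter_port_id is the event port set up for the
 * adapter (placeholder).
 */
struct rte_event_eth_tx_adapter_conf conf = {
	.event_port_id = adapter_port_id, /* port the adapter dequeues from */
	.tx_metadata_off = 0,             /* metadata at start of priv area */
	.max_nb_tx = 128,                 /* batching hint, not a hard limit */
};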

</snipped>

The worker core will receive events pointing to mbufs that need to be
transmitted to different ports/queues, as described above. The port and
the queue will be populated in the mbuf, and the API can be as below:

uint16_t rte_event_eth_tx_adapter_enqueue(uint8_t instance_id,
					  uint8_t event_port_id,
					  const struct rte_event ev[],
					  uint16_t nb_events);
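
For instance, the worker's Tx stage could populate the metadata and
enqueue like this (a sketch; the metadata struct's fields, the
rte_mbuf_to_priv() usage and the helper name are assumptions for
illustration, only the prototype above is what is being proposed):

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Hypothetical layout of the Tx metadata kept at tx_metadata_off in the
 * mbuf private area (field names are illustrative).
 */
struct rte_event_eth_tx_adapter_meta {
	uint16_t port;  /* ethernet port to transmit on */
	uint16_t queue; /* Tx queue on that port */
};

static inline uint16_t
worker_tx_stage(uint8_t instance_id, uint8_t event_port_id,
		struct rte_event *ev, uint16_t meta_off,
		uint16_t eth_port, uint16_t eth_queue)
{
	struct rte_mbuf *m = ev->mbuf;
	struct rte_event_eth_tx_adapter_meta *meta;

	meta = (struct rte_event_eth_tx_adapter_meta *)
		((uint8_t *)rte_mbuf_to_priv(m) + meta_off);
	meta->port = eth_port;
	meta->queue = eth_queue;

	return rte_event_eth_tx_adapter_enqueue(instance_id, event_port_id,
						ev, 1);
}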

Let me know if that works for you.

Yes. That API works for me. I think we can leverage the "struct
rte_eventdev" area for adding a new function pointer. Just like the
enqueue_new_burst and enqueue_forward_burst variants, we can add one
more there, so that we can reuse that hot cacheline for all the
fast-path function pointers.
That would translate to adding a "uint8_t dev_id" argument to the above API.
The dev_id can be derived from the instance_id, does that work?
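
The fast path I have in mind would then be roughly as below (purely a
sketch: the txa_enqueue pointer is the new addition being discussed, and
the per-instance dev_id table and helper name are illustrative, not
existing eventdev code):

#include <rte_eventdev.h>

/* Hypothetical per-instance table populated at adapter create time */
static uint8_t txa_dev_id[32];

/* Sketch: resolve the eventdev once from the adapter instance and
 * dispatch through a per-device function pointer kept alongside the
 * other fast-path pointers (enqueue_new_burst, enqueue_forward_burst).
 */
static inline uint16_t
txa_enqueue_sketch(uint8_t instance_id, uint8_t event_port_id,
		   const struct rte_event ev[], uint16_t nb_events)
{
	uint8_t dev_id = txa_dev_id[instance_id];
	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
	void *port = dev->data->ports[event_port_id];

	return dev->txa_enqueue(port, ev, nb_events); /* new fn pointer */
}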

I need some clarification on the configuration API/flow. The
eventdev_pipeline sample app checks if the DEV_TX_OFFLOAD_MT_LOCKFREE
flag is set on all ethernet devices and, if so, uses the
pipeline_worker_tx path as opposed to the "consumer" function. If we
were to use the adapter to replace some of the sample code, then it
seems like RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is the hardware
assist for the pipeline worker_tx mode. The adapter would support 2
modes (consumer and worker_tx, borrowing terminology from the sample);
worker_tx would only be supported if the eventdev supports
RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT (at least in the first
version).
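
In code, the selection I picture is along these lines (a sketch that
mirrors the sample app's check; how the adapter capability would be
queried is an assumption, since no caps API is defined in this RFC yet):

#include <stdbool.h>
#include <rte_ethdev.h>

/* Mirrors eventdev_pipeline: use the worker_tx style path only if every
 * ethernet device can be transmitted to without locks.
 */
static bool
use_worker_tx_mode(uint16_t nb_eth_ports)
{
	uint16_t i;

	for (i = 0; i < nb_eth_ports; i++) {
		struct rte_eth_dev_info info;

		rte_eth_dev_info_get(i, &info);
		if (!(info.tx_offload_capa & DEV_TX_OFFLOAD_MT_LOCKFREE))
			return false; /* fall back to the consumer mode */
	}

	/* With the adapter, this decision would instead come from the
	 * eventdev reporting RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
	 * (capability query API to be defined).
	 */
	return true;
}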

Thanks,
Nikhil
