>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.ja...@caviumnetworks.com]
>  Sent: Wednesday, May 10, 2017 9:12 AM
>  To: Van Haaren, Harry <harry.van.haa...@intel.com>
>  Cc: dev@dpdk.org; Eads, Gage <gage.e...@intel.com>; Richardson, Bruce
>  <bruce.richard...@intel.com>
>  Subject: Re: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>  
>  -----Original Message-----
>  > Date: Fri, 21 Apr 2017 10:51:37 +0100
>  > From: Harry van Haaren <harry.van.haa...@intel.com>
>  > To: dev@dpdk.org
>  > CC: jerin.ja...@caviumnetworks.com, Harry van Haaren
>  > <harry.van.haa...@intel.com>, Gage Eads <gage.e...@intel.com>, Bruce
>  > Richardson <bruce.richard...@intel.com>
>  > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>  > X-Mailer: git-send-email 2.7.4
>  >
>  > This commit adds a sample app for the eventdev library.
>  > The app has been tested with DPDK 17.05-rc2, hence this release (or
>  > later) is recommended.
>  >

<snip>

>  
>  > +          ev[i].op = RTE_EVENT_OP_NEW;
>  > +          ev[i].sched_type = queue_type;
>  
>  The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY !=
>  RTE_SCHED_TYPE_ORDERED, so we cannot assign queue_type directly to
>  .sched_type.
>  
>  I think one option to avoid the translation in the application would be to:
>  - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES and
>  RTE_EVENT_QUEUE_CFG_*_ONLY
>  - Introduce a new RTE_EVENT_DEV_CAP_ flag to denote the
>  RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
>  - Add sched_type to struct rte_event_queue_conf. If the capability flag is
>    not set, then the implementation takes the sched_type value for the queue.
>  
>  Any thoughts?

I'm not sure this change is needed. We could create a sched_type[] array, 
indexed by queue ID, for assigning the event's sched type.

With the proposed approach, the sched_type assignment would still be needed for 
"all types"-capable PMDs, so I'm not sure it buys us anything in the 
application.
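
Roughly what I have in mind, as a minimal sketch (queue_schedule_type[] and
NUM_QUEUES are made-up names for illustration, not anything in the patch):

#include <stdint.h>
#include <rte_eventdev.h>

#define NUM_QUEUES 8    /* hypothetical queue count for this sketch */

/* Per-queue schedule type, filled in when each queue is set up, e.g.
 * queue_schedule_type[qid] = RTE_SCHED_TYPE_ATOMIC;
 */
static uint8_t queue_schedule_type[NUM_QUEUES];

static inline void
set_event_sched_type(struct rte_event *ev, uint8_t qid)
{
        ev->queue_id = qid;
        /* Look up the sched type per queue instead of reusing the queue
         * config flags, which live in a different value space.
         */
        ev->sched_type = queue_schedule_type[qid];
}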

>  
>  
>  > +          ev[i].queue_id = qid;
>  > +          ev[i].event_type = RTE_EVENT_TYPE_CPU;
>  
>  IMO, RTE_EVENT_TYPE_ETHDEV is the better option here, since this code is
>  producing the Ethernet packets/events.
>  
>  > +          ev[i].sub_event_type = 0;
>  > +          ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
>  > +          ev[i].mbuf = mbufs[i];
>  > +          RTE_SET_USED(prio_idx);
>  > +  }
>  > +
>  > +  const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev,
>  > +nb_rx);
>  
>  For the producer pattern, i.e. a burst of RTE_EVENT_OP_NEW, OcteonTX can do a
>  burst operation, unlike the FORWARD case (which is one event at a time).
>  Earlier I thought I could abstract the producer pattern in the PMD, but it
>  looks like we are going with an application-driven producer model based on the
>  latest RFC. So I think we can add a flag to rte_event_enqueue_burst to hint
>  that all the events in the burst are of type RTE_EVENT_OP_NEW. The SW driver
>  can ignore this hint.
>  
>  I can send a patch for the same.
>  
>  Any thoughts?
>  

Makes sense, though I'm a little hesitant about putting this sort of 
PMD-specific hint in the enqueue API. Perhaps we can use the impl_opaque field, 
or have the PMD inspect the event_type (if TYPE_ETHDEV, assume all packets in 
the burst are NEWs)?
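
For example, a rough sketch of that PMD-side heuristic (burst_is_all_new() is a
made-up helper name for illustration, not an existing API):

#include <stdbool.h>
#include <stdint.h>
#include <rte_eventdev.h>

/* Sketch of a check a PMD could do at the top of its enqueue burst path:
 * if the first event looks like producer traffic from an ethdev, assume
 * the whole burst is OP_NEW and take the burst fast path.
 */
static inline bool
burst_is_all_new(const struct rte_event ev[], uint16_t nb_events)
{
        return nb_events > 0 &&
               ev[0].op == RTE_EVENT_OP_NEW &&
               ev[0].event_type == RTE_EVENT_TYPE_ETHDEV;
}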

>  
>  > +  if (nb_tx != nb_rx) {
>  > +          for (i = nb_tx; i < nb_rx; i++)
>  > +                  rte_pktmbuf_free(mbufs[i]);
>  > +  }
>  > +  enqueue_cnt[0] += nb_tx;
>  > +
>  > +  if (unlikely(prod_stop))
>  
>  I think no one is updating prod_stop.
>  
>  > +          done = 1;
>  > +
>  > +  return 0;
>  > +}
>  > +
>  > +static inline void
>  > +schedule_devices(uint8_t dev_id, unsigned lcore_id) {
>  > +  if (rx_core[lcore_id] && (rx_single ||
>  > +      rte_atomic32_cmpset(&rx_lock, 0, 1))) {
>  
>  This pattern (rte_atomic32_cmpset) means the application can inject only "one
>  core" worth of packets, which is not enough for low-end cores. Maybe we need
>  multiple producer options. I think the new RFC is addressing it.
>  

Right, in the "wimpy" core case one can partition the Rx queues across multiple 
adapters, and assign the adapters to different service cores. The proposal 
doesn't explicitly state this, but rte_eth_rx_event_adapter_run() is not 
intended to be MT-safe -- so the service core implementation would need 
something along the lines of the cmpset if a service is affinitized to multiple 
service cores.
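
For illustration, a minimal sketch of the kind of guard I mean (adapter_lock and
run_rx_adapter_once() are placeholder names; the adapter run call itself is the
proposed API from the RFC, so it is only shown as a comment):

#include <stdint.h>
#include <rte_atomic.h>
#include <rte_common.h>

/* 0 = free, 1 = held; same idea as rx_lock in the sample app. */
static volatile uint32_t adapter_lock;

static inline void
run_rx_adapter_once(uint8_t adapter_id)
{
        /* Only one service core at a time enters the non-MT-safe adapter
         * run function; the others simply skip this iteration.
         */
        if (rte_atomic32_cmpset(&adapter_lock, 0, 1)) {
                /* rte_eth_rx_event_adapter_run(adapter_id); */
                rte_smp_wmb();
                adapter_lock = 0;
        }
        RTE_SET_USED(adapter_id);
}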

Thanks,
Gage
