On Wed, Feb 08, 2017 at 06:02:26PM +0000, Nipun Gupta wrote:
> 
> 
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerin.ja...@caviumnetworks.com]
> > Sent: Wednesday, February 08, 2017 15:53
> > To: Harry van Haaren <harry.van.haa...@intel.com>
> > Cc: dev@dpdk.org; Bruce Richardson <bruce.richard...@intel.com>; David
> > Hunt <david.h...@intel.com>; Nipun Gupta <nipun.gu...@nxp.com>; Hemant
> > }
> > 
> > on_each_cores_linked_to_queue2(stage2)
> > while(1)
> > {
> >                 /* STAGE 2 processing */
> >                 nr_events = rte_event_dequeue_burst(ev,..);
> >                 if (!nr_events)
> >                     continue;
> > 
> >                 sa_specific_atomic_processing(sa /* ev.flow_id */);
> >                 /* seq number update in critical section */
> > 
> >                 /* move to next stage(ORDERED) */
> >                 ev.event_type = RTE_EVENT_TYPE_CPU;
> >                 ev.sub_event_type = 3;
> >                 ev.sched_type = RTE_SCHED_TYPE_ORDERED;
> >                 ev.flow_id =  sa;
> 
> [Nipun] Queue1 has flow_id 'sa' with sched_type RTE_SCHED_TYPE_ATOMIC, and
> Queue2 has the same flow_id but with sched_type RTE_SCHED_TYPE_ORDERED.
> Does this mean that the same flow_id can be associated with different
> RTE_SCHED_TYPE_* values as its sched_type?
> 
> My understanding is that one flow can be either parallel, atomic or ordered.
> rte_eventdev.h states that the sched_type is associated with the flow_id,
> which also seems legitimate:

Yes. The sched_type is associated with the flow_id per _event queue_.
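
To illustrate with a rough sketch (my own example reusing the names from
the pseudocode in this thread, not code from the patch; dev_id/port_id are
placeholders): the synchronization type is scoped to the
(event queue, flow_id) pair, so the same 'sa' value can be scheduled
atomically on one queue and ordered on another.

        /* stage 1 -> stage 2: atomic w.r.t. flow 'sa' on event queue 2 */
        ev.flow_id = sa;
        ev.queue_id = 2;
        ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
        ev.op = RTE_EVENT_OP_FORWARD;
        rte_event_enqueue_burst(dev_id, port_id, &ev, 1);

        /* stage 2 -> stage 3: same flow_id, but ordered on event queue 3 */
        ev.flow_id = sa;
        ev.queue_id = 3;
        ev.sched_type = RTE_SCHED_TYPE_ORDERED;
        ev.op = RTE_EVENT_OP_FORWARD;
        rte_event_enqueue_burst(dev_id, port_id, &ev, 1);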

>               uint8_t sched_type:2;
>               /**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
>                * associated with flow id on a given event queue
>                * for the enqueue and dequeue operation.
>                */
> 
> >                 ev.op = RTE_EVENT_OP_FORWARD;
> >                 ev.queue_id = 3;
> >                 /* move to stage 3(event queue 3) */
> >                 rte_event_enqueue_burst(ev,..);
> > }
> > 
> > on_each_cores_linked_to_queue3(stage3)
> > while(1)
> > {
> >                 /* STAGE 3 processing */
> >                 nr_events = rte_event_dequeue_burst(ev,..);
> >                 if (!nr_events)
> >                     continue;
> > 
> >                 sa_specific_ordered_processing(sa /* ev.flow_id */);
> >                 /* packets encryption in parallel */
> > 
> >                 /* move to next stage(ATOMIC) */
> >                 ev.event_type = RTE_EVENT_TYPE_CPU;
> >                 ev.sub_event_type = 4;
> >                 ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> >                 output_tx_port_queue = find_output_tx_queue_and_tx_port(ev.mbuf);
> >                 ev.flow_id =  output_tx_port_queue;
> >                 ev.op = RTE_EVENT_OP_FORWARD;
> >                 ev.queue_id = 4;
> >                 /* move to stage 4(event queue 4) */
> >                 rte_event_enqueue_burst(ev,...);
> > }
> > 
> > on_each_cores_linked_to_queue4(stage4)
> > while(1)
> > {
> >                 /* STAGE 4 processing */
> >                 nr_events = rte_event_dequeue_burst(ev,..);
> >                 if (!nr_events)
> >                     continue;
> > 
> >                 rte_eth_tx_buffer();
> > }
> > 
> > 2) flow-based event pipelining
> > =============================
> > 
> > - No need to partition queues for different stages
> > - All the cores can operate on all the stages, thus enabling
> >   automatic multicore scaling and true dynamic load balancing
> > - A fairly large number of SAs (on the order of 2^16 to 2^20) can be
> >   processed in parallel, something the existing IPSec sample application
> >   has constraints on:
> > http://dpdk.org/doc/guides-16.04/sample_app_ug/ipsec_secgw.html
> > 
> > on_each_worker_cores()
> > while(1)
> > {
> >     nr_events = rte_event_dequeue_burst(ev,..);
> >     if (!nr_events)
> >             continue;
> > 
> >     /* STAGE 1 processing */
> >     if(ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> >             sa = find_it_from_packet(ev.mbuf);
> >             /* move to next stage2(ATOMIC) */
> >             ev.event_type = RTE_EVENT_TYPE_CPU;
> >             ev.sub_event_type = 2;
> >             ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> >             ev.flow_id =  sa;
> >             ev.op = RTE_EVENT_OP_FORWARD;
> >             rte_event_enqueue_burst(ev..);
> > 
> >     } else if (ev.event_type == RTE_EVENT_TYPE_CPU &&
> >                ev.sub_event_type == 2) { /* stage 2 */
> 
> [Nipun] I didn't get, in this case, on which event queue (and eventually
> its associated event ports) the RTE_EVENT_TYPE_CPU type events will be
> received.

They are received on the same event queue that received the event.
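
To make that concrete, a small sketch of my reading (not code from the
patch; dev_id/port_id are placeholders): in the flow-based model there is
effectively a single event queue, every worker port is linked to it, and a
FORWARD enqueue that leaves ev.queue_id untouched lands back on that same
queue. The pipeline stage travels in ev.sub_event_type.

        struct rte_event ev;
        uint16_t nr = rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0);
        if (nr) {
                /* advance to the next stage without changing the queue */
                ev.event_type = RTE_EVENT_TYPE_CPU;
                ev.sub_event_type++;    /* next pipeline stage */
                ev.op = RTE_EVENT_OP_FORWARD;
                /* ev.queue_id is left as-is, so the event stays on the
                 * same queue it was dequeued from */
                rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
        }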

> 
> Adding on to what Harry also mentions in the other mail: if the same code is
> run in the case you mentioned in '#1 - queue_id based event pipelining',
> after specifying ev.queue_id with an appropriate value, then #1 would also
> work fine. Isn't it?

See my earlier email

> 
> > 
> >             sa_specific_atomic_processing(sa /* ev.flow_id */);
> >             /* seq number update in critical section */
> >             /* move to next stage(ORDERED) */
> >             ev.event_type = RTE_EVENT_TYPE_CPU;
> >             ev.sub_event_type = 3;
> >             ev.sched_type = RTE_SCHED_TYPE_ORDERED;
> >             ev.flow_id =  sa;
> >             ev.op = RTE_EVENT_OP_FORWARD;
> >             rte_event_enqueue_burst(ev,..);
> > 
> >     } else if (ev.event_type == RTE_EVENT_TYPE_CPU &&
> >                ev.sub_event_type == 3) { /* stage 3 */
> > 
> >             sa_specific_ordered_processing(sa /* ev.flow_id */);
> >             /* like encrypting packets in parallel */
> >             /* move to next stage(ATOMIC) */
> >             ev.event_type = RTE_EVENT_TYPE_CPU;
> >             ev.sub_event_type = 4;
> >             ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> >             output_tx_port_queue = find_output_tx_queue_and_tx_port(ev.mbuf);
> >             ev.flow_id =  output_tx_port_queue;
> >             ev.op = RTE_EVENT_OP_FORWARD;
> >             rte_event_enqueue_burst(ev,..);
> > 
> >     } else if (ev.event_type == RTE_EVENT_TYPE_CPU &&
> >                ev.sub_event_type == 4) { /* stage 4 */
> >             rte_eth_tx_buffer();
> >     }
> > }
> > 
> > /Jerin
> > Cavium
> 
