[dpdk-dev] [PATCH] maintainers: add co-maintainer for flow API

2019-12-29 Thread Ori Kam
I volunteer to be a co-maintainer for the rte_flow lib.

Signed-off-by: Ori Kam 
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9b5c80f..ed3e71e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -371,6 +371,7 @@ F: doc/guides/prog_guide/switch_representation.rst
 
 Flow API
 M: Adrien Mazarguil 
+M: Ori Kam 
 T: git://dpdk.org/next/dpdk-next-net
 F: app/test-pmd/cmdline_flow.c
 F: doc/guides/prog_guide/rte_flow.rst
-- 
1.8.3.1



Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev

2019-12-29 Thread Pavan Nikhilesh Bhagavatula


>> -Original Message-
>> From: dev  On Behalf Of
>> pbhagavat...@marvell.com
>> Sent: Wednesday, December 4, 2019 8:14 PM
>> To: jer...@marvell.com; Marko Kovacevic
>; Ori
>> Kam ; Bruce Richardson
>> ; Radu Nicolau
>;
>> Akhil Goyal ; Tomasz Kantecki
>> ; Sunil Kumar Kori
>;
>> Pavan Nikhilesh 
>> Cc: dev@dpdk.org
>> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev
>setup
>> based on eventdev
>>
>> From: Sunil Kumar Kori 
>>
>> Add ethernet port Rx/Tx queue setup for event device which are later
>> used for setting up event eth Rx/Tx adapters.
>>
>> Signed-off-by: Sunil Kumar Kori 
>> ---
>>  examples/l3fwd/l3fwd.h   |  10 +++
>>  examples/l3fwd/l3fwd_event.c | 129 ++-
>>  examples/l3fwd/l3fwd_event.h |   2 +-
>>  examples/l3fwd/main.c|  15 ++--
>>  4 files changed, 144 insertions(+), 12 deletions(-)
>>
>
>
>
>> +
>> +	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
>> +		dev_info.flow_type_rss_offloads;
>> +	if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
>> +	    port_conf->rx_adv_conf.rss_conf.rss_hf) {
>> +		printf("Port %u modified RSS hash function "
>> +		       "based on hardware support,"
>> +		       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
>> +		       port_id,
>> +		       port_conf->rx_adv_conf.rss_conf.rss_hf,
>> +		       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
>> +	}
>
>We are using only 1 queue, so why use an RSS hash function?

rte_event::flow_id, which uniquely identifies a given flow, is generated by
applying the RSS hash function to the required fields in the packet.

>
>> +
>> +	ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
>> +	if (ret < 0)
>> +		rte_exit(EXIT_FAILURE,
>> +			 "Cannot configure device: err=%d, port=%d\n",
>> +			 ret, port_id);
>> +
>
>We should be using the number of RX queues as per the config option
>provided in the arguments.
>L3fwd is supposed to support multiple queues. Right?

The entire premise of using an event device is to showcase packet scheduling
to cores without the need for splitting packets across multiple queues.

Queue config is ignored when event mode is selected.
 
>
>Regards,
>Nipun
>

Regards,
Pavan.


[dpdk-dev] segmentation fault in rte_eal_wait_lcore - regarding.

2019-12-29 Thread Perugu Hemasai Chandra Prasad
Hi All,
 I have a segmentation-fault issue with rte_eal_wait_lcore() and
rte_eal_mp_wait_lcore(). I am running code on a few logical cores using
rte_eal_remote_launch(); every function launched this way contains a
while (1) (infinite) loop. When I terminate the program with Ctrl+C, I get
a segmentation fault in rte_eal_mp_wait_lcore(). I then replaced that call
with rte_eal_wait_lcore() for the respective logical cores, but I still get
a segmentation fault. Can anyone please clarify this issue?

Thanks & Regards,
Hemasai


[dpdk-dev] l2fwd-event not fully functional with 'dsw'

2019-12-29 Thread Liron Himi
Hi Mattias,

Recently we tried to run the new l2fwd-event example using 'dsw' as the
eventdev.
We noticed that only 4096 packets were sent back to the ethdev.
Only when we changed both 'dequeue_depth' and 'enqueue_depth' to 128 instead
of 32 did it start to work.

Do you have any objection to changing the 'dsw' default configuration to 128?

Regards,
Liron



Re: [dpdk-dev] [PATCH v6 1/6] lib/eal: implement the family of rte bit operation APIs

2019-12-29 Thread Gavin Hu
Hi Stephen, Honnappa,

> -Original Message-
> From: Stephen Hemminger 
> Sent: Tuesday, December 24, 2019 12:37 AM
> To: Honnappa Nagarahalli 
> Cc: Joyce Kong ; tho...@monjalon.net;
> david.march...@redhat.com; m...@smartsharesystems.com;
> jer...@marvell.com; bruce.richard...@intel.com; ravi1.ku...@amd.com;
> rm...@marvell.com; shsha...@marvell.com; xuanziya...@huawei.com;
> cloud.wangxiao...@huawei.com; zhouguoy...@huawei.com; Phil Yang
> ; Gavin Hu ; nd ;
> dev@dpdk.org
> Subject: Re: [PATCH v6 1/6] lib/eal: implement the family of rte bit operation
> APIs
> 
> On Mon, 23 Dec 2019 05:04:12 +
> Honnappa Nagarahalli  wrote:
> 
> > 
> >
> > >
> > > On Sat, 21 Dec 2019 16:07:23 +
> > > Honnappa Nagarahalli  wrote:
> > >
> > > > Converting these into macros will help remove the size-based
> > > > duplication of APIs. I came up with the following macro:
> > > >
> > > > #define RTE_GET_BIT(nr, var, ret, memorder) \
> > > > ({ \
> > > > 	if (sizeof(var) == sizeof(uint32_t)) { \
> > > > 		uint32_t mask1 = 1U << (nr) % 32; \
> > > > 		ret = __atomic_load_n(&var, (memorder)) & mask1; \
> > > > 	} else { \
> > > > 		uint64_t mask2 = 1UL << (nr) % 64; \
> > > > 		ret = __atomic_load_n(&var, (memorder)) & mask2; \
> > > > 	} \
> > > > })
> > >
> > > Macros are more error-prone, especially because this is in an exposed
> > > header file.
> > That's another question I have. Why do we need to have these APIs in a
> > public header file? These will add to the ABI burden as well. These APIs
> > should be in a common-but-not-public header file. I am also not sure how
> > helpful these APIs are for applications, as they seem to have considered
> > requirements only from the PMDs.
> 
> Why do we have to wrap every C atomic builtin? What value is there in that?

The wrapping is aimed at reducing code duplication: on average, 3 lines are
cut down to 1 at each call site.
Overall I am thinking these bitops APIs are targeted for use by PMDs only;
applications can use C11 atomics freely.
The initial thought for the new APIs came from the idea of consolidating the
bit operations scattered all over the PMDs. It is unwise to expand them to
applications or libraries, as different memory orderings would be required
and complexity would grow.

If the use cases are limited to PMDs, a 'volatile' or a compiler barrier is
sufficient, and therefore the number of APIs can be cut by half.
http://inbox.dpdk.org/dev/vi1pr08mb53766c30b5cda00fb9fce9678f...@vi1pr08mb5376.eurprd08.prod.outlook.com/

Any thoughts and comments are welcome!



Re: [dpdk-dev] [PATCH v5 1/4] net/i40e: cleanup Tx buffers

2019-12-29 Thread Yang, Qiming



-Original Message-
From: Di, ChenxuX 
Sent: Friday, December 27, 2019 11:45
To: dev@dpdk.org
Cc: Yang, Qiming ; Xing, Beilei ; 
Di, ChenxuX 
Subject: [PATCH v5 1/4] net/i40e: cleanup Tx buffers

Add support to the i40e driver for the API rte_eth_tx_done_cleanup to force 
free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di 
---
 drivers/net/i40e/i40e_ethdev.c|   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c  | 122 ++
 drivers/net/i40e/i40e_rxtx.h  |   1 +
 4 files changed, 125 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c 
index 5999c964b..fad47a942 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
.mac_addr_set = i40e_set_default_mac_addr,
.mtu_set  = i40e_dev_mtu_set,
.tm_ops_get   = i40e_tm_ops_get,
+   .tx_done_cleanup  = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..0ca5417d7 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
.rss_hash_conf_get= i40evf_dev_rss_hash_conf_get,
.mtu_set  = i40evf_dev_mtu_set,
.mac_addr_set = i40evf_set_default_mac_addr,
+   .tx_done_cleanup  = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..9e4b0b678 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,128 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
}
 }
 
+int
+i40e_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+   struct i40e_tx_queue *txq = (struct i40e_tx_queue *)q;
+   struct i40e_tx_entry *sw_ring;
+   volatile struct i40e_tx_desc *txr;
+   uint16_t tx_first; /* First segment analyzed. */
+   uint16_t tx_id;/* Current segment being processed. */
+   uint16_t tx_last;  /* Last segment in the current packet. */
+   uint16_t tx_next;  /* First segment of the next packet. */
+   int count;
+
+   if (txq == NULL)
+   return -ENODEV;
+
+   count = 0;
+   sw_ring = txq->sw_ring;
+   txr = txq->tx_ring;
+
+   /*
+* tx_tail is the last sent packet on the sw_ring. Goto the end
+* of that packet (the last segment in the packet chain) and
+* then the next segment will be the start of the oldest segment
+* in the sw_ring. This is the first packet that will be
+* attempted to be freed.
+*/
+
+   /* Get last segment in most recently added packet. */
+   tx_last = sw_ring[txq->tx_tail].last_id;
+
+   /* Get the next segment, which is the oldest segment in ring. */
+   tx_first = sw_ring[tx_last].next_id;
+
+   /* Set the current index to the first. */
+   tx_id = tx_first;
+
+   /*
+* Loop through each packet. For each packet, verify that an
+* mbuf exists and that the last segment is free. If so, free
+* it and move on.
+*/
+   while (1) {
+   tx_last = sw_ring[tx_id].last_id;
+
+   if (sw_ring[tx_last].mbuf) {
+			if ((txr[tx_last].cmd_type_offset_bsz &
+			    rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
+			    rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) {
+   /* Get the start of the next packet. */
+   tx_next = sw_ring[tx_last].next_id;
+
+   /*
+* Loop through all segments in a
+* packet.
+*/
+   do {
+				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+   sw_ring[tx_id].mbuf = NULL;
+   sw_ring[tx_id].last_id = tx_id;
+
+   /* Move to next segment. */
+   tx_id = sw_ring[tx_id].next_id;
+
+   } while (tx_id != tx_next);
+
+   /*
+* Increment the number of packets
+* freed.
+*/
+   count++;
+
+   if (unlikely(count == (int)free_cnt))
+   break;
+   } else {

If the else is just break, then this if can be deleted.
Please follow clean code, remove unn

Re: [dpdk-dev] [PATCH v5 2/4] net/ice: cleanup Tx buffers

2019-12-29 Thread Yang, Qiming



-Original Message-
From: Di, ChenxuX 
Sent: Friday, December 27, 2019 11:45
To: dev@dpdk.org
Cc: Yang, Qiming ; Xing, Beilei ; 
Di, ChenxuX 
Subject: [PATCH v5 2/4] net/ice: cleanup Tx buffers

Add support to the ice driver for the API rte_eth_tx_done_cleanup to force free 
consumed buffers on Tx ring.

Signed-off-by: Chenxu Di 
---
 drivers/net/ice/ice_ethdev.c |   1 +
 drivers/net/ice/ice_rxtx.c   | 123 +++
 drivers/net/ice/ice_rxtx.h   |   1 +
 3 files changed, 125 insertions(+)

...

+   /*
+* Loop through each packet. For each packet, verify that an
+* mbuf exists and that the last segment is free. If so, free
+* it and move on.
+*/
+   while (1) {
+   tx_last = sw_ring[tx_id].last_id;
+
+   if (sw_ring[tx_last].mbuf) {
+   if ((txr[tx_last].cmd_type_offset_bsz &
+   rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) ==
+   rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE)) {
+   /* Get the start of the next packet. */
+   tx_next = sw_ring[tx_last].next_id;
+
+   /*
+* Loop through all segments in a
+* packet.
+*/
+   do {
+				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+   sw_ring[tx_id].mbuf = NULL;
+   sw_ring[tx_id].last_id = tx_id;
+
+   /* Move to next segment. */
+   tx_id = sw_ring[tx_id].next_id;
+
+   } while (tx_id != tx_next);
+
+   /*
+* Increment the number of packets
+* freed.
+*/
+   count++;
+
+   if (unlikely(count == (int)free_cnt))
+   break;
+   } else {
+   /*
+* mbuf still in use, nothing left to
+* free.
+*/
+   break;

Same comment as patch 1

+   }

.


Re: [dpdk-dev] [PATCH v5 3/4] net/ixgbe: cleanup Tx buffers

2019-12-29 Thread Yang, Qiming



-Original Message-
From: Di, ChenxuX 
Sent: Friday, December 27, 2019 11:45
To: dev@dpdk.org
Cc: Yang, Qiming ; Xing, Beilei ; 
Di, ChenxuX 
Subject: [PATCH v5 3/4] net/ixgbe: cleanup Tx buffers

Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup to force 
free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di 
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c   | 121 +++
 drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +
 3 files changed, 125 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..0091405db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
.tm_ops_get   = ixgbe_tm_ops_get,
+   .tx_done_cleanup  = ixgbe_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
.reta_query   = ixgbe_dev_rss_reta_query,
.rss_hash_update  = ixgbe_dev_rss_hash_update,
.rss_hash_conf_get= ixgbe_dev_rss_hash_conf_get,
+   .tx_done_cleanup  = ixgbe_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..8d8e0655c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,127 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
}
 }
 
+int
+ixgbe_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+   struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)q;
+   struct ixgbe_tx_entry *sw_ring;
+   volatile union ixgbe_adv_tx_desc *txr;
+   uint16_t tx_first; /* First segment analyzed. */
+   uint16_t tx_id;/* Current segment being processed. */
+   uint16_t tx_last;  /* Last segment in the current packet. */
+   uint16_t tx_next;  /* First segment of the next packet. */
+   int count;
+
+   if (txq == NULL)
+   return -ENODEV;
+
+   count = 0;
+   sw_ring = txq->sw_ring;
+   txr = txq->tx_ring;
+
+   /*
+* tx_tail is the last sent packet on the sw_ring. Goto the end
+* of that packet (the last segment in the packet chain) and
+* then the next segment will be the start of the oldest segment
+* in the sw_ring. This is the first packet that will be
+* attempted to be freed.
+*/
+
+   /* Get last segment in most recently added packet. */
+   tx_last = sw_ring[txq->tx_tail].last_id;
+
+   /* Get the next segment, which is the oldest segment in ring. */
+   tx_first = sw_ring[tx_last].next_id;
+
+   /* Set the current index to the first. */
+   tx_id = tx_first;
+
+   /*
+* Loop through each packet. For each packet, verify that an
+* mbuf exists and that the last segment is free. If so, free
+* it and move on.
+*/
+   while (1) {
+   tx_last = sw_ring[tx_id].last_id;
+
+   if (sw_ring[tx_last].mbuf) {
+   if (txr[tx_last].wb.status &
+   IXGBE_TXD_STAT_DD) {
+   /* Get the start of the next packet. */
+   tx_next = sw_ring[tx_last].next_id;
+
+   /*
+* Loop through all segments in a
+* packet.
+*/
+   do {
+				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+   sw_ring[tx_id].mbuf = NULL;
+   sw_ring[tx_id].last_id = tx_id;
+
+   /* Move to next segment. */
+   tx_id = sw_ring[tx_id].next_id;
+
+   } while (tx_id != tx_next);
+
+   /*
+* Increment the number of packets
+* freed.
+*/
+   count++;
+
+   if (unlikely(count == (int)free_cnt))
+   break;
+   } else {
+   /*
+* mbuf still in use, nothing left to
+* free.
+*/
+   break;

same
+   }
+   } else {
+   /*
+* There are multiple reasons to be here:
+* 1) All the packets on the ri

Re: [dpdk-dev] [PATCH] net/ice: correct VSI context

2019-12-29 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Saturday, December 14, 2019 2:14 PM
> To: Wu, Jingjing ; dev@dpdk.org; Zhang, Qi Z 
> 
> Cc: sta...@dpdk.org
> Subject: [PATCH] net/ice: correct VSI context
> 
> There'll always be an MDD event triggered when adding
> an FDIR rule. The root cause is that 'LAN enable' is not
> configured during control VSI setup.
> Besides, correct the FDIR fields for both the main VSI
> and the control VSI.
> 
> Fixes: 84dc7a95a2d3 ("net/ice: enable flow director engine")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Beilei Xing 
Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev

2019-12-29 Thread Nipun Gupta
Hi Pavan,

> -Original Message-
> From: Pavan Nikhilesh Bhagavatula 
> Sent: Sunday, December 29, 2019 9:12 PM
> To: Nipun Gupta ; Jerin Jacob Kollanukkaran
> ; Marko Kovacevic ; Ori
> Kam ; Bruce Richardson
> ; Radu Nicolau ;
> Akhil Goyal ; Tomasz Kantecki
> ; Sunil Kumar Kori 
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> 
> >> -Original Message-
> >> From: dev  On Behalf Of
> >> pbhagavat...@marvell.com
> >> Sent: Wednesday, December 4, 2019 8:14 PM
> >> To: jer...@marvell.com; Marko Kovacevic
> >; Ori
> >> Kam ; Bruce Richardson
> >> ; Radu Nicolau
> >;
> >> Akhil Goyal ; Tomasz Kantecki
> >> ; Sunil Kumar Kori
> >;
> >> Pavan Nikhilesh 
> >> Cc: dev@dpdk.org
> >> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev
> >setup
> >> based on eventdev
> >>
> >> From: Sunil Kumar Kori 
> >>
> >> Add ethernet port Rx/Tx queue setup for event device which are later
> >> used for setting up event eth Rx/Tx adapters.
> >>
> >> Signed-off-by: Sunil Kumar Kori 
> >> ---
> >>  examples/l3fwd/l3fwd.h   |  10 +++
> >>  examples/l3fwd/l3fwd_event.c | 129 ++-
> >>  examples/l3fwd/l3fwd_event.h |   2 +-
> >>  examples/l3fwd/main.c|  15 ++--
> >>  4 files changed, 144 insertions(+), 12 deletions(-)
> >>
> >
> >
> >
> >> +
> >> +	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
> >> +		dev_info.flow_type_rss_offloads;
> >> +	if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
> >> +	    port_conf->rx_adv_conf.rss_conf.rss_hf) {
> >> +		printf("Port %u modified RSS hash function "
> >> +		       "based on hardware support,"
> >> +		       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
> >> +		       port_id,
> >> +		       port_conf->rx_adv_conf.rss_conf.rss_hf,
> >> +		       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
> >> +	}
> >
> >We are using only 1 queue, so why use an RSS hash function?
> 
> rte_event::flow_id, which uniquely identifies a given flow, is generated by
> applying the RSS hash function to the required fields in the packet.

Okay. Got it.

> 
> >
> >> +
> >> +	ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> >> +	if (ret < 0)
> >> +		rte_exit(EXIT_FAILURE,
> >> +			 "Cannot configure device: err=%d, port=%d\n",
> >> +			 ret, port_id);
> >> +
> >
> >We should be using the number of RX queues as per the config option
> >provided in the arguments.
> >L3fwd is supposed to support multiple queues. Right?
> 
> The entire premise of using an event device is to showcase packet scheduling
> to cores without the need for splitting packets across multiple queues.
> 
> Queue config is ignored when event mode is selected.

For atomic queues, a single queue provides packets to a single core at a time
until processing on that core is completed, irrespective of the flows on that
hardware queue.
Multiple queues are required to distribute separate packets to separate cores,
with the atomic queues maintaining ordering and not scheduling a flow to
another core until the processing core has completed its job.
To keep this solution generic, we should also take a config parameter - (port,
number of queues) - to enable multiple ethernet Rx queues.

Regards,
Nipun

> 
> >
> >Regards,
> >Nipun
> >
> 
> Regards,
> Pavan.