<snip>
>>
>> It's not a per-port quantity; it's across the eventdev that the
>> number of event queues is made equal to the number of ethdevs used.
>>
>> This is to prevent the event queues from being overcrowded, i.e. if
>> there is only one event queue and multiple eth devices, then SW/HW
>> will hit a bottleneck enqueueing all the mbufs to a single event
>> queue.
>Yes, you are correct, but here is my confusion:
>
>1. Let us assume there are 2 ports, i.e. port 0 <==> port 1.
>2. We have 4 workers and 2 event queues (2 ports, so Q-0 for port-0 and
>Q-1 for port-1).
>3. The event mode (SW or HW) is parallel.
>4. In this case we need to rely on Q-2, which can absorb the events for
>the single-event TX adapter in the L2FWD_EVENT_TX_ENQ case.
>5. But for L2FWD_EVENT_TX_DIRECT (which sends packets out directly),
>this does not look right, as multiple parallel workers may concurrently
>transmit to the same destination port queue (as there is only 1 Tx
>queue configured).
>
>Can you help me understand how this is worked around?

We only select TX_DIRECT when the eventdev coupled with the ethdev has
the RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
It is the PMD's responsibility to synchronize concurrent Tx queue access
from multiple cores if it exposes the above capability.
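
For reference, here is a minimal sketch of that selection. The
rte_event_eth_tx_adapter_caps_get() API and the capability flag are the
public DPDK ones; the L2FWD_EVENT_TX_* values stand in for the enum used
in this series:

    #include <rte_eventdev.h>
    #include <rte_event_eth_tx_adapter.h>

    enum { L2FWD_EVENT_TX_DIRECT, L2FWD_EVENT_TX_ENQ }; /* per series */

    /* Pick the Tx path for an (eventdev, ethdev) pair. Sketch only. */
    static int
    select_tx_mode(uint8_t event_dev_id, uint16_t eth_port_id)
    {
            uint32_t caps = 0;

            if (rte_event_eth_tx_adapter_caps_get(event_dev_id,
                                                  eth_port_id, &caps) < 0)
                    return -1;

            if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
                    return L2FWD_EVENT_TX_DIRECT; /* PMD Txq is MT safe */

            return L2FWD_EVENT_TX_ENQ; /* go through Tx adapter queue */
    }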

In the case of octeontx2, we expose
RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT only when both the eventdev
and the ethdev are octeontx2 PMDs.
Since octeontx2 supports DEV_TX_OFFLOAD_MT_LOCKFREE, multi-core Tx queue
atomicity is taken care of by HW.
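
From the application side, a driver-agnostic way to check for that
guarantee is to query the ethdev Tx offload capabilities (sketch; error
handling trimmed):

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* True if the port guarantees lock-free Tx from multiple cores. */
    static bool
    port_tx_is_mt_lockfree(uint16_t eth_port_id)
    {
            struct rte_eth_dev_info dev_info;

            rte_eth_dev_info_get(eth_port_id, &dev_info);

            return !!(dev_info.tx_offload_capa &
                      DEV_TX_OFFLOAD_MT_LOCKFREE);
    }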

>
>>
>> >3. Will this work for vdev ports? If not, are we adding a check for
>> >the same in the `main` function?
>>
>> I have verified the functionality for --vdev=event_sw0 and it seems
>> to work fine.
>Thanks. So whether it is a physical or a virtual Ethernet device, all
>packets have to come to the worker cores for 'no-mac-updating'.
>
>>
>> >
>> >snipped
