On 2019-05-07 11:52, Elo, Matias (Nokia - FI/Espoo) wrote:
Hi,

The SW eventdev rx adapter has an internal enqueue buffer 
'rx_adapter->event_enqueue_buffer', which stores packets received from the NIC 
until at least BATCH_SIZE (=32) packets have accumulated, and only then enqueues them 
to the eventdev. This causes a lot of problems, for example in validation testing, 
where often only a small number of specific test packets is sent to the NIC. One 
would always have to transmit at least BATCH_SIZE test packets before anything can 
be received from eventdev. Additionally, if the rx packet rate is slow, this adds a 
considerable amount of delay.
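
For illustration, a simplified sketch of the buffering behaviour described above; the
struct and function names are made up here and do not match the actual
rte_event_eth_rx_adapter code:

#include <rte_eventdev.h>

#define BATCH_SIZE 32

/* Illustrative only -- not the actual rx adapter implementation. */
struct enqueue_buffer {
	struct rte_event events[2 * BATCH_SIZE];
	uint16_t count;
};

static void
buffer_event(uint8_t dev_id, uint8_t port_id, struct enqueue_buffer *buf,
	     const struct rte_event *ev)
{
	buf->events[buf->count++] = *ev;

	/* Nothing is enqueued to the eventdev until BATCH_SIZE events have
	 * accumulated, which is what stalls tests sending only a few packets. */
	if (buf->count >= BATCH_SIZE) {
		uint16_t n = rte_event_enqueue_new_burst(dev_id, port_id,
							 buf->events,
							 buf->count);
		if (n < buf->count) {
			/* Drop or retry of the unenqueued tail omitted here. */
		}
		buf->count = 0;
	}
}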

Looking at the rx adapter API and the sw implementation code, there doesn’t seem to be a 
way to disable this internal caching. In my opinion this “functionality” makes 
testing the sw rx adapter so cumbersome that either the implementation should be 
modified to enqueue the cached packets after a while (some performance penalty) or 
there should be some method to disable caching. Any opinions on how this issue could be 
fixed?


The rx adapter's service function will be called repeatedly, at a very high frequency (especially in near-idle situations). One potential scheme is to keep track, by means of a counter, of the number of calls since the last packet was received from the NIC, and flush the buffers after a certain number of idle (zero-NIC-dequeue) calls.

In that case, you maintain good performance, while not introducing too much latency.

The DSW Event Device takes this approach to flushing its internal buffers.
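
A rough sketch of what that could look like in the adapter's service function,
building on the enqueue_buffer sketch above (FLUSH_IDLE_THRESHOLD and flush_buffer()
are invented for illustration, and the threshold value is just an example):

#include <rte_ethdev.h>
#include <rte_eventdev.h>

/* Number of consecutive empty rx_burst calls tolerated before a partially
 * filled buffer is flushed; the value is just an example. */
#define FLUSH_IDLE_THRESHOLD 1024

struct adapter_state {
	struct enqueue_buffer buf;	/* internal event buffer, as sketched above */
	unsigned int idle_calls;	/* calls since the last NIC packet */
};

static int
rx_adapter_service_func(void *args)
{
	struct adapter_state *a = args;
	struct rte_mbuf *mbufs[BATCH_SIZE];
	uint16_t n;

	n = rte_eth_rx_burst(0 /* port */, 0 /* queue */, mbufs, BATCH_SIZE);
	if (n > 0) {
		a->idle_calls = 0;
		/* ...convert mbufs to events and add them to a->buf... */
	} else if (a->buf.count > 0 &&
		   ++a->idle_calls >= FLUSH_IDLE_THRESHOLD) {
		/* The NIC has been quiet for a while: flush whatever is
		 * buffered so small bursts don't get stuck. */
		flush_buffer(a);	/* hypothetical helper */
		a->idle_calls = 0;
	}

	return 0;
}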

Another way would be to use a timer: either an adapter-internal TSC timestamp tracking the buffer age, or an rte_timer timer. rdtsc is not free, so I would lean toward the first option.
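
A sketch of the TSC variant, assuming the buffer records a timestamp (first_tsc, an
invented field here) when the first event lands in an empty buffer; the 100 us limit
is only an example:

#include <rte_cycles.h>

/* Maximum buffered-event age before a forced flush; example value only. */
#define BUF_MAX_AGE_US 100

static void
maybe_flush_aged_buffer(struct adapter_state *a)
{
	uint64_t max_age_cycles =
		(rte_get_tsc_hz() * BUF_MAX_AGE_US) / 1000000;

	/* first_tsc would be set with rte_rdtsc() when the first event is
	 * added to an empty buffer. */
	if (a->buf.count > 0 &&
	    rte_rdtsc() - a->buf.first_tsc >= max_age_cycles)
		flush_buffer(a);	/* hypothetical helper, as above */
}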
