Hi Bruce!
Thank you for reviewing; sorry I didn't write as clearly as possible.
I was trying to say more than "the performance improved". I didn't
call out RFC 2544 since many people may not know much about it. I was
also trying to convey what was observed, and the conclusion derived
from the observation, without getting too long.
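For anyone skimming the thread, the idea in the patch is roughly the
following on the RX side (a minimal sketch of the mechanism only, not
the actual patch text; the TX side is analogous):

  #include <rte_ethdev.h>
  #include <rte_prefetch.h>

  /* Sketch: prefetch the per-queue structure before the indirect
   * call into the PMD, so the queue fields the driver touches first
   * are likely in cache by the time it reads them. */
  static inline uint16_t
  rx_burst_sketch(uint8_t port_id, uint16_t queue_id,
                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
  {
      struct rte_eth_dev *dev = &rte_eth_devices[port_id];
      void *rxq = dev->data->rx_queues[queue_id];

      rte_prefetch0(rxq); /* overlap the load with the call setup */
      return (*dev->rx_pkt_burst)(rxq, rx_pkts, nb_pkts);
  }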
When the NIC processing loop rate is around 400,000/sec, the entry and
exit savings are not easily observable when the test-to-test variation
in average data rate is larger than the packet rate gain. If the RFC
2544 zero-loss convergence is set too fine, the time it takes to run a
complete test increases substantially (I set my convergence at about
0.25% of line rate, with 60 seconds per measurement point). Unless the
current convergence data rate is already close to the zero-loss rate
for the next point, a small improvement is not going to show up as a
higher zero-loss rate.

However, the test makes a series of measurements, each of which has an
average latency and a packet loss count. Also, since the test
equipment uses a predefined sequence algorithm that generates the same
data rates for each test to a high degree of accuracy, the results for
the same data rates can be compared across tests. If someone repeats
the tests, I am pointing out the particular data to look at. One
60-second measurement by itself does not give sufficient accuracy to
draw a conclusion, but information correlated across multiple
measurements gives a basis for a correct conclusion.
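To put the 0.25% convergence figure in perspective, here is a quick
back-of-envelope calculation (standard Ethernet arithmetic; the
64-byte frame size is an assumption carried over from the l3fwd
numbers below):

  #include <stdio.h>

  /* 64-byte frames on 10 GbE: 64 + 20 bytes of preamble and
   * inter-frame gap on the wire per frame. */
  int main(void)
  {
      const double line_bps  = 10e9;
      const double wire_bits = (64 + 20) * 8;
      const double line_pps  = line_bps / wire_bits;  /* ~14.88 Mpps */
      const double step_pps  = line_pps * 0.0025;     /* 0.25% step */

      printf("line rate %.0f pps, convergence step ~%.0f pps\n",
             line_pps, step_pps);
      return 0;
  }

At ~14.88 Mpps line rate that is a step of roughly 37,000 packets/sec,
which is why a small per-packet saving can land inside a single
convergence step and not move the zero-loss number.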
For l3fwd to be stable with i40e, the queue sizes need to be increased
(I use 2k rings; a sketch of the change is just below) and the packet
(mbuf) count also needs to be increased. It then gets 100% zero-loss
line rate with 64-byte packets on two 10 GbE connections (given the
correct Fortville firmware). This makes it good for verifying the
correct NIC firmware, but it does not work well for testing, since the
data rate is network limited. I have my own stable packet processing
code which I used for testing. I have multiple programs, but during
the optimization cycle I hit line rate and had to move to a 5-tuple
processing program to get a higher load to proceed. I have a doc that
covers this setup and the optimization results, but it cannot be
shared. Someone making their own measurements needs to have made
sufficient tests to understand the stability of their test
environment.
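For reference, the l3fwd change is along these lines (the macro names
match the l3fwd sources of that era; the mbuf count here is an
illustrative value, not my exact setting):

  /* examples/l3fwd/main.c: larger descriptor rings, and a mbuf pool
   * enlarged enough to keep the bigger rings populated. */
  #define RTE_TEST_RX_DESC_DEFAULT 2048   /* up from the stock default */
  #define RTE_TEST_TX_DESC_DEFAULT 2048   /* up from the stock default */
  #define NB_MBUF                  32768  /* sized to back the rings */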
Mike
-----Original Message-----
From: Richardson, Bruce
Sent: Wednesday, October 28, 2015 3:45 AM
To: Polehn, Mike A
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] [Patch] Eth Driver: Optimization for improved NIC
processing rates
On Tue, Oct 27, 2015 at 08:56:31PM +0000, Polehn, Mike A wrote:
> Prefetch of interface access variables while calling into driver RX and TX
> subroutines.
>
> For converging zero loss packet task tests, a small drop in latency
> for zero loss measurements and small drop in lost packet counts for
> the lossy measurement points was observed, indicating some savings of
> execution clock cycles.
>
Hi Mike,
the commit log message above seems a bit awkward to read. If I understand it
correctly, would the below suggestion be a shorter, clearer equivalent?
Prefetch RX and TX queue variables in ethdev before driver function call
This has been measured to produce higher throughput and reduced latency
in RFC 2544 throughput tests.
Or perhaps you could suggest yourself some similar wording. It would also be
good to clarify with what applications the improvements were seen - was it
using testpmd or l3fwd or something else?
Regards,
/Bruce