Hi Stuart,

Thanks for getting back to me.

On Sun, 4 Aug 2019 at 00:53, Stuart Henderson <s...@spacehopper.org> wrote:
> You can't easily look at a driver in isolation, the network performance
> you see involves various parts of the network stack common to all drivers.
> Quite a few of the network drivers are rather similar on several OS,
> but performance can differ hugely due to other aspects of the system.

I get what you're saying. What I was trying to do was test, as far as
possible, the driver's outbound path (from the iperf3 process, down
through the kernel to the driver, and out the NIC) and the inbound path
as a separate test. If the machine hardware and OS setup are kept more
or less constant and we just swap in the various NIC cards, we might get
a picture of how efficiently each driver can pass packets from the NIC
to the kernel and vice versa.

> > as far as I know I need to provide developers
> > output of dmesg command,
> > debug output in the event of a crash (sendbug)
> >
> > compare performance under identical hardware conditions with different
> > Operating systems eg using iperf3 / tcpbench
>
> (btw iperf3 is not a particularly good benchmark; on a fast network it
> often ends up measuring the speed of fetching time from the clock as
> much as anything else.. iperf 2.x or tcpbench usually give more useful
> results.

So is iperf3 disruptive enough to the system under test that it can't
yield useful results at all?

> Also note if it's forwarding performance you're interested in,
> make sure you measure the right thing - run the packet generators/sinks
> on other fast machines routing or bridging through the machine under test,
> don't run them on the machine under test itself).
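To make the quoted advice concrete, here is roughly what a forwarding
test with tcpbench might look like. This is only a sketch: the interface
names (em0/ix0) and all addresses are placeholders, and the exact flags
should be checked against tcpbench(1), ifconfig(8) and route(8) on the
system in question.

```sh
# On the machine under test (the forwarder), enable IP forwarding
# and bring up both NICs; em0/ix0 and the addresses are placeholders.
sysctl net.inet.ip.forwarding=1
ifconfig em0 inet 10.0.1.1/24 up     # NIC with the known-good driver
ifconfig ix0 inet 10.0.2.1/24 up     # NIC with the driver under test

# On the sink machine (10.0.2.2, behind ix0), run the receiver:
tcpbench -s

# On the generator machine (10.0.1.2, behind em0), route the sink's
# subnet through the machine under test, then transmit for 60 seconds:
route add 10.0.2.0/24 10.0.1.1
tcpbench -t 60 10.0.2.2
```

The point of this layout is the one Stuart makes above: the generator
and sink do the measuring, so the machine under test spends its cycles
forwarding packets rather than running the benchmark tool.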
I was trying to narrow the test down to:
- driver performance on the inbound path, to a process on the machine
- driver performance on the outbound path, from a process on the machine

If the above tests are invalid in themselves, I'll try a forwarding
setup: an OpenBSD machine with two NICs, one NIC using a known-good
driver and the other using the driver under test (this is easier with
virtual machines). I was trying to minimise the moving parts in the
test; I would be worried that the known-good driver itself could become
a bottleneck in the test.

> > Is it useful for devs to have users to collect and diff this data and
> > present it to devs,
>
> Probably the thing that will help most is to keep an eye out for requests
> of testing of all sorts of diffs, test them and report back. Obviously
> anything related to drivers you use, but also diffs that say things like
> "unlock XX" or mention words like KERNEL_LOCK, NET_LOCK, mp-safe, etc.
> are all often on the path to increasing performance.

I'll try to do this in a more timely fashion.

> Take a look at what Hrvoje Popovski has been doing in the way of testing
> (just look over the tech@ list archives, you'll find many examples), he
> has put in a lot of effort and the tests he's been doing are really useful.

I'll take a look at this. Hrvoje has sent me advice on packet
generators; I'll try them out.

-- 
Kindest regards,
Tom Smyth.