On 2017/2/7 at 2:46 PM, Eric Dumazet wrote:
On Mon, Feb 6, 2017 at 10:21 PM, panxinhui <xin...@linux.vnet.ibm.com> wrote:

hi all
        I ran some netperf tests and collected some benchmark results.
I have also attached my test script and the netperf results (Excel).

There are two machines: one runs netserver and the other runs the netperf
benchmark. They are connected by a 1000 Mbps network.

#ip link information
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
UNKNOWN mode DEFAULT group default qlen 1000
     link/ether ba:68:9c:14:32:02 brd ff:ff:ff:ff:ff:ff

According to the results, there is not much of a performance gap between them.
Since we are only testing throughput, pvqspinlock shows the overhead of its
paravirt machinery, while qspinlock shows a small improvement over spinlock.
My simple summary for this test case is
qspinlock > spinlock > pvqspinlock.

When running 200 concurrent netperf instances, the total throughput is as follows.

lock type   | concurrent runners | total throughput | variance
---------------------------------------------------------------
spinlock    | 199                | 66882.8          | 89.93
qspinlock   | 199                | 66350.4          | 72.0239
pvqspinlock | 199                | 64740.5          | 85.7837
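
The actual driver is in the attached test script; purely for reference, a
minimal sketch of how 200 concurrent runs could be launched and summed might
look like the following (the host name netserver-host, the run length, and
the output parsing are assumptions, not taken from the attachment):

    #!/bin/sh
    # Sketch of a concurrent netperf driver (assumed, not the attached script).
    HOST=netserver-host   # assumed: machine running netserver
    N=200                 # number of concurrent runners
    LEN=60                # assumed seconds per run

    for i in $(seq 1 $N); do
        # -P 0 suppresses banners; the last field of the result line is
        # the throughput in 10^6 bits/s.
        netperf -H "$HOST" -l "$LEN" -t TCP_STREAM -P 0 > "out.$i" &
    done
    wait

    # Sum the per-run throughputs to get the total.
    awk '{ total += $NF } END { print "total throughput:", total }' out.*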

You can find more data in netperf.xlsx.

thanks
xinhui


Hi xinhui

1Gbit NIC is too slow for this use case. I would try a 10Gbit NIC at least...

Alternatively, you could use the loopback interface.  (netperf -H 127.0.0.1)

tc qd add dev lo root pfifo limit 10000
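
For reference, a loopback-based run along those lines could be put together
roughly as follows (the pfifo limit and -H 127.0.0.1 come from the suggestion
above; the test length and TCP_STREAM type are assumptions):

    # Give the loopback device a pfifo qdisc, as suggested above.
    tc qdisc add dev lo root pfifo limit 10000

    # Start the receiver locally.
    netserver

    # Drive traffic over loopback; -l 60 (seconds) and TCP_STREAM are assumed here.
    netperf -H 127.0.0.1 -l 60 -t TCP_STREAM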


great, thanks
xinhui
