On Thu, Jul 28, 2016 at 03:01:36PM +0800, Xin Long wrote:
> On Wed, Jul 27, 2016 at 9:54 AM, kernel test robot
> <xiaolong...@intel.com> wrote:
> >
> > FYI, we noticed a -37.2% regression of netperf.Throughput_Mbps due to
> > commit:
> >
> > commit a6c2f792873aff332a4689717c3cd6104f46684c ("sctp: implement prsctp
> > TTL policy")
> > https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> >
> > in testcase: netperf
> > on test machine: 4 threads Ivy Bridge with 8G memory
> > with following parameters:
> >
> >     ip: ipv4
> >     runtime: 300s
> >     nr_threads: 200%
> >     cluster: cs-localhost
> >     send_size: 10K
> >     test: SCTP_STREAM_MANY
> >     cpufreq_governor: performance
> >
> >
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are
> > provided for informational purposes only. Any difference in system
> > hardware or software design or configuration may affect actual
> > performance.
>
> It doesn't make much sense to me. The code I added cannot be
> triggered without enabling any pr policies, and I also did the tests in
It seems these pr policies have to be turned on by user space, i.e.
netperf in this case? I checked netperf's source code and it doesn't
seem to set any option related to SCTP PR policy, but I'm new to
network code so I could be wrong or missing something.

> my local environment, the result looks normal to me compared to the
> prior version.

Can you share your numbers? We run netperf like this:

netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1

The full log of the run is attached for your reference.

> Recently the sctp performance is not stable, as during these patches,
> netperf cannot get the result, but return ENOTCONN, which may
> also affect the testing. Anyway, we've fixed the -ENOTCONN issue
> already in the latest version.

I tested commit 96b585267f55, which is Linus' git tree HEAD on 08/03;
I guess the fix you mentioned should already be in there? But
unfortunately, the throughput of netperf is still low (we did the test
5 times):

$ cat */netperf.json
{
  "netperf.Throughput_Mbps": [
    2470.6974999999998
  ]
}{
  "netperf.Throughput_Mbps": [
    2486.7675
  ]
}{
  "netperf.Throughput_Mbps": [
    2478.945
  ]
}{
  "netperf.Throughput_Mbps": [
    2429.465
  ]
}{
  "netperf.Throughput_Mbps": [
    2476.9150000000004
  ]
}

Considering what you said, that the patch shouldn't make a difference,
the performance drop is really confusing. Any idea what could be the
cause?

Thanks.

Regards,
Aaron
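For reference, here is a small sketch of the arithmetic behind the numbers above. Treating the 0-day report's -37.2% as relative to the pre-commit baseline is my assumption, not something stated in the report:

```python
# The five netperf.Throughput_Mbps samples pasted above (Mbps)
samples = [2470.6974999999998, 2486.7675, 2478.945,
           2429.465, 2476.9150000000004]

mean = sum(samples) / len(samples)
print(f"mean throughput: {mean:.3f} Mbps")       # ~2468.558 Mbps

# If the reported -37.2% regression is measured against the
# pre-commit baseline, that baseline would have been roughly:
baseline = mean / (1 - 0.372)
print(f"implied baseline: {baseline:.1f} Mbps")  # ~3930.8 Mbps
```

So the current runs sit around 2.47 Gbps, against an implied pre-commit figure of roughly 3.9 Gbps under that assumption.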
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &
2016-08-04 16:12:43 netperf -4 -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K -H 127.0.0.1 &

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2373.19   51.14    51.14   7.061   7.061

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2374.59   51.14    51.11   7.057   7.053

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2633.32   51.14    51.11   6.364   6.360

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2429.70   51.14    51.11   6.897   6.893

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2397.80   51.14    51.11   6.989   6.985

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2610.31   51.14    51.11   6.420   6.416

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2377.36   51.14    51.11   7.049   7.045

SCTP 1-TO-MANY STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 127.0.0.1 () port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

212992 212992  10240    300.00      2569.31   51.14    51.11   6.523   6.518
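One cross-check of the attached log against the netperf.json samples: with nr_threads at 200% on a 4-thread machine, eight netperf instances run concurrently, and the mean of the eight per-instance throughputs in this log matches the first JSON sample (2470.6975), which suggests the harness reports the average across instances. That last point is an inference from the numbers, not from the harness source:

```python
# Per-instance throughputs (10^6 bits/s) from the eight runs above
runs = [2373.19, 2374.59, 2633.32, 2429.70,
        2397.80, 2610.31, 2377.36, 2569.31]

# 4 CPU threads x nr_threads=200% -> 8 concurrent netperf instances
instances = 4 * 2
assert len(runs) == instances

mean = sum(runs) / len(runs)
print(f"mean of {len(runs)} runs: {mean:.4f} Mbps")  # matches 2470.6975
print(f"aggregate: {sum(runs) / 1000:.2f} Gbps")
```

The aggregate across the eight instances works out to roughly 19.8 Gbps over loopback.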