masayuki2009 commented on PR #7830: URL: https://github.com/apache/nuttx/pull/7830#issuecomment-1345841439
> This is not caused by the commit of https://github.com/apache/nuttx/pull/7616. This issue existed before https://github.com/apache/nuttx/pull/7616 was merged.

@anchao Hmm, I do not think so. With the latest upstream, `iperf -s` in TCP mode achieves ~120 Mbits/sec.

```
+ /home/ishikawa/opensource/QEMU/qemu-7.1/build/qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -chardev stdio,id=con,mux=on -serial chardev:con -global virtio-mmio.force-legacy=false -netdev user,id=u1,hostfwd=tcp:127.0.0.1:10023-10.0.2.15:23,hostfwd=tcp:127.0.0.1:15001-10.0.2.15:5001 -device virtio-net-device,netdev=u1,bus=virtio-mmio-bus.0 -mon chardev=con,mode=readline -kernel ./nuttx

telnetd [7:100]

NuttShell (NSH) NuttX-10.4.0
nsh> ps
  PID GROUP CPU PRI POLICY   TYPE    NPX STATE    EVENT     SIGMASK  STACK  USED   FILLED COMMAND
    0     0   0   0 FIFO     Kthread N-- Assigned           00000000 008144 001088  13.3% CPU0 IDLE
    1     1   1   0 FIFO     Kthread N-- Running            00000000 008144 000320   3.9% CPU1 IDLE
    2     2   2   0 FIFO     Kthread N-- Running            00000000 008144 000320   3.9% CPU2 IDLE
    3     3   3   0 FIFO     Kthread N-- Running            00000000 008144 000080   0.9% CPU3 IDLE
    4     4 --- 192 RR       Kthread --- Waiting  Semaphore 00000000 008096 000496   6.1% hpwork 0x402d55d0
    5     5 --- 100 RR       Kthread --- Waiting  Semaphore 00000000 008096 000528   6.5% lpwork 0x402d5600
    6     6   0 100 RR       Task    --- Running            00000000 008112 002288  28.2% nsh_main
    7     7 --- 100 RR       Task    --- Waiting  Semaphore 00000010 008112 001216  14.9% telnetd
nsh> uname -a
NuttX 10.4.0 4a84555d1c Dec 12 2022 12:55:55 arm64 qemu-armv8a
nsh> iperf -s
           IP: 10.0.2.15
mode=tcp-server sip=10.0.2.15:5001,dip=0.0.0.0:5001, interval=3, time=0
accept: 10.0.2.2,53868

        Interval         Transfer         Bandwidth
   0.00-   3.01 sec    26281460 Bytes      inf Mbits/sec
   3.01-   6.02 sec    72277300 Bytes      inf Mbits/sec
   6.02-   9.03 sec   117957780 Bytes      inf Mbits/sec
closed by the peer: 10.0.2.2,59578
iperf exit
nsh>
```

On Ubuntu:

```
$ iperf -c localhost -p 15001 -i 1 -t 10
------------------------------------------------------------
Client connecting to localhost, TCP port 15001
TCP window size: 3.26 MByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 59578 connected with 127.0.0.1 port 15001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  3.34 MBytes  28.0 Mbits/sec
[  3]  1.0- 2.0 sec  14.8 MBytes   124 Mbits/sec
[  3]  2.0- 3.0 sec  15.0 MBytes   126 Mbits/sec
[  3]  3.0- 4.0 sec  14.4 MBytes   121 Mbits/sec
[  3]  4.0- 5.0 sec  14.2 MBytes   120 Mbits/sec
[  3]  5.0- 6.0 sec  14.9 MBytes   125 Mbits/sec
[  3]  6.0- 7.0 sec  14.8 MBytes   124 Mbits/sec
[  3]  7.0- 8.0 sec  13.6 MBytes   114 Mbits/sec
[  3]  8.0- 9.0 sec  15.0 MBytes   126 Mbits/sec
[  3]  9.0-10.0 sec  14.9 MBytes   125 Mbits/sec
[  3]  0.0-10.1 sec   135 MBytes   112 Mbits/sec
```

However, with this PR, `iperf -s` in TCP mode is much slower.
```
+ /home/ishikawa/opensource/QEMU/qemu-7.1/build/qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic -machine virt,virtualization=on,gic-version=3 -chardev stdio,id=con,mux=on -serial chardev:con -global virtio-mmio.force-legacy=false -netdev user,id=u1,hostfwd=tcp:127.0.0.1:10023-10.0.2.15:23,hostfwd=tcp:127.0.0.1:15001-10.0.2.15:5001 -device virtio-net-device,netdev=u1,bus=virtio-mmio-bus.0 -mon chardev=con,mode=readline -kernel ./nuttx

telnetd [7:100]

NuttShell (NSH) NuttX-10.4.0
nsh> uname -a
NuttX 10.4.0 a13c876080 Dec 12 2022 12:57:33 arm64 qemu-armv8a
nsh> ps
  PID GROUP CPU PRI POLICY   TYPE    NPX STATE    EVENT     SIGMASK  STACK  USED   FILLED COMMAND
    0     0   0   0 FIFO     Kthread N-- Assigned           00000000 008144 001040  12.7% CPU0 IDLE
    1     1   1   0 FIFO     Kthread N-- Running            00000000 008144 000320   3.9% CPU1 IDLE
    2     2   2   0 FIFO     Kthread N-- Running            00000000 008144 000320   3.9% CPU2 IDLE
    3     3   3   0 FIFO     Kthread N-- Running            00000000 008144 000080   0.9% CPU3 IDLE
    4     4 --- 192 RR       Kthread --- Waiting  Semaphore 00000000 008096 000496   6.1% hpwork 0x402d55d0
    5     5 --- 100 RR       Kthread --- Waiting  Semaphore 00000000 008096 000528   6.5% lpwork 0x402d5600
    6     6   0 100 RR       Task    --- Running            00000000 008112 002240  27.6% nsh_main
    7     7 --- 100 RR       Task    --- Waiting  Semaphore 00000010 008112 001216  14.9% telnetd
nsh> iperf -s
           IP: 10.0.2.15
mode=tcp-server sip=10.0.2.15:5001,dip=0.0.0.0:5001, interval=3, time=0
accept: 10.0.2.2,42298

        Interval         Transfer         Bandwidth
   0.00-   3.01 sec      110424 Bytes     0.29 Mbits/sec
   3.01-   6.02 sec      215274 Bytes     0.27 Mbits/sec
   6.02-   9.03 sec      320124 Bytes     0.27 Mbits/sec
   9.03-  12.04 sec      424974 Bytes     0.27 Mbits/sec
  12.04-  15.05 sec      529824 Bytes     0.27 Mbits/sec
  15.05-  18.06 sec      634674 Bytes     0.27 Mbits/sec
  18.06-  21.07 sec      739524 Bytes     0.27 Mbits/sec
  21.07-  24.08 sec      844374 Bytes     0.27 Mbits/sec
  24.08-  27.09 sec      949224 Bytes     0.27 Mbits/sec
  27.09-  30.10 sec     1054074 Bytes     0.27 Mbits/sec
  30.10-  33.11 sec     1179737 Bytes     0.33 Mbits/sec
  33.11-  36.12 sec     1368553 Bytes     0.50 Mbits/sec
  36.12-  39.13 sec     1471220 Bytes     0.27 Mbits/sec
```

On Ubuntu:

```
$ iperf -c localhost -p 15001 -i 1 -t 10
------------------------------------------------------------
Client connecting to localhost, TCP port 15001
TCP window size: 3.26 MByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 42298 connected with 127.0.0.1 port 15001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  3.34 MBytes  28.0 Mbits/sec
[  3]  1.0- 2.0 sec  0.00 Bytes   0.00 bits/sec
[  3]  2.0- 3.0 sec  4.71 MBytes  39.6 Mbits/sec
[  3]  3.0- 4.0 sec   512 KBytes  4.19 Mbits/sec
[  3]  4.0- 5.0 sec  0.00 Bytes   0.00 bits/sec
[  3]  5.0- 6.0 sec  0.00 Bytes   0.00 bits/sec
[  3]  6.0- 7.0 sec  0.00 Bytes   0.00 bits/sec
[  3]  7.0- 8.0 sec  0.00 Bytes   0.00 bits/sec
[  3]  8.0- 9.0 sec  0.00 Bytes   0.00 bits/sec
[  3]  9.0-10.0 sec  0.00 Bytes   0.00 bits/sec
[  3]  0.0-10.2 sec  8.56 MBytes  7.05 Mbits/sec
```
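For anyone who wants to repeat the comparison, the steps behind the two runs above can be scripted roughly as follows. This is only a sketch based on the logs: the QEMU binary path, the `qemu-armv8a:netnsh` board/config name, and the choice of commits to check out are assumptions and will likely need adjusting for your environment.

```sh
# Build NuttX for the commit under test (upstream 4a84555d1c in the first
# run; the head of this PR in the second run).
cd nuttx
git checkout 4a84555d1c                        # or the PR branch
./tools/configure.sh -l qemu-armv8a:netnsh     # assumed board:config
make -j"$(nproc)"

# Boot the image under QEMU with user-mode networking, forwarding host
# port 15001 to the guest's iperf port 5001 (same options as in the logs).
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
  -machine virt,virtualization=on,gic-version=3 \
  -chardev stdio,id=con,mux=on -serial chardev:con \
  -global virtio-mmio.force-legacy=false \
  -netdev user,id=u1,hostfwd=tcp:127.0.0.1:10023-10.0.2.15:23,hostfwd=tcp:127.0.0.1:15001-10.0.2.15:5001 \
  -device virtio-net-device,netdev=u1,bus=virtio-mmio-bus.0 \
  -mon chardev=con,mode=readline -kernel ./nuttx

# In the guest NSH console, start the server:
#   nsh> iperf -s
# Then, on the Ubuntu host, drive the transfer through the forwarded port:
iperf -c localhost -p 15001 -i 1 -t 10
```

Running the same sequence once on the upstream commit and once with this PR applied should show whether the drop from ~120 Mbits/sec to ~0.27 Mbits/sec reproduces.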