yf13 commented on PR #12320:
URL: https://github.com/apache/nuttx/pull/12320#issuecomment-2106171697
@acassis, @xiaoxiang781216 and @anchao, I tried `lio_listio_2_1` locally
with QEMU 6.2, and the behavior looks similar:
Upstream version:
```
ABC
NuttShell (NSH) NuttX-12.4.0
nsh>
nsh>
nsh> cat /proc/version
NuttX version 12.4.0 a8f81e4051 May 12 2024 16:40:00 rv-virt/ltp
nsh> ps
PID GROUP PRI POLICY TYPE NPX STATE EVENT SIGMASK STACK USED FILLED COMMAND
0 0 0 FIFO Kthread - Ready 0000000000000000 002032 000844 41.5% Idle_Task
1 1 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c6a8 0x8008c6bc
2 2 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c6a8 0x8008c6d0
3 3 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c6a8 0x8008c6e4
4 4 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c6a8 0x8008c6f8
5 5 100 RR Task - Running 0000000000000000 001992 001740 87.3%! nsh_main
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966108 19244 32946864 19340 32946864 32 1
nsh> hello
Hello, World!!
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966108 19244 32946864 21516 32946864 32 1
nsh> ltp_interfaces_lio_listio_2_1
lio_listio/2-1.c PASSED
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966108 19444 32946664 814812 32661664 34 3
nsh> ltp_interfaces_lio_listio_2_1
lio_listio/2-1.c Error lio_listio() waited for list completion
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966108 19892 32946216 814812 32406744 42 4
nsh> ltp_interfaces_lio_listio_2_1
lio_listio/2-1.c Error lio_listio() waited for list completion
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966108 20340 32945768 814812 32406296 50 4
nsh> poweroff
```
Patched version:
```
ABC
NuttShell (NSH) NuttX-12.4.0
nsh>
nsh>
nsh> cat /proc/version
NuttX version 12.4.0 23030d3556 May 12 2024 16:32:51 rv-virt/ltp
nsh> ps
PID GROUP PRI POLICY TYPE NPX STATE EVENT SIGMASK STACK USED FILLED COMMAND
0 4 0 FIFO Kthread - Ready 0000000000000000 002032 000780 38.3% Idle_Task 0x8008c3e8 0x8008c438
1 4 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c3e8 0x8008c438
2 4 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c3e8 0x8008c438
3 4 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c3e8 0x8008c438
4 4 100 RR Kthread - Waiting Semaphore 0000000000000000 001984 000540 27.2% lpwork 0x8008c3e8 0x8008c438
5 5 100 RR Task - Running 0000000000000000 001992 001740 87.3%! nsh_main
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966044 16236 32949808 16332 32949808 32 1
nsh> hello
Hello, World!!
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966044 16236 32949808 18508 32949808 32 1
nsh> ltp_interfaces_lio_listio_2_1
lio_listio/2-1.c PASSED
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966044 16436 32949608 811804 32664608 34 3
nsh> ltp_interfaces_lio_listio_2_1
lio_listio/2-1.c Error lio_listio() waited for list completion
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966044 16884 32949160 811804 32409688 42 4
nsh> ltp_interfaces_lio_listio_2_1
lio_listio/2-1.c Error lio_listio() waited for list completion
nsh> free
total used free maxused maxfree nused nfree
Umem: 32966044 17332 32948712 811804 32409240 50 4
nsh> poweroff
```
The `rv-virt/ltp` config is based on `rv-virt/nsh` with the following options added:
```
CONFIG_ARCH_SETJMP_H=y
CONFIG_FS_AIO=y
CONFIG_FS_TMPFS=y
CONFIG_LIBC_LOCALE=y
CONFIG_SCHED_LPWORK=y
CONFIG_TESTING_LTP=y
```
Please correct me if I am not using LTP correctly, as this is my first time
trying it.
Also, we see a memory saving of 19244 - 16236 = 3008 bytes with this config,
due to the number of kthreads it adds.
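
For what it's worth, judging only by the failure text, `lio_listio/2-1.c` seems to submit a list with `LIO_NOWAIT` and complain if every request has already completed by the time the call returns. Below is a simplified standalone sketch of that kind of check, not the actual LTP source; the `/tmp/lio_sketch` path and the 512-byte buffer are made up for illustration, and `/tmp` assumes the TMPFS mount from the config above:
```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE 512

int main(void)
{
  static char buf[BUF_SIZE];
  struct aiocb cb;
  struct aiocb *list[1];
  int fd;

  memset(buf, 'A', sizeof(buf));
  memset(&cb, 0, sizeof(cb));

  fd = open("/tmp/lio_sketch", O_CREAT | O_RDWR | O_TRUNC, 0644);
  if (fd < 0)
    {
      perror("open");
      return EXIT_FAILURE;
    }

  cb.aio_fildes     = fd;
  cb.aio_buf        = buf;
  cb.aio_nbytes     = sizeof(buf);
  cb.aio_offset     = 0;
  cb.aio_lio_opcode = LIO_WRITE;
  list[0]           = &cb;

  /* LIO_NOWAIT: the call should queue the request and return at once */

  if (lio_listio(LIO_NOWAIT, list, 1, NULL) != 0)
    {
      perror("lio_listio");
      return EXIT_FAILURE;
    }

  /* If the request is still EINPROGRESS here, the call clearly did not
   * wait for the list; if it is already done, a strict test may report
   * that lio_listio() "waited for list completion", even though fast
   * I/O can legitimately finish before this check runs.
   */

  if (aio_error(&cb) == EINPROGRESS)
    {
      printf("returned before completion (expected for LIO_NOWAIT)\n");
    }
  else
    {
      printf("request already complete at return\n");
    }

  /* Reap the result once the write has finished */

  while (aio_error(&cb) == EINPROGRESS)
    {
      usleep(1000);
    }

  printf("aio_return: %zd\n", aio_return(&cb));
  close(fd);
  return EXIT_SUCCESS;
}
```
Polling `aio_error()` for `EINPROGRESS` is just the standard way to observe whether the list was still in flight when `lio_listio()` returned; the sketch only illustrates the `LIO_NOWAIT` semantics the test name refers to.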