hujun260 opened a new pull request, #3246:
URL: https://github.com/apache/nuttx-apps/pull/3246

   ## Summary
   Add a reference count to the TCB to prevent it from being deleted while still in use.
   
   This is part of replacing the big lock with smaller ones and reducing the large locks 
   related to the TCB. In many scenarios we only need to guarantee that the TCB will not 
   be released, not full mutual exclusion, so a reference count can stand in for the lock 
   and reduce the possibility of lock recursion.
   
   Should be merged together with https://github.com/apache/nuttx/pull/17468.
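   The refcount approach described in the summary can be sketched roughly as follows. This is a minimal illustration of the pattern, not the actual implementation: `struct tcb_s`'s layout here and the names `tcb_addref`/`tcb_release` are hypothetical and do not necessarily match the API introduced by this PR or apache/nuttx#17468.
   
   ```c
   #include <assert.h>
   #include <stdatomic.h>
   #include <stdbool.h>
   #include <stdio.h>
   #include <stdlib.h>
   
   /* Hypothetical sketch: a TCB that is freed only when the last
    * reference is dropped, instead of being protected by a big lock.
    */
   
   struct tcb_s
   {
     atomic_int refcount;  /* Holders that need the TCB to stay alive */
     int pid;
   };
   
   static struct tcb_s *tcb_alloc(int pid)
   {
     struct tcb_s *tcb = malloc(sizeof(*tcb));
     assert(tcb != NULL);
     atomic_init(&tcb->refcount, 1);  /* Creator holds the first reference */
     tcb->pid = pid;
     return tcb;
   }
   
   /* Pin the TCB: the caller may now use it without holding a lock. */
   static void tcb_addref(struct tcb_s *tcb)
   {
     atomic_fetch_add(&tcb->refcount, 1);
   }
   
   /* Unpin the TCB: it is actually released when the count hits zero. */
   static bool tcb_release(struct tcb_s *tcb)
   {
     if (atomic_fetch_sub(&tcb->refcount, 1) == 1)
       {
         free(tcb);
         return true;   /* Last reference dropped: TCB freed */
       }
     return false;
   }
   
   int main(void)
   {
     struct tcb_s *tcb = tcb_alloc(42);
   
     tcb_addref(tcb);            /* e.g. another context pins the TCB   */
     assert(!tcb_release(tcb));  /* creator drops its ref: still alive  */
     assert(tcb->pid == 42);     /* safe to read, no lock held          */
     assert(tcb_release(tcb));   /* last ref dropped: TCB is freed      */
   
     printf("refcount sketch OK\n");
     return 0;
   }
   ```
   
   The atomic decrement in `tcb_release` is what removes the need for a lock around deletion: whichever context drops the last reference frees the TCB, and everyone else only ever sees a live object.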
   ## Impact
   
   TCB release path.
   
   ## Testing
   
   
   esp32s3-devkit:nsh
   
   user_main: scheduler lock test
   sched_lock: Starting lowpri_thread at 97
   sched_lock: Set lowpri_thread priority to 97
   sched_lock: Starting highpri_thread at 98
   sched_lock: Set highpri_thread priority to 98
   sched_lock: Waiting...
   sched_lock: PASSED No pre-emption occurred while scheduler was locked.
   sched_lock: Starting lowpri_thread at 97
   sched_lock: Set lowpri_thread priority to 97
   sched_lock: Starting highpri_thread at 98
   sched_lock: Set highpri_thread priority to 98
   sched_lock: Waiting...
   sched_lock: PASSED No pre-emption occurred while scheduler was locked.
   sched_lock: Finished
   
   End of test memory usage:
   VARIABLE    BEFORE     AFTER
   ========  ========  ========
   arena        5d8bc     5d8bc
   ordblks          7         6
   mxordblk     548a0     548a0
   uordblks      5014      5014
   fordblks     588a8     588a8
   
   Final memory usage:
   VARIABLE    BEFORE     AFTER
   ========  ========  ========
   arena        5d8bc     5d8bc
   ordblks          1         6
   mxordblk     59238     548a0
   uordblks      4684      5014
   fordblks     59238     588a8
   user_main: Exiting
   ostest_main: Exiting with status 0
   nsh> uname -a
   NuttX 12.11.0 ef91333e3ac-dirty Dec 10 2025 16:11:04 xtensa esp32s3-devkit
   nsh>
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
