keykur111 opened a new issue, #17206:
URL: https://github.com/apache/nuttx/issues/17206

   ### Description
   
   Hello everyone,
   
   I am trying to calculate the scheduling latency for preemptive tasks/pthreads 
in the following way. _(The code below is not the actual implementation; it is 
simplified for illustration.)_
   **Code explanation**:
   1. Create two tasks/pthreads.
   2. I calculate the two values below for each switch: task_0 -> task_1, 
task_1 -> main, and main -> task_0.
   
   ```
   Context_switch      = time between "task_0 switch_out" and "task_1 switch_in"
   Scheduling_overhead = time between "task_1 switch_in" and "first line of task_1"
   ```
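   As a reference, here is a minimal host-runnable sketch of how these two 
deltas can be computed from raw 32-bit cycle-counter samples (the function 
names are my own, not the actual implementation); unsigned subtraction keeps 
the result correct across a single counter wrap-around:
   
   ```c
   #include <stdint.h>
   
   /* Hypothetical helpers (names are assumptions, not the real code).
    * Unsigned 32-bit subtraction yields the correct elapsed count even
    * when the counter wraps once between the two samples. */
   
   uint32_t cycles_elapsed(uint32_t start, uint32_t end)
   {
     return end - start;
   }
   
   /* Context_switch: previous task's switch_out to next task's switch_in */
   uint32_t context_switch_cycles(uint32_t switchout_prev, uint32_t switchin_next)
   {
     return cycles_elapsed(switchout_prev, switchin_next);
   }
   
   /* Scheduling_overhead: a task's switch_in to its first instrumented line */
   uint32_t sched_overhead_cycles(uint32_t switchin, uint32_t taskbegin)
   {
     return cycles_elapsed(switchin, taskbegin);
   }
   ```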
   ```
   int tm_latency_thread_0_entry(int argc, FAR char *argv[])
   {
        while (1) {
                TimingCounters_arr[0].taskbegin_ctr = get_timing(); /* first line after switch-in */
                tm_thread_suspend(0);                               /* block on binary semaphore */
        }
     return 0;
   }
   
   int tm_latency_thread_1_entry(int argc, FAR char *argv[])
   {
        while (1) {
                TimingCounters_arr[1].taskbegin_ctr = get_timing();
                tm_thread_suspend(1);
        }
     return 0;
   }
   
   /* Pseudocode: the main task runs at the highest priority */
   main task (prio=100)
   {
        tm_thread_create(tm_latency_thread_0_entry, prio=99);
        tm_thread_create(tm_latency_thread_1_entry, prio=98);
        while (1) {
                tm_thread_sleep(5); /* seconds */
                TimingCounters_arr[2].taskbegin_ctr = get_timing();
                print_results();
        }
     return 0;
   }
   ```
   3. To capture the switch_out and switch_in timestamps, I use a modified 
sched_note (scheduler instrumentation): instead of logging the values, the 
hooks save the CPU cycle counts into my own variables.
   
   ```
   /* Modified sched_note hooks: record the cycle counter instead of logging */
   
   void sched_note_resume(FAR struct tcb_s *tcb)
   {
        TimingCounters_arr[new_task].switchin_ctr = get_timing();
   }
   
   void sched_note_suspend(FAR struct tcb_s *tcb)
   {
        TimingCounters_arr[last_task].switchout_ctr = get_timing();
   }
   ```
   The execution flows like:
   `main sleeps 5 s -> "main(switch_out)" -> task_0(switch_in) -> 
task_0(task_begin) -> task_0(suspend) -> "task_0(switch_out)" -> 
task_1(switch_in) -> task_1(task_begin) -> task_1(suspend) -> 
"task_1(switch_out)" -> main(switch_in) -> main(task_begin) -> print the 
results -> main sleeps 5 s -> ...`
   
   4. Task switches happen via preemptive scheduling; to suspend the 
higher-priority tasks I use a binary semaphore.
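   As a rough sketch of point 4, a binary-semaphore suspend/resume pair could 
look like the following (a host-runnable POSIX approximation; 
`tm_thread_suspend`, `tm_thread_resume`, and the per-task semaphore array are 
my assumptions based on the description above, not the actual code):
   
   ```c
   #include <semaphore.h>
   
   #define NUM_TASKS 2
   
   sem_t g_task_sem[NUM_TASKS];
   
   /* Initialize each semaphore to 0 so a task blocks on its first suspend */
   void tm_sem_init(void)
   {
     for (int i = 0; i < NUM_TASKS; i++)
       {
         sem_init(&g_task_sem[i], 0, 0);
       }
   }
   
   /* Called by the higher-priority task itself: blocks until resumed,
    * which forces a switch to the next lower-priority ready task. */
   void tm_thread_suspend(int id)
   {
     sem_wait(&g_task_sem[id]);
   }
   
   /* Called by a lower-priority task to make task `id` runnable again */
   void tm_thread_resume(int id)
   {
     sem_post(&g_task_sem[id]);
   }
   ```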
   
   **Issue**: 
   Compiled with -Os (the default), I get normal values:
   ```
   Task 0 ContextSwitch : cycles =  45 time = 562ns
   Task 0 Scheduling Overhead : cycles =  189 time = 2362ns
   
   Task 1 ContextSwitch : cycles =  31 time = 387ns
   Task 1 Scheduling Overhead : cycles =  158 time = 1975ns
   
   Main Task ContextSwitch : cycles =  31 time = 387ns
   Main Task Scheduling Overhead : cycles =  196 time = 2450ns
   ```
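   (For context, these ns figures are consistent with converting cycles at an 
80 MHz core clock; the clock rate is my inference from the numbers above, 
e.g. 45 cycles -> 562 ns, and may not match the actual configuration:)
   
   ```c
   #include <stdint.h>
   
   /* Assumed conversion behind the printed times: cycles at an 80 MHz core
    * clock (an inference from the reported numbers, not confirmed), with
    * truncating integer division. */
   #define CPU_HZ 80000000ull
   
   uint32_t cycles_to_ns(uint32_t cycles)
   {
     return (uint32_t)((uint64_t)cycles * 1000000000ull / CPU_HZ);
   }
   ```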
   With the same code compiled with -O2, I get much higher values:
   ```
   Task 0 ContextSwitch : cycles =  31 time = 387ns
   Task 0 Scheduling Overhead : cycles =  7057 time = 88212ns
   
   Task 1 ContextSwitch : cycles =  31 time = 387ns
   Task 1 Scheduling Overhead : cycles =  14710 time = 183875ns
   
   Main Task ContextSwitch : cycles =  31 time = 387ns
   Main Task Scheduling Overhead : cycles =  199 time = 2487ns
   ```
   As seen above, the scheduling overhead increases far too much; this happens 
in approximately 8 out of 10 iterations. It also does not occur when I step 
through in the debugger (it only happens in a free run compiled with -O2).
   
   **Build/test environment:**
   Target board: S32K148_EVB
   NuttX version: 12.10.0-RC0
   I skip the [runtime crash](https://github.com/apache/nuttx/issues/17170) 
by excluding the S32K driver files from compilation with -O2.
   
   ### Verification
   
   - [x] I have verified before submitting the report.

