On 12/2/13, 2:33 AM, Pekka Enberg wrote:
On 12/02/2013 02:23 AM, David Ahern wrote:
'perf sched timehist' provides an analysis of scheduling events.

Example usage:
     perf sched record -- sleep 1
     perf sched timehist

By default it shows the individual schedule events, including the time
between sched-in events for the task, the task scheduling delay (time
between wakeup and actually running) and the run time for the task:

            time cpu  task name[tid/pid]    b/n time sch delay  run time
   ------------- ---- -------------------- --------- --------- ---------
    79371.874569 [11] gcc[31949]               0.014     0.000     1.148
    79371.874591 [10] gcc[31951]               0.000     0.000     0.024
    79371.874603 [10] migration/10[59]         3.350     0.004     0.011
    79371.874604 [11] <idle>                   1.148     0.000     0.035
    79371.874723 [05] <idle>                   0.016     0.000     1.383
    79371.874746 [05] gcc[31949]               0.153     0.078     0.022
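
For illustration, a rough sketch of how the three columns can be computed
from sched_wakeup/sched_switch timestamps (not the actual perf code; the
struct and function names here are made up, and timestamps are assumed to
be nanoseconds tracked per task):

#include <stdint.h>

/* Illustrative per-task state; all timestamps in nanoseconds. */
struct task_sched_times {
	uint64_t last_sched_in;	/* previous time the task was scheduled in */
	uint64_t ready_to_run;	/* most recent sched_wakeup time */
};

/* "b/n time": time between consecutive sched-in events for the task */
uint64_t between_time(const struct task_sched_times *ts, uint64_t t_in)
{
	return ts->last_sched_in ? t_in - ts->last_sched_in : 0;
}

/* "sch delay": wakeup until the task actually starts running */
uint64_t sched_delay(const struct task_sched_times *ts, uint64_t t_in)
{
	return ts->ready_to_run ? t_in - ts->ready_to_run : 0;
}

/* "run time": sched-in until the task is switched out again */
uint64_t run_time(uint64_t t_in, uint64_t t_out)
{
	return t_out - t_in;
}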

What does "b/n time" mean?

Time between schedule-in events.


I'm seeing rather large "sch delay" numbers on Fedora 19's default
kernel that seem like a data error of some sort:

     4500.663117 [02] fix_client[7216]         0.007 18446744073708.033     1.786

Hmm... it means tprev < t_ready-to-run and both are non-zero, i.e., the wakeup time is after the time of the schedule event prior to fix_client getting scheduled. Is this a multi-socket box? I can add a sanity check and set the delay to 0 if that happens.
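
Something along these lines for the sanity check (a sketch with illustrative
names, not the actual patch):

#include <stdint.h>

/*
 * tprev is the time the task was scheduled in, t_ready its wakeup time,
 * both in nanoseconds.  Without the check, t_ready > tprev makes the
 * unsigned subtraction wrap around near 2^64 ns, which prints as the
 * ~18446744073708 ms value seen above.
 */
uint64_t calc_sched_delay(uint64_t tprev, uint64_t t_ready)
{
	if (!tprev || !t_ready || t_ready > tprev)
		return 0;	/* inconsistent ordering; report no delay */

	return tprev - t_ready;
}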


Scheduling delays don't seem to be part of "-s" summary output. Why?

Showing the scheduling delay is a recently added feature. At this point I have too many features locally and need to get the basic command into perf before adding more, such as stats for the scheduling delay.

David