On 4/13/19 2:01 AM, Linhaifeng wrote:
> Sorry, the value 131081408 is just for example. Actually the result is like 
> this:
>  sqrt 2019-04-10 23:53:50: 43968
>  sqrt 2019-04-10 23:53:51: 44060
>  sqrt 2019-04-10 23:53:52: 49012
>  sqrt 2019-04-10 23:53:53: 38172
>  sqrt 2019-04-10 23:53:54: 131081408
>  sqrt 2019-04-10 23:53:55: 43600
>  sqrt 2019-04-10 23:53:56: 46704
>  sqrt 2019-04-10 23:53:57: 46880
>  sqrt 2019-04-10 23:53:58: 44332
>  ……
>  sqrt 2019-04-10 02:17:15: 136345876
>  ……
>  sqrt 2019-04-10 04:40:35: 136335644
>  ……

Hi,

Still, it would be interesting to look at the two raw values used to compute
the delta, to better diagnose what is going on.
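
Something along these lines would show both raw values (just a rough sketch,
using GCC/Clang's __rdtsc() intrinsic in place of your rdtsc() wrapper, so
the types and format strings are assumptions on my part):

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    for (;;) {
        uint64_t start = __rdtsc();
        sqrt(1024);
        uint64_t end = __rdtsc();
        /* Print both raw counter values, not just the difference; a signed
           delta also makes a backwards-running TSC stand out immediately. */
        printf("start=%llu end=%llu delta=%lld\n",
               (unsigned long long)start, (unsigned long long)end,
               (long long)(end - start));
        sleep(1);
    }
}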

-Will
> 
> -----Original Message-----
> From: William Cohen [mailto:wco...@redhat.com] 
> Sent: April 12, 2019 22:06
> To: Linhaifeng <haifeng....@huawei.com>; linux-perf-us...@vger.kernel.org; 
> linux-kernel@vger.kernel.org
> Subject: Re: perf tools:Is there any tools to found out the max latency by 
> irq or cpu idle
> 
> On 4/11/19 8:57 PM, Linhaifeng wrote:
>> Hi,
>> I have a single-threaded application like this:
>>
>> while (1) {
>>     start = rdtsc();
>>     sqrt(1024);
>>     end = rdtsc();
>>     cycles = end - start;
>>     printf("sqrt %d-%02d-%02d %02d:%02d:%02d: %lu\n",
>>            1900 + timeinfo->tm_year, 1 + timeinfo->tm_mon, timeinfo->tm_mday,
>>            timeinfo->tm_hour, timeinfo->tm_min, timeinfo->tm_sec,
>>            cycles);
>> }
>> It prints the cycles taken by sqrt every second and runs with taskset -c 1 ./sqrt.
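>>
>> (rdtsc() above is just a thin wrapper around the RDTSC instruction; it is
>> not shown here, but a minimal sketch of it, assuming x86-64 and GCC-style
>> inline asm, would be:)
>>
>> #include <stdint.h>
>>
>> /* Read the time-stamp counter: RDTSC returns the low 32 bits in EAX and
>>  * the high 32 bits in EDX. */
>> static inline uint64_t rdtsc(void)
>> {
>>     uint32_t lo, hi;
>>     __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
>>     return ((uint64_t)hi << 32) | lo;
>> }
>>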
>> The result of the test is:
>>
>> sqrt 2019-04-10 23:53:50: 43968
>> sqrt 2019-04-10 23:53:51: 44060
>> sqrt 2019-04-10 23:53:52: 49012
>> sqrt 2019-04-10 23:53:53: 38172
>> sqrt 2019-04-10 23:53:54: 131081408
>> sqrt 2019-04-10 23:53:55: 43600
>> sqrt 2019-04-10 23:53:56: 46704
>> sqrt 2019-04-10 23:53:57: 46880
>> sqrt 2019-04-10 23:53:58: 44332
>> ……
>> sqrt 2019-04-10 02:17:15: 131081408
>> ……
>> sqrt 2019-04-10 04:40:35: 131081408
>> ……
>>
>> Every 2 hours 23 minutes there is one of these large cycle counts. I used
>> perf sched but did not find any sched_switch events.
> 
> Hi,
> 
> The fact that it is the same value 131081408 every 2 hours 23 minutes looks
> suspect. One would expect some variation in the counts.  It looks like there
> is some rollover or overflow issue.  It would be helpful to print out the
> start and end values to see what is going on with the raw rdtsc values.
> Maybe the thread is being moved between processors and the TSCs are out of
> sync.  What particular processor model was this running on? Was this running
> on physical hardware or inside a kvm guest?
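> 
> For example, logging the CPU id next to the raw counter values would show
> whether the big delta lines up with the thread moving between cores.  A
> rough sketch (sched_getcpu() is glibc-specific, and __rdtsc() here stands
> in for your rdtsc() wrapper):
> 
> #define _GNU_SOURCE
> #include <math.h>
> #include <sched.h>       /* sched_getcpu() */
> #include <stdint.h>
> #include <stdio.h>
> #include <unistd.h>
> #include <x86intrin.h>   /* __rdtsc() */
> 
> int main(void)
> {
>     for (;;) {
>         int cpu_before = sched_getcpu();
>         uint64_t start = __rdtsc();
>         sqrt(1024);
>         uint64_t end = __rdtsc();
>         int cpu_after = sched_getcpu();
>         /* If cpu_before != cpu_after on the iteration with the huge delta,
>          * the two reads came from different cores whose TSCs may differ. */
>         printf("cpu %d->%d start=%llu end=%llu delta=%lld\n",
>                cpu_before, cpu_after,
>                (unsigned long long)start, (unsigned long long)end,
>                (long long)(end - start));
>         sleep(1);
>     }
> }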
> 
> According to the Intel 64 and IA-32 Architectures Software Developer's Manual 
> Volume 3 (325384-sdm-vol-3abcd.pdf):
> 
> The RDTSC instruction reads the time-stamp counter and is guaranteed to 
> return a monotonically increasing unique value whenever executed, except for 
> a 64-bit counter wraparound. Intel guarantees that the time-stamp counter 
> will not wraparound within 10 years after being reset. The period for counter 
> wrap is longer for Pentium 4, Intel Xeon, P6 family, and Pentium processors.
> 
> -Will Cohen
> 
> 
>>
>> L2GW_2680:/home/fsp/zn # perf sched record -C 6-11 -o perf.sched
>> ^C[ perf record: Woken up 64 times to write data ]
>> [ perf record: Captured and wrote 204.878 MB perf.sched (1911189 samples) ]
>>
>> L2GW_2680:/home/fsp/zn # perf sched latency -i perf.sched
>>
>> -----------------------------------------------------------------------------------------------------------------
>>   Task                  |   Runtime ms  | Switches | Average delay ms | Maximum delay ms | Maximum delay at       |
>> -----------------------------------------------------------------------------------------------------------------
>> -----------------------------------------------------------------------------------------------------------------
>>   TOTAL:                |      0.000 ms |        0 |
>> ---------------------------------------------------
>>
>>
>>
>> Are there any other perf tools to find out the max latency caused by irq or
>> cpu idle?
>>
>>
> 
