On Wed, May 25, 2011 at 12:53 PM, J. William Campbell
<jwilliamcampb...@comcast.net> wrote:
> On 5/24/2011 5:17 PM, Graeme Russ wrote:
>>
>> On Wed, May 25, 2011 at 5:19 AM, Wolfgang Denk <w...@denx.de> wrote:
>>>
>>> Dear Graeme Russ,
>>>
>>> In message <4ddbe22d.6050...@gmail.com> you wrote:
>>>>>>
>>>>>> Why must get_timer() be used to perform "meaningful time measurement?"
>>>>>
>>>>> Excellent question! It was never intended to be used as such.
>>>>
>>>> Because get_timer() as it currently stands can be, as it is assumed
>>>> to return milliseconds
>>>
>>> Yes, but without any guarantee for accuracy or resolution.
>>> This is good enough for timeouts, but nothing for time measurements.
>>
>> Out of curiosity, are there any platforms that do not use their most
>> accurate source(*) as the timebase for get_timer()? If a platform is
>> using its most accurate, commonly available, source for get_timer()
>> then the whole accuracy argument is moot - you can't get any better
>> anyway, so why sweat the details.
>
> Hi All,
> Well, it is not quite that simple. The "accuracy" of the 1 ms
> interrupt rate is controlled, in all cases I know about, by the
> resolution of the programmable divider used to produce it. It appears
> that the x86 uses a 1.19318 MHz crystal oscillator to produce the
> nominal 1 ms timer tick. (There is a typo in line 30 of
> arch/x86/lib/pcat_timer.c that says 1.9318. I couldn't make any of the
> numbers work until I figured this out).

Thanks, I will fix that (although pcat_timer.c is not used by any
current x86 board)

> The tick is produced by dividing the 1.19318 MHz rate by 1194, which
> produces an interrupt rate of 999.3 Hz, or about 0.068% error.
> However, the performance counter on an x86 is as exact as the crystal
> frequency of the CPU is. FWIW, you can read the performance counter
> with rdtsc on a 386/486 and the CYCLES and CYCLES2 registers on later
> Intel/AMD chips.

Hmm, I hadn't thought of that
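For what it's worth, a minimal (and untested) sketch of reading the TSC
with gcc inline asm could look something like the following - purely an
illustration, not code that exists in the tree today:

/*
 * Hypothetical helper, illustration only: read the 64-bit Time Stamp
 * Counter on CPUs that implement the rdtsc instruction.
 */
static inline unsigned long long read_tsc(void)
{
	unsigned int lo, hi;

	/* rdtsc places the low 32 bits in EAX and the high 32 bits in EDX */
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));

	return ((unsigned long long)hi << 32) | lo;
}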
> So yes, there is at least one example of a cpu that does not use its
> most accurate (or highest resolution) time source.
>>
>> (*)I'm actually referring to what is commonly available for that
>> platform, and not where a board has a high precision/accuracy source
>> in addition to the common source.
>>
>> As a followup question, how many platforms use two completely
>> independent sources for udelay() and get_timer()? x86 does, but I
>> plan to change this so the interrupt kicks the new prescaler (which
>> can be done at a period >> 1ms) and udelay() and get_timer() will use
>> the same tick source and therefore have equivalent accuracy.
>
> Are you sure of this? From what I see in arch/x86/lib/pcat_timer.c,
> timer 0 is programmed to produce the 1 kHz rate timer tick and is also
> read repeatedly in __udelay to produce the delay value. They even
> preserve the 1194 inaccuracy, for some strange reason.

Well, the only x86 board is an sc520 and does not use pcat_timer.c :)
Look in arch/x86/cpu/sc520/ and you might get a better picture

> I see that the sc520 does appear to use different timers for the
> interrupt source, and it would appear that it may be "exact", but I
> don't know what the input to the prescaler is so I can't be sure. Is
> the input to the prescaler really 8.3 MHz exactly? Also, is the same
> crystal used for the input to the prescaler counter and the "software
> timer millisecond count"? If not, then we may have different
> accuracies in this case as well.

Ah, you did look ;)

Yes, they are both derived from the onboard xtal, which can be either
33.000MHz or 33.333MHz (there is a system register that must be written
to, to tell the sc520 what crystal is installed)

> Also of note, it appears that in pcat_timer.c, udelay() is not
> available until interrupts are enabled. That is technically
> non-compliant, although it obviously seems not to matter.

OK, it looks like x86 needs a bit of a timer overhaul - Thanks for the
heads-up

Regards,

Graeme
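P.S. For the record, a quick back-of-the-envelope check of the 1194
divider mentioned above (assuming the nominal 1.19318 MHz PIT input
clock) - this is a throwaway host-side snippet for illustration, not
U-Boot code:

#include <stdio.h>

int main(void)
{
	const double pit_hz = 1193180.0;    /* nominal PIT input clock */
	const unsigned int divider = 1194;  /* divider used for the "1 ms" tick */
	double tick_hz = pit_hz / divider;  /* ~999.31 Hz */
	double err_pct = (1000.0 - tick_hz) / 1000.0 * 100.0;  /* ~0.07% slow */

	printf("tick = %.2f Hz, error vs 1 kHz = %.3f %%\n", tick_hz, err_pct);

	return 0;
}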