On 5 January 2015 at 14:22, One Thousand Gnomes <gno...@lxorguk.ukuu.org.uk> wrote:
>> This is not the case on ARM. Here's an example where we use a hardware
>> timer for the delay loop:
>>
>> Calibrating delay loop (skipped), value calculated using timer frequency..
>> 6.00 BogoMIPS (lpj=12000)
>>
>> which is nowhere near the precision of the CPU clock rate. So, when
>> we have a hardware timer based delay implementation, the bogomips
>> value which the kernel has access to (and thus the loops_per_jiffy
>> value) is totally ... bogus.
>
> So multiply it by a fudge factor so it looks similar and in proportion.
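(For context: when udelay() is backed by a hardware timer, the figure is
derived rather than measured. Below is a simplified sketch of what the
timer-based path does, not the exact code in arch/arm/lib/delay.c, and
the function name is made up.)

#include <linux/kernel.h>
#include <asm/delay.h>		/* struct delay_timer */

/*
 * Sketch only: with a timer-based delay implementation, loops_per_jiffy
 * is taken straight from the timer frequency, so the printed BogoMIPS
 * tracks the timer clock and never the CPU clock.
 */
void register_timer_delay_sketch(const struct delay_timer *timer)
{
	/* one "loop" is one timer tick */
	unsigned long lpj = timer->freq / HZ;

	/*
	 * The calibration code reports lpj / (500000 / HZ): a 3MHz timer
	 * with HZ=250 gives lpj = 12000, printed as "6.00 BogoMIPS
	 * (lpj=12000)", which is the log quoted above.
	 */
	pr_info("%lu.%02lu BogoMIPS (lpj=%lu)\n",
		lpj / (500000 / HZ), (lpj / (5000 / HZ)) % 100, lpj);
}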
It's not that simple. The CPU frequency and the timer frequency are
independent: one SoC may have a 6MHz timer, another SoC with a similar
CPU frequency may use a 24MHz timer (or a much slower one). The former
would report 12 BogoMIPS and the latter 48 for the same CPU clock, so a
fudge factor is not really any better than choosing a constant bogomips
value at build time (say 2000).

If an always-constant bogomips is not desirable, we are left with having
to do a dummy calibration to guess some value to report to the user as
BogoMIPS. Even if we do this, on modern ARM processors the result rarely
matches the CPU frequency; it only does on older/simpler CPUs. With
newer ones, the simple two-instruction loop never takes exactly 2 cycles
(it may even differ on the same CPU depending on the kernel build, e.g.
code alignment affecting the branch predictor; I recall that on
ARM11MPCore the average came out at a half-cycle multiple because
consecutive iterations of the loop took a different number of cycles).

(I'm not debating whether we should fix this problem or not, just trying
to find the right solution; reverting the original patch doesn't
entirely fix newer CPUs)
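FWIW, the loop being timed is essentially the two-instruction sequence
below. This is only a sketch of the classic loop in
arch/arm/lib/delay-loop.S, rewritten as inline assembly with an invented
wrapper name:

/*
 * Nominally one subtract plus one branch per iteration, i.e. the "2
 * instructions" behind the BogoMIPS == 2 x loop-frequency convention.
 * On newer cores the pair is not 2 cycles and need not even take the
 * same number of cycles on consecutive iterations.
 */
static inline void loop_delay_sketch(unsigned long loops)
{
	asm volatile(
	"1:	subs	%0, %0, #1\n"
	"	bhi	1b"
	: "+r" (loops)
	:
	: "cc");
}

--
Catalin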