It would appear the new clockevent API has a one-nanosecond resolution. That certainly sounds fine-grained, but I'm afraid it's still too coarse for some applications.
In our application, we need periodic clock interrupts at about 100 kHz. If the (programmable) interrupt period must be rounded to the nearest nanosecond, we accumulate a worst-case error of 100,000/s * 0.5 ns = 50 µs/s. We need to keep the cumulative error within, say, 1 ms/day, or roughly 11 ns/s. (The error is not measured against real time, but between different parts of our hardware that are run off the same clock.)

For our needs, we have built our own "clockevent" system with a nominal one-femtosecond precision. The nanosecond resolution would be sufficient if there were a way to "nudge" the next interrupt by a nanosecond from the interrupt handler.

Marko

-- 
Marko Rauhamaa      mailto:[EMAIL PROTECTED]      http://pacujo.net/marko/
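P.S. To make the arithmetic concrete, here's a rough user-space sketch (plain C, nothing from the actual clockevent API; the 99,999 Hz example rate and all names are made up for illustration). It compares the drift you get when the period is truncated to whole nanoseconds against keeping the sub-nanosecond residue in an accumulator and nudging the next event by 1 ns whenever a whole nanosecond has built up:

/*
 * Rough sketch only: user-space C, illustrative numbers, no kernel code.
 * "Naive" always programs the truncated 10000 ns period; "nudged" carries
 * the sub-nanosecond residue and stretches the next period by 1 ns when
 * a whole nanosecond has accumulated.
 */
#include <stdio.h>
#include <stdint.h>

#define FS_PER_NS 1000000ULL            /* femtoseconds per nanosecond */

int main(void)
{
    uint64_t events_per_s = 99999;      /* example rate, "about 100 kHz" */
    uint64_t period_fs = 1000000000000000ULL / events_per_s; /* period in fs */
    uint64_t period_ns = period_fs / FS_PER_NS;              /* truncated: 10000 ns */
    uint64_t frac_fs = period_fs - period_ns * FS_PER_NS;    /* ~0.1 ns lost per event */

    /* Naive: the fractional part is simply dropped on every event. */
    uint64_t naive_drift_fs = frac_fs * events_per_s;

    /* Nudged: accumulate the residue; add 1 ns to the next period when
     * a whole nanosecond has piled up. */
    uint64_t residue_fs = 0;
    for (uint64_t i = 0; i < events_per_s; i++) {
        uint64_t next_ns = period_ns;
        residue_fs += frac_fs;
        if (residue_fs >= FS_PER_NS) {
            next_ns += 1;               /* the 1 ns "nudge" */
            residue_fs -= FS_PER_NS;
        }
        (void)next_ns;  /* this is what we'd hand to the event layer */
    }

    printf("naive drift:  %llu ns per second\n",
           (unsigned long long)(naive_drift_fs / FS_PER_NS));
    printf("nudged drift: %llu fs per second (< 1 ns)\n",
           (unsigned long long)residue_fs);
    return 0;
}

With the accumulator, the leftover error after one second is under a nanosecond, well inside the 11 ns/s budget; without it, this particular rate drifts by about 10 µs every second.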