On Mon, 15 Apr 2013, Arjan van de Ven wrote:

> to put the "cost" into perspective; programming a timer in one-shot mode
> is some math on the cpu (to go from kernel time to hardware time),
> which is a multiply and a shift (or a divide), and then actually
> programming the hardware, which is at the cost of (approximately) a
> cache miss or two (so give or take in the "hundreds" of cycles), at
> least on moderately modern hardware (e.g. the last few years)
Well, these are PCI transactions, which are bound to be high latency,
possibly adding up to more than a microsecond in total. A timer interrupt
may last 2-4 microseconds at best without PCI transactions.

> not cheap. But also not INSANE expensive... and it breaks even already
> if you only save one or two cache misses elsewhere.

Ok, then maybe go dynticks if we can save at least one timer tick?