Thomas Gleixner <t...@linutronix.de> writes:

> On Tue, 12 Jul 2016, Nicolai Stange wrote:
>> Another issue is that ->min_delta_ns and ->max_delta_ns are measured in
>> raw clock time while the delta in clockevents_program_event() would now
>> be interpreted as being in monotonic clock time:
>>   clc = ((unsigned long long) delta * dev->mult_mono) >> dev->shift;
>
> Does that really matter much?
>
>> Ideally, I'd like to get rid of ->min_delta_ns and ->max_delta_ns
>> altogether and consistently use ->min_delta_ticks and
>> ->max_delta_ticks instead. AFAICS, ->min_delta_ns is really needed only
>> for setting dev->next_event in clockevents_program_min_delta().
>> dev->next_event is read only from __clockevents_update_freq() for
>> reprogramming purposes and thus, assuming 0 for ->min_delta_ns in
>> clockevents_program_min_delta() would probably work: a reprogramming
>> would invoke clockevents_program_min_delta() once again.
>
> I completely fail to parse the above paragraph.
>
>> The downside of this approach is that a quick grep reveals 40 clockevent
>> device drivers whose initialization code would need to get touched in
>> order to convert them from min_delta_ns/max_delta_ns to
>> min_delta_ticks/max_delta_ticks.
>>
>> So, the question is whether I should do all of this or whether the
>> doubled timer interrupts aren't annoying enough to justify such a big
>> change?
>
> Can you provide an initial patch which does the adjustment w/o all the
> related churn so we can see how intrusive that gets?
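[For readers following the thread: the conversion under discussion takes a
nanosecond delta, clamps it to the device's programmable range
(->min_delta_ns/->max_delta_ns), and then scales it to hardware ticks with the
device's mult/shift pair. The sketch below is a simplified, self-contained
illustration of that arithmetic only; the standalone struct and the example
mult/shift/limit values are invented for illustration and are not taken from
any real driver, and "mult_mono" in the quoted line is the proposed
monotonic-time scaling factor, not an existing field.]

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Simplified stand-in for the relevant struct clock_event_device
     * fields; only what the ns -> ticks conversion needs is shown.
     */
    struct cev_sketch {
            uint32_t mult;                  /* ns -> ticks scaling factor */
            uint32_t shift;                 /* ns -> ticks scaling shift */
            uint64_t min_delta_ns;          /* shortest programmable delta */
            uint64_t max_delta_ns;          /* longest programmable delta */
    };

    /*
     * Mirror of the quoted line: clamp the requested delta to the device
     * limits (all expressed in nanoseconds of the same clock), then scale
     * it to hardware ticks.  The concern above is that after the change,
     * delta would be monotonic time while the limits stay in raw clock time.
     */
    static uint64_t delta_ns_to_ticks(const struct cev_sketch *dev,
                                      int64_t delta_ns)
    {
            if (delta_ns > (int64_t)dev->max_delta_ns)
                    delta_ns = dev->max_delta_ns;
            if (delta_ns < (int64_t)dev->min_delta_ns)
                    delta_ns = dev->min_delta_ns;

            return ((uint64_t)delta_ns * dev->mult) >> dev->shift;
    }

    int main(void)
    {
            /* Hypothetical 10 MHz timer: mult/2^shift ~= 0.01 ticks per ns */
            struct cev_sketch dev = {
                    .mult           = 42949673,     /* ~0.01 * 2^32 */
                    .shift          = 32,
                    .min_delta_ns   = 1000,         /* 1 us */
                    .max_delta_ns   = 100000000,    /* 100 ms */
            };

            /* 1 ms requested -> roughly 10000 ticks on this timer */
            printf("%llu\n",
                   (unsigned long long)delta_ns_to_ticks(&dev, 1000000));
            return 0;
    }

[Switching to ->min_delta_ticks/->max_delta_ticks, as proposed above, would
move the clamp after the scaling step, so the limits stay in the hardware's
own units regardless of which clock the nanosecond delta is expressed in.]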
Please see the RFC tagged series at

  http://lkml.kernel.org/g/20160713130017.8202-1-nicsta...@gmail.com

I tried to answer/address your above questions in the cover letter.

Note that I split the x86 TSC related patches off:

  http://lkml.kernel.org/g/20160713130344.8319-1-nicsta...@gmail.com

Thanks,

Nicolai