On Thu, Sep 20, 2012 at 7:31 AM, Steven Rostedt <rost...@goodmis.org> wrote:
> On Mon, 2012-09-17 at 16:49 -0700, Andy Lutomirski wrote:
>
>> I haven't looked in any great detail, but the approach looks sensible
>> and shouldn't slow down the vsyscall code.
>>
>> That being said, as long as you're playing with this, here are a
>> couple of thoughts:
>>
>> 1. The TSC-reading code does this:
>>
>>         ret = (cycle_t)vget_cycles();
>>
>>         last = VVAR(vsyscall_gtod_data).clock.cycle_last;
>>
>>         if (likely(ret >= last))
>>                 return ret;
>>
>> I haven't specifically benchmarked the cost of that branch, but I
>> suspect it's a fairly large fraction of the total cost of
>> vclock_gettime. IIUC, the point is that there might be a few cycles'
>> worth of clock skew even on systems with otherwise usable TSCs, and we
>> don't want a different CPU to return complete garbage if the cycle
>> count is just below cycle_last.
>>
>> A different formulation would avoid the problem: set cycle_last to,
>> say, 100ms *before* the time of the last update_vsyscall, and adjust
>> the wall_time, etc. variables accordingly. That way a few cycles (or
>> anything up to 100ms) of skew won't cause an overflow. Then you could
>> kill that branch.
>
> I'm curious... If the task gets preempted after reading ret, and doesn't
> get to run again for another 200ms, would that break it?
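
For concreteness, here is a minimal sketch of the biased-cycle_last
formulation described above. Everything in it -- the struct, the field
names, and the 100ms constant -- is illustrative, not code from any
actual patch:

    #include <stdint.h>

    #define SKEW_BIAS_NS 100000000ULL /* 100ms of slack for cross-CPU TSC skew */

    struct gtod_snapshot {
            uint64_t cycle_last;  /* TSC at last update, biased ~100ms back */
            uint64_t base_ns;     /* wall-clock ns corresponding to cycle_last */
            uint32_t mult, shift; /* cycles -> ns, clocksource-style */
    };

    /* Updater: bias cycle_last into the past, so readers never see
     * now < cycle_last unless the skew exceeds ~100ms. */
    static void update_snapshot(struct gtod_snapshot *s, uint64_t now_cycles,
                                uint64_t now_ns, uint32_t mult, uint32_t shift)
    {
            uint64_t bias_cycles = (SKEW_BIAS_NS << shift) / mult;

            s->cycle_last = now_cycles - bias_cycles;
            s->base_ns = now_ns - SKEW_BIAS_NS; /* fold the bias back out */
            s->mult = mult;
            s->shift = shift;
    }

    /* Reader: no "if (ret >= last)" branch; skew up to the bias is
     * absorbed because (now_cycles - cycle_last) stays positive. */
    static uint64_t snapshot_gettime_ns(const struct gtod_snapshot *s,
                                        uint64_t now_cycles)
    {
            uint64_t delta = now_cycles - s->cycle_last;

            return s->base_ns + ((delta * s->mult) >> s->shift);
    }

(A reading that goes stale across an update is a separate concern, which
the question above probes and the reply below addresses.)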
Only if cycle_last changes while preempted (or from a different CPU). That
case is covered by the seqlock in do_realtime and do_monotonic.

--Andy
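
For reference, a rough sketch of the seqlock pattern Andy refers to,
modeled on the do_realtime() read side in arch/x86/vdso/vclock_gettime.c
of that era (simplified; gtod and vgetns() are assumed to be the vDSO's
internal data pointer and cycles-to-ns helper, and details may differ
from the real tree):

    static int do_realtime(struct timespec *ts)
    {
            unsigned long seq, ns;

            do {
                    /* Snapshot the update sequence count. */
                    seq = read_seqcount_begin(&gtod->seq);
                    ts->tv_sec = gtod->wall_time_sec;
                    /* vgetns() reads the TSC and cycle_last together. */
                    ns = gtod->wall_time_nsec + vgetns();
                    /* If an update (and hence a new cycle_last) landed
                     * while we were preempted, retry with fresh values. */
            } while (unlikely(read_seqcount_retry(&gtod->seq, seq)));

            timespec_add_ns(ts, ns);
            return 0;
    }

So a 200ms preemption costs at most a retry: the stale TSC reading is
thrown away rather than combined with a newer cycle_last.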