On Tue, 2005-08-02 at 11:37 -0700, [EMAIL PROTECTED] wrote:
> Sadly, running my test case (running 1-4 tasks, each bound to a cpu, each
> pounding on gettimeofday(2)) I'm still seeing significant time spent
> spinning in this loop. Things are better: worst case time was down to
> just over 2ms from 34ms ... which is a significant improvement ... but
> 2ms still sounds ugly.
I'll settle for an order of magnitude improvement for a first pass :)

> I'm still seeing the asymmetric behavior where cpu3 sees the really high
> times, while cpu0,1,2 are seeing peaks of 170us, which is still not pretty.

How does cpu3's ITC synchronization compare to the others?

I suspect there's some reasonable way that we could do a backoff within the
do {} while() loop, but I'm not sure what it is. For a while, I was toying
around with the idea of how to convert:

	if (lcycle && time_after(lcycle, now))
		return lcycle;

into:

	if (lcycle && time_after(lcycle + min_delta, now))
		return lcycle;

But I don't know how that min_delta would be determined. A "close enough"
factor like this would still prevent jitter, but it would introduce a minimum
granularity increment for gettimeofday(). I think that might be a reasonable
performance vs. absolute accuracy trade-off. What happens to your worst case
time if you (just for a test) hard code a min_delta of something around
20-50? There could be some kind of boot/run time tunable to set this
min_delta if there's no good way to calculate it. It should be trivial to
add something like this to the fsys path as well, and it shouldn't disrupt
the nojitter path at all.

	Thanks,

	Alex

--
Alex Williamson                             HP Linux & Open Source Lab
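For illustration, here's roughly how the "close enough" check might slot into
the interpolator's do {} while() retry loop. This is an untested sketch: the
surrounding structure is paraphrased from memory, and MIN_DELTA is only a
stand-in for whatever tunable or calculation ends up supplying it.

	/* Pure guess in the 20-50 cycle range; would really come from a
	 * boot/run time tunable or a per-platform calculation. */
	#define MIN_DELTA	32

	do {
		lcycle = time_interpolator->last_cycle;
		now = time_interpolator_get_cycles(src);

		/*
		 * If another cpu already published a last_cycle within
		 * MIN_DELTA cycles of "now", return that value instead of
		 * fighting over the cmpxchg.  Time still never goes
		 * backwards; we just give up a little granularity.
		 */
		if (lcycle && time_after(lcycle + MIN_DELTA, now))
			return lcycle;
	} while (unlikely(cmpxchg(&time_interpolator->last_cycle,
				  lcycle, now) != lcycle));

	return now;

The fsys gettimeofday path would want an equivalent tolerance on its
last-returned-time compare, but the idea is the same.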