On Wed, Jun 12, 2019 at 02:58:21PM +0200, Jason A. Donenfeld wrote:
> Hi Peter,
>
> Thanks for the explanation.
>
> On Wed, Jun 12, 2019 at 2:29 PM Peter Zijlstra <pet...@infradead.org> wrote:
> > Either local_clock() or cpu_clock(cpu). The sleep hooks are not
> > something the consumer has to worry about.
>
> Alright. Just so long as it *is* tracking sleep, then that's fine. If
> it isn't some important aspects of the protocol will be violated.
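[Editor's sketch, not code from the thread: the consumer side being described
is just a direct read of local_clock(); the wrapper name below is made up, and
per the exchange above the sleep hooks are handled internally, not by the
caller.]

#include <linux/sched/clock.h>
#include <linux/types.h>

/* Illustrative only: take a nanosecond timestamp from the scheduler clock. */
static u64 example_take_timestamp(void)
{
	/* cpu_clock(cpu) is the equivalent for reading a specific CPU. */
	return local_clock();
}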
The scheduler also cares about how long a task has been sleeping, so
yes, that's automagic.

> > If an architecture doesn't provide a sched_clock(), you're on a
> > seriously handicapped arch. It wraps in ~500 days, and aside from
> > changing jiffies_lock to a latch, I don't think we can do much about it.
>
> Are you sure? The base definition I'm looking at uses jiffies:
>
> unsigned long long __weak sched_clock(void)
> {
>         return (unsigned long long)(jiffies - INITIAL_JIFFIES)
>                                         * (NSEC_PER_SEC / HZ);
> }
>
> On a CONFIG_HZ_1000 machine, jiffies wraps in ~49.7 days:
>
> >>> ((1<<32)-1)/1000/(60*60*24)
> 49.710269618055555

Bah, I must've done the math wrong (or assumed HZ=100).

> Why not just use get_jiffies_64()? The lock is too costly on 32bit?

Deadlocks when you do get_jiffies_64() from within an update.

What would be an easier update is forcing everyone to use the
GENERIC_SCHED_CLOCK fallback or something like that. OTOH, changing
jiffies_lock to a latch shouldn't be rocket science either.

> > (the scheduler too expects sched_clock() to not wrap short of the u64
> > and so having those machines online for 500 days will get you 'funny'
> > results)
>
> Ahh. So if, on the other hand, the whole machine explodes at the wrap
> mark, I guess my silly protocol is the least of concerns, and so this
> shouldn't matter?

That was my thinking...
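[Editor's note on the deadlock remark above: on 32-bit, get_jiffies_64() is
roughly the following seqlock read loop over jiffies_lock (paraphrased from
kernel/time/jiffies.c; jiffies_lock and jiffies_64 are the kernel's own
globals), so calling it from inside the tick update, where that lock is
already held for writing, can never make progress.]

u64 get_jiffies_64(void)
{
	unsigned int seq;
	u64 ret;

	do {
		/*
		 * Waits for an even (no writer) sequence count; if the
		 * caller is itself inside the jiffies_lock write section,
		 * this never returns and the CPU is stuck.
		 */
		seq = read_seqbegin(&jiffies_lock);
		ret = jiffies_64;
	} while (read_seqretry(&jiffies_lock, seq));

	return ret;
}

[A latch, as suggested in the reply, would instead keep two copies of the
data so readers never have to wait on the writer.]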