* Mathieu Desnoyers ([EMAIL PROTECTED]) wrote:
> * Christoph Lameter ([EMAIL PROTECTED]) wrote:
> > On Tue, 21 Aug 2007, Mathieu Desnoyers wrote:
> > 
> > > - Changed smp_rmb() for barrier(). We are not interested in read order
> > > across cpus, what we want is to be ordered wrt local interrupts only.
> > > barrier() is much cheaper than a rmb().
> > 
> > But this means a preempt disable is required. RT users do not want that.
> > Without preemption the processor can be moved after c has been determined.
> > That is why the smp_rmb() is there.
> 
> preemption is required if we want to use cmpxchg_local anyway.
> 
> We may have to find a way to use preemption while being able to give an
> upper bound on the preempt disabled execution time. I think I got a way
> to do this yesterday.. I'll dig in my patches.
> 
Yeah, I remember having done so: moving the preempt_disable() nearer to the
cmpxchg, checking whether the cpu id changed between the
raw_smp_processor_id() read and the preempt_disable() done later, and
redoing the operation if it differs (a rough sketch of the idea is appended
below). It makes the slow path faster, but it makes the fast path more
complex, so I finally dropped the patch. And we are talking about ~10 cycles
for the slow path here; I doubt it's worth the complexity added to the fast
path.

-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68
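
Appended for reference, a minimal sketch of the recheck-the-cpu-id idea
described above, assuming a SLUB-style allocation fast path. This is not the
dropped patch itself; the function name, get_cpu_slab(), and the c->freelist
/ c->offset layout are assumptions made for illustration:

/*
 * Sketch only: disable preemption just around the cmpxchg_local()
 * and recheck the cpu id afterwards.  get_cpu_slab(), c->freelist
 * and c->offset are assumed to match the SLUB per-cpu structures
 * under discussion.
 */
static void *slab_alloc_fastpath_sketch(struct kmem_cache *s)
{
	struct kmem_cache_cpu *c;
	void **object;
	int cpu;

redo:
	/* Read the cpu id without disabling preemption. */
	cpu = raw_smp_processor_id();
	c = get_cpu_slab(s, cpu);
	object = c->freelist;
	if (unlikely(!object))
		return NULL;		/* take the slow path instead */

	/*
	 * Keep the preempt-disabled section as short as possible:
	 * if we migrated since reading the cpu id, our per-cpu
	 * state is stale, so drop it and redo.
	 */
	preempt_disable();
	if (unlikely(cpu != smp_processor_id())) {
		preempt_enable();
		goto redo;
	}
	if (cmpxchg_local(&c->freelist, object, object[c->offset]) != object) {
		preempt_enable();
		goto redo;
	}
	preempt_enable();
	return object;
}

The extra smp_processor_id() comparison and the redo label are exactly the
added fast-path complexity mentioned above; the upside is that the
preempt-disabled window shrinks to the cmpxchg_local() itself.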