On Tue, Aug 20, 2013 at 03:52:00PM +0100, Ezequiel Garcia wrote:
> On Tue, Aug 20, 2013 at 09:32:13AM -0500, Matt Sealey wrote:
> > On Mon, Aug 19, 2013 at 11:59 AM, Ezequiel Garcia
> > <ezequiel.gar...@free-electrons.com> wrote:
> > > On Mon, Aug 12, 2013 at 07:29:42PM +0100, Will Deacon wrote:
> > >> I suggest adding an iowmb after the writel if you really need this
> > >> ordering to be enforced (but this may have a significant performance
> > >> impact, depending on your SoC).
> > >
> > > I don't want to argue with you, given I have zero knowledge about this
> > > ordering issue. However, let me ask you a question.
> > >
> > > In arch/arm/include/asm/spinlock.h I'm seeing this comment:
> > >
> > >   "ARMv6 ticket-based spin-locking.
> > >    A memory barrier is required after we get a lock, and before we
> > >    release it, because V6 CPUs are assumed to have weakly ordered
> > >    memory."
> > >
> > > and also:
> > >
> > >   static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > >   {
> > >           smp_mb();
> > >           lock->tickets.owner++;
> > >           dsb_sev();
> > >   }
> > >
> > > So, knowing this atomic API should work for every ARMv{N}, and not
> > > being very sure what the call to dsb_sev() does, would you care to
> > > explain how the above is *not* enough to guarantee a memory barrier
> > > before the spin unlocking?
> >
> > arch_spin_[un]lock as an API is not guaranteed to use a barrier before
> > or after doing anything, even if this particular implementation does.
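To put Matt's point in concrete terms, the suggestion quoted above boils
down to something like the sketch below. The "foo" names, the register
offset and the lock are all made up for illustration, and wmb() is used
here as a stand-in for the explicit write barrier being suggested; the
point is only that the driver asks for the ordering itself rather than
relying on whatever barriers a particular arch_spin_unlock() contains:

        /* Illustrative only: the "foo" names and 0x04 offset are invented. */
        #include <linux/io.h>
        #include <linux/spinlock.h>

        #define FOO_DATA        0x04            /* hypothetical device register */

        static DEFINE_SPINLOCK(foo_lock);

        static void foo_kick(void __iomem *base, u32 val)
        {
                unsigned long flags;

                spin_lock_irqsave(&foo_lock, flags);

                writel(val, base + FOO_DATA);   /* write to Device memory */

                /*
                 * Explicitly order the MMIO write before the Normal-memory
                 * store that releases the lock, instead of relying on the
                 * barriers the spinlock implementation happens to contain.
                 */
                wmb();

                spin_unlock_irqrestore(&foo_lock, flags);
        }

The cost is a heavyweight barrier on every call, which is the performance
impact mentioned in the first mail.
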
[...]

> Of course. I agree completely.

Well, even if the barrier were guaranteed by the API, it's still not
sufficient to ensure ordering between two different memory types. For
example, on Cortex-A9 with PL310, you would also need to perform an
outer_sync() operation before the unlock.
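To make that concrete, here is roughly what the fully explicit version
would look like on such an SoC. This is illustrative only, reusing made-up
"foo" names, and it is ARM-specific code, not something a portable driver
should write; portable code would just use wmb(), which on ARM is expected
to expand to a DSB plus the outer-cache sync anyway:

        /* ARM-specific illustration, not portable driver code. */
        #include <linux/io.h>
        #include <linux/spinlock.h>
        #include <asm/outercache.h>

        static void foo_kick_explicit(void __iomem *reg, u32 val, spinlock_t *lock)
        {
                spin_lock(lock);

                writel_relaxed(val, reg);       /* Device-memory write, hypothetical register */

                __asm__ __volatile__("dsb" : : : "memory");     /* CPU-side barrier */
                outer_sync();           /* also drain the PL310 (L2C-310) write buffer */

                spin_unlock(lock);      /* Normal-memory store releasing the lock */
        }

The outer_sync() step is the part that a barrier buried inside the lock
implementation does not give you on that particular SoC.

Will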