On Wed, Apr 30, 2014 at 06:40:38PM -0400, Wang, Xiaoming wrote:
> A delay loop of loops_per_jiffy * HZ iterations does not always take
> exactly one second; the duration depends on the implementation of
> __delay(). On x86, delay_tsc() in arch/x86/lib/delay.c is used as
> __delay(), which can make the loop run longer than expected, so a
> thread may fail to obtain the spin lock for a long time, which may
> trigger a HARD LOCKUP in this case.
> So we use cpu_clock() instead, which is more accurate.
> 
> Signed-off-by: Chuansheng Liu <chuansheng....@intel.com>
> Signed-off-by: xiaoming wang <xiaoming.w...@intel.com>
> ---
>  kernel/locking/spinlock_debug.c |    9 ++++++---
>  1 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
> index 0374a59..471d26c 100644
> --- a/kernel/locking/spinlock_debug.c
> +++ b/kernel/locking/spinlock_debug.c
> @@ -105,10 +105,13 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
>  
>  static void __spin_lock_debug(raw_spinlock_t *lock)
>  {
> -     u64 i;
> -     u64 loops = loops_per_jiffy * HZ;
> +     u64 t;
> +     u64 one_second = 1000000000;
> +     u32 this_cpu = raw_smp_processor_id();
> +
> +     t = cpu_clock(this_cpu);
>  
> -     for (i = 0; i < loops; i++) {
> +     while (cpu_clock(this_cpu) - t < one_second) {
>               if (arch_spin_trylock(&lock->raw_lock))
>                       return;
>               __delay(1);

Yep, and now you've broken support for archs that fall back to jiffies
for cpu_clock :-); jiffies need not progress if you've got IRQs
disabled.