On 11/17, Paul E. McKenney wrote:
>
>  int srcu_read_lock(struct srcu_struct *sp)
>  {
>  	int idx;
> +	struct srcu_struct_array *sap;
>
>  	preempt_disable();
>  	idx = sp->completed & 0x1;
> -	barrier();  /* ensure compiler looks -once- at sp->completed. */
> -	per_cpu_ptr(sp->per_cpu_ref, smp_processor_id())->c[idx]++;
> -	srcu_barrier();  /* ensure compiler won't misorder critical section. */
> +	sap = rcu_dereference(sp->per_cpu_ref);
> +	if (likely(sap != NULL)) {
> +		barrier();  /* ensure compiler looks -once- at sp->completed. */
> +		per_cpu_ptr(rcu_dereference(sap),
> +			    smp_processor_id())->c[idx]++;
> +		smp_mb();
> +		preempt_enable();
> +		return idx;
> +	}
> +	if (mutex_trylock(&sp->mutex)) {
> +		preempt_enable();
> +		if (sp->per_cpu_ref == NULL)
> +			sp->per_cpu_ref = alloc_srcu_struct_percpu();
> +		if (sp->per_cpu_ref == NULL) {
> +			atomic_inc(&sp->hardluckref);
> +			mutex_unlock(&sp->mutex);
> +			return -1;
> +		}
> +		mutex_unlock(&sp->mutex);
> +		return srcu_read_lock(sp);
> +	}
>  	preempt_enable();
> -	return idx;
> +	atomic_inc(&sp->hardluckref);
> +	return -1;
>  }
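To make the comment below concrete, here is roughly how I read the wait-side
counting in this series. This is only my reconstruction, not a quote of the
patch; I am assuming srcu_readers_active_idx() folds sp->hardluckref into the
per-cpu sum, since that is the only place the fallback readers are visible:

	/*
	 * Sketch only (my reading of the series, not the actual patch):
	 * the per-cpu counters for this idx plus the shared hardluckref.
	 */
	static int srcu_readers_active_idx(struct srcu_struct *sp, int idx)
	{
		int cpu;
		int sum = 0;

		if (likely(sp->per_cpu_ref != NULL))
			for_each_possible_cpu(cpu)
				sum += per_cpu_ptr(sp->per_cpu_ref, cpu)->c[idx];

		/*
		 * hardluckref is not split by idx: every reader that failed
		 * to find (or allocate) per_cpu_ref is charged to whichever
		 * grace period happens to be waiting right now.
		 */
		sum += atomic_read(&sp->hardluckref);
		return sum;
	}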
This is a real nitpick, but in theory we have a possibility of livelock.
Suppose that synchronize_srcu() takes sp->mutex and fails to allocate
sp->per_cpu_ref. If there is a steady flow of srcu_read_lock()/srcu_read_unlock()
calls, this loop in synchronize_srcu()

	while (srcu_readers_active_idx(sp, idx))
		schedule_timeout_interruptible(1);

may spin unpredictably long, because we use the same sp->hardluckref for
the accounting.

Oleg.