On Thu, Mar 27, 2014 at 07:06:03PM +0800, Jianyu Zhan wrote:
> Presently, after we fail the first attempt to walk the pcpu_slot list
> to find a chunk to allocate from, we drop the pcpu_lock spinlock and
> go allocate a new chunk. Then we re-take pcpu_lock and, hoping that
> during this window somebody has freed space for us (we still hold
> pcpu_alloc_mutex throughout, so only freeing or reclaiming can
> happen), we do a full rewalk of the pcpu_slot list.
>
> However, if nobody freed space, this full rewalk is wasted work, and
> we eventually fall back to the new chunk anyway.
>
> Since we hold pcpu_alloc_mutex, only the freeing and reclaiming paths
> can touch pcpu_slot (which requires only pcpu_lock), so we can
> maintain a pcpu_slot_stat bitmap recording whether, during the window
> in which we did not hold pcpu_lock, anybody freed space into any slot
> we are interested in. If so, we retry allocation from just those
> slots; if not, we allocate from the newly-allocated, fully-free chunk.
The patch probably needs to be refreshed on top of percpu/for-3.15.

Hmmm... I'm not sure whether the added complexity is worthwhile. It's
a fairly cold path. Can you show how helpful this optimization is?

Thanks.

-- 
tejun