On Tue, Feb 11, 2025 at 09:21:38AM +0100, Sebastian Andrzej Siewior wrote:

> So with LAZY_PREEMPT (not the one that was merged upstream, but its
> predecessor) we had a counter similar to the preemption counter. On each
> rt_spin_lock() the counter was incremented, and on each rt_spin_unlock()
> it was decremented. Once the counter hit zero and the lazy preempt flag
> was set (which was only set on schedule requests by SCHED_OTHER tasks),
> we scheduled.
> This improved performance because we didn't schedule() while holding a
> spinlock_t, only to have the next task bump into the same lock.
> 
> We don't follow this behaviour exactly today.
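
Purely as an illustration of the counter scheme described above, a minimal,
self-contained C sketch follows. It is not the RT-patch code; the names
(lazy_lock_acquire(), lazy_lock_release(), need_resched_lazy) and the
single-counter, single-CPU model are assumptions made for this example.

#include <stdbool.h>
#include <stdio.h>

static int lazy_lock_count;     /* how many rt_spin_lock()s are held */
static bool need_resched_lazy;  /* set only for SCHED_OTHER resched requests */

static void schedule(void)
{
	puts("schedule()");
	need_resched_lazy = false;
}

/* would be called from rt_spin_lock(): defer lazy rescheduling while
 * any sleeping "spinlock" is held */
static void lazy_lock_acquire(void)
{
	lazy_lock_count++;
}

/* would be called from rt_spin_unlock(): once the last lock is dropped,
 * honour a pending lazy resched request instead of scheduling in the
 * middle of a critical section */
static void lazy_lock_release(void)
{
	if (--lazy_lock_count == 0 && need_resched_lazy)
		schedule();
}

int main(void)
{
	lazy_lock_acquire();
	lazy_lock_acquire();
	need_resched_lazy = true;  /* SCHED_OTHER task asks for a reschedule */
	lazy_lock_release();       /* still holding one lock: no schedule() */
	lazy_lock_release();       /* count hits zero: schedule() runs now */
	return 0;
}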

I think I sent some hackery Mike's way to implement that at some point.

IIRC it wasn't an obvious win. Anyway, it's not too hard to do.
