Hi,

On Sat, 2024-04-20 at 13:22 +0100, Anton Ivanov wrote:
> On 19/04/2024 14:47, Benjamin Berg wrote:
> > Hi,
> > 
> > On Wed, 2024-04-03 at 07:27 +0100, anton.iva...@cambridgegreys.com
> > wrote:
> > > From: Anton Ivanov <anton.iva...@cambridgegreys.com>
> > > 
> > > 1. Preemption requires saving/restoring FPU state. This patch
> > > adds support for it using GCC intrinsics as well as appropriate
> > > storage space in the thread structure. We reuse the space
> > > which is already allocated for the userspace threads in the
> > > thread_info structure.
> > 
> > Do we need to store the FPU state even?
> 
> Everybody else does :)
As far as I can tell, not really. The way I understand the x86 code is
that it tries to leave the FPU untouched in the hope that it will
return to the same userspace task. However, if the kernel starts using
the FPU, then it saves the FPU registers into the current task struct
and sets a flag that the FPU state needs to be restored when returning
to userspace. See TIF_NEED_FPU_LOAD, switch_fpu_prepare and
kernel_fpu_begin_mask.

Multiple things confirm this for me:
 * storing only happens if "current" is not a KTHREAD or USER_WORKER
   (i.e. a proper userspace task, e.g. one that is in a syscall, not a
   kernel thread)
 * we set TIF_NEED_FPU_LOAD, which is documented with "load FPU on
   return to userspace"
 * the in_kernel_fpu per-CPU boolean is set and must be unset later
 * __schedule must only be called with preemption disabled
 * switch_fpu_prepare does not save FPU registers for KTHREAD or
   USER_WORKER either

So, for me, all the effort is purely an optimization for the common
case that a userspace process switches into kernel space for a syscall
(or pagefault), and we come back to the same process afterwards
without scheduling. In that case, we can just leave the FPU registers
entirely untouched and everything is great.

But that case is irrelevant for us, because we always store the FPU
registers of userspace processes anyway (equivalent to always setting
the TIF_NEED_FPU_LOAD flag). And for kernel tasks we never do that,
but neither does x86.

So, yeah, the more I look at it, the more certain I am that we should
basically just do:

#define kernel_fpu_begin preempt_disable
#define kernel_fpu_end preempt_enable

And, of course, add the preempt_* in various locations as you already
have done.

Benjamin

> > After all, we cannot simply
> > overwrite userspace FPU registers in UML.
> 
> Correct, but you can have kernel threads as well.
> 
> > Seriously wondering about that.
> > It seems to me it would be sufficient
> > to ensure that only one kernel task is in a
> > kern_fpu_begin()/kern_fpu_end() section at any point in time.
> 
> So what happens if you have a task which wants fpu and cannot get it
> at this point?
> 
> > Now, I don't know how preempt_disable()/preempt_enable() behaves
> > exactly. But I would assume it makes such a guarantee and then we
> > can simply map kern_fpu_begin to preempt_disable and kern_fpu_end
> > to preempt_enable.
> > 
> > > 2. irq critical sections need preempt_disable()/preempt_enable().
> 
> Yes. Otherwise RAID, crypto, ipsec can mess up things.
> 
> > > 3. TLB critical sections need preempt_disable()/preempt_enable().
> 
> I think I added those.
> 
> > > 4. UML TLB flush is also invoked during a fork. This happens
> > > with interrupts and preempt disabled which disagrees with the
> > > standard mm locking via rwsem. The mm lock for this code path
> > > had to be replaced with an rcu.
> > 
> > Hopefully that flush during fork will be gone soon :-)
> 
> Hurrah!!!
> 
> > Benjamin
> > 
> > [...]