On January 13, 2021 10:08:26 AM UTC, Emmanuel Vadot <m...@bidouilliste.com> 
wrote:
>On Tue, 12 Jan 2021 15:16:55 +0200
>Konstantin Belousov <kostik...@gmail.com> wrote:
>
>> On Tue, Jan 12, 2021 at 11:43:00AM +0000, Emmanuel Vadot wrote:
>> > The branch main has been updated by manu:
>> > 
>> > URL: 
>> > https://cgit.FreeBSD.org/src/commit/?id=11d62b6f31ab4e99df6d0c6c23406b57eaa37f41
>> > 
>> > commit 11d62b6f31ab4e99df6d0c6c23406b57eaa37f41
>> > Author:     Emmanuel Vadot <m...@freebsd.org>
>> > AuthorDate: 2021-01-12 11:02:38 +0000
>> > Commit:     Emmanuel Vadot <m...@freebsd.org>
>> > CommitDate: 2021-01-12 11:31:00 +0000
>> > 
>> >     linuxkpi: add kernel_fpu_begin/kernel_fpu_end
>> >     
>> >     With newer AMD GPUs (>=Navi,Renoir) there is FPU context usage in the
>> >     amdgpu driver.
>> >     The `kernel_fpu_begin/end` implementations in drm did not even allow 
>> > nested
>> >     begin-end blocks.
>> 
>> Does Linux allow more than one thread to execute kernel_fpu_begin?
>
> I actually have no idea, adding Greg to cc.

Looks like they save the FPU context into the current thread's state, so yes? 
(drm doesn't need that.)
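
For reference, here's roughly what their x86 implementation does (paraphrased 
from arch/x86/kernel/fpu/core.c as of ~5.10; details shift between versions, 
so treat this as a sketch, not the exact code):

void kernel_fpu_begin(void)
{
        preempt_disable();

        WARN_ON_FPU(!irq_fpu_usable());
        WARN_ON_FPU(this_cpu_read(in_kernel_fpu)); /* nested begin warns */

        this_cpu_write(in_kernel_fpu, true);

        if (!(current->flags & PF_KTHREAD) &&
            !test_thread_flag(TIF_NEED_FPU_LOAD)) {
                set_thread_flag(TIF_NEED_FPU_LOAD);
                /* user FPU state is saved into the task's own struct fpu */
                copy_fpregs_to_fpstate(&current->thread.fpu);
        }
        __cpu_invalidate_fpregs_state();
        /* (re-initialization of MXCSR etc. follows here) */
}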

They also seem to do something FPU_KERN_NOCTX-like (??), since they disable 
preemption inside these blocks.
(Where does our NOCTX actually store the state?)
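
On our side, I'd expect the new linuxkpi shim to boil down to something like 
this on amd64 (assumed shape only; the commit has the real code):

static inline void
kernel_fpu_begin(void)
{
        /*
         * FPU_KERN_NOCTX: no fpu_kern_ctx is supplied, and this enters a
         * critical section, so no sleeping inside the block.
         */
        fpu_kern_enter(curthread, NULL, FPU_KERN_NOCTX);
}

static inline void
kernel_fpu_end(void)
{
        fpu_kern_leave(curthread, NULL);
}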

Apparently the API isn't actually supposed to support nesting these sections, 
because kernel_fpu_end unconditionally re-enables preemption. But nesting 
doesn't outright panic the way it did with our old implementation. (Indeed, 
the newest Linux versions have a WARN on nesting, but it's only a warning, 
not a panic!)
And amdgpu relies on that not panicking.
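
Their kernel_fpu_end is just this (again paraphrased from 
arch/x86/kernel/fpu/core.c):

void kernel_fpu_end(void)
{
        /* fires on an end without a begin; the nesting WARN is in begin */
        WARN_ON_FPU(!this_cpu_read(in_kernel_fpu));

        this_cpu_write(in_kernel_fpu, false);
        preempt_enable(); /* unconditional, hence no real nesting support */
}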

Looks like amdgpu will narrow the FPU sections down now; they are struggling 
with the FPU handling upstream too:
https://www.spinics.net/lists/kernel/msg3793776.html