From: Yandong Zhao
It does not matter if the caller of may_use_simd() migrates to
another cpu after the call, but it is still important that the
kernel_neon_busy percpu instance that is read matches the cpu the
task is running on at the time of the read.
This means that raw_cpu_read() is not sufficient.
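For reference, a minimal sketch of the check being discussed, assuming the
arm64 definitions of the time (kernel_neon_busy is a per-cpu bool set by
kernel_neon_begin() with preemption disabled); this is an illustration, not
the exact patch:

#include <linux/hardirq.h>
#include <linux/irqflags.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

DECLARE_PER_CPU(bool, kernel_neon_busy);

static __must_check inline bool may_use_simd(void)
{
	/*
	 * this_cpu_read() is atomic with respect to preemption, so the
	 * value read is guaranteed to belong to the cpu the task is
	 * running on at the time of the read.  raw_cpu_read() gives no
	 * such guarantee.
	 */
	return !in_irq() && !irqs_disabled() && !in_nmi() &&
	       !this_cpu_read(kernel_neon_busy);
}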
From: Yandong Zhao
raw_cpu_*() operations are for contexts where we do not want any
checks for preemption; unless strictly necessary, this_cpu_read()
should always be used instead. Because kernel_neon_busy must be read
on the cpu the task is currently running on, the preemption-safe
accessor is needed here.
Signed-off-by: Yandong Zhao
---
arch/arm64/in
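To illustrate the difference described above, here is a hedged sketch;
neon_busy_racy() and neon_busy_safe() are hypothetical helpers, and the
preempt_disable()/preempt_enable() pair mirrors the generic this_cpu_read()
fallback rather than any particular arm64 expansion:

#include <linux/percpu.h>
#include <linux/preempt.h>

DECLARE_PER_CPU(bool, kernel_neon_busy);

/* No preemption protection: the per-cpu address may be resolved on one
 * cpu and the load performed after the task has migrated to another.
 */
static inline bool neon_busy_racy(void)
{
	return raw_cpu_read(kernel_neon_busy);
}

/* Preemption-safe: address resolution and load are tied to the same
 * cpu, which is what the generic this_cpu_read() fallback guarantees.
 */
static inline bool neon_busy_safe(void)
{
	bool busy;

	preempt_disable();
	busy = raw_cpu_read(kernel_neon_busy);
	preempt_enable();

	return busy;
}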
From: Yandong Zhao
Dear Dave,
The scenario for this bug is:
Process A is scheduled out while CPU0 is executing
raw_cpu_read(kernel_neon_busy): it has obtained the address of CPU0's
kernel_neon_busy but has not yet performed the read.
Process B then starts running kernel_neon_begin() on CPU0, which sets
CPU0's kernel_neon_busy. If A is later migrated to another cpu and
completes the read, it sees CPU0's stale value rather than the flag of
the cpu it is now running on.
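A commented sketch of that interleaving (an illustration of the scenario,
not a trace):

/*
 *   task A on CPU0                      task B on CPU0
 *   --------------                      --------------
 *   may_use_simd()
 *     raw_cpu_read(kernel_neon_busy)
 *       resolves CPU0's address of
 *       kernel_neon_busy, no load yet
 *   <A is scheduled out>
 *                                       kernel_neon_begin()
 *                                         sets CPU0's kernel_neon_busy
 *   <A resumes on another cpu and
 *    completes the load from the
 *    stale CPU0 address>
 *     may_use_simd() returns false,
 *     so a caller's BUG_ON(!may_use_simd())
 *     fires even though SIMD is usable
 *     on the cpu A now runs on
 */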
From: Yandong Zhao
may_use_simd() can be called from any context and accesses
kernel_neon_busy, for example via BUG_ON(!may_use_simd()). This patch
ensures that migration cannot occur while kernel_neon_busy is being
accessed.
Signed-off-by: Yandong Zhao
---
arch/arm64/include/asm/simd.h | 16 ---
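For context, the usual caller pattern around may_use_simd() looks roughly
like this; do_neon_fast() and do_scalar_fallback() are hypothetical
placeholders, and some callers assert BUG_ON(!may_use_simd()) instead of
falling back, which is why a spurious false return is fatal:

#include <asm/neon.h>
#include <asm/simd.h>

static void do_neon_fast(void);	/* hypothetical NEON-accelerated path */
static void do_scalar_fallback(void);	/* hypothetical non-SIMD fallback */

static void do_work(void)
{
	if (may_use_simd()) {
		kernel_neon_begin();	/* sets this cpu's kernel_neon_busy */
		do_neon_fast();
		kernel_neon_end();	/* clears the flag again */
	} else {
		do_scalar_fallback();
	}
}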