On Thu, 2015-10-29 at 11:44 +1100, Anton Blanchard wrote:
>
> +extern void msr_check_and_set(unsigned long bits);
> +extern bool strict_msr_control;
> +extern void __msr_check_and_clear(unsigned long bits);
> +static inline void msr_check_and_clear(unsigned long bits)
> +{
> +	if (strict_msr_control)
> +		__msr_check_and_clear(bits);
> +	else
> +		mtmsr_isync(mfmsr() & ~(bits));
> +}
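(The hunk above is truncated in the archive, but the intent follows from the declarations: with strict control enabled, clearing goes through __msr_check_and_clear(). For reference, a kernel facility user would then bracket its FP use roughly like the sketch below; the call-site shape, MSR_FP and the preempt handling are my illustration, not lines from the patch.)

	/* Hypothetical call site, not part of the patch */
	static void example_kernel_fp_section(void)
	{
		preempt_disable();
		msr_check_and_set(MSR_FP);	/* explicitly enable FP in the MSR */

		/* ... kernel code that touches FP registers ... */

		msr_check_and_clear(MSR_FP);	/* and drop it again when done */
		preempt_enable();
	}

As I read it, with the strict option on any FP/Altivec/SPE use that is not bracketed like this should fault instead of silently trampling the user's register state, which is how the changelog's "catches kernel uses" plays out.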
Add a boot option that strictly manages the MSR unavailable bits.
This catches kernel uses of FP/Altivec/SPE that would otherwise
corrupt user state.
Signed-off-by: Anton Blanchard
---
 Documentation/kernel-parameters.txt |  6 ++
 arch/powerpc/include/asm/reg.h      |  9 +
 arch/pow
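(The diffstat is cut off above. For context, the boot option side of this presumably amounts to something like the following wiring for the strict_msr_control flag declared in reg.h; the parameter name and message below are placeholders of mine, not copied from the Documentation/kernel-parameters.txt hunk.)

	/* Sketch only: names other than strict_msr_control are guesses */
	bool strict_msr_control;
	EXPORT_SYMBOL(strict_msr_control);

	static int __init enable_strict_msr_control(char *str)
	{
		strict_msr_control = true;
		pr_info("Enabling strict facility control\n");
		return 0;
	}
	early_param("ppc_strict_facility_enable", enable_strict_msr_control);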