On 10/26/2018 10:21 AM, Waiman Long wrote:
> On 10/17/2018 01:59 PM, Tim Chen wrote:
>> Reorganize the speculation control MSR update code. Currently it is limited
>> to only dynamic update of the Speculative Store Bypass Disable bit.
>> This patch consolidates the logic to check for AMD CPUs that may or may
>> not use this MSR to control SSBD. This prepares us to add logic to update
>> other bits in the SPEC_CTRL MSR cleanly.
>>
>> Originally-by: Thomas Lendacky <thomas.lenda...@amd.com>
>> Signed-off-by: Tim Chen <tim.c.c...@linux.intel.com>
>> ---
>>  arch/x86/kernel/process.c | 39 +++++++++++++++++++++++++++++----------
>>  1 file changed, 29 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
>> index 8aa4960..789f1bada 100644
>> --- a/arch/x86/kernel/process.c
>> +++ b/arch/x86/kernel/process.c
>> @@ -397,25 +397,45 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
>>
>>  static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
>>  {
>> -	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
>> +	u64 msr = x86_spec_ctrl_base;
>> +
>> +	/*
>> +	 * If X86_FEATURE_SSBD is not set, the SSBD
>> +	 * bit is not to be touched.
>> +	 */
>> +	if (static_cpu_has(X86_FEATURE_SSBD))
>> +		msr |= ssbd_tif_to_spec_ctrl(tifn);
>>
>>  	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>>  }
>>
>> -static __always_inline void __speculation_ctrl_update(unsigned long tifn)
>> +static __always_inline void __speculation_ctrl_update(unsigned long tifp,
>> +						      unsigned long tifn)
>
> I think it will be more intuitive to pass in (tifp ^ tifn) as a bitmask
> of changed TIF bits rather than tifp alone, as you are only interested
> in the changed bits anyway.
I would then need to pass the bitmask plus tifn, so I still need to pass
two parameters, and it is a bit more work for the caller to generate the
bitmask. It is not obvious to me that it is better.

> Please also document the input parameters, as it is hard to know what
> they are by reading the function alone.

Sure.

> Cheers,
> Longman

>>  {
>> -	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
>> -		amd_set_ssb_virt_state(tifn);
>> -	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>> -		amd_set_core_ssb_state(tifn);
>> -	else
>> +	bool updmsr = false;
>> +
>> +	/* Check for AMD cpu to see if it uses SPEC_CTRL MSR for SSBD */
>> +	if ((tifp ^ tifn) & _TIF_SSBD) {
>> +		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
>> +			amd_set_ssb_virt_state(tifn);
>> +		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>> +			amd_set_core_ssb_state(tifn);
>> +		else if (static_cpu_has(X86_FEATURE_SSBD))
>> +			updmsr = true;
>> +	}
>> +
>> +	if (updmsr)
>>  		spec_ctrl_update_msr(tifn);
>>  }
>>
>>  void speculation_ctrl_update(unsigned long tif)
>>  {
>> +	/*
>> +	 * On this path we're forcing the update, so use ~tif as the
>> +	 * previous flags.
>> +	 */
>>  	preempt_disable();
>> -	__speculation_ctrl_update(tif);
>> +	__speculation_ctrl_update(~tif, tif);
>>  	preempt_enable();
>>  }
>>
>> @@ -451,8 +471,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>>  	if ((tifp ^ tifn) & _TIF_NOCPUID)
>>  		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>>
>> -	if ((tifp ^ tifn) & _TIF_SSBD)
>> -		__speculation_ctrl_update(tifn);
>> +	__speculation_ctrl_update(tifp, tifn);
>>  }
>>
>>  /*