On 05.08.2019 09:36, Chao Gao wrote:
> On Fri, Aug 02, 2019 at 03:40:55PM +0000, Jan Beulich wrote:
>> On 01.08.2019 12:22, Chao Gao wrote:
>>> --- a/xen/arch/x86/microcode.c
>>> +++ b/xen/arch/x86/microcode.c
>>> @@ -189,12 +189,20 @@ static DEFINE_SPINLOCK(microcode_mutex);
>>>    
>>>    DEFINE_PER_CPU(struct cpu_signature, cpu_sig);
>>>    
>>> -struct microcode_info {
>>> -    unsigned int cpu;
>>> -    uint32_t buffer_size;
>>> -    int error;
>>> -    char buffer[1];
>>> -};
>>> +/*
>>> + * Return a patch that covers the current CPU. If there are multiple
>>> + * patches, return the one with the highest revision number. Return an
>>> + * error if no patch is found and an error occurs during parsing.
>>> + * Otherwise return NULL.
>>> + */
>>> +static struct microcode_patch *microcode_parse_blob(const char *buf,
>>> +                                                    uint32_t len)
>>
>> Btw - you'd have fewer issues with line length if you omitted the
>> "microcode_" prefix from static functions.
>>
>>> @@ -250,49 +251,88 @@ bool microcode_update_cache(struct microcode_patch *patch)
>>>        return true;
>>>    }
>>>    
>>> -static int microcode_update_cpu(const void *buf, size_t size)
>>> +/*
>>> + * Load a microcode update on the current CPU.
>>> + *
>>> + * If no patch is provided, the cached patch will be loaded. Microcode
>>> + * updates during AP bringup and CPU resume fall into this case.
>>> + */
>>> +static int microcode_update_cpu(const struct microcode_patch *patch)
>>>    {
>>> -    int err;
>>> -    unsigned int cpu = smp_processor_id();
>>> -    struct cpu_signature *sig = &per_cpu(cpu_sig, cpu);
>>> +    int err = microcode_ops->collect_cpu_info(&this_cpu(cpu_sig));
>>> +
>>> +    if ( unlikely(err) )
>>> +        return err;
>>>    
>>>        spin_lock(&microcode_mutex);
>>>    
>>> -    err = microcode_ops->collect_cpu_info(sig);
>>> -    if ( likely(!err) )
>>> -        err = microcode_ops->cpu_request_microcode(buf, size);
>>> +    if ( patch )
>>> +    {
>>> +        /*
>>> +         * If a patch is specified, it should have a newer revision than
>>> +         * that of the patch cached.
>>> +         */
>>> +        if ( microcode_cache &&
>>> +             microcode_ops->compare_patch(patch, microcode_cache) != NEW_UCODE )
>>
>> While I see that you've taken care of the one case in Intel-specific
>> code, this is again a case where I don't think you can validly call
>> this hook in the Intel case. Albeit maybe it is okay, provided that
>> the caller has already verified it against the CPU signature. Then
>> again I wonder why this check gets done here rather than in the
>> caller (next to that other check) anyway. Afaics this way you'd
>> also avoid re-checking a CPU-independent property on every CPU.
> 
> As said in an earlier reply to patch 6, ->compare_patch can
> be simplified a lot. Do you think it is fine to call it here
> with that change?

As said there - yes, this looks to be an option.
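
For concreteness, a minimal sketch of what such a simplified hook could
look like (assuming, per that sub-thread, both patches have already been
verified to match the current CPU; the 'rev' field name is illustrative,
not the actual structure layout):

/* Hypothetical simplified ->compare_patch(): with both patches already
 * matched against the CPU signature, only revisions need comparing. */
static enum microcode_match_result compare_patch(
    const struct microcode_patch *new, const struct microcode_patch *old)
{
    return new->rev > old->rev ? NEW_UCODE : OLD_UCODE;
}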

> About avoiding re-checking: the check has to be done with "microcode_mutex"
> held, otherwise we cannot ensure nobody is updating the cache. Checking
> only once would mean holding the lock for a long time, from the load on
> the first core until the load on the last core. Also, for both early
> loading and late loading, we pass NULL to this function on the CPUs
> following the first successful load. I am afraid there is no redundant
> checking.
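
For reference, the calling pattern being described is roughly the
following (the wrapper and its control flow are a sketch, not the actual
series code):

/* Per-CPU update step: only the first CPU passes the real patch (which
 * microcode_update_cpu() then caches); every later CPU passes NULL and
 * loads from microcode_cache, so the compare_patch() check above runs
 * only once per update request. */
static int do_update_one_cpu(const struct microcode_patch *patch,
                             bool first_load_done)
{
    return microcode_update_cpu(first_load_done ? NULL : patch);
}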

There should not be any updating of the cache while a (system-wide)
ucode load operation is in progress, or else you risk leaving the
system in a "mixed ucode versions" state in the end. As said, also
from a logical flow-of-events perspective it seems to me that it should
be the caller that validates applicability of the patch before calling
here. But as also said, I may well be missing some important aspect
here.
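
A minimal sketch of that alternative (the ucode_load_all() wrapper and
the rendezvous helper are hypothetical; microcode_mutex, microcode_cache,
->compare_patch and NEW_UCODE are as in the patch):

static int ucode_load_all(const struct microcode_patch *patch)
{
    bool applicable;

    /* Validate the CPU-independent property once, up front. */
    spin_lock(&microcode_mutex);
    applicable = !microcode_cache ||
                 microcode_ops->compare_patch(patch,
                                              microcode_cache) == NEW_UCODE;
    spin_unlock(&microcode_mutex);

    if ( !applicable )
        return -EEXIST;

    /* Per-CPU loads then need no revision re-check. */
    return load_on_all_cpus(patch); /* hypothetical rendezvous helper */
}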

Jan