On 09/02/2021 17:17, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH for-4.15] x86/ucode: Fix microcode payload 
> size for Fam19 processors"):
>> On 09.02.2021 16:33, Andrew Cooper wrote:
>>> The original limit provided wasn't accurate.  Blobs are in fact rather 
>>> larger.
>>>
>>> Fixes: fe36a173d1 ("x86/amd: Initial support for Fam19h processors")
>>> Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>
>> Acked-by: Jan Beulich <jbeul...@suse.com>
>>
>>> --- a/xen/arch/x86/cpu/microcode/amd.c
>>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>>> @@ -111,7 +111,7 @@ static bool verify_patch_size(uint32_t patch_size)
>>>  #define F15H_MPB_MAX_SIZE 4096
>>>  #define F16H_MPB_MAX_SIZE 3458
>>>  #define F17H_MPB_MAX_SIZE 3200
>>> -#define F19H_MPB_MAX_SIZE 4800
>>> +#define F19H_MPB_MAX_SIZE 5568
>> How certain is it that there's not going to be another increase?
>> And in comparison, how bad would it be if we pulled this upper
>> limit to something that's at least slightly less of an "odd"
>> number, e.g. 0x1800, and thus provide some headroom?
> 5568 does seem really an excessively magic number...

It reads better in hex - 0x15c0.  It is exactly the size of the header,
the match/patch RAM, and the digital signature of a fixed algorithm.
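For the arithmetic being discussed (the decimal/hex equivalence, and the
headroom Jan's rounder 0x1800 bound would add), a purely illustrative
sketch - the constant names below are mine, not Xen's:

```c
#include <stdint.h>

/* Limits discussed in the thread (illustrative constants only). */
static const uint32_t f19h_mpb_max_size = 5568;   /* == 0x15c0 */
static const uint32_t rounded_bound     = 0x1800; /* == 6144 */

/* Headroom gained by rounding the limit up to 0x1800. */
static inline uint32_t headroom(void)
{
    return rounded_bound - f19h_mpb_max_size;     /* 576 bytes */
}
```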

It's far simpler than Intel's format, which contains multiple embedded
blobs, and support for minor platform variations within the same blob.

There are a lot of problems with how we do patch verification on AMD
right now, but they're all a consequence of the header not containing a
length field.
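Because the header carries no length, an upper bound per family is the
only sanity check available.  A simplified sketch of that pattern, in
the style of the verify_patch_size() function quoted in the diff - the
function name and structure here are mine, not the exact Xen code:

```c
#include <stdbool.h>
#include <stdint.h>

#define F19H_MPB_MAX_SIZE 5568  /* the value from the patch above */

/* Illustrative only: family-keyed upper-bound check.  Real Xen code
 * handles many more families and other validation besides. */
static bool patch_size_plausible(unsigned int family, uint32_t patch_size)
{
    uint32_t max;

    switch ( family )
    {
    case 0x19:
        max = F19H_MPB_MAX_SIZE;
        break;
    default:
        return false;  /* Unknown family: reject outright. */
    }

    /* With no length field in the header, an upper bound is the best
     * sanity check we can apply to the blob size. */
    return patch_size <= max;
}
```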

This number won't change now.  Zen3 processors are out in the world.

~Andrew