On Sat, Feb 27, 2016 at 06:51:18PM +0100, Borislav Petkov wrote:
> On Sat, Feb 27, 2016 at 11:01:47AM +0000, Chris Bainbridge wrote:
> > Checksum should be unsigned 32-bit otherwise the calculation overflows
> > resulting in undefined behaviour:
> >
> > [ 0.000000] ================================================================================
> > [ 0.000000] UBSAN: Undefined behaviour in arch/x86/kernel/cpu/microcode/intel_lib.c:105:12
> > [ 0.000000] signed integer overflow:
> > [ 0.000000] -1500151068 + -2125470173 cannot be represented in type 'int'
>
> So I've been staring at this error message for a while now and I'm
> having a hard time understanding what it is telling me. Let's look at
> all three ints:
>
> * sum is computed here:
>
> 	sum = orig_sum
> 		- (mc_header->sig + mc_header->pf + mc_header->cksum)
> 		+ (ext_sig->sig + ext_sig->pf + ext_sig->cksum);
>
> and checks mc_header against each extended signature. If it overflows, we
> abort:
>
> 	if (sum) {
> 		if (print_err)
> 			pr_err("aborting, bad checksum\n");
>
> * orig_sum in the above sum can only be 0:
>
> 	if (orig_sum) {
> 		if (print_err)
> 			pr_err("aborting, bad checksum\n");
> 		return -EINVAL;
>
> and it adds a bunch of integers which can overflow, sure, but if it
> overflows, we exit early.
>
> * and
>
> 	ext_sigcount = ext_header->count;
>
> so it is some count of extended signatures.
>
> So what is ubsan complaining about?
/* calculate the checksum */
orig_sum = 0;
i = (MC_HEADER_SIZE + data_size) / DWSIZE;
while (i--)
	orig_sum += ((int *)mc)[i];

The checksum is the additive sum of all the 32-bit values, but 32-bit
addition overflow is undefined for signed ints. For comparison, the
checksum function from iucode-tool uses u32:

intel_ucode_status_t intel_ucode_check_microcode(const void * const uc,
						 int strict)
{
	...
	uint32_t sum, orig_sum;
	unsigned int i;
	uint32_t *p;
	...
	/* Calculate the checksum. We exclude the extended table as it
	 * also has to have a zero checksum, in order to get better
	 * coverage */
	orig_sum = 0;
	i = (INTEL_UC_V1_HEADER_SIZE + data_size) / sizeof(uint32_t);
	p = (uint32_t *)uc;
	while (i--) {
		orig_sum += *p;
		p++;
	}