https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91373
--- Comment #2 from Qiang <qiang.fu at verisilicon dot com> ---
Hi Andrew,

Thank you for your quick reply. I still have some questions about this issue.

It is very natural to write code like the following. All arguments are declared as 'U16', and the return type is 'U32':

  U32 foo(U16 d1, U16 d2)
  {
      U32 data2 = d1 * d2;
      printf("data2: 0x%08x, data2 >> 31: %d, data2 >> 30: %d\n",
             data2, data2 >> 31, data2 >> 30);
      return data2;
  }

It works with older compilers (gcc 4.6.3 with '-O2', or VS2015), and also with gcc 5.4.0 and 6.2.0 at '-O0' or '-O1'. But it fails with gcc 5.4.0 and 6.2.0 at '-O2'. If GCC must follow the standard's rule here, the result should not depend on the optimization level. Why does it produce different results at different optimization levels?

Even if the 'U16' multiplication can overflow, it is natural for the user to expect GCC to treat the operands as 'U32', since the return type is 'U32'. The following code works, but isn't it a burden that the tool requires the user to add explicit casts to satisfy the implicit promotion rule?

  U32 data2 = (U32)d1 * (U32)d2;