https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86965
sandra at gcc dot gnu.org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |sandra at gcc dot gnu.org
          Component|target                      |tree-optimization

--- Comment #1 from sandra at gcc dot gnu.org ---
I'm not sure what command-line options you were using, but with -O2 the
bad2 case now generates the expected code.

Looking at the bad1 case, this is what's coming out of the tree
optimizers, and what the back end has to deal with for RTL expansion:

bad1 (const signed char * str, int * res)
{
  int c;
  signed char _1;
  int _2;
  int _11;
  signed char _12;
  _Bool _13;

  <bb 2> [local count: 1073741824]:
  _1 = *str_6(D);
  c_8 = (int) _1;
  _2 = c_8 + -48;
  *res_9(D) = _2;
  _12 = _1 & -33;
  _13 = _12 == 69;
  _11 = (int) _13;
  return _11;
}

The code coming out of RTL expand is a mess too: there's no QImode "and"
instruction, it can't use the SImode "andi" instruction because that only
accepts small unsigned constants (not -33), and then it has to sign-extend
the QImode result it computed because the comparison instructions need
SImode too.

FWIW, I think the real bug here is in the tree reassoc1 pass: it
shouldn't attempt this optimization if there is no optab support for
bitwise AND in the appropriate mode.  So I'm reclassifying this as a
tree-optimization bug rather than a target bug; if the maintainers
dispute that, feel free to switch it back, and I will take another look
to see if I can do something in the back end to recombine the insns.
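
For reference, working backward from the GIMPLE above, the bad1 testcase
is presumably equivalent to something like this (my reconstruction, not
necessarily the exact source from the report):

/* Reconstructed from the GIMPLE dump: 69 is 'E' and -33 is ~0x20, so
   the _12/_13 statements are reassoc's merged, case-insensitive form
   of the two equality tests, performed on the narrow char operand.  */
int
bad1 (const signed char *str, int *res)
{
  int c = *str;
  *res = c - '0';
  return c == 'E' || c == 'e';   /* merged into (*str & -33) == 69 */
}

Note that reassoc applies the AND to _1, the signed char, rather than to
the already-widened c_8, which is why the back end ends up needing a
QImode "and".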
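
To make the suggestion concrete, what I have in mind is a guard along
these lines before reassoc merges the two equality tests into the
bitwise-AND form.  This is only a sketch: optab_handler, and_optab, and
CODE_FOR_nothing are the real internals interfaces, but the exact
placement in tree-ssa-reassoc.c and the operand variable ("exp") here
are illustrative.

  /* Proposed guard (sketch): if the target has no "and" pattern for
     the mode the operand is compared in, skip the comparison-merging
     transformation and keep the separate equality tests.  */
  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
  if (optab_handler (and_optab, mode) == CODE_FOR_nothing)
    return false;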