Hi,

If I know what I am doing, and my code itself guarantees that there will be no overflow or UB here, can I switch off this signed char to unsigned char conversion in favor of a signed char to signed int promotion?
---
With best regards,
Konstantin

On Thu, Jan 26, 2012 at 3:04 PM, Jakub Jelinek <ja...@redhat.com> wrote:
> On Thu, Jan 26, 2012 at 02:27:45PM +0400, Konstantin Vladimirov wrote:
>> Consider code:
>>
>> char A;
>> char B;
>>
>> char sum_A_B ( void )
>> {
>>   char sum = A + B;
>>
>>   return sum;
>> }
>> [repro.c : 6:8] A.0 = A;
>> [repro.c : 6:8] A.1 = (unsigned char) A.0;
>> [repro.c : 6:8] B.2 = B;
>> [repro.c : 6:8] B.3 = (unsigned char) B.2;
>> [repro.c : 6:8] D.1990 = A.1 + B.3;
>> [repro.c : 6:8] sum = (char) D.1990;
>> [repro.c : 8:3] D.1991 = sum;
>> [repro.c : 8:3] return D.1991;
>> }
>>
>> It looks really weird. Why gcc promotes char to unsigned char internally?
>
> To avoid triggering undefined behavior.
> A + B in C for char A and B is (int) A + (int) B, so either we'd have to
> promote it to int and then demote, or we just cast it to unsigned and do the
> addition in 8-bit. If we don't do that, e.g. for
> A = 127 and B = 127 we'd trigger undefined behavior of signed addition.
> In unsigned char 127 + 127 is valid.
>
>         Jakub
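[Editor's illustration, not part of the original thread: a minimal C sketch of the two lowerings Jakub describes, assuming 8-bit signed char and the modulo wraparound GCC documents for out-of-range conversions to signed types. Both forms store the same byte, which is why doing the addition in unsigned char is a legal replacement for the int promotion.]

    #include <stdio.h>

    signed char A = 127;
    signed char B = 127;

    int main(void)
    {
        /* Source-level C semantics: both operands are promoted to int, the
           addition happens in int (it cannot overflow for char inputs), and
           the result is converted back to a narrow type.  That final
           conversion is implementation-defined for out-of-range values;
           GCC wraps modulo 2^8. */
        signed char via_int = (signed char)((int)A + (int)B);

        /* A C-level re-expression of GCC's internal lowering: truncate the
           sum to unsigned char, where modulo-256 wraparound is well defined,
           then convert to signed char.  On 8-bit two's-complement targets
           this yields the same byte as the version above. */
        unsigned char wrapped = (unsigned char)((unsigned char)A + (unsigned char)B);
        signed char via_uchar = (signed char)wrapped;

        printf("%d %d\n", via_int, via_uchar); /* typically prints: -2 -2 */
        return 0;
    }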