On 26/01/2012 12:53, Konstantin Vladimirov wrote:
Hi,
If I know what I am doing, and my code itself guarantees that there
will be no overflows or UB here, can I switch off this signed char to
unsigned char expansion in favor of signed char to signed int
expansion?
The big question here is why you are using an unqualified "char" for
arithmetic in the first place. The signedness of plain "char" varies by
target (some default to signed, some to unsigned) and by compiler flags.
If you want to write code that uses signed chars, use "signed char".
Or even better, use <stdint.h> type "int8_t".
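For illustration, a minimal sketch of how plain "char" signedness
changes observable behavior; the per-target notes in the comments
describe common defaults, not guarantees:

#include <stdio.h>

int main(void)
{
    char c = (char)0xFF;  /* all bits set */
    /* Typically prints -1 where plain char is signed (e.g. x86 Linux)
       and 255 where it is unsigned (e.g. many ARM ABIs). */
    printf("%d\n", c);
    return 0;
}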
However, as has been pointed out, the problem is that signed arithmetic
doesn't wrap - it must be turned into unsigned arithmetic to make it
safe. An alternative is to tell gcc that signed arithmetic is 2's
complement and wraps, by using the "-fwrapv" flag, or "int8_t
sum_A_B(void) __attribute__((optimize("wrapv")));" on the specific function.
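As a sketch of that suggestion, combining the int8_t advice with the
per-function attribute (this has the same effect as compiling the
function with -fwrapv; whether gcc then drops the unsigned casts from
the gimple dump depends on the version):

#include <stdint.h>

int8_t A, B;

/* Per-function equivalent of -fwrapv: signed arithmetic in this
   function is treated as wrapping 2's complement. */
__attribute__((optimize("wrapv")))
int8_t sum_A_B(void)
{
    int8_t sum = A + B;
    return sum;
}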
Best regards,
David
---
With best regards, Konstantin
On Thu, Jan 26, 2012 at 3:04 PM, Jakub Jelinek <ja...@redhat.com> wrote:
On Thu, Jan 26, 2012 at 02:27:45PM +0400, Konstantin Vladimirov wrote:
Consider code:
char A;
char B;
char sum_A_B ( void )
{
  char sum = A + B;
  return sum;
}
[repro.c : 6:8] A.0 = A;
[repro.c : 6:8] A.1 = (unsigned char) A.0;
[repro.c : 6:8] B.2 = B;
[repro.c : 6:8] B.3 = (unsigned char) B.2;
[repro.c : 6:8] D.1990 = A.1 + B.3;
[repro.c : 6:8] sum = (char) D.1990;
[repro.c : 8:3] D.1991 = sum;
[repro.c : 8:3] return D.1991;
}
It looks really weird. Why does gcc promote char to unsigned char internally?
To avoid triggering undefined behavior.
A + B in C for char A and B is (int) A + (int) B, so either we'd have to
promote it to int and then demote, or we just cast it to unsigned and do
the addition in 8 bits. If we didn't do that, then e.g. for A = 127 and
B = 127 we'd trigger the undefined behavior of signed addition.
In unsigned char arithmetic, 127 + 127 is valid.
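To spell that out with a sketch (the -2 results assume a 2's-complement
target, where narrowing 254 to signed char is implementation-defined
rather than undefined):

#include <stdio.h>

signed char A = 127, B = 127;

int main(void)
{
    /* What the C abstract machine specifies: promote to int, add,
       then narrow back to the char type. */
    signed char via_int = (signed char)((int)A + (int)B);

    /* What the gimplified form computes: the same addition done in
       unsigned char, where 127 + 127 == 254 is perfectly defined,
       then narrowed back. */
    unsigned char u = (unsigned char)((unsigned char)A + (unsigned char)B);
    signed char via_uchar = (signed char)u;

    printf("%d %d\n", via_int, via_uchar);  /* -2 -2 on such targets */
    return 0;
}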
Jakub