On Thu, 27 Nov 2014, Mathias Roslund wrote:

> But isn't the result of an 8bit signed divide the same as the result of 
> a 32bit signed divide when both operands are in the 8bit range? That is, 
> shouldn't the optimizers be able to do the same for signed divide as 
> well as shift operations?

At C level, -128 / -1 is well-defined, but INT_MIN / -1 (and INT_MIN % -1) 
isn't.
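To make the distinction concrete, a minimal illustrative example (mine, 
not from the standard), assuming the usual 32-bit int and two's-complement 
8-bit signed char:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        signed char a = -128, b = -1;

        /* Both operands are promoted to int before the division, so
           this is (int)-128 / (int)-1 == 128: well-defined, and the
           result fits in int.  */
        printf("%d\n", a / b);              /* prints 128 */

        /* INT_MIN / -1 would be INT_MAX + 1, which is not
           representable in int, so the expression has undefined
           behavior; on x86 the idiv instruction traps.  Left
           commented out for that reason.  */
        /* printf("%d\n", INT_MIN / -1); */

        return 0;
    }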

At the GENERIC, GIMPLE and RTL levels, the behavior of <min> / -1 and 
<min> % -1 is less clearly specified.  See bug 30484 for some discussion 
(-fwrapv *should* make these well-defined at C level, but doesn't); it 
seems appropriate to treat them as undefined in RTL unless the target 
declares them to be well-defined with the expected semantics.
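For code that must never trap regardless of what the target does, one 
defensive sketch (the names wrapping_div/wrapping_mod are hypothetical, 
not anything in GCC) is to check for the <min> / -1 case before using 
the hardware divide, giving the wrapping semantics -fwrapv is meant to 
provide:

    #include <limits.h>

    /* Hypothetical helpers: special-case INT_MIN / -1 (and
       INT_MIN % -1) so the hardware divide never sees them.  The
       results chosen are the modulo-2^32 wrapping semantics -fwrapv
       is intended to provide.  Callers must rule out y == 0.  */
    int wrapping_div(int x, int y)
    {
        if (x == INT_MIN && y == -1)
            return INT_MIN;   /* INT_MAX + 1 wraps to INT_MIN */
        return x / y;
    }

    int wrapping_mod(int x, int y)
    {
        if (x == INT_MIN && y == -1)
            return 0;         /* the remainder is mathematically 0 */
        return x % y;
    }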

Thus, if the user has signed char variables a == -128 and b == -1, 
evaluating a / b must produce +128 (type int), which may then of course be 
converted to another type if e.g. stored in a signed char variable 
(conversion of out-of-range values to signed integer types is 
implementation-defined at C level, not undefined, and GNU C defines it as 
modulo).  So if your 8-bit signed divide operation traps for this case, or 
does not produce the expected result (which includes e.g. producing -128 
for (sign_extend:SI (div:QI)); the correct result there is +128, not 
-128), then it cannot be used to implement C division of signed char 
values extended to int.

Of course, if the instruction does not trap and produces correct results, 
it's fine to use it when the final result gets truncated to 8 bits.  But 
using an 8-bit divide to produce a 32-bit result is liable to be 
problematic, given that -128 / -1 must produce +128 but -128 / 1 must 
produce -128; there are 257 possible 32-bit results of dividing 8-bit 
operands, so anything that simply sign-extends an 8-bit result cannot be 
correct in general.
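The 257-results point can be checked directly; a small example (mine) 
assuming GNU C's modulo conversion semantics:

    #include <stdio.h>

    int main(void)
    {
        signed char a = -128, b = -1, c = 1;

        int q1 = a / b;   /* +128: just above the 8-bit signed range */
        int q2 = a / c;   /* -128 */
        printf("%d %d\n", q1, q2);    /* prints: 128 -128 */

        /* Storing +128 back into signed char is an out-of-range
           conversion, implementation-defined at C level; GNU C
           defines it as modulo 2^8, so s ends up as -128 again.  */
        signed char s = a / b;
        printf("%d\n", s);            /* prints: -128 */

        return 0;
    }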

-- 
Joseph S. Myers
jos...@codesourcery.com
