On Mon, 8 Jul 2013, Jason Merrill wrote:
> On 07/07/2013 08:02 AM, Marc Glisse wrote:
>> +      error_at (loc, "could not find an integer type "
>> +                "of the same size as %qT", ctype);
Now that I think of it, if a?b:c fails for this reason, it would also fail
later because it cannot compute a!=0 (which is what the first argument
expands to). So I could instead move the construction of a!=0 earlier and
re-use its type (modulo signed/unsigned).
> Why try to find a result element type the same size as the condition element
> type? For scalars the condition is bool and the result can be any type.
It is a guess, so it may be wrong. My experience on x86 is that vectors
have a fixed size (128 bits for SSE, 256 for AVX) and most operations act
on vectors of the same size. Mixing vectors of different sizes is likely
to produce operations not supported by the hardware, which are then
expanded to slow scalar code. Comparisons already return a signed integer
the same size as the operands (so they fail for vectors of long double on
x86).
Now there are some exceptions on x86, and it may be that other
architectures have more. The vectorizer even has code to handle these
mixed sizes. We could also consider that selecting a better vector size is
the role of a middle-end optimization and should not affect the language
rules.
Note that the VEC_COND_EXPR we currently have in gcc requires the 3
arguments to have the same size and the same number of elements, so we
would need to use VEC_(UN)PACK_*_EXPR and CONSTRUCTOR/BIT_FIELD_REF (all
our tree codes are for vectors of fixed total size, so doubling the
element size also halves the number of elements).
Perhaps the most conservative rule would be to only accept the case where
the sizes match and reject the others for now, so whatever is decided
later for other cases is unlikely to require a breaking change. Though I
would like the general case to be accepted eventually, whatever it ends up
meaning ;-)
--
Marc Glisse