Hello,

We've noticed the following behavior of the GCC vector extension, and were
wondering whether this is actually intentional:

When you use binary operators on two vectors, GCC will accept not only operands
that use the same vector type, but also operands whose types only differ in
signedness of the vector element type.  The result type of such an operation
(in C) appears to be the type of the first operand of the binary operator.

For example, the following test case compiles:

typedef signed int vector_signed_int __attribute__ ((vector_size (16)));
typedef unsigned int vector_unsigned_int __attribute__ ((vector_size (16)));

vector_unsigned_int test (vector_unsigned_int x, vector_signed_int y)
{
  return x + y;
}

However, this variant

vector_unsigned_int test1 (vector_unsigned_int x, vector_signed_int y)
{
  return y + x;
}

fails to build:

xxx.c: In function 'test1':
xxx.c:12:3: note: use -flax-vector-conversions to permit conversions between vectors with differing element types or numbers of subparts
   return y + x;
   ^
xxx.c:12:10: error: incompatible types when returning type 'vector_signed_int {aka __vector(4) int}' but 'vector_unsigned_int {aka __vector(4) unsigned int}' was expected
   return y + x;
          ^

Given a commutative operator, this behavior seems surprising.


Note that for C++, the behavior is apparently different: both test
and test1 above compile as C++ code, but this version:

vector_signed_int test2 (vector_unsigned_int x, vector_signed_int y)
{
  return y + x;
}

which builds as C, fails as C++ with:

xxx.C:17:14: note: use -flax-vector-conversions to permit conversions between vectors with differing element types or numbers of subparts
   return y + x;
              ^
xxx.C:17:14: error: cannot convert 'vector_unsigned_int {aka __vector(4) unsigned int}' to 'vector_signed_int {aka __vector(4) int}' in return

This C vs. C++ mismatch likewise seems surprising.


Now, the GCC manual section on the vector extension says:

You cannot operate between vectors of different lengths or different signedness 
without a cast.

And the change log of GCC 4.3, where the strict vector type checks (and the
above-mentioned -flax-vector-conversions option) were introduced, says:

    Implicit conversions between generic vector types are now only
    permitted when the two vectors in question have the same number of
    elements and compatible element types.  (Note that the restriction
    involves compatible element types, not implicitly-convertible element
    types: thus, a vector type with element type int may not be implicitly
    converted to a vector type with element type unsigned int.)  This
    restriction, which is in line with specifications for SIMD
    architectures such as AltiVec, may be relaxed using the flag
    -flax-vector-conversions.  This flag is intended only as a
    compatibility measure and should not be used for new code.

Both of these statements appear to imply (as far as I can tell) that all
the functions above ought to be rejected (unless -flax-vector-conversions).

So at the very least, we should bring the documentation in line with the
actual behavior.  However, as seen above, that actual behavior is probably
not particularly useful anyway, at least in C.


So I'm wondering whether we should:

A. Bring C in line with C++ by making the result of a vector binary operator
   use the unsigned type if the two input types differ in signedness?

and/or

B. Enforce that both operands to a vector binary operator must have the same
   type (except for opaque vector types) unless -flax-vector-conversions?
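
Either way, code that cares about the result type can already sidestep the
question with an explicit cast, as the manual suggests.  A minimal sketch
(test_cast is just an illustrative name), which compiles cleanly under both
the C and C++ front ends regardless of which option is adopted:

```c
typedef signed int vector_signed_int __attribute__ ((vector_size (16)));
typedef unsigned int vector_unsigned_int __attribute__ ((vector_size (16)));

vector_unsigned_int test_cast (vector_unsigned_int x, vector_signed_int y)
{
  /* Cast y to the unsigned vector type so both operands (and hence the
     result) unambiguously have the same type.  */
  return x + (vector_unsigned_int) y;
}
```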


Thanks,
Ulrich


PS: FYI some prior discussion of related issues that I found:

https://gcc.gnu.org/ml/gcc/2006-10/msg00235.html
https://gcc.gnu.org/ml/gcc/2006-10/msg00682.html
https://gcc.gnu.org/ml/gcc-patches/2006-11/msg00926.html

https://gcc.gnu.org/ml/gcc-patches/2013-08/msg01634.html
https://gcc.gnu.org/ml/gcc-patches/2013-09/msg00450.html


-- 
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  ulrich.weig...@de.ibm.com
