https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78303

kelvin at gcc dot gnu.org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |kelvin at gcc dot gnu.org

--- Comment #2 from kelvin at gcc dot gnu.org ---
The gcc.pdf documentation states the following:

-maltivec=be
Generate AltiVec instructions using big-endian element order, regardless
of whether the target is big- or little-endian.  This is the default
when targeting a big-endian platform.  The element order is used to
interpret element numbers in AltiVec intrinsics such as vec_splat,
vec_extract, and vec_insert.  By default, these match array element
order corresponding to the endianness for the target.

Should this documentation be clarified?  As I ponder the problem, it is not
clear to me whether I should be fixing the load and store operations, or fixing
the layout of the "initialization vectors" in memory.

Suppose I have code such as the following:

static vector int v = { 0, 1, 2, 3 };
int *ip = (int *) &v;
fprintf (stderr, "The vector contains { %d, %d, %d, %d }\n", 
         ip[0], ip[1], ip[2], ip[3]);

Do I expect different output depending on whether this program is compiled with
-maltivec=be?

My first inclination is to do "extra swapping" (or undo existing swapping) on
each load and store whenever the AltiVec element ordering differs from the
target's default ordering.  It's not entirely clear whether this is what was
intended.
