Paul Brook wrote:
> On Friday 24 August 2007, Ian Lance Taylor wrote:
>> Paolo Bonzini <[EMAIL PROTECTED]> writes:
>>> 1) neg, abs and copysign operations on vectors.  These we can make
>>> available via builtins (for - of course you don't need it); we
>>> already support them in many back-ends.
>>
>> Here is my point of view.  People using the vector extensions are
>> already writing inherently machine-specific code, and they are
>> (ideally) familiar with the instruction set of their processor.
>
> By the same argument, if you're already writing machine-specific code,
> then there shouldn't be a problem using machine-specific intrinsics.
> I admit I've never been convinced that the generic vector support was
> sufficient to write useful code without resorting to machine-specific
> intrinsics.
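(For reference, the "generic vector support" in question is GCC's
vector_size extension, which gives element-wise arithmetic without any
target intrinsics.  A minimal sketch; the type and variable names are
my own, purely illustrative:)

    #include <stdio.h>

    /* A vector of four floats via the vector_size attribute.  */
    typedef float v4sf __attribute__ ((vector_size (16)));

    union v4 { v4sf v; float f[4]; };

    int main (void)
    {
      union v4 a = { .f = { 1, 2, 3, 4 } };
      union v4 b = { .f = { 10, 20, 30, 40 } };
      union v4 c;

      c.v = a.v + b.v;   /* element-wise add, no intrinsics needed */
      printf ("%g %g %g %g\n", c.f[0], c.f[1], c.f[2], c.f[3]);
      return 0;
    }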
Our VSIPL++ team is using it for some things.  My guess is that it's
probably not sufficient for all things, but probably is sufficient for
many things.  Also, I expect some users get (say) a 4x speedup over C
code easily by using the vector extension, and could get an 8x speedup
by using intrinsics, but with a lot more work.  So, the vector
extensions give them a sweet spot on the performance/effort/portability
curve.

> I'm partly worried about cross-platform compatibility, and what this
> implies for other SIMD targets.

Yes.  Here's a proposed definition:

Let "a" and "b" be operands of a floating-point type F, and let N be
the number of bytes in F.  Then, "a | b" is defined as:

  ({
    union fi { F f; char bytes[N]; };
    union fi au;
    union fi bu;
    int i;
    au.f = a;
    bu.f = b;
    for (i = 0; i < N; ++i)
      au.bytes[i] |= bu.bytes[i];
    au.f;
  })

If the resulting floating-point value is a denormal, a NaN, etc.,
whether or not exceptions are raised is unspecified.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
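P.S. For concreteness, here is the definition above written out as a
standalone function, specialized to "double".  The function name and
the little demo are illustrative only, not part of the proposal:

    #include <stdio.h>

    static double double_or (double a, double b)
    {
      union du { double d; unsigned char bytes[sizeof (double)]; };
      union du au, bu;
      size_t i;

      au.d = a;
      bu.d = b;
      /* OR the object representations byte by byte.  */
      for (i = 0; i < sizeof (double); ++i)
        au.bytes[i] |= bu.bytes[i];
      return au.d;
    }

    int main (void)
    {
      /* OR-ing in the sign bit of -0.0 forces the result negative,
         one of the classic uses of bitwise operations on floats.  */
      printf ("%g\n", double_or (3.5, -0.0));   /* prints -3.5 */
      return 0;
    }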