On 04/29/2014 09:57 PM, Matt Turner wrote:
> On Tue, Apr 29, 2014 at 6:01 AM, Petri Latvala <petri.latv...@intel.com> wrote:
>> For the record, tested this with the following shader:
>
> Cool. Please submit this as a piglit test.
Sent to the piglit mailing list, with accompanying tests for min3 and max3.
> Wouldn't it be simpler to detect constant arguments in opt_algebraic
> and do the optimization there, and just perform the standard lowering
> here? It seems cleaner to me. I don't think we generate different code
> based on the arguments in any other lowering pass.
I was thinking about that, but ended up doing the optimization during
lowering. My reasoning was that if mid3(x, 1, 3) were transformed to
min(max(x, 1), 3) in opt_algebraic, backends with theoretical native
support for mid3 would then need to recognize that pattern and transform
it back into a mid3. (Are there even any GPUs that can do mid3
directly? AMD?)
Of course, it can be argued that this isn't a regression, since mid3
doesn't reach the backends as mid3 before this patch either.
I'll change the patch to do this optimization in opt_algebraic.
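
For reference, here's a minimal standalone C sketch of the two forms
under discussion (the helper names fmin2/fmax2/mid3_clamped are just
for illustration, not anything in Mesa): the standard mid3 lowering,
and the clamp it reduces to when the other two operands are constants
with lo <= hi:

#include <stdio.h>

static float fmin2(float a, float b) { return a < b ? a : b; }
static float fmax2(float a, float b) { return a > b ? a : b; }

/* The standard lowering:
 * mid3(a, b, c) == max(min(a, b), min(max(a, b), c)) */
static float mid3(float a, float b, float c)
{
    return fmax2(fmin2(a, b), fmin2(fmax2(a, b), c));
}

/* With constant operands lo and hi, lo <= hi, mid3(x, lo, hi)
 * reduces to a clamp: min(max(x, lo), hi). */
static float mid3_clamped(float x, float lo, float hi)
{
    return fmin2(fmax2(x, lo), hi);
}

int main(void)
{
    /* Both forms agree, e.g. for mid3(x, 1, 3). */
    for (float x = -1.0f; x <= 5.0f; x += 1.0f)
        printf("x=%4.1f  mid3=%4.1f  clamp=%4.1f\n",
               x, mid3(x, 1.0f, 3.0f), mid3_clamped(x, 1.0f, 3.0f));
    return 0;
}

Note that the clamp form is only equivalent when the constants are
ordered (mid3 is symmetric in its operands, min(max(x, lo), hi) is
not), so the opt_algebraic rule would need to pick the smaller constant
as lo.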
--
Petri Latvala