On 17/07/18 03:41, Eric Anholt wrote:
Timothy Arceri <tarc...@itsqueeze.com> writes:

This makes this opt behave more like the GLSL IR opt
lower_if_to_cond_assign(). With this change we can disable that
GLSL IR opt on drivers with a NIR backend without causing spilling
regressions.

shader-db results for radeonsi (RX580):

Totals from affected shaders:
SGPRS: 12200 -> 13072 (7.15 %)
VGPRS: 13496 -> 11840 (-12.27 %)
Spilled SGPRs: 285 -> 290 (1.75 %)
Spilled VGPRs: 115 -> 0 (-100.00 %)
Private memory VGPRs: 0 -> 0 (0.00 %)
Scratch size: 116 -> 0 (-100.00 %) dwords per thread
Code Size: 781304 -> 770168 (-1.43 %) bytes
LDS: 0 -> 0 (0.00 %) blocks
Max Waves: 1558 -> 1586 (1.80 %)
Wait states: 0 -> 0 (0.00 %)
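
To make the transform concrete, here is a rough source-level sketch of
what if-to-conditional-assign flattening does (illustrative GLSL-style
code only; the real pass operates on NIR, and the names are made up):

    // Before: the assignment is guarded by a branch.
    if (cond) {
        color = a;
    } else {
        color = b;
    }

    // After flattening: both values are available and the result is
    // picked with a select (bcsel in NIR), so the branch goes away.
    color = cond ? a : b;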

Moving UBO loads out of conditionals seems questionable to me, and
I don't think shader-db would represent the effect of this kind of code
transformation well.  I'd love to see some actual performance numbers on
affected applications.
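
A sketch of that concern (again illustrative, not taken from any
affected shader): a UBO load that used to be skipped on the non-taken
path becomes unconditional once the if is flattened:

    // Before: the UBO load only happens when use_fog is true.
    if (use_fog)
        fog = params.fog_color;   // UBO load

    // After flattening: the load is issued unconditionally and the old
    // value is re-selected, so the non-taken path now pays for the load.
    tmp = params.fog_color;       // UBO load, now unconditional
    fog = use_fog ? tmp : fog;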

That said, lower_if_to_cond_assign() was already doing this, and I'm
assuming these shader-db stats are just from
with-glsl-optimization-disabled to with-nir-patch.

No, these results are with lower_if_to_cond_assign() still enabled.

The biggest gain seemed to come from a few shaders with a bunch of consecutive ifs. Flattening those lets a chain of phis be eliminated and removes a lot of VGPR spilling. Maybe we should look at that more closely on its own.
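
Roughly what those shaders look like (illustrative only): each
consecutive if that conditionally overwrites the same value ends in a
phi, and the phis chain together; once the ifs become selects, the
chain and the long live ranges feeding it go away, which is where the
VGPR spilling disappears:

    // Before: each if ends in a phi (v1 = phi(v0, a), v2 = phi(v1, b), ...).
    if (c0) v = a;
    if (c1) v = b;
    if (c2) v = c;

    // After flattening: straight-line selects, no phis, shorter live ranges.
    v = c0 ? a : v;
    v = c1 ? b : v;
    v = c2 ? c : v;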

I'll hold onto this patch for now.

So, while we should
probably investigate tuning this, since it lets us dump some GLSL
optimization:

Acked-by: Eric Anholt <e...@anholt.net>
