On 04.02.2014 00:53, Dave Airlie wrote:
> From: Dave Airlie <airl...@redhat.com>
>
> attempt to calculate a better value for array size to avoid breaking apps.
>
> Signed-off-by: Dave Airlie <airl...@redhat.com>
> ---
>  src/gallium/drivers/r600/r600_shader.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/gallium/drivers/r600/r600_shader.c b/src/gallium/drivers/r600/r600_shader.c
> index 8fa7054..f0e980b 100644
> --- a/src/gallium/drivers/r600/r600_shader.c
> +++ b/src/gallium/drivers/r600/r600_shader.c
> @@ -1416,7 +1416,7 @@ static int emit_gs_ring_writes(struct r600_shader_ctx *ctx, bool ind)
>  	if (ind) {
>  		output.array_base = ring_offset >> 2; /* in dwords */
> -		output.array_size = 0xfff;
> +		output.array_size = ctx->shader->gs_max_out_vertices * 4;
array_size is a 12-bit field. It overflows when gs_max_out_vertices is
set to 1024: 1024 vertices * 4 dwords is 4096 (0x1000), which truncates
to 0, so no vertices are written at all. I don't believe it's safe to
assume a fixed output size of 4 dwords per vertex either; this easily
breaks GSVS writes when there are many vertex attributes.
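
To make the overflow concrete, here is a standalone snippet
(illustration only, not driver code) showing what a 12-bit field does
to the value the patch computes:

	#include <stdio.h>

	int main(void)
	{
		unsigned max_verts = 1024;         /* gs_max_out_vertices */
		unsigned computed = max_verts * 4; /* patch: 4 dwords per vertex */
		unsigned field = computed & 0xfff; /* only the low 12 bits fit */

		printf("computed=%u field=%u\n", computed, field);
		/* prints: computed=4096 field=0 */
		return 0;
	}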
Is there anything wrong with just setting array_size to the maximum,
0xfff? Streamout does the same thing.
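
For reference, a sketch of what I have in mind (untested; MIN2 is
Mesa's usual min helper, and I'm assuming it is in scope in
r600_shader.c):

	output.array_size = 0xfff;	/* whole ring range, like streamout */

or, if keeping the computed size is preferred, clamped to the field
width:

	output.array_size = MIN2(ctx->shader->gs_max_out_vertices * 4, 0xfff);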
>  		output.index_gpr = ctx->gs_export_gpr_treg;
>  	} else
>  		output.array_base = ring_offset >> 2; /* in dwords */