The particular failing case was a 1024^3 3D texture: 1024*1024*1024*4 bytes (RGBA) = 4 GB, which overflows to exactly zero in a 32-bit uint.
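
For anyone curious, here's a minimal standalone demo of the wrap-around (illustrative only, not code from the tree):

   #include <stdio.h>
   #include <stdint.h>

   int main(void)
   {
      uint32_t w = 1024, h = 1024, d = 1024, bpp = 4;

      /* 2^30 texels * 4 bytes = 2^32, which wraps to 0 in 32 bits */
      uint32_t bad = w * h * d * bpp;

      /* promoting one operand to 64 bits before multiplying avoids it */
      uint64_t good = (uint64_t) w * h * d * bpp;

      printf("32-bit product: %u\n", (unsigned) bad);               /* 0 */
      printf("64-bit product: %llu\n", (unsigned long long) good);  /* 4294967296 */
      return 0;
   }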

The limit used to be 512^3 but Jose bumped it up in commit bc8509b4.

I think we could check whether img_stride is > 1024 and, if so, do an alternate computation in kilobytes instead of bytes...
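
Rough, untested sketch of what I mean (the round-up and the scaled limit are just illustrative, and total_kb is a hypothetical variable):

   /* hypothetical: account in KB when strides are large, so the
    * running total stays well inside 32 bits; rounding each level
    * up to whole KB only makes the check more conservative */
   unsigned stride_kb = (lpr->img_stride[level] + 1023) / 1024;
   total_kb += lpr->num_slices_faces[level] * stride_kb;
   if (total_kb > LP_MAX_TEXTURE_SIZE / 1024) {
      goto fail;
   }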

-Brian


On 09/19/2012 04:47 PM, Roland Scheidegger wrote:
Good catch.
I'm not sure, though, that using size_t casts (and a size_t-sized
total_size) is good enough, since this could also be hit on archs
with a 32-bit size_t?
Though I guess using 64-bit arithmetic on 32-bit archs would be sort of slow...
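
E.g. forcing the math to 64 bits explicitly would work regardless of the
size_t width; a sketch along the lines of the patch below (not tested):

   /* sketch: keep the running total in an explicitly 64-bit type,
    * independent of the platform's size_t width */
   uint64_t total_size = 0;
   ...
   total_size += (uint64_t) lpr->num_slices_faces[level]
               * (uint64_t) lpr->img_stride[level];
   if (total_size > LP_MAX_TEXTURE_SIZE) {
      goto fail;
   }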

Roland


On 09/19/2012 09:30 PM, Brian Paul wrote:
Add size_t casts when multiplying slice size by number of slices to
avoid 32-bit uint overflow.  This bug has been here for a long time.
But before the recent proxy texture changes, core Mesa was detecting
that the texture was too large and we never got this far.

Fixes https://bugs.freedesktop.org/show_bug.cgi?id=55117
---
  src/gallium/drivers/llvmpipe/lp_texture.c |    4 +++-
  1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/src/gallium/drivers/llvmpipe/lp_texture.c b/src/gallium/drivers/llvmpipe/lp_texture.c
index c0a612c..0aa1299 100644
--- a/src/gallium/drivers/llvmpipe/lp_texture.c
+++ b/src/gallium/drivers/llvmpipe/lp_texture.c
@@ -172,7 +172,9 @@ llvmpipe_texture_layout(struct llvmpipe_screen *screen,
           }
        }

-      total_size += lpr->num_slices_faces[level] * lpr->img_stride[level];
+      total_size += (size_t) lpr->num_slices_faces[level]
+                  * (size_t) lpr->img_stride[level];
+
        if (total_size > LP_MAX_TEXTURE_SIZE) {
           goto fail;
        }


_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev
