On 04/15/2011 09:14 PM, José Fonseca wrote:
On 04/15/2011 08:51 PM, Brian Paul wrote:
On 04/15/2011 01:41 PM, José Fonseca wrote:
On 04/15/2011 07:20 PM, Christoph Bumiller wrote:
On 15.04.2011 18:04, Brian Paul wrote:

Hi Marek,

Back on Jan 10 you removed the per-vertex array max_index field
(commit cdca3c58aa2d9549f5188910e2a77b438516714f).  I believe this was
a mistake.

I noticed today that the piglit draw-instanced-divisor test is
failing.  A bisection points to Jose's commit
17bbc1f0425b3768e26473eccea5f2570dcb26d3.  But that's a red herring.
If I disable the SSE code paths, the regression goes back to the Jan
10 change.

With the GL_ARB_instanced_arrays extension vertex array indexing
changes quite a bit.  Specifically, some arrays (those with divisors
!= 0) are now indexed by instance ID, not the primitive's vertex
index.  The draw_info::max_index field doesn't let us take this into
account.  I believe that we really need a per-array max_index value.

As an example, suppose we're drawing a star field with 1 million
instanced stars, each star drawn as a 4-vertex quad.  We might use a
vertex array indexed by the instance ID to color the stars.

The draw call would look like:

glDrawArraysInstanced(GL_QUADS, 0, 4, 1000*1000);

In this case we'd have two vertex arrays.  The first array is the quad
vertex positions with four elements.  The second array is the star
colors with 1 million elements.  As it is now, we're setting
draw_info::max_index = 4 and we errantly clamp the index into the
second array to 4 instead of 1 million.

As a temporary work around we can disable clamping of array indexes
for instance arrays.  But I think we need to revert the Jan 10 commit
and then rework some of Jose's related changes.

-Brian


You know which vertex elements are per-instance, you know the divisor,
and you know the max instance index - that should be all the information
you need.

For those, you just have to clamp to startInstance + instanceCount instead.


draw_info::max_index is an optimization hint. In theory it could always be
0xffffffff and the pipe driver should still cope with it. Provided it is an
upper bound of the true max index in the index buffer, it should cause no
visible difference.


My u_draw.c code already ignores instanced elements when computing
max_index.

And I believe the translate module doesn't clamp to max_index when
fetching instanced elements, but I'll have to double check.

I haven't looked too closely at the current st_draw.c code yet, but it
appears the bug is in st_draw.c: there is no need to artificially limit
the max_index passed by the app.

I thought we were using max_index to prevent out-of-bounds buffer
reads.  Are we not doing that anywhere?


Inside the draw module there is a max_index with exactly that meaning: draw_context->pt.max_index.

This is all very subtle, but min_index/max_index is really a property of the index buffer alone, independent of the vertex buffers' sizes and/or any index_bias.

src/gallium/docs/source/context.rst could be a bit better... This is what it has currently:

    All vertex indices must fall inside the range given by ``min_index``
    and ``max_index``.  In case non-indexed draw, ``min_index`` should be
    set to ``start`` and ``max_index`` should be set to
    ``start``+``count``-1.

    ``index_bias`` is a value added to every vertex index before fetching
    vertex attributes.  It does not affect ``min_index`` and ``max_index``.

    If there is an index buffer bound, and ``indexed`` field is true, all
    vertex indices will be looked up in the index buffer.  ``min_index``,
    ``max_index``, and ``index_bias`` apply after index lookup.


This would be a better description:

    In an indexed draw, all vertex indices in the index buffer (or, more
    precisely, all indices in the range being drawn) should fall inside
    the range given by ``min_index`` and

    ``max_index``, but the driver should not crash or hang if they do
    not.  These fields are merely an optimization hint so the driver
    knows which range of vertices will be referenced without having to
    scan the index buffer with the CPU.  A state tracker could always
    set ``min_index`` and ``max_index`` to 0 and 0xffffffff
    respectively, and the only penalty would be performance, not
    stability, which means the driver must internally guarantee there
    will be no out-of-bounds accesses.

    In case of a non-indexed draw, ``min_index`` should be set to
    ``start`` and ``max_index`` should be set to ``start``+``count``-1.

    ``index_bias`` is a value added to every vertex index from the index
    buffer before fetching vertex attributes.  It does not affect
    ``min_index`` and ``max_index``.

    If there is an index buffer bound, and the ``indexed`` field is true,
    all vertex indices will be looked up in the index buffer.

Any suggestions for better English are welcome.

Jose

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev
