On Sat, 17 Mar 2012 10:58:27 +0800, Liu Aleaxander <[email protected]> wrote:
> On Sat, Mar 17, 2012 at 1:57 AM, Eric Anholt <[email protected]> wrote:
> > On Mon, 12 Mar 2012 16:04:00 +0800, Yuanhan Liu
> > <[email protected]> wrote:
> >> /**********************************************************************/
> >> /*                High level hooks for t_vb_render.c                  */
> >> @@ -1070,6 +1112,15 @@ intelRunPipeline(struct gl_context * ctx)
> >>     if (ctx->NewState)
> >>        _mesa_update_state_locked(ctx);
> >>
> >> +   /*
> >> +    * Enable POINT_SPRITE_ENABLE bit when needed here
> >> +    *
> >> +    * Handle it at _draw_ time so that we can guarantee the CoordReplace
> >> +    * changes handled well. And we must do it before the tnl pipeline is
> >> +    * running so that we can fallback when finding something goes wrong.
> >> +    */
> >> +   intel_validate_sprite_point_enable(intel);
> >
> > Other computed state happens in i915InvalidateState.  Why does this one
> > go here?
>
> A nice point. Yeah, I should do the stuff there.
>
> So, how about the following patch
>
> (note: I haven't tested the patch yet since I don't have hardware for
> testing at home, but it should work ;)
> (send from my gmail account, the format may not good, sorry for that)
>
> ---
>
> From 34964ef86aad7361cb4f3f5f73ae4e42928a4b31 Mon Sep 17 00:00:00 2001
> From: Yuanhan Liu <[email protected]>
> Date: Sat, 17 Mar 2012 10:48:23 +0800
> Subject: [PATCH] i915: set SPRITE_POINT_ENABLE bit correctly
>
> When SPRITE_POINT_ENABLE bit is set, the texture coord would be
> replaced, and this is only needed when we called something like
> glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE).
>
> And more, we currently handle varying inputs as texture coord,
> we would be careful when setting this bit and set it just when
> needed, or you will find the value of varying input is not right
> and changed.
>
> Thus we do set SPRITE_POINT_ENABLE bit only when all enabled tex
> coord units need do CoordReplace. Or fallback is needed to make
> sure the rendering is right.
>
> With handling the bit setup at i915_update_sprite_point_enable(),
> we don't need the relative code at i915Enable then.
>
> This patch would _really_ fix the webglc point-size.html test case and
> of course, not regress piglit point-sprite and glean-pointSprite
> testcase.
>
> NOTE: This is a candidate for stable release branches.
>
> v2: fallback just when all enabled tex coord units need do
>     CoordReplace(Eric).
> v3: move the sprite point validate code at I915InvalidateState(Eric)
>
> Signed-off-by: Yuanhan Liu <[email protected]>
> ---
>  src/mesa/drivers/dri/i915/i915_context.c |    2 +
>  src/mesa/drivers/dri/i915/i915_context.h |    2 +
>  src/mesa/drivers/dri/i915/i915_state.c   |   53 +++++++++++++++++++++++-------
>  src/mesa/drivers/dri/i915/intel_tris.c   |    1 +
>  4 files changed, 46 insertions(+), 12 deletions(-)
>
> diff --git a/src/mesa/drivers/dri/i915/i915_context.c b/src/mesa/drivers/dri/i915/i915_context.c
> index 36563ef..d7785be 100644
> --- a/src/mesa/drivers/dri/i915/i915_context.c
> +++ b/src/mesa/drivers/dri/i915/i915_context.c
> @@ -76,6 +76,8 @@ i915InvalidateState(struct gl_context * ctx, GLuint new_state)
>        i915_update_provoking_vertex(ctx);
>     if (new_state & (_NEW_PROGRAM | _NEW_PROGRAM_CONSTANTS))
>        i915_update_program(ctx);
> +   if (new_state & _NEW_POINT)
> +      i915_update_sprite_point_enable(ctx);
>  }
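(For context on the CoordReplace[]/PointSprite state the commit message talks
about: it is the driver-side view of ordinary point-sprite setup in the
application.  Roughly, as an illustration only and not part of the patch:

   glEnable(GL_POINT_SPRITE);                              /* ctx->Point.PointSprite */
   glActiveTexture(GL_TEXTURE0);
   glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);  /* ctx->Point.CoordReplace[0] */

i.e. per-texture-unit GL state that the single S4_SPRITE_POINT_ENABLE bit has
to be derived from.)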
> +void
> +i915_update_sprite_point_enable(struct gl_context *ctx)
> +{
> + struct intel_context *intel = intel_context(ctx);
In the next two lines, you make use of _NEW_PROGRAM-governed state, but
you only call this function based on _NEW_POINT.
In the i965 driver, we annotate state usage with /* _NEW_WHATEVER */ so
the reader can see what state is being used, and make sure that it's
reflected in the caller.
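Purely as an untested sketch (names taken from your patch, so treat it as an
illustration rather than a finished change), what I have in mind is:

   In i915InvalidateState():

      if (new_state & (_NEW_POINT | _NEW_PROGRAM))
         i915_update_sprite_point_enable(ctx);

   and in i915_update_sprite_point_enable():

      /* _NEW_PROGRAM */
      struct i915_fragment_program *p =
         (struct i915_fragment_program *) ctx->FragmentProgram._Current;
      const GLbitfield64 inputsRead = p->FragProg.Base.InputsRead;

      /* _NEW_POINT */
      if (ctx->Point.CoordReplace[i] && ctx->Point.PointSprite)
         coord_replace_bits |= (1 << i);

That way each piece of derived state is tagged with the _NEW_* flag it reads,
and the caller's dirty-bit check is just the union of those tags.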
> + struct i915_fragment_program *p =
> + (struct i915_fragment_program *) ctx->FragmentProgram._Current;
> + const GLbitfield64 inputsRead = p->FragProg.Base.InputsRead;
> + struct i915_context *i915 = i915_context(ctx);
> + GLuint s4 = i915->state.Ctx[I915_CTXREG_LIS4] & ~S4_VFMT_MASK;
> + int i;
> + GLuint coord_replace_bits = 0x0;
> + GLuint tex_coord_unit_bits = 0x0;
> +
> + for (i = 0; i < ctx->Const.MaxTextureCoordUnits; i++) {
> + if (ctx->Point.CoordReplace[i] && ctx->Point.PointSprite)
> + coord_replace_bits |= (1 << i);
> + if (inputsRead & FRAG_BIT_TEX(i))
> + tex_coord_unit_bits |= (1 << i);
> + }
> +
> + s4 &= ~S4_SPRITE_POINT_ENABLE;
> + s4 |= (coord_replace_bits && coord_replace_bits == tex_coord_unit_bits) ?
> + S4_SPRITE_POINT_ENABLE : 0;
> + if (s4 != i915->state.Ctx[I915_CTXREG_LIS4]) {
> + i915->state.Ctx[I915_CTXREG_LIS4] = s4;
> + I915_STATECHANGE(i915, I915_UPLOAD_CTX);
> + }
> +}
This still looks wrong to me. Take a shader reading gl_PointCoord with
point sprite enabled, and also a user-defined varying.
tex_coord_unit_bits will be 0, coord_replace_bits will be 0, and
gl_PointCoord will get garbage.
I specifically mention a user-defined varying, because one might be
tempted to replace the first check for coord_replace_bits with
(coord_replace_bits || (inputsRead & FRAG_BIT_PNTC)), which would
then trash the user-defined varying.
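To make the failing case concrete, here is an untested illustration using the
FRAG_BIT_* names from mtypes.h (FRAG_BIT_VAR0 standing in for whatever slot
the user-defined varying lands in):

   /* Shader reads gl_PointCoord plus a user-defined varying, no
    * gl_TexCoord[i], and the app never set GL_COORD_REPLACE:
    */
   inputsRead = FRAG_BIT_PNTC | FRAG_BIT_VAR0;

   /* The loop above then gives:
    *    tex_coord_unit_bits == 0   (no FRAG_BIT_TEX(i) set)
    *    coord_replace_bits  == 0   (no CoordReplace[i] enabled)
    * so S4_SPRITE_POINT_ENABLE stays off and gl_PointCoord reads garbage.
    * Or-ing FRAG_BIT_PNTC into the condition instead flips the bit on for
    * every texcoord, including the one carrying the user-defined varying.
    */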
