Commit b8b1d83c71fd148d2fd84afdc20c0aa367114f92 partially fixed
dEQP-GLES3.functional.state_query.integers.stencil*value*mask*getfloat
by changing the initial value masks from 32-bit ~0 (0xFFFFFFFF) to 0xFF.
However, the application can call glStencilFunc and related functions
to set a new value mask
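A minimal sketch of the behaviour described above, assuming a simplified
driver state struct (names hypothetical, not Mesa's actual code): clamping
only the *initial* mask is not enough, because glStencilFunc stores
whatever mask the application supplies afterwards.

#include <stdint.h>

struct stencil_state {
    uint32_t value_mask;   /* backs GL_STENCIL_VALUE_MASK */
};

/* Initial state after the b8b1d83c fix: only the low 8 bits set. */
static void init_stencil(struct stencil_state *st)
{
    st->value_mask = 0xFF;
}

/* But glStencilFunc(func, ref, mask) overwrites it, so a later
   glStencilFunc(GL_ALWAYS, 0, ~0u) puts all 32 bits back. */
static void stencil_func(struct stencil_state *st, uint32_t mask)
{
    st->value_mask = mask;   /* no clamping on update */
}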
https://bugs.freedesktop.org/show_bug.cgi?id=99116
--- Comment #2 from almos ---
Probably related to bug 94168
https://bugs.freedesktop.org/show_bug.cgi?id=99125
Bug ID: 99125
Summary: Log to a file all GALLIUM_HUD infos
Product: Mesa
Version: unspecified
Hardware: Other
OS: All
Status: NEW
Severity: enhancement
https://bugs.freedesktop.org/show_bug.cgi?id=99116
--- Comment #3 from Fabian Maurer ---
I just tried LIBGL_ALWAYS_SOFTWARE=1, but still got a black-screen.
https://bugs.freedesktop.org/show_bug.cgi?id=99116
--- Comment #4 from almos ---
(In reply to Fabian Maurer from comment #3)
> I just tried LIBGL_ALWAYS_SOFTWARE=1, but still got a black-screen.
If your software driver is llvmpipe, and the bug is in mesa/st, then it will be
black. For me radeonsi and llvmpipe render black, swrast render
https://bugs.freedesktop.org/show_bug.cgi?id=99116
--- Comment #5 from Fabian Maurer ---
Created attachment 128523
--> https://bugs.freedesktop.org/attachment.cgi?id=128523&action=edit
Log for swrast
(In reply to almos from comment #4)
> For me radeonsi and llvmpipe render black, swrast render
Should ref also get clamped?
On Dec 17, 2016 1:03 AM, "Kenneth Graunke" wrote:
> Commit b8b1d83c71fd148d2fd84afdc20c0aa367114f92 partially fixed
> dEQP-GLES3.functional.state_query.integers.stencil*value*mask*getfloat
> by changing the initial value masks from 32-bit ~0 (0xFFFFFFFF) to 0xFF.
>
>
We are removing the entry right beforehand, so this can never succeed.
Tested with a shader-db run. No changes in instruction count.
---
src/compiler/glsl/opt_copy_propagation.cpp | 6 --
1 file changed, 6 deletions(-)
diff --git a/src/compiler/glsl/opt_copy_propagation.cpp
b/src/compiler/gl
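Thomas's point, restated as a self-contained toy (a hypothetical
list-based table with unique keys, not Mesa's ACP data structure): once
the entry has just been removed, the follow-up lookup is guaranteed to
return NULL, so the branch it guards is dead.

#include <stddef.h>
#include <string.h>

struct entry { const char *key; struct entry *next; };

static struct entry *search(struct entry *head, const char *key)
{
    for (struct entry *e = head; e; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e;
    return NULL;
}

static void remove_entry(struct entry **head, const char *key)
{
    for (struct entry **p = head; *p; p = &(*p)->next)
        if (strcmp((*p)->key, key) == 0) {
            *p = (*p)->next;   /* unlink; storage owned elsewhere */
            return;
        }
}

static void pattern(struct entry **head, const char *key)
{
    remove_entry(head, key);
    if (search(*head, key)) {
        /* never reached: the entry was removed right beforehand */
    }
}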
On 17/12/16 03:35, Kenneth Graunke wrote:
> This fixes 555 dEQP tests (using the nougat-cts-dev branch), Piglit's
> arb_program_interface_query/arb_program_interface_query-resource-query,
Somewhat offtopic: FWIW, the piglit tests for arb_program_interface_query
don't do much testing of the REFERENCED_BY
Part of the refactor to move all gallium calls to
nine_state.c, and have all internal states required
for those calls in nine_context.
v2: Release buffers for Draw*Up functions in device9.c,
instead of nine_context. This prevents a leak with csmt
where the wrong pointers were released.
Signed-off
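A hedged sketch of the v2 ownership rule (all names hypothetical): with
csmt, the context state may be rewritten before a queued command runs, so
the Draw*Up path in device9.c releases the exact buffer it created rather
than letting nine_context release whatever pointer it sees later.

struct buf { int refcount; };

static void buf_addref(struct buf *b)  { b->refcount++; }
static void buf_release(struct buf *b)
{
    if (--b->refcount == 0) {
        /* free(b) in a real implementation */
    }
}

/* device9.c side (v2): the caller created vb, so the caller releases
   this exact pointer; the queued command holds its own reference. */
static void draw_primitive_up(struct buf *vb)
{
    buf_addref(vb);   /* reference owned by the queued csmt command;
                         the worker drops it when the draw retires */
    /* ... queue the draw ... */
    buf_release(vb);  /* drop the creation reference here, not later in
                         nine_context, where the state pointer may
                         already refer to a different buffer */
}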
On Saturday, December 17, 2016 5:41:35 PM PST Alejandro Piñeiro wrote:
> On 17/12/16 03:35, Kenneth Graunke wrote:
> > This fixes 555 dEQP tests (using the nougat-cts-dev branch), Piglit's
> > arb_program_interface_query/arb_program_interface_query-resource-query,
>
> Somewhat offtopic: FWIW, pigl
Thomas Helland writes:
> We are removing the entry right beforehand, so this can never succeed.
> Tested with a shader-db run. No changes in instruction count.
> ---
> src/compiler/glsl/opt_copy_propagation.cpp | 6 --
> 1 file changed, 6 deletions(-)
>
> diff --git a/src/compiler/glsl/opt_c
I don't see any spec justification for masking this. dEQP is broken here.
Implementations have the flexibility to retain more bits in the mask (and
have more bits set in the initial state) than the depth of the deepest
stencil buffer supported. From the ES3 spec, 4.1.4, second to last para:
"In
Hi,
Up to now, load_push_constant intrinsics generated by the spirv
compiler were hardcoded to have an offset of 0 and a range of 128.
This series adds a function to compute those offsets & ranges.
Cheers,
Lionel
Lionel Landwerlin (2):
spirv: move block_size() definition
spirv: compute push con
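A hedged sketch of the idea, assuming a hypothetical member-layout struct
rather than the series' actual SPIR-V types: derive the intrinsic's
offset/range from the accessed block member instead of the hardcoded
0/128.

#include <stdint.h>

struct member_layout { uint32_t offset; uint32_t size; };

/* Hypothetical helper: the tightest (offset, range) window covering a
   single push-constant member access. */
static void push_constant_window(const struct member_layout *m,
                                 uint32_t *offset, uint32_t *range)
{
    *offset = m->offset;   /* byte offset of the accessed member */
    *range  = m->size;     /* bytes the load can actually read */
}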
Signed-off-by: Lionel Landwerlin
---
src/compiler/spirv/vtn_variables.c | 85 +++--
src/intel/vulkan/anv_nir_lower_push_constants.c | 1 -
2 files changed, 66 insertions(+), 20 deletions(-)
diff --git a/src/compiler/spirv/vtn_variables.c
b/src/compiler/spirv/vt
Signed-off-by: Lionel Landwerlin
---
src/compiler/spirv/vtn_variables.c | 96 +++---
1 file changed, 48 insertions(+), 48 deletions(-)
diff --git a/src/compiler/spirv/vtn_variables.c
b/src/compiler/spirv/vtn_variables.c
index be64dd9550..f27d75cbec 100644
--- a/s
2016-12-17 21:25 GMT+01:00 Eric Anholt :
> Thomas Helland writes:
>
>> We are removing the entry right beforehand, so this can never succeed.
>> Tested with a shader-db run. No changes in instruction count.
>> ---
>> src/compiler/glsl/opt_copy_propagation.cpp | 6 --
>> 1 file changed, 6 dele
On Sunday, December 18, 2016 11:33:49 AM PST Chris Forbes wrote:
> I don't see any spec justification for masking this. dEQP is broken here.
> Implementations have the flexibility to retain more bits in the mask (and
> have more bits set in the initial state) than the depth of the deepest
> stencil
These are listed as Z+ in the GL spec, and often have values of
0xFFFFFFFF. For glGetFloat, we should return 4294967295.0 rather than
-1.0. Similarly, for glGetInteger64v, we should return 0xFFFFFFFF, not
the sign-extended 0xFFFFFFFFFFFFFFFF.
Fixes 6 dEQP tests matching the pattern
dEQP-GLES3.fu
The "State Tables" section of the OpenGL specification lists many values
as belonging to Z+ (non-negative integers), not Z (all integers).
For ordinary glGetInteger queries, this doesn't matter. However, when
accessing Z+ values via glGetFloat or glGetInteger64, we need to treat
the source value
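What this amounts to in spirit, sketched with hypothetical helper names:
widen Z+ state through an unsigned cast so an internal 0xFFFFFFFF
surfaces as 4294967295 rather than -1.

#include <stdint.h>

/* glGetFloat-style conversion for a Z+ value stored in a GLint slot. */
static float z_plus_to_float(int32_t stored)
{
    return (float)(uint32_t)stored;   /* 0xFFFFFFFF -> 4294967295, not -1 */
}

/* glGetInteger64v-style widening: zero-extend, don't sign-extend. */
static int64_t z_plus_to_int64(int32_t stored)
{
    return (int64_t)(uint32_t)stored; /* 0x00000000FFFFFFFF, not all-ones */
}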
GetFloat of integer valued things is supposed to perform a simple
int -> float conversion. INT_TO_FLOAT is not that. Instead, it
converts [-2147483648, 2147483647] to a normalized [-1.0, 1.0] float.
This is only used for COMPRESSED_TEXTURE_FORMATS, which nobody in
their right mind would try and
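The contrast being described, sketched with an illustrative normalization
(not necessarily Mesa's exact INT_TO_FLOAT formula):

#include <stdint.h>

/* Simple conversion: what GetFloat of an integer value should do. */
static float get_float_simple(int32_t v)
{
    return (float)v;                    /* 100 -> 100.0f */
}

/* Normalizing conversion in the spirit of INT_TO_FLOAT: maps
   [-2147483648, 2147483647] onto roughly [-1.0, 1.0]. */
static float get_float_normalized(int32_t v)
{
    return (float)v / 2147483647.0f;    /* 100 -> ~0.0000000466f */
}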
I think you missed the one in _mesa_GetFloati_v. With that fixed,
Reviewed-by: Ilia Mirkin
On Sun, Dec 18, 2016 at 12:02 AM, Kenneth Graunke wrote:
> GetFloat of integer valued things is supposed to perform a simple
> int -> float conversion. INT_TO_FLOAT is not that. Instead, it
> converts [
You've missed piping it through for a lot of functions (mostly the
*i_v variants). Is that because they shouldn't be used with pnames
that are non-indexed? If so, should probably remove the TYPE_INT_*
from them as well to avoid confusion. Alternatively, if they are
potentially needed, it would save
GetFloat of integer valued things is supposed to perform a simple
int -> float conversion. INT_TO_FLOAT is not that. Instead, it
converts [-2147483648, 2147483647] to a normalized [-1.0, 1.0] float.
This is only used for COMPRESSED_TEXTURE_FORMATS, which nobody in
their right mind would try and
The "State Tables" section of the OpenGL specification lists many values
as belonging to Z+ (non-negative integers), not Z (all integers).
For ordinary glGetInteger queries, this doesn't matter. However, when
accessing Z+ values via glGetFloat or glGetInteger64, we need to treat
the source value
On Sun, Dec 18, 2016 at 12:38 AM, Kenneth Graunke wrote:
> The "State Tables" section of the OpenGL specification lists many values
> as belonging to Z+ (non-negative integers), not Z (all integers).
>
> For ordinary glGetInteger queries, this doesn't matter. However, when
> accessing Z+ values v
Reviewed-by: Ilia Mirkin
On Sun, Dec 18, 2016 at 12:02 AM, Kenneth Graunke wrote:
> These are listed as Z+ in the GL spec, and often have values of
> 0xFFFFFFFF. For glGetFloat, we should return 4294967295.0 rather than
> -1.0. Similarly, for glGetInteger64v, we should return 0xFFFFFFFF, not
>