On Fri, Apr 29, 2016 at 11:42 AM, Ian Romanick <i...@freedesktop.org> wrote:
> On 02/09/2016 12:02 AM, Ian Romanick wrote:
>> I submitted a public spec bug for this issue:
>>
>> https://www.khronos.org/bugzilla/show_bug.cgi?id=1460
>>
>> I'm investigating whether a similar bug is needed for the SPIR-V
>> specification.
>>
>> I think an argument can be made for either the flush-to-zero or
>> non-flush-to-zero behavior in the case of unpackHalf2x16 and (possibly)
>> packHalf2x16. The only place in the GLSL 4.50.5 specification that
>> mentions subnormal values is section 4.7.1 (Range and Precision):
>>
>>     "The precision of stored single- and double-precision floating-point
>>     variables is defined by the IEEE 754 standard for 32-bit and 64-bit
>>     floating-point numbers. ... Any denormalized value input into a
>>     shader or potentially generated by any operation in a shader can be
>>     flushed to 0."
>>
>> Since there is no half-precision type in desktop GLSL, there is no
>> mention of 16-bit subnormal values. As Roland mentioned before, all
>> 16-bit subnormal values are 32-bit normal values.
>>
>> As I mentioned before, from the point of view of an application
>> developer, the flush-to-zero behavior for unpackHalf2x16 is both
>> surprising and awful. :)
>>
>> While I think an argument can be made for either behavior, I also think
>> the argument for the non-flush-to-zero behavior is slightly stronger.
>> The case for flush-to-zero based on the above spec quotation fails for
>> two reasons. First, the "input into [the] shader" is not a subnormal
>> number; it is an integer. Second, the "[value] potentially generated
>> by [the] operation" is not subnormal in single precision.
>>
>> We've already determined that NVIDIA's closed-source drivers do not
>> flush to zero. I'm curious to know what AMD's closed-source drivers do
>> for 16-bit subnormal values supplied to unpackHalf2x16. If they do not
>> flush to zero, then you had better believe that applications depend on
>> that behavior... and that also means that it doesn't matter very much
>> what piglit does or what the spec does (or does not) say. This is the
>> sort of situation where the spec changes to match application
>> expectations and shipping implementations... and Mesa drivers change to
>> follow. This isn't even close to the first time through that loop.
>
> There is finally a conclusion to this issue. The GLSL ES 3.2 spec says:
>
>     "Returns a two-component floating-point vector with components
>     obtained by unpacking a 32-bit unsigned integer into a pair of
>     16-bit values, interpreting those values as 16-bit floating-point
>     numbers according to the OpenGL ES Specification, and converting
>     them to 32-bit floating-point values."
>
> Since the OpenGL ES specification allows 16-bit floating-point denorms
> to be flushed to zero, the "interpreting those values as 16-bit
> floating-point numbers according to the OpenGL ES Specification" wording
> allows the flush to zero.
>
> The bug has been closed with the following comment:
>
>> For current versions and hardware, we have decided that both behaviors
>> have to be supported: it is okay to flush to zero, and it is okay not
>> to flush to zero.
>>
>> Because the general desire is to preserve values, we are looking at
>> revisiting this current state in future versions.
>
> So... flushing is okay now, but it may not be okay in the future. I
> don't think that helps answer the question of whether or not we should
> keep the piglit test that expects non-flush-to-zero behavior.
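For reference, the non-flush-to-zero behavior discussed above is easy to express in software. The sketch below is an illustration of the semantics, not radeonsi's or any driver's actual code path, and the helper names (half_to_float, unpack_half_2x16) are made up for this example. The key fact it demonstrates is the one Roland pointed out: every 16-bit subnormal (biased exponent 0, value mant * 2^-24) normalizes exactly into fp32's normal range, since even the smallest, 2^-24, is far above fp32's smallest normal value, 2^-126.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Convert one 16-bit half-precision value to a 32-bit float,
     * preserving 16-bit subnormals instead of flushing them to zero. */
    static float
    half_to_float(uint16_t h)
    {
       uint32_t sign = (uint32_t)(h & 0x8000) << 16;
       uint32_t exp  = (h >> 10) & 0x1f;
       uint32_t mant = h & 0x3ff;
       uint32_t bits;

       if (exp == 0x1f) {
          /* Inf or NaN: force the maximum fp32 exponent, keep the
           * mantissa bits (so NaN payloads survive). */
          bits = sign | 0x7f800000 | (mant << 13);
       } else if (exp != 0) {
          /* Normal half: rebias the exponent from 15 to 127. */
          bits = sign | ((exp + 127 - 15) << 23) | (mant << 13);
       } else if (mant != 0) {
          /* 16-bit subnormal: value is mant * 2^-24.  Shift the mantissa
           * up until its leading 1 reaches bit 10, adjusting the fp32
           * exponent to match.  The result is always a normal fp32. */
          int e = -1;
          do {
             e++;
             mant <<= 1;
          } while ((mant & 0x400) == 0);
          bits = sign | ((127 - 15 - e) << 23) | ((mant & 0x3ff) << 13);
       } else {
          /* Signed zero. */
          bits = sign;
       }

       float f;
       memcpy(&f, &bits, sizeof(f));
       return f;
    }

    /* Non-flush-to-zero unpackHalf2x16 semantics: the low 16 bits of the
     * integer give .x, the high 16 bits give .y. */
    static void
    unpack_half_2x16(uint32_t u, float *x, float *y)
    {
       *x = half_to_float(u & 0xffff);
       *y = half_to_float(u >> 16);
    }

    int
    main(void)
    {
       float x, y;

       /* Low half 0x0001 is the smallest 16-bit subnormal, 2^-24; high
        * half 0x3c00 is 1.0.  A flush-to-zero implementation would
        * return 0.0 for x instead. */
       unpack_half_2x16(0x3c000001u, &x, &y);
       printf("x = %a (expect 0x1p-24), y = %a (expect 0x1p+0)\n", x, y);
       return 0;
    }

This is also why the "[value] potentially generated by [the] operation" argument in the quoted mail holds: the fp32 result of the conversion is never itself subnormal, so the GLSL 4.50 denormal-flushing allowance does not obviously apply to it.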
Thanks for the info. Since radeonsi now preserves 16-bit denorms on all chips, we should be fine.

Marek