On 02/19/2014 02:27 PM, Ian Romanick wrote:
> On 02/19/2014 12:08 PM, Kenneth Graunke wrote:
>> On 02/18/2014 09:48 PM, Chia-I Wu wrote:
>>> Since 73bc6061f5c3b6a3bb7a8114bb2e1ab77d23cfdb, Z16 support is
>>> not advertised for OpenGL ES contexts due to the terrible
>>> performance. It is still enabled for desktop GL because it was
>>> believed GL 3.0+ requires Z16.
>>>
>>> It turns out only GL 3.0 requires Z16, and that is corrected in
>>> later GL versions. In light of that, and per Ian's suggestion,
>>> stop advertising Z16 support by default, and add a drirc option,
>>> gl30_sized_format_rules, so that users can override.
>
>> I actually don't think that GL 3.0 requires Z16, either.
>
>> In glspec30.20080923.pdf, page 180, it says: "[...] memory
>> allocation per texture component is assigned by the GL to match the
>> allocations listed in tables 3.16-3.18 as closely as possible.
>> [...]
>
>> Required Texture Formats [...] In addition, implementations are
>> required to support the following sized internal formats.
>> Requesting one of these internal formats for any texture type will
>> allocate exactly the internal component sizes and types shown for
>> that format in tables 3.16-3.17:"
>
>> Notably, however, GL_DEPTH_COMPONENT16 does /not/ appear in table
>> 3.16 or table 3.17. It appears in table 3.18, where the "exact"
>> rule doesn't apply, and thus we fall back to the "closely as
>> possible" rule.
>
>> The confusing part is that the ordering of the tables in the PDF is:
>
>> Table 3.16 (pages 182-184)
>> Table 3.18 (bottom of page 184 to top of 185)
>> Table 3.17 (page 185)
>
>> I'm guessing that people saw table 3.16, then saw the one after
>> with DEPTH_COMPONENT* formats, and assumed it was 3.17. But it's
>> not.
>
> Yay latex! Thank you for putting things in random order because it
> fit better. :(
>
>> I think we should just drop Z16 support entirely, and I think we
>> should remove the requirement from the Piglit test.
>
> If the test is wrong, and it sounds like it is, then I'm definitely
> in favor of changing it.
>
> The reason to have Z16 is low-bandwidth GPUs in resource constrained
> environments. If an app specifically asks for Z16, then there's a
> non-zero (though possibly infinitesimal) probability they're doing it
> for a reason. For at least some platforms, isn't there "just" a
> work-around to implement to fix the performance issue? Doesn't the
> performance issue only affect some platforms to begin with?
>
> Maybe just change the check to
>
>    ctx->TextureFormatSupported[MESA_FORMAT_Z_UNORM16] =
>       ! platform has z16 performance issues;
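
(Concretely, I read that suggestion as something like the following in
the i965 surface-format setup, e.g. brw_init_surface_formats(). This is
only a sketch; "has_z16_performance_issues" is a made-up placeholder,
not an existing field or helper:)

   /* Sketch of the suggested check.  The placeholder predicate would
    * need to be filled in with a real per-platform condition.
    */
   bool has_z16_performance_issues = true;  /* true on everything today */

   ctx->TextureFormatSupported[MESA_FORMAT_Z_UNORM16] =
      !has_z16_performance_issues;
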
Currently, all platforms have Z16 performance issues. On Haswell and
later, we could potentially implement the PMA stall optimization, which
I believe would reduce(?) the problem. I'm not sure if it would
eliminate it though.

I think the best course of action is:

1. Fix the Piglit test to not require precise depth formats.
2. Disable Z16 on all generations.
3. Add a "to do" item for implementing the HSW+ PMA stall optimization.
4. Add a "to do" item for re-evaluating Z16 on HSW+ once that's done.

--Ken
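
P.S. For step 2, the change would presumably collapse to a one-liner in
the same surface-format setup code (illustrative only, not a tested
patch):

   /* Step 2: stop advertising Z16 everywhere; core Mesa will then pick
    * another supported depth format (e.g. Z24) for GL_DEPTH_COMPONENT16
    * requests.
    */
   ctx->TextureFormatSupported[MESA_FORMAT_Z_UNORM16] = false;

   /* Step 4 would revisit this on HSW+ once the PMA stall optimization
    * lands, e.g. something along the lines of (hypothetical condition):
    *
    *    ctx->TextureFormatSupported[MESA_FORMAT_Z_UNORM16] =
    *       brw->is_haswell || brw->gen >= 8;
    */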