On Tue, May 20, 2014 at 4:51 PM, Marek Olšák <mar...@gmail.com> wrote:
> On Tue, May 20, 2014 at 9:58 PM, Ilia Mirkin <imir...@alum.mit.edu> wrote:
>> Hello,
>>
>> I attempted doing this a while back, before I really understood what
>> was going on. I got some advice that went totally over my head, and I
>> dropped the issue. I think I'm much better prepared to tackle the
>> issue this time around.
>>
>> To recap, nv30 (and nv40) hw can't handle certain color/depth format
>> combinations. Specifically, a 16-bit color format must be paired with
>> a 16-bit depth format, and a 32-bit color format must be paired with a
>> 32-bit depth format (well, Z24S8). This HW also can't handle different
>> per-attachment sizes, and the "linearity" of all the attachments
>> must be the same. (Whether a surface is linear or not is _generally_
>> dictated by its w/h, but not always -- POT textures can sometimes end
>> up allocated linearly by e.g. the ddx, or something else.) The
>> different sizes are handled by not exposing ARB_fbo. However the rest
>> of the cases remain.
>>
>> Now that I kinda understand how things are structured, I was thinking
>> of doing the following:
>>
>> When rendering (i.e. draw_vbo & co) and the fbo has changed (and has
>> one of the problems above), I would instead put in a temporary texture
>> as the depth attachment. Before actually drawing, I would blit from
>> the real target texture into the temporary texture, and then when
>> rendering is done, blit back from the temp texture into the
>> target. This deals with the target texture getting modified between
>> draws with various blits/mapping/whatever.
>>
>> This means that you'll only get 16 bits of depth even if you ask for
>> 24 with a 16-bit color format, but the alternative seems too
>> complex/costly.
>>
>> So there are a few questions from this approach:
>>
>> 1. Where do I get the temporary texture from? (And more importantly --
>> when... what happens if allocation fails?)
>>
>> 2. Having to blit the depth texture back and forth on every draw seems
>> _really_ wasteful... anything I can do about that?
>
> You can do that in set_framebuffer_state. When binding, blit to a
> depth buffer which matches the colorbuffer format. When unbinding,
> blit back.
set_framebuffer_state doesn't allow an error to be returned. Should I
just print a warning and move on?

I guess I'm still not 100% on all the terminology -- what do you mean
exactly by bind/unbind? Do you mean the transfer_map/unmap stuff? So
basically I would blit once on set_framebuffer_state, and then blit
back and forth on resource map/unmap, and only ever render to the
"temporary" buffer without worrying about blitting while rendering?

>
> You can also drop support for 16-bit formats.

I assumed that these were required by some GL version... I also
presume that it's faster to use these. BTW, when I say 16-bit, I mean
like B5G6R5 or B5G5R5X1, not R16*.

>
> What about rendering to R16F, RG16F, and RGBA16F? Does R16F have to be
> coupled with a 16-bit depth buffer too?

Gooood question. I assume they want a 32-bit depth buffer. Although I
believe those are actually disabled for now (aka forever, unless
someone cares enough to turn them on), even though it does look like
there's HW support for R16G16B16A16_FLOAT and R32 versions. Or perhaps
they're disabled because they can't be used in conjunction with depth?
Not familiar enough with the hw restrictions, sorry.

  -ilia

_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev