On 05/14/2012 07:30 AM, Paul Berry wrote:
> I'm trying to figure out how glBlitFramebuffer() is supposed to behave
> when the source rectangle exceeds the bounds of the read framebuffer.
> For example, if the read framebuffer has width 4 and height 1, and the
> draw framebuffer has width 100 and height 100, what should be the result
> of these two calls?
>
> glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT, GL_NEAREST);
> glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT, GL_LINEAR);
>
> (In other words, the source rect is 4 pixels wide with 1 pixel falling
> off the left hand edge of the read framebuffer, and the destination rect
> is 9 pixels wide with all pixels falling within the draw framebuffer).
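
For anyone who wants to reproduce this, the setup Paul describes is roughly the following. This is only a sketch: it assumes a bound GL 3.0 (or ARB_framebuffer_object) context, uses renderbuffer-backed FBOs, and omits error and completeness checks.

GLuint fbos[2], rbs[2];

glGenFramebuffers(2, fbos);
glGenRenderbuffers(2, rbs);

/* 4x1 color renderbuffer backing the read framebuffer */
glBindRenderbuffer(GL_RENDERBUFFER, rbs[0]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 4, 1);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
glFramebufferRenderbuffer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbs[0]);

/* 100x100 color renderbuffer backing the draw framebuffer */
glBindRenderbuffer(GL_RENDERBUFFER, rbs[1]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 100, 100);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbs[1]);

/* (fill the 4x1 read buffer with known data here, e.g. via glClear) */

/* Source rect [-1, 3) x [0, 1): one column hangs off the left edge of
 * the 4x1 read buffer.  Destination rect [0, 9) x [0, 1) lies entirely
 * inside the 100x100 draw buffer. */
glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT, GL_LINEAR);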


> Here is the relevant text from the spec (e.g. GL 4.2 core spec, p316):
>
> "The actual region taken from the read framebuffer is limited to the
> intersection of the source buffers being transferred, which may include
> the color buffer selected by the read buffer, the depth buffer, and/or
> the stencil buffer depending on mask. The actual region written to the
> draw framebuffer is limited to the intersection of the destination
> buffers being written, which may include multiple draw buffers, the
> depth buffer, and/or the stencil buffer depending on mask. Whether or
> not the source or destination regions are altered due to these limits,
> the scaling and offset applied to pixels being transferred is performed
> as though no such limits were present."

This is trying to describe the case where the buffers attached to the source FBO have different sizes. If the color buffer is 64x64, the depth buffer is 32x32, and the mask is GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, then the source region is treated as though it were 32x32. If the color buffer is 64x64, the depth buffer is 32x32, and the mask is only GL_COLOR_BUFFER_BIT, then the source region is treated as though it were 64x64.
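
Concretely, the read side of that mismatched-size case looks something like this (again just a sketch: it assumes a GL 3.0 / ARB_framebuffer_object context, where attachments of different sizes are allowed, and omits the matching draw-framebuffer setup and error checks):

GLuint fbo, color_rb, depth_rb;

glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &color_rb);
glGenRenderbuffers(1, &depth_rb);

glBindRenderbuffer(GL_RENDERBUFFER, color_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 64, 64);             /* 64x64 color */
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 32, 32); /* 32x32 depth */

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, color_rb);
glFramebufferRenderbuffer(GL_READ_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_rb);

/* Color + depth: the source region is limited to the 32x32 intersection
 * of the two attachments.  (Depth blits must use GL_NEAREST.) */
glBlitFramebuffer(0, 0, 64, 64, 0, 0, 64, 64,
                  GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);

/* Color only: the depth attachment doesn't participate, so the source
 * region is the full 64x64 color buffer. */
glBlitFramebuffer(0, 0, 64, 64, 0, 0, 64, 64,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);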

> And then later:
>
> "If a linear filter is selected and the rules of LINEAR sampling would
> require sampling outside the bounds of a source buffer, it is as though
> CLAMP_TO_EDGE texture sampling were being performed. If a linear filter
> is selected and sampling would be required outside the bounds of the
> specified source region, but within the bounds of a source buffer, the
> implementation may choose to clamp while sampling or not."

The last sentence here is also about the mismatched buffer size scenario. If there are two color buffers attached to the source FBO, one being 32x32 and the other being 64x64, a blit of

        glBlitFramebuffer(0, 0, 64, 64, 0, 0, 128, 128,
                        GL_COLOR_BUFFER_BIT,
                        GL_LINEAR);

would sample outside the 32x32 intersection region of the source buffers. However, one of the color buffers has pixel data outside that region. The implementation may or may not sample those pixels.
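
One way to set that up, assuming the 64x64 attachment is the one selected as the read buffer (sketch only; GL 3.0+ context, draw-framebuffer setup and error checks omitted):

GLuint fbo, rb32, rb64;

glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rb32);
glGenRenderbuffers(1, &rb64);

glBindRenderbuffer(GL_RENDERBUFFER, rb32);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 32, 32);
glBindRenderbuffer(GL_RENDERBUFFER, rb64);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 64, 64);

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rb32);
glFramebufferRenderbuffer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                          GL_RENDERBUFFER, rb64);
glReadBuffer(GL_COLOR_ATTACHMENT1);   /* read from the 64x64 attachment */

/* On the reading above, LINEAR samples past x = 32 or y = 32 fall
 * outside the 32x32 intersection region but still inside the 64x64
 * attachment, so the implementation may clamp there or not. */
glBlitFramebuffer(0, 0, 64, 64, 0, 0, 128, 128,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);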

> The behaviour I observe on my nVidia system is: in GL_NEAREST mode,
> destination pixels that map to a source pixel outside the read
> framebuffer are clipped out of the blit, and are left unchanged.  So, in

So, if the source framebuffer has a single 32x32 color buffer, the blit

        glBlitFramebuffer(0, 0, 64, 64, 0, 0, 128, 128,
                        GL_COLOR_BUFFER_BIT,
                        GL_LINEAR);

only modifies the destination pixels (0, 0) - (64, 64)?

While I can see that as being a valid design choice, it's not the one the ARB made. I can't find anything in the spec, even going back to GL_EXT_framebuffer_blit, to support this behavior.

> the GL_NEAREST call above, the first two destination pixels are left
> unchanged.  In GL_LINEAR mode, the same set of pixels is clipped off as
> in GL_NEAREST mode, and the remaining pixels are interpolated as though
> no clipping had occurred, with CLAMP_TO_EDGE behaviour occurring for
> situations where linear interpolation would have required reading a
> non-existent pixel from the read framebuffer.  Notably, this means that
> the nVidia driver is not simply reducing the size of the source and
> destination rectangles to eliminate the clipped off pixels, because
>
> glBlitFramebuffer(-1, 0, 3, 1, 0, 0, 9, 1, GL_COLOR_BUFFER_BIT, GL_LINEAR);
>
> does *not* produce equivalent interpolation to
>
> glBlitFramebuffer(0, 0, 3, 1, 2, 0, 9, 1, GL_COLOR_BUFFER_BIT, GL_LINEAR);
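
For reference, here's the scale-and-offset arithmetic behind that observation, using the usual pixel-center convention. This is just my own sketch of the mapping from destination pixels back to source coordinates, not anything taken from the spec or from Paul's mail:

#include <stdio.h>

/* Map a destination pixel center back to a source x coordinate using
 * the blit's scale and offset, as though no clipping had occurred. */
static float src_x(float dx, int srcX0, int srcX1, int dstX0, int dstX1)
{
    float scale = (float)(srcX1 - srcX0) / (float)(dstX1 - dstX0);
    return srcX0 + (dx - dstX0) * scale;
}

int main(void)
{
    int dx;

    for (dx = 0; dx < 9; dx++) {
        /* First call: src rect [-1, 3), dst rect [0, 9). */
        printf("dst pixel %d: call 1 samples src x = %6.3f", dx,
               src_x(dx + 0.5f, -1, 3, 0, 9));
        /* Second call: src rect [0, 3), dst rect [2, 9) -- it never
         * writes destination pixels 0 and 1 at all. */
        if (dx >= 2)
            printf(", call 2 samples src x = %6.3f",
                   src_x(dx + 0.5f, 0, 3, 2, 9));
        printf("\n");
    }
    return 0;
}

For example, destination pixel 2 samples around src x = 0.111 in the first call but around 0.214 in the second, so the interpolation weights genuinely differ between the two.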


> Mesa, on the other hand, never clips.  The behaviour of destination
> pixels that map to a source pixel outside the read framebuffer depends
> on whether the read framebuffer is backed by a texture or a
> renderbuffer.  If it's backed by a texture, then those pixels are
> rendered with CLAMP_TO_EDGE behaviour, regardless of whether the blit is
> GL_LINEAR or GL_NEAREST.  If it's backed by a renderbuffer, then garbage
> is written to those pixels.

I can't find anything in the spec to support writing garbage either. Any idea what the garbage is? I'm a little surprised that the behavior is different for textures and renderbuffers.

> Any opinions on what the correct behaviour is?