On Mon, Mar 4, 2013 at 10:11 AM, Pohjolainen, Topi <topi.pohjolai...@intel.com> wrote:
> On Mon, Mar 04, 2013 at 09:56:34AM -0500, Kristian Høgsberg wrote:
>> On Mon, Mar 4, 2013 at 4:55 AM, Pohjolainen, Topi
>> <topi.pohjolai...@intel.com> wrote:
>> > On Fri, Mar 01, 2013 at 10:03:45AM -0500, Kristian Høgsberg wrote:
>> >> On Fri, Mar 1, 2013 at 3:51 AM, Pohjolainen, Topi
>> >> <topi.pohjolai...@intel.com> wrote:
>> >> > On Tue, Feb 26, 2013 at 04:05:25PM +0000, Tom Cooksey wrote:
>> >> >> Hi Topi,
>> >> >>
>> >> >> > The second, more or less questionable, part is the support for
>> >> >> > creating YUV buffers. In order to test YUV sampling, one needs a way
>> >> >> > of providing such buffers to the EGL stack. Here I chose to augment
>> >> >> > the dri driver backing gbm, as I couldn't come up with anything
>> >> >> > better. It may be helpful to take a look at the corresponding piglit
>> >> >> > test case and the framework support I've written for it.
>> >> >>
>> >> >> You might want to take a look at EGL_EXT_image_dma_buf_import[i], which
>> >> >> has been written specifically for this purpose. Though this does assume
>> >> >> you have a driver which supports exporting a YUV buffer it has
>> >> >> allocated with dma_buf, such as a v4l2 driver or even ion on Android.
>> >> >>
>> >> > It certainly looks good, addressing not only the individual plane setup
>> >> > but also allowing one to control the conversion coefficients and the
>> >> > subsampling position. From a piglit testing point of view, do you have
>> >> > any ideas where to allocate the buffers from? I guess people wouldn't be
>> >> > too happy seeing v4l2 tied into piglit, for example.
>> >>
>> >> Since you're already using Intel-specific ioctls to mmap the buffers,
>> >> I'd suggest you just go all the way and allocate using Intel-specific
>> >> ioctls (like my simple-yuv.c example).
>> >> I don't really see any other approach, but it's not pretty...
>> >>
>> > I used gbm buffer objects in order to match the logic later in
>> > 'dri2_drm_create_image_khr()', which expects the buffer to be of the type
>> > 'gbm_dri_bo' (gbm_bo) for the target EGL_NATIVE_PIXMAP_KHR. Giving drm
>> > buffer objects instead would require a new target, I guess?
>>
>> Right... I'd use the extension Tom suggests:
>>
>> http://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt
>>
>> which is mostly implemented by this patch:
>>
>> http://lists.freedesktop.org/archives/mesa-dev/2013-February/035429.html
>>
>> with just the EGL extension bits missing. That way, you're also not
>> dependent on any specific window system. As it is, your test has to run
>> under gbm; using the dma-buf import extension, it can run under any
>> window system.
>
> Just to clarify that I understood correctly: the actual creation of the
> buffer (and the dma_buf exporting) would still be via hardware-specific
> ioctls (in Intel's case, GEM)? Your and Tom's material address only the
> importing side, or did I miss something?
Yes, that's correct. You'll need Intel-specific create and export-to-fd
functions, but you are already mapping the bo using Intel-specific ioctls. So
I think it's cleaner to just have a chipset-specific function that creates the
bo and returns an fd, stride, etc., and from there on it's generic code where
you feed it into the dma-buf import function.

Kristian
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev