Great to see this patch making progress.
I've tested it extensively, and unfortunately, I've noticed a memory
leak in surface_gl_create_texture_from_fd(). The leak is hard to spot
since the memory is owned by the GPU driver.
On Intel hardware, it's possible to observe the leak using:
cat /sys/module/i915/refcnt
or
xpu-smi ps
In one of my use cases, which involves frequent scanout disable/enable
cycles, the leak is quite apparent. However, in more typical scenarios,
it might be difficult to catch.
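For example, a tiny watcher along these lines (illustrative only, using
the i915 refcnt path mentioned above) makes the trend visible while
cycling scanout:

/* Illustrative only: print the i915 module refcount once per second.
 * A value that keeps growing across scanout disable/enable cycles
 * points at the leak described above. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[32];

    for (;;) {
        FILE *f = fopen("/sys/module/i915/refcnt", "r");

        if (f) {
            if (fgets(buf, sizeof(buf), f)) {
                fputs(buf, stdout);
            }
            fclose(f);
        }
        fflush(stdout);
        sleep(1);
    }
}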
The issue stems from the mem_obj not being deleted after use. I’ve put
together a minimal modification to address it:
On 15.05.25 04:45, Vivek Kasireddy wrote:
There are cases where we do not want the memory layout of a texture to
be tiled as the component processing the texture would not know how to
de-tile either via software or hardware. Therefore, ensuring that the
memory backing the texture has a linear layout is absolutely necessary
in these situations.
Cc: Gerd Hoffmann <kra...@redhat.com>
Cc: Marc-André Lureau <marcandre.lur...@redhat.com>
Cc: Dmitry Osipenko <dmitry.osipe...@collabora.com>
Cc: Frediano Ziglio <fredd...@gmail.com>
Cc: Dongwon Kim <dongwon....@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasire...@intel.com>
---
include/ui/console.h | 2 ++
ui/console-gl.c | 32 ++++++++++++++++++++++++++++++++
2 files changed, 34 insertions(+)
diff --git a/include/ui/console.h b/include/ui/console.h
index 46b3128185..5cfa6ae215 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -422,6 +422,8 @@ bool console_gl_check_format(DisplayChangeListener *dcl,
pixman_format_code_t format);
void surface_gl_create_texture(QemuGLShader *gls,
DisplaySurface *surface);
+bool surface_gl_create_texture_from_fd(DisplaySurface *surface,
+ int fd, GLuint *texture);
void surface_gl_update_texture(QemuGLShader *gls,
DisplaySurface *surface,
int x, int y, int w, int h);
diff --git a/ui/console-gl.c b/ui/console-gl.c
index 103b954017..97f7989651 100644
--- a/ui/console-gl.c
+++ b/ui/console-gl.c
@@ -25,6 +25,7 @@
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
+#include "qemu/error-report.h"
#include "ui/console.h"
#include "ui/shader.h"
@@ -96,6 +97,37 @@ void surface_gl_create_texture(QemuGLShader *gls,
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
+bool surface_gl_create_texture_from_fd(DisplaySurface *surface,
+ int fd, GLuint *texture)
+{
+ unsigned long size = surface_stride(surface) * surface_height(surface);
+ GLenum err = glGetError();
+ GLuint mem_obj = 0;
+
+ if (!epoxy_has_gl_extension("GL_EXT_memory_object") ||
+ !epoxy_has_gl_extension("GL_EXT_memory_object_fd")) {
+ return false;
+ }
+
+#ifdef GL_EXT_memory_object_fd
+ glCreateMemoryObjectsEXT(1, &mem_obj);
+ glImportMemoryFdEXT(mem_obj, size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);
+
+ err = glGetError();
+ if (err != GL_NO_ERROR) {
+ if (mem_obj) {
+ glDeleteMemoryObjectsEXT(1, &mem_obj);
+ }
+ error_report("spice: cannot import memory object from fd");
+ return false;
+ }
+
+ glGenTextures(1, texture);
+ glBindTexture(GL_TEXTURE_2D, *texture);
+ glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT,
+ GL_LINEAR_TILING_EXT);
+ glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, surface_width(surface),
+ surface_height(surface), mem_obj, 0);
+ glDeleteMemoryObjectsEXT(1, &mem_obj);
+#endif
+ return *texture > 0 && glGetError() == GL_NO_ERROR;
+}
+
void surface_gl_update_texture(QemuGLShader *gls,
DisplaySurface *surface,
int x, int y, int w, int h)
That said, my OpenGL knowledge is somewhat limited, and the
documentation wasn't entirely clear to me on whether deleting the memory
object while the texture is still in use is always safe. Based on a
quick look at the iris and llvmpipe implementations, it appears to be
acceptable.
If that's not the case, an alternative fix could follow this approach
instead:
https://gitlab.uni-freiburg.de/opensourcevdi/qemu/-/commit/4ca806871c141089be16af25c1820d3e04f3e27d
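Roughly, the idea there (hypothetical names below, not taken from the
linked commit) would be to hand the memory object back to the caller
instead of deleting it right away, and tear it down together with the
texture, e.g.:

/* Sketch only, assuming surface_gl_create_texture_from_fd() returns
 * the memory object handle to the caller instead of deleting it. */
static void surface_gl_destroy_texture_and_memobj(GLuint *texture,
                                                  GLuint *mem_obj)
{
    if (*texture) {
        glDeleteTextures(1, texture);
        *texture = 0;
    }
    if (*mem_obj) {
        /* The memory object is no longer needed once the texture is gone. */
        glDeleteMemoryObjectsEXT(1, mem_obj);
        *mem_obj = 0;
    }
}

That keeps ownership explicit and avoids relying on the driver
tolerating a delete while the texture is still live.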
Greetings,
Michael