On 11/09/14 12:52, Chris Wilson wrote:
> On Thu, Sep 11, 2014 at 12:33:41PM +0100, Lionel Landwerlin wrote:
>> When using Mesa and LibVA in the same process, one would like to be
>> able to bind buffers from the output of the decoder to a GL texture
>> through an EGLImage.
>>
>> LibVA can reuse buffers allocated by Gbm through a file descriptor. It
>> will then wrap it into a drm_intel_bo with
>> drm_intel_bo_gem_create_from_prime().
>>
>> Given both libraries are using libdrm to allocate and use buffer
>> objects, there is a need to have the buffer objects properly
>> refcounted. That is possible if both APIs use the same drm_intel_bo
>> objects, but that also requires that both APIs use the same
>> drm_intel_bufmgr object.
> The description is wrong though. Reusing buffers exported and imported
> through a dmabuf should work and be correctly refcounted already.
>
> This patch adds the ability to use the same /dev/dri/card0 device fd
> between two libraries. This implies that they share the same context and
> address space, which is probably not what you want, but nevertheless
> seems sensible if they are sharing the device fd in the first place.
That's what I meant, sorry if it was unclear.

> I suspect this may break unwary users such as igt, which would fork
> after creating a bufmgr, close the fds, but then open their own device
> fd with the same fd as before. Not a huge issue, just something to check
> in case it causes some fun fallout.

Will have a look, thanks.

>> This patch modifies drm_intel_bufmgr_gem_init() so that, given a file
>> descriptor, it will look for an already existing drm_intel_bufmgr
>> using the same file descriptor and return that object.
>>
>> Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin at intel.com>
>> ---
>>  intel/intel_bufmgr_gem.c | 100 +++++++++++++++++++++++++++++++++++++++++------
>>  1 file changed, 88 insertions(+), 12 deletions(-)
>>
>> diff --git a/intel/intel_bufmgr_gem.c b/intel/intel_bufmgr_gem.c
>> index 0e1cb0d..125c81c 100644
>> --- a/intel/intel_bufmgr_gem.c
>> +++ b/intel/intel_bufmgr_gem.c
>> @@ -94,6 +94,8 @@ struct drm_intel_gem_bo_bucket {
>>  typedef struct _drm_intel_bufmgr_gem {
>>  	drm_intel_bufmgr bufmgr;
>>
>> +	atomic_t refcount;
>> +
>>  	int fd;
>>
>>  	int max_relocs;
>> @@ -3186,6 +3188,85 @@ drm_intel_bufmgr_gem_set_aub_annotations(drm_intel_bo *bo,
>>  	bo_gem->aub_annotation_count = count;
>>  }
>>
>> +static pthread_mutex_t bufmgr_list_mutex = PTHREAD_MUTEX_INITIALIZER;
>> +static drm_intel_bufmgr_gem **bufmgr_list = NULL;
>> +static unsigned bufmgr_list_size = 0, bufmgr_list_nb;
>> +
>> +static drm_intel_bufmgr_gem *
>> +drm_intel_bufmgr_gem_find_or_create_for_fd(int fd, int *found)
>> +{
>> +	drm_intel_bufmgr_gem *bufmgr_gem;
>> +
>> +	assert(pthread_mutex_lock(&bufmgr_list_mutex) == 0);
>> +
>> +	if (bufmgr_list == NULL) {
> Just use an embedded list rather than an array, that would greatly simplify
> the search, creation and deletion.
> -Chris

I tried to use the embedded list, but from my understanding I need the
embedded structure at the top of the bufmgr struct. Is that possible?
Sounds like an ABI break.

Thanks,

- Lionel
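
P.S.: To make the question a bit more concrete, here is a minimal sketch of
how I understand the embedded-list variant, assuming the drmMMListHead
helpers from libdrm_lists.h (already used by intel_bufmgr_gem.c for the vma
and name lists, as far as I can tell) plus a hypothetical "managers" member
added to the private drm_intel_bufmgr_gem struct. The function name
drm_intel_bufmgr_gem_find() below is purely illustrative, not the actual
patch:

#include "libdrm_lists.h"

/* Assumed addition to the private struct (not the exported
 * drm_intel_bufmgr), so the public ABI would stay untouched:
 *
 *	typedef struct _drm_intel_bufmgr_gem {
 *		drm_intel_bufmgr bufmgr;
 *		atomic_t refcount;
 *		drmMMListHead managers;	<- hypothetical list link
 *		...
 *	} drm_intel_bufmgr_gem;
 */

static pthread_mutex_t bufmgr_list_mutex = PTHREAD_MUTEX_INITIALIZER;
/* Empty list head: prev and next both point back at the head itself. */
static drmMMListHead bufmgr_list = { &bufmgr_list, &bufmgr_list };

/* Look up an existing bufmgr for this fd and take a reference on it.
 * The caller is expected to hold bufmgr_list_mutex. */
static drm_intel_bufmgr_gem *
drm_intel_bufmgr_gem_find(int fd)
{
	drm_intel_bufmgr_gem *bufmgr_gem;

	/* DRMLISTFOREACHENTRY recovers the containing struct from the
	 * offset of the "managers" member, so the list link does not
	 * have to be the first field of the struct. */
	DRMLISTFOREACHENTRY(bufmgr_gem, &bufmgr_list, managers) {
		if (bufmgr_gem->fd == fd) {
			atomic_inc(&bufmgr_gem->refcount);
			return bufmgr_gem;
		}
	}

	return NULL;
}

drm_intel_bufmgr_gem_init() would then DRMLISTADD(&bufmgr_gem->managers,
&bufmgr_list) when it creates a new manager, and the final unreference
would DRMLISTDEL(&bufmgr_gem->managers), both while holding
bufmgr_list_mutex. If the offsetof-based list macros really make the
placement of the link irrelevant, only the private struct changes and
there would be no ABI break, which is the part I would like to confirm.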