On 3/26/23 12:19, Christian König wrote:
> On 25.03.23 at 15:58, Dmitry Osipenko wrote:
>> On 3/15/23 16:46, Dmitry Osipenko wrote:
>>> On 3/14/23 05:26, Dmitry Osipenko wrote:
>>>> @@ -633,7 +605,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>>>>           return ret;
>>>>       }
>>>>
>>>> +     dma_resv_lock(shmem->base.resv, NULL);
>>>>       ret = drm_gem_shmem_get_pages(shmem);
>>>> +     dma_resv_unlock(shmem->base.resv);
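For readers without the full patch at hand, below is a rough, self-contained sketch of the pattern the quoted hunk introduces: the page-acquisition step in the mmap path is bracketed by the GEM object's reservation lock instead of a drm-shmem private lock. The wrapper and the page stub are invented names for illustration; only dma_resv_lock()/dma_resv_unlock() and the drm_gem_shmem_object layout are real kernel interfaces, and drm_gem_shmem_get_pages() from the hunk is internal to the helper, so a stub stands in for it here.

#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>

/* Stand-in for the helper's internal page allocation step. */
static int my_get_pages_stub(struct drm_gem_shmem_object *shmem)
{
        return 0;
}

/* Hypothetical wrapper mirroring the shape of the quoted hunk. */
static int shmem_get_pages_resv_locked_sketch(struct drm_gem_shmem_object *shmem)
{
        int ret;

        /* Take the per-object reservation (dma-resv) lock so the page
         * acquisition serializes against dma-buf importers and exporters
         * on the same lock that now protects the rest of the object state. */
        dma_resv_lock(shmem->base.resv, NULL);
        ret = my_get_pages_stub(shmem);
        dma_resv_unlock(shmem->base.resv);

        return ret;
}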
Replace all drm-shmem locks with a GEM reservation lock. This makes the locking
consistent with the dma-buf locking convention, where importers are responsible
for holding the reservation lock for all operations performed on dma-bufs,
preventing deadlocks between dma-buf importers and exporters.
Suggested-by: D
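As an aside, a minimal sketch of the importer side of that convention (the function name is hypothetical; dma_resv_lock(), dma_resv_unlock() and dma_buf_map_attachment() are the real APIs): the importer takes the buffer's reservation lock around the mapping call, so the exporter's callbacks run with the same lock already held.

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/dma-resv.h>

/* importer_map_sketch() is a made-up name used for illustration only. */
static struct sg_table *importer_map_sketch(struct dma_buf_attachment *attach)
{
        struct sg_table *sgt;

        /* The importer takes the buffer's reservation lock... */
        dma_resv_lock(attach->dmabuf->resv, NULL);
        /* ...so the exporter's .map_dma_buf() callback is invoked with the
         * lock held and must not try to take it again, which is why the
         * shmem helper itself has to switch to this lock as well. */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        dma_resv_unlock(attach->dmabuf->resv);

        return sgt;
}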