On 1/25/24 12:49, Boris Brezillon wrote:
> On Fri,  5 Jan 2024 21:46:24 +0300
> Dmitry Osipenko <dmitry.osipe...@collabora.com> wrote:
> 
>> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
>> @@ -328,6 +328,7 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
>>      struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
>>      struct sg_table *sgt;
>>      int prot = IOMMU_READ | IOMMU_WRITE;
>> +    int ret = 0;
>>  
>>      if (WARN_ON(mapping->active))
>>              return 0;
>> @@ -335,15 +336,32 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
>>      if (bo->noexec)
>>              prot |= IOMMU_NOEXEC;
>>  
>> +    if (!obj->import_attach) {
>> +            /*
>> +             * Don't allow shrinker to move pages while pages are mapped.
>> +             * It's fine to move pages afterwards because shrinker will
>> +             * take care of unmapping pages during eviction.
>> +             */
> 
> That's not exactly what this shmem_pin() is about, is it? I think it's
> here to meet the drm_gem_shmem_get_pages_sgt() rule stating that pages
> must be pinned while the sgt returned by drm_gem_shmem_get_pages_sgt()
> is manipulated. You actually unpin the GEM just after the mmu_map_sg()
> call, which means pages could very well be reclaimed while the MMU
> still has a mapping referencing those physical pages. And that's fine,
> because what's supposed to protect against that is the fence we
> register to the GEM resv at job submission time.

The comment indeed needs to be improved, thanks. The pin only needs to
cover the window where the sgt is being walked, so something like:

s/are mapped/are in the process of being mapped/

-- 
Best regards,
Dmitry

