On 2/19/2025 12:21 PM, Lazar, Lijo wrote:
> 
> 
> On 2/19/2025 11:50 AM, jesse.zh...@amd.com wrote:
>> From: "jesse.zh...@amd.com" <jesse.zh...@amd.com>
>>
>> - Modify the VM invalidation engine allocation logic to handle SDMA page rings.
>>   SDMA page rings now share the VM invalidation engine with their corresponding
>>   SDMA gfx rings instead of allocating a separate engine. This conserves the
>>   limited pool of VM invalidation engines and avoids running out of them.
>>
>> - Add synchronization for GPU TLB flush operations in gmc_v9_0.c.
>>   Use spin_lock and spin_unlock to prevent race conditions during TLB flush
>>   operations, improving the stability of the driver when multiple threads
>>   flush concurrently.
>>
>> Signed-off-by: Jesse Zhang <jesse.zh...@amd.com>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 9 +++++++++
>>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c   | 2 ++
>>  2 files changed, 11 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> index cb914ce82eb5..013d31f2794b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> @@ -601,8 +601,17 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
>>                      return -EINVAL;
>>              }
>>  
>> +    if (ring->funcs->type == AMDGPU_RING_TYPE_SDMA &&
> 
> I think it would be better to combine the ring type check with something like this:
> 
> bool amdgpu_sdma_is_page_queue(struct amdgpu_device *adev,
>                                struct amdgpu_ring *ring)
> {
>         int i = ring->me;
> 
>         if (!adev->sdma.has_page_queue || i > adev->sdma.num_instances)

Correction -
        (i >= adev->sdma.num_instances)

Thanks,
Lijo

>                 return false;
> 
>         return (ring == &adev->sdma.instance[i].page);
> }
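
Pulling the suggested helper and the bounds-check correction above together, a
minimal sketch could look like this (placement, e.g. in amdgpu_sdma.c, is
hypothetical; all fields come from the code quoted in this thread):

/* Sketch only: the suggested helper with the corrected instance bounds check. */
bool amdgpu_sdma_is_page_queue(struct amdgpu_device *adev,
                               struct amdgpu_ring *ring)
{
        int i = ring->me;

        /* No page queues on this ASIC, or SDMA instance index out of range. */
        if (!adev->sdma.has_page_queue || i >= adev->sdma.num_instances)
                return false;

        /* True only when this ring is the page ring of SDMA instance i. */
        return ring == &adev->sdma.instance[i].page;
}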
> 
> Thanks,
> Lijo
> 
>> +        adev->sdma.has_page_queue &&
>> +        (strncmp(ring->name, "sdma", 4) == 0)) {
>> +            /* Do not allocate a separate VM invalidation engine for SDMA page rings.
>> +             * They share the VM invalidation engine with the SDMA gfx ring.
>> +             */
>> +            ring->vm_inv_eng = inv_eng - 1;
>> +    } else {
>>              ring->vm_inv_eng = inv_eng - 1;
>>              vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
>> +    }
>>  
>>              dev_info(adev->dev, "ring %s uses VM inv eng %u on hub %u\n",
>>                       ring->name, ring->vm_inv_eng, ring->vm_hub);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> index 2aa87fdf715f..2599da8677da 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> @@ -1000,6 +1000,7 @@ static uint64_t gmc_v9_0_emit_flush_gpu_tlb(struct amdgpu_ring *ring,
>>       * to WA the Issue
>>       */
>>  
>> +    spin_lock(&adev->gmc.invalidate_lock);
>>      /* TODO: It needs to continue working on debugging with semaphore for GFXHUB as well. */
>>      if (use_semaphore)
>>              /* a read return value of 1 means semaphore acuqire */
>> @@ -1030,6 +1031,7 @@ static uint64_t gmc_v9_0_emit_flush_gpu_tlb(struct amdgpu_ring *ring,
>>              amdgpu_ring_emit_wreg(ring, hub->vm_inv_eng0_sem +
>>                                    hub->eng_distance * eng, 0);
>>  
>> +    spin_unlock(&adev->gmc.invalidate_lock);
>>      return pd_addr;
>>  }
>>  
> 
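
For illustration, the hunk in amdgpu_gmc_allocate_vm_inv_eng() could then use
that helper in place of the strncmp() ring-name check. This is only a sketch of
the reviewer's suggestion, not the posted patch; the branch bodies are kept as
in the quoted hunk:

        /* Sketch: only the condition of the quoted hunk changes. */
        if (amdgpu_sdma_is_page_queue(adev, ring)) {
                /* Per the posted patch, the SDMA page ring does not reserve an
                 * extra bit in vm_inv_engs[vmhub]; it shares the invalidation
                 * engine used by the SDMA gfx ring.
                 */
                ring->vm_inv_eng = inv_eng - 1;
        } else {
                ring->vm_inv_eng = inv_eng - 1;
                vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
        }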
