On 11/5/25 14:38, Liang, Prike wrote:
> 
> Regards,
>       Prike
> 
>> -----Original Message-----
>> From: Koenig, Christian <[email protected]>
>> Sent: Wednesday, November 5, 2025 8:50 PM
>> To: Liang, Prike <[email protected]>; [email protected]
>> Cc: Deucher, Alexander <[email protected]>
>> Subject: Re: [PATCH] drm/amdgpu: attach tlb fence to the PTs update
>>
>>
>>
>> On 11/5/25 13:14, Prike Liang wrote:
>>> Ensure the userq TLB flush is emitted only after the VM update
>>> finishes and the PT BOs have been annotated with bookkeeping fences.
>>>
>>> Suggested-by: Christian König <[email protected]>
>>> Signed-off-by: Prike Liang <[email protected]>
>>
>> Reviewed-by: Christian König <[email protected]>
>>
>> It could be that people start to complain that this results in extra
>> overhead, but that shouldn't be much of an issue.
> Without gating on the userq or KFD compute context, won't this add
> overhead to the legacy kernel queue case?

Yes, starting a worker each time is not much overhead, but checking all 
the VMIDs for the PASID is.

We could make it depend on the HW generation if that really becomes a problem.

Regards,
Christian.

> 
>> Regards,
>> Christian.
>>
>>> ---
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index db66b4232de0..79d687dee877 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -1062,7 +1062,7 @@ amdgpu_vm_tlb_flush(struct
>> amdgpu_vm_update_params *params,
>>>     }
>>>
>>>     /* Prepare a TLB flush fence to be attached to PTs */
>>> -   if (!params->unlocked && vm->is_compute_context) {
>>> +   if (!params->unlocked) {
>>>             amdgpu_vm_tlb_fence_create(params->adev, vm, fence);
>>>
>>>             /* Makes sure no PD/PT is freed before the flush */
> 
