On 4/8/26 10:26, Pierre-Eric Pelloux-Prayer wrote:
> 
> 
> Le 07/04/2026 à 12:05, Christian König a écrit :
>> On 4/3/26 10:35, Pierre-Eric Pelloux-Prayer wrote:
>>> With this change we now have as many clear and move entities as we
>>> have sdma engines (limited to TTM_NUM_MOVE_FENCES).
>>>
>>> To enable load-balancing, this patch gives all entities access to all
>>> sdma schedulers.
>>>
>>> Signed-off-by: Pierre-Eric Pelloux-Prayer <[email protected]>
>>> Reviewed-by: Christian König <[email protected]>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 16 +++++++++-------
>>>   1 file changed, 9 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> index 83f6d00dc3a0..648ad344e89c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> @@ -2349,8 +2349,6 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>>           return;
>>>         if (enable) {
>>> -        struct drm_gpu_scheduler *sched;
>>> -
>>>           if (!adev->mman.num_buffer_funcs_scheds) {
>>>               dev_warn(adev->dev, "Not enabling DMA transfers for in kernel use");
>>>               return;
>>> @@ -2358,11 +2356,10 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>>             num_clear_entities = MIN(adev->mman.num_buffer_funcs_scheds, TTM_NUM_MOVE_FENCES);
>>>           num_move_entities = MIN(adev->mman.num_buffer_funcs_scheds, TTM_NUM_MOVE_FENCES);
>>> -        sched = adev->mman.buffer_funcs_scheds[0];
>>>           r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
>>>                             &adev->mman.default_entity,
>>>                             DRM_SCHED_PRIORITY_KERNEL,
>>> -                          &sched, 1, 0);
>>> +                          adev->mman.buffer_funcs_scheds, 1, 0);
>>
>> Why is num_schedulers still given as 1 here?
>>
> 
> Because I don't think multiple schedulers are useful for this entity. But if 
> you prefer, I can pass all available schedulers to all TTM entities (in which 
> case I'd remove the parameters from amdgpu_ttm_buffer_entity_init).

Ah! Yeah, that makes sense, but please add a comment explaining why we do that.

I completely missed that this is for the default_entity.

Regards,
Christian.
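
[Editor's note: for reference, the two changes requested in this thread — an
explanatory comment on the single-scheduler default_entity, and
DRM_SCHED_PRIORITY_KERNEL instead of DRM_SCHED_PRIORITY_NORMAL for the
clear/move entities — might look roughly like the sketch below. This is not
the actual v6 patch, and the comment wording is an assumption based on the
discussion above.]

	/*
	 * The default entity is a kernel-internal fallback; a single
	 * scheduler is enough for it. Load-balancing across all SDMA
	 * schedulers is only done for the clear and move entities.
	 */
	r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
					  &adev->mman.default_entity,
					  DRM_SCHED_PRIORITY_KERNEL,
					  adev->mman.buffer_funcs_scheds, 1, 0);
	...
	for (i = 0; i < num_clear_entities; i++) {
		r = amdgpu_ttm_buffer_entity_init(
			&adev->mman.gtt_mgr,
			&adev->mman.clear_entities[i],
			DRM_SCHED_PRIORITY_KERNEL,	/* was _NORMAL in v5 */
			adev->mman.buffer_funcs_scheds,
			adev->mman.num_buffer_funcs_scheds, 1);
		...
	}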

> 
>>>           if (r < 0) {
>>>               dev_err(adev->dev,
>>>                   "Failed setting up TTM entity (%d)\n", r);
>>> @@ -2380,8 +2377,11 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>>             for (i = 0; i < num_clear_entities; i++) {
>>>               r = amdgpu_ttm_buffer_entity_init(
>>> -                &adev->mman.gtt_mgr, &adev->mman.clear_entities[i],
>>> -                DRM_SCHED_PRIORITY_NORMAL, &sched, 1, 1);
>>> +                &adev->mman.gtt_mgr,
>>> +                &adev->mman.clear_entities[i],
>>> +                DRM_SCHED_PRIORITY_NORMAL,
>>
>> That should be DRM_SCHED_PRIORITY_KERNEL, same below.
> 
> OK, will update in v6.
> 
> Thanks,
> Pierre-Eric
> 
> 
>>
>> Regards,
>> Christian.
>>
>>> +                adev->mman.buffer_funcs_scheds,
>>> +                adev->mman.num_buffer_funcs_scheds, 1);
>>>                 if (r < 0) {
>>>                   for (j = 0; j < i; j++)
>>> @@ -2400,7 +2400,9 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>>               r = amdgpu_ttm_buffer_entity_init(
>>>                   &adev->mman.gtt_mgr,
>>>                   &adev->mman.move_entities[i],
>>> -                DRM_SCHED_PRIORITY_NORMAL, &sched, 1, 2);
>>> +                DRM_SCHED_PRIORITY_NORMAL,
>>> +                adev->mman.buffer_funcs_scheds,
>>> +                adev->mman.num_buffer_funcs_scheds, 2);
>>>                 if (r < 0) {
>>>                   for (j = 0; j < i; j++)
