To be on the safe side, I've adjusted the code to work with any number of
SDMA instances.
Christian.
Am 12.03.19 um 16:33 schrieb Deucher, Alexander:
I don't think Raven has a paging queue in the first place.
Alex
------------------------------------------------------------------------
*From:* amd-gfx <amd-gfx-boun...@lists.freedesktop.org> on behalf of
Kuehling, Felix <felix.kuehl...@amd.com>
*Sent:* Tuesday, March 12, 2019 11:29 AM
*To:* Christian König; amd-gfx@lists.freedesktop.org
*Subject:* RE: [PATCH 2/3] drm/amdgpu: free up the first paging queue
I think this would break Raven, which only has one SDMA engine.
Regards,
Felix
-----Original Message-----
From: amd-gfx <amd-gfx-boun...@lists.freedesktop.org> On Behalf Of
Christian König
Sent: Tuesday, March 12, 2019 8:38 AM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH 2/3] drm/amdgpu: free up the first paging queue
We need the first paging queue to handle page faults.
Signed-off-by: Christian König <christian.koe...@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 3ac5abe937f4..bed18e7bbc36 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -2266,7 +2266,7 @@ static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev)
 {
 	adev->mman.buffer_funcs = &sdma_v4_0_buffer_funcs;
 	if (adev->sdma.has_page_queue)
-		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].page;
+		adev->mman.buffer_funcs_ring = &adev->sdma.instance[1].page;
 	else
 		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
 }
@@ -2285,15 +2285,19 @@ static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 	unsigned i;
 
 	adev->vm_manager.vm_pte_funcs = &sdma_v4_0_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		if (adev->sdma.has_page_queue)
-			sched = &adev->sdma.instance[i].page.sched;
-		else
-			sched = &adev->sdma.instance[i].ring.sched;
-		adev->vm_manager.vm_pte_rqs[i] =
+	if (adev->sdma.has_page_queue) {
+		sched = &adev->sdma.instance[1].page.sched;
+		adev->vm_manager.vm_pte_rqs[0] =
 			&sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+		adev->vm_manager.vm_pte_num_rqs = 1;
+	} else {
+		for (i = 0; i < adev->sdma.num_instances; i++) {
+			sched = &adev->sdma.instance[i].ring.sched;
+			adev->vm_manager.vm_pte_rqs[i] =
+				&sched->sched_rq[DRM_SCHED_PRIORITY_KERNEL];
+		}
+		adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
 	}
-	adev->vm_manager.vm_pte_num_rqs = adev->sdma.num_instances;
 }
const struct amdgpu_ip_block_version sdma_v4_0_ip_block = {
--
2.17.1
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx