On 26.06.2017 at 22:12, Felix Kuehling wrote:
I'm wondering what makes this possible. Let me quote the last discussion
we had about GART:
On 17-04-04 06:26 PM, Felix Kuehling wrote:
Even with GART address space being allocated on demand, it still seems
to be limiting the maximum available system memory that can be allocated
from TTM. We have a test that allocates a bunch of 128MB buffers. On a
system with 32GB of system memory and a 4GB GPU it can only get 31 buffers,
so a bit under 4GB. It looks like BOs remain bound to GART after being
initialized or migrated to GTT. For KFD that limits the amount of usable
system memory; for amdgpu_cs, I think it limits the amount of system
memory that can be used in a single command submission.
That's why I added a new limit instead of modifying the existing one. 
The gartsize parameter still works as it did before.
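For reference, Felix's numbers line up with the old behaviour: 31 x 128MB = 3968MB, which is just under the 4GB GART. The stand-alone sketch below (user-space C, not kernel code; the 4GB GTT size is only an illustrative assumption) mirrors how the effective system-domain limit is derived in this patch, i.e. the gartlimit parameter is floored at 32MB in amdgpu_check_arguments() and clamped to the full GTT size in amdgpu_gart_set_defaults():

/* Stand-alone sketch of the limit derivation; values are illustrative only. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t gtt_size = 4096ULL << 20;      /* assume a 4GB GTT domain */
        unsigned int gart_sys_limit = 256;      /* gartlimit default, in MB */

        /* floor applied by amdgpu_check_arguments() */
        if (gart_sys_limit < 32)
                gart_sys_limit = 32;

        /* clamp applied by amdgpu_gart_set_defaults() */
        uint64_t gtt_sys_limit = (uint64_t)gart_sys_limit << 20;
        if (gtt_sys_limit > gtt_size)
                gtt_sys_limit = gtt_size;

        printf("effective system domain limit: %llu MB\n",
               (unsigned long long)(gtt_sys_limit >> 20));
        return 0;
}
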
On 17-04-05 02:55 AM, Christian König wrote:
Are these two effects intentional?
Yes, I thought about dropping this as well, but testing showed that it is
still necessary.

The total amount of memory bound to the GPU must be limited by the GART
size, otherwise the swapping code won't work any more. E.g.
suspend/resume fails immediately if I remove that.
Has that changed? I don't remember seeing any changes to that effect.
No, there are still a bunch of problems to solve. I should have put a 
WIP mark on that patch.
The maximum BO size is limited by this patch, and that will probably still
result in a bunch of S3 problems.
But as long as you don't care about those limitations it should work.
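
If you want to experiment with that, the limit should be tunable like gartsize: the patch registers it as the gartlimit module parameter, so something like amdgpu.gartlimit=1024 on the kernel command line (or the equivalent modprobe option) ought to raise it, at the cost of the visible VRAM saving mentioned in the commit message. That is untested and only inferred from the module_param_named() hunk below.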

Christian.

Regards,
   Felix


On 17-06-26 09:39 AM, Christian König wrote:
From: Christian König <christian.koe...@amd.com>

Limit the size of the GART table for the system domain.

This saves us a bunch of visible VRAM, but also limits the maximum BO size
we can swap out.

Signed-off-by: Christian König <christian.koe...@amd.com>
---
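(A rough, back-of-the-envelope estimate of the visible VRAM saving, assuming
4KB GPU pages and 8-byte GART entries; the numbers are illustrative only:

    4GB GART table:   (4GB / 4KB)   * 8 bytes = 8MB of visible VRAM
    256MB GART table: (256MB / 4KB) * 8 bytes = 512KB of visible VRAM)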
  drivers/gpu/drm/amd/amdgpu/amdgpu.h         | 2 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 6 ++++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c     | 4 ++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c    | 8 ++++++--
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     | 2 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 6 ++++--
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     | 6 +++---
  7 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index ab1dad2..a511029 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -76,6 +76,7 @@
  extern int amdgpu_modeset;
  extern int amdgpu_vram_limit;
  extern int amdgpu_gart_size;
+extern unsigned amdgpu_gart_sys_limit;
  extern int amdgpu_moverate;
  extern int amdgpu_benchmarking;
  extern int amdgpu_testing;
@@ -602,6 +603,7 @@ struct amdgpu_mc {
        u64                     mc_vram_size;
        u64                     visible_vram_size;
        u64                     gtt_size;
+       u64                     gtt_sys_limit;
        u64                     gtt_start;
        u64                     gtt_end;
        u64                     vram_start;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 44484bb..b6edb83 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1122,6 +1122,12 @@ static void amdgpu_check_arguments(struct amdgpu_device *adev)
                }
        }
+       if (amdgpu_gart_sys_limit < 32) {
+               dev_warn(adev->dev, "gart sys limit (%d) too small\n",
+                        amdgpu_gart_sys_limit);
+               amdgpu_gart_sys_limit = 32;
+       }
+
        amdgpu_check_vm_size(adev);
        amdgpu_check_block_size(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 5a1d794..907ae5e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -75,6 +75,7 @@
int amdgpu_vram_limit = 0;
  int amdgpu_gart_size = -1; /* auto */
+unsigned amdgpu_gart_sys_limit = 256;
  int amdgpu_moverate = -1; /* auto */
  int amdgpu_benchmarking = 0;
  int amdgpu_testing = 0;
@@ -124,6 +125,9 @@ module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
  MODULE_PARM_DESC(gartsize, "Size of PCIE/IGP gart to setup in megabytes (32, 64, etc., -1 = auto)");
  module_param_named(gartsize, amdgpu_gart_size, int, 0600);
+MODULE_PARM_DESC(gartlimit, "GART limit for the system domain in megabytes (default 256)");
+module_param_named(gartlimit, amdgpu_gart_sys_limit, uint, 0600);
+
  MODULE_PARM_DESC(moverate, "Maximum buffer migration rate in MB/s. (32, 64, etc., -1=auto, 0=1=disabled)");
  module_param_named(moverate, amdgpu_moverate, int, 0600);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
index 8877015..5c6a461 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
@@ -70,6 +70,9 @@ void amdgpu_gart_set_defaults(struct amdgpu_device *adev)
                                        adev->mc.mc_vram_size);
        else
                adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
+
+       adev->mc.gtt_sys_limit = min((uint64_t)amdgpu_gart_sys_limit << 20,
+                                    adev->mc.gtt_size);
  }
/**
@@ -350,8 +353,9 @@ int amdgpu_gart_init(struct amdgpu_device *adev)
        if (r)
                return r;
        /* Compute table size */
-       adev->gart.num_cpu_pages = adev->mc.gtt_size / PAGE_SIZE;
-       adev->gart.num_gpu_pages = adev->mc.gtt_size / AMDGPU_GPU_PAGE_SIZE;
+       adev->gart.num_cpu_pages = adev->mc.gtt_sys_limit / PAGE_SIZE;
+       adev->gart.num_gpu_pages = adev->mc.gtt_sys_limit /
+               AMDGPU_GPU_PAGE_SIZE;
        DRM_INFO("GART: num cpu pages %u, num gpu pages %u\n",
                 adev->gart.num_cpu_pages, adev->gart.num_gpu_pages);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 96c4493..6eac481 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -62,7 +62,7 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
                /* Maximum bo size is the unpinned gtt size since we use the gtt to
                 * handle vram to system pool migrations.
                 */
-               max_size = adev->mc.gtt_size - adev->gart_pin_size;
+               max_size = adev->mc.gtt_sys_limit - adev->gart_pin_size;
                if (size > max_size) {
                        DRM_DEBUG("Allocation size %ldMb bigger than %ldMb limit\n",
                                  size >> 20, max_size >> 20);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index f7d22c4..7609229 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -42,13 +42,14 @@ struct amdgpu_gtt_mgr {
  static int amdgpu_gtt_mgr_init(struct ttm_mem_type_manager *man,
                               unsigned long p_size)
  {
+       struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
        struct amdgpu_gtt_mgr *mgr;
        mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
        if (!mgr)
                return -ENOMEM;
-       drm_mm_init(&mgr->mm, 0, p_size);
+       drm_mm_init(&mgr->mm, 0, adev->mc.gtt_sys_limit >> PAGE_SHIFT);
        spin_lock_init(&mgr->lock);
        mgr->available = p_size;
        man->priv = mgr;
@@ -95,6 +96,7 @@ int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
                         const struct ttm_place *place,
                         struct ttm_mem_reg *mem)
  {
+       struct amdgpu_device *adev = amdgpu_ttm_adev(man->bdev);
        struct amdgpu_gtt_mgr *mgr = man->priv;
        struct drm_mm_node *node = mem->mm_node;
        enum drm_mm_insert_mode mode;
@@ -112,7 +114,7 @@ int amdgpu_gtt_mgr_alloc(struct ttm_mem_type_manager *man,
        if (place && place->lpfn)
                lpfn = place->lpfn;
        else
-               lpfn = man->size;
+               lpfn = adev->gart.num_cpu_pages;
        mode = DRM_MM_INSERT_BEST;
        if (place && place->flags & TTM_PL_FLAG_TOPDOWN)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index e4860ac..1e0fcb6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -221,7 +221,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
                                 * allocating address space for the BO.
                                 */
                                abo->placements[i].lpfn =
-                                       adev->mc.gtt_size >> PAGE_SHIFT;
+                                       adev->gart.num_cpu_pages;
                        }
                }
                break;
@@ -384,7 +384,7 @@ static int amdgpu_move_vram_ram(struct ttm_buffer_object *bo,
        placement.num_busy_placement = 1;
        placement.busy_placement = &placements;
        placements.fpfn = 0;
-       placements.lpfn = adev->mc.gtt_size >> PAGE_SHIFT;
+       placements.lpfn = adev->gart.num_cpu_pages;
        placements.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
        r = ttm_bo_mem_space(bo, &placement, &tmp_mem,
                             interruptible, no_wait_gpu);
@@ -431,7 +431,7 @@ static int amdgpu_move_ram_vram(struct ttm_buffer_object *bo,
        placement.num_busy_placement = 1;
        placement.busy_placement = &placements;
        placements.fpfn = 0;
-       placements.lpfn = adev->mc.gtt_size >> PAGE_SHIFT;
+       placements.lpfn = adev->gart.num_cpu_pages;
        placements.flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
        r = ttm_bo_mem_space(bo, &placement, &tmp_mem,
                             interruptible, no_wait_gpu);