As we may have to iterate over a few thousand elements to acquire and
release the shmemfs backing storage for a GPU object, break up the long
loops with cond_resched() to retain a modicum of low latency for other
processes.
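
For illustration only (this sketch is not part of the patch, and the
helper name is hypothetical), the pattern being applied is a plain
cond_resched() inside the per-page loop so the scheduler can run other
tasks between iterations:

	#include <linux/mm.h>
	#include <linux/sched.h>

	/* Hypothetical helper: release a long array of pages without
	 * hogging the CPU for the whole walk. */
	static void example_put_pages(struct page **pages, unsigned long count)
	{
		unsigned long i;

		for (i = 0; i < count; i++) {
			put_page(pages[i]);	/* drop our reference */
			cond_resched();		/* yield if another task is waiting */
		}
	}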

Testcase: igt/benchmarks/gem_syslatency
Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
Cc: Kuo-Hsin Yang <vo...@chromium.org>
Cc: Matthew Auld <matthew.a...@intel.com>
Cc: Joonas Lahtinen <joonas.lahti...@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b1caff07ed65..a120112d0621 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2404,6 +2404,7 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
                        mark_page_accessed(page);
 
                put_page(page);
+               cond_resched();
        }
        obj->mm.dirty = false;
 
@@ -2574,6 +2575,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
                gfp_t gfp = noreclaim;
 
                do {
+                       cond_resched();
                        page = shmem_read_mapping_page_gfp(mapping, i, gfp);
                        if (likely(!IS_ERR(page)))
                                break;
@@ -2584,7 +2586,6 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
                        }
 
                        i915_gem_shrink(dev_priv, 2 * page_count, NULL, *s++);
-                       cond_resched();
 
                        /*
                         * We've tried hard to allocate the memory by reaping
-- 
2.19.1
