On 15/01/2018 12:28, Chris Wilson wrote:
As freeing the objects requires serialisation on struct_mutex, we should
prefer to use our single-threaded driver wq that is dedicated to work
requiring struct_mutex (hence serialised). The benefit should be less
clutter on the system wq, allowing it to make progress even when the
driver/struct_mutex is heavily contended.

Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
---
  drivers/gpu/drm/i915/i915_gem.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1135a77b383a..87937c4f9dff 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4732,7 +4732,7 @@ static void __i915_gem_free_object_rcu(struct rcu_head *head)
         * detour through a worker.
         */

This comment, which is only partially visible here, is a bit wonky...

        if (llist_add(&obj->freed, &i915->mm.free_list))
-               schedule_work(&i915->mm.free_work);
+               queue_work(i915->wq, &i915->mm.free_work);
  }
void i915_gem_free_object(struct drm_gem_object *gem_obj)


... but the logic seems sound.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
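
As an aside for anyone following along, the serialisation guarantee comes from the driver wq being ordered, i.e. executing at most one work item at a time. A rough sketch of the pattern from memory - not the literal i915 code, so treat the details as assumptions:

	/* Driver init: an ordered wq runs at most one work item at a
	 * time, so everything queued on it is implicitly serialised.
	 */
	i915->wq = alloc_ordered_workqueue("i915", 0);

	/* The free worker can then batch up all pending frees under
	 * struct_mutex without cluttering the system wq.
	 */
	static void __i915_gem_free_work(struct work_struct *work)
	{
		struct drm_i915_private *i915 =
			container_of(work, typeof(*i915), mm.free_work);
		struct llist_node *freed;

		while ((freed = llist_del_all(&i915->mm.free_list)))
			__i915_gem_free_objects(i915, freed); /* takes struct_mutex */
	}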

In general it is a bit funky to use call_rcu to schedule a worker - what would be the difference compared to just queueing the worker directly and having it call synchronize_rcu() before freeing?
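
I.e., a hypothetical, untested sketch of what I mean - the freeing path would llist_add and queue_work directly instead of going through call_rcu, and the obvious cost is that synchronize_rcu() blocks the worker for a full grace period:

	static void __i915_gem_free_work(struct work_struct *work)
	{
		struct drm_i915_private *i915 =
			container_of(work, typeof(*i915), mm.free_work);
		struct llist_node *freed;

		/* Grab everything queued so far... */
		freed = llist_del_all(&i915->mm.free_list);
		if (!freed)
			return;

		/* ...wait once for all concurrent RCU lookups to finish,
		 * instead of paying a call_rcu() per object...
		 */
		synchronize_rcu();

		/* ...and only then tear the objects down under struct_mutex. */
		__i915_gem_free_objects(i915, freed);
	}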

Or would it be feasible to do a multi-pass approach - a 1st pass, directly from the call_rcu callback, frees the objects which can be freed without struct_mutex, leaves the rest on the list and queues a more thorough 2nd pass (rough sketch below)? Haven't really investigated it, just throwing ideas around.
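
Something like this, where i915_gem_object_needs_struct_mutex() and __i915_gem_free_object_unlocked() are made-up helpers purely for illustration:

	static void __i915_gem_free_object_rcu(struct rcu_head *head)
	{
		struct drm_i915_gem_object *obj =
			container_of(head, typeof(*obj), rcu);
		struct drm_i915_private *i915 = to_i915(obj->base.dev);

		/* 1st pass: free immediately if no struct_mutex is
		 * needed (hypothetical predicate and helper).
		 */
		if (!i915_gem_object_needs_struct_mutex(obj)) {
			__i915_gem_free_object_unlocked(obj);
			return;
		}

		/* 2nd pass: defer the rest to the serialised worker. */
		if (llist_add(&obj->freed, &i915->mm.free_list))
			queue_work(i915->wq, &i915->mm.free_work);
	}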

Regards,

Tvrtko