On Mon, 2017-08-21 at 19:34 +0100, Matthew Auld wrote:
> Each backend is now responsible for calling __i915_gem_object_set_pages
> upon successfully gathering its backing storage. This eliminates the
> inconsistency between the async and sync paths, which stands out even
> more when we start throwing around an sg_mask in a later patch.
> 
> Suggested-by: Chris Wilson <ch...@chris-wilson.co.uk>
> Signed-off-by: Matthew Auld <matthew.a...@intel.com>
> Cc: Joonas Lahtinen <joonas.lahti...@linux.intel.com>
> Cc: Chris Wilson <ch...@chris-wilson.co.uk>
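
To make the new contract concrete for anyone skimming the thread, the
backend side roughly becomes something like the below. This is only a
sketch; the function and helper names are made up for illustration and
the real backends in the patch differ in detail:

	static int example_get_pages(struct drm_i915_gem_object *obj)
	{
		struct sg_table *pages;

		/* Backend specific gathering of the backing storage;
		 * gather_backing_storage() is a made-up placeholder. */
		pages = gather_backing_storage(obj);
		if (IS_ERR(pages))
			return PTR_ERR(pages);

		/* Each backend now publishes its own pages on success. */
		__i915_gem_object_set_pages(obj, pages);

		return 0;
	}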

<SNIP>

> @@ -2485,12 +2490,10 @@ static int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
>               return -EFAULT;
>       }
>  
> -     pages = obj->ops->get_pages(obj);
> -     if (unlikely(IS_ERR(pages)))
> -             return PTR_ERR(pages);
> +     ret = obj->ops->get_pages(obj);
> +     GEM_BUG_ON(ret == 0 && IS_ERR_OR_NULL(obj->mm.pages));

!ret should be equally readable here, especially if you call the
variable "err".

Reviewed-by: Joonas Lahtinen <joonas.lahti...@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation