[Intel-gfx] [PATCH 05/21] drm/i915: push set_pages down to the callers

2017-10-06 Thread Matthew Auld
Each backend is now responsible for calling __i915_gem_object_set_pages upon successfully gathering its backing storage. This eliminates the inconsistency between the async and sync paths, which stands out even more when we start throwing around an sg_mask in a later patch. Suggested-by: Chris Wilson
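For context, here is a minimal sketch of what the change means for a backend's ->get_pages() hook. Only __i915_gem_object_set_pages() is taken from the patch; the backend and helper names below are hypothetical and the real i915 code differs in detail. Before this change, the common get-pages path called __i915_gem_object_set_pages() with the sg_table returned by the backend; afterwards, each backend installs its own pages once the storage is gathered and simply returns 0 or an error.

/*
 * Hypothetical backend ->get_pages() after the change (illustrative sketch,
 * not the actual i915 implementation).
 */
static int example_get_pages(struct drm_i915_gem_object *obj)
{
	struct sg_table *pages;

	/* Hypothetical helper: allocate and fill the backing storage. */
	pages = example_gather_backing_storage(obj);
	if (IS_ERR(pages))
		return PTR_ERR(pages);

	/* The backend, not the common caller, now publishes the pages. */
	__i915_gem_object_set_pages(obj, pages);

	return 0;
}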

[Intel-gfx] [PATCH 05/21] drm/i915: push set_pages down to the callers

2017-10-05 Thread Matthew Auld
Each backend is now responsible for calling __i915_gem_object_set_pages upon successfully gathering its backing storage. This eliminates the inconsistency between the async and sync paths, which stands out even more when we start throwing around an sg_mask in a later patch. Suggested-by: Chris Wilson

[Intel-gfx] [PATCH 05/21] drm/i915: push set_pages down to the callers

2017-09-29 Thread Matthew Auld
Each backend is now responsible for calling __i915_gem_object_set_pages upon successfully gathering its backing storage. This eliminates the inconsistency between the async and sync paths, which stands out even more when we start throwing around an sg_mask in a later patch. Suggested-by: Chris Wilson

Re: [Intel-gfx] [PATCH 05/21] drm/i915: push set_pages down to the callers

2017-09-23 Thread Chris Wilson
Quoting Matthew Auld (2017-09-22 18:32:36) > Each backend is now responsible for calling __i915_gem_object_set_pages > upon successfully gathering its backing storage. This eliminates the > inconsistency between the async and sync paths, which stands out even > more when we start throwing around an sg_mask in a later patch.

[Intel-gfx] [PATCH 05/21] drm/i915: push set_pages down to the callers

2017-09-22 Thread Matthew Auld
Each backend is now responsible for calling __i915_gem_object_set_pages upon successfully gathering its backing storage. This eliminates the inconsistency between the async and sync paths, which stands out even more when we start throwing around an sg_mask in a later patch. Suggested-by: Chris Wilson