On Tue, Apr 12, 2016 at 04:56:51PM +0100, Tvrtko Ursulin wrote:
> From: Chris Wilson <ch...@chris-wilson.co.uk>
> 
> By tracking the iomapping on the VMA itself, we can share that area
> between multiple users. Also by only revoking the iomapping upon
> unbinding from the mappable portion of the GGTT, we can keep that iomap
> across multiple invocations (e.g. execlists context pinning).
> 
> v2:
>   * Rebase on nightly;
>   * added kerneldoc. (Tvrtko Ursulin)
> 
> Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem.c     |  2 ++
>  drivers/gpu/drm/i915/i915_gem_gtt.c | 38 +++++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/i915_gem_gtt.h | 38 +++++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/intel_fbdev.c  |  8 +++-----
>  4 files changed, 81 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index b37ffea8b458..6a485630595e 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -3393,6 +3393,8 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
>               ret = i915_gem_object_put_fence(obj);
>               if (ret)
>                       return ret;
> +
> +             i915_vma_iounmap(vma);
>       }
>  
>       trace_i915_vma_unbind(vma);
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index c5cb04907525..b2a8a14e8dcb 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -3626,3 +3626,41 @@ i915_ggtt_view_size(struct drm_i915_gem_object *obj,
>               return obj->base.size;
>       }
>  }
> +
> +void *i915_vma_iomap(struct drm_i915_private *dev_priv,
> +                  struct i915_vma *vma)
> +{
> +     struct drm_i915_gem_object *obj = vma->obj;
> +     struct i915_ggtt *ggtt = &dev_priv->ggtt;
> +
> +     if (WARN_ON(!obj->map_and_fenceable))
> +             return ERR_PTR(-ENODEV);
> +
> +     BUG_ON(!vma->is_ggtt);
> +     BUG_ON(vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL);
> +     BUG_ON((vma->bound & GLOBAL_BIND) == 0);
> +
> +     if (vma->iomap == NULL) {
> +             void *ptr;

We could extract the ggtt from vma->vm (since is_ggtt is already asserted);
that would remove the dev_priv parameter and make the callers a bit tidier.

static inline struct i915_ggtt *to_ggtt(struct i915_address_space *vm)
{
        BUG_ON(!i915_is_ggtt(vm));
        return container_of(vm, struct i915_ggtt, base);
}

> +
> +             ptr = ioremap_wc(ggtt->mappable_base + vma->node.start,
> +                              obj->base.size);
> +             if (ptr == NULL) {
> +                     int ret;
> +
> +                     /* Too many areas already allocated? */
> +                     ret = i915_gem_evict_vm(vma->vm, true);
> +                     if (ret)
> +                             return ERR_PTR(ret);
> +
> +                     ptr = ioremap_wc(ggtt->mappable_base + vma->node.start,
> +                                      obj->base.size);

No, we really don't want to create a new ioremap for every caller when
we already have one, ggtt->mappable. Hence,
    io-mapping: Specify mapping size for io_mapping_map_wc
being its preceding patch. The difference is huge on Braswell.
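
Roughly, the lookup would then become something like the following — a
sketch only, assuming the preceding patch has extended io_mapping_map_wc()
to take an offset and size into the existing ggtt->mappable region:

```c
	/* Sketch: reuse the driver's existing ggtt->mappable io_mapping
	 * instead of calling ioremap_wc() afresh for every VMA. Assumes
	 * io_mapping_map_wc(mapping, offset, size) from the preceding
	 * io-mapping patch. */
	if (vma->iomap == NULL) {
		void *ptr;

		ptr = io_mapping_map_wc(ggtt->mappable,
					vma->node.start,
					obj->base.size);
		if (ptr == NULL)
			return ERR_PTR(-ENOMEM);

		vma->iomap = ptr;
	}
```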
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
