On Thu, Jun 27, 2013 at 04:30:41PM -0700, Ben Widawsky wrote:
> This allows us to be aware of all the VMAs leftover and teardown, and is
> useful for debug. I suspect it will prove even more useful later.
> 
> Signed-off-by: Ben Widawsky <b...@bwidawsk.net>
> ---
>  drivers/gpu/drm/i915/i915_drv.h | 2 ++
>  drivers/gpu/drm/i915/i915_gem.c | 4 ++++
>  2 files changed, 6 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 247a124..0bc4251 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -446,6 +446,7 @@ struct i915_address_space {
>       struct drm_mm mm;
>       struct drm_device *dev;
>       struct list_head global_link;
> +     struct list_head vma_list;

This one feels a bit unnecessary. With the drm_mm_node embedded we already
have a total of 4 lists:
- The node_list in the drm_mm. There's even a for_each helper for it. This
  lists nodes in ascending offset order. We only need to upcast from the
  drm_mm_node to our vma, but since the node is embedded that's no problem
  (see the sketch below).
- The hole list in the drm_mm. Again comes with a for_each helper included.
- The inactive/active lists. Together they again list all the vmas in a vm.
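
Rough, untested sketch of the kind of iteration I mean, assuming vma->node
stays the embedded drm_mm_node in struct i915_vma and mm_list is the link
into the vm's active/inactive lists as in this patch:

	struct drm_mm_node *node;
	struct i915_vma *vma;

	/* Walk every vma bound into vm, in ascending offset order. */
	drm_mm_for_each_node(node, &vm->mm) {
		vma = container_of(node, struct i915_vma, node);
		/* ... dump/inspect vma here ... */
	}

	/* Or the same set of vmas via the active/inactive lists. */
	list_for_each_entry(vma, &vm->active_list, mm_list)
		; /* ... */
	list_for_each_entry(vma, &vm->inactive_list, mm_list)
		; /* ... */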

What's the new one doing that we need it so much?
-Daniel

>       unsigned long start;            /* Start offset always 0 for dri2 */
>       size_t total;           /* size addr space maps (ex. 2GB for ggtt) */
>  
> @@ -556,6 +557,7 @@ struct i915_vma {
>       struct list_head mm_list;
>  
>       struct list_head vma_link; /* Link in the object's VMA list */
> +     struct list_head per_vm_link; /* Link in the VM's VMA list */
>  };
>  
>  struct i915_ctx_hang_stats {
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index a3e8c26..5c0ad6a 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -4112,14 +4112,17 @@ struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj,
>  
>       INIT_LIST_HEAD(&vma->vma_link);
>       INIT_LIST_HEAD(&vma->mm_list);
> +     INIT_LIST_HEAD(&vma->per_vm_link);
>       vma->vm = vm;
>       vma->obj = obj;
> +     list_add_tail(&vma->per_vm_link, &vm->vma_list);
>  
>       return vma;
>  }
>  
>  void i915_gem_vma_destroy(struct i915_vma *vma)
>  {
> +     list_del(&vma->per_vm_link);
>       WARN_ON(vma->node.allocated);
>       kfree(vma);
>  }
> @@ -4473,6 +4476,7 @@ static void i915_init_vm(struct drm_i915_private *dev_priv,
>       INIT_LIST_HEAD(&vm->active_list);
>       INIT_LIST_HEAD(&vm->inactive_list);
>       INIT_LIST_HEAD(&vm->global_link);
> +     INIT_LIST_HEAD(&vm->vma_list);
>       list_add(&vm->global_link, &dev_priv->vm_list);
>  }
>  
> -- 
> 1.8.3.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx