On Wed, 2015-12-09 at 13:26 +0000, Dave Gordon wrote:
> On 09/12/15 12:46, ankitprasad.r.sha...@intel.com wrote:
> > From: Ankitprasad Sharma <ankitprasad.r.sha...@intel.com>
> >
> > This patch adds support for clearing buffer objects via CPU/GTT. This
> > is particularly useful for clearing out the non shmem backed objects.
> > Currently intend to use this only for buffers allocated from stolen
> > region.
> >
> > v2: Added kernel doc for i915_gem_clear_object(), corrected/removed
> > variable assignments (Tvrtko)
> >
> > v3: Map object page by page to the gtt if the pinning of the whole object
> > to the ggtt fails, Corrected function name (Chris)
> >
> > Testcase: igt/gem_stolen
> >
> > Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sha...@intel.com>
> > Reviewed-by: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
> > ---
> >   drivers/gpu/drm/i915/i915_drv.h |  1 +
> >   drivers/gpu/drm/i915/i915_gem.c | 79 +++++++++++++++++++++++++++++++++++++++++
> >   2 files changed, 80 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 548a0eb..8e554d3 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -2856,6 +2856,7 @@ int i915_gem_obj_prepare_shmem_read(struct drm_i915_gem_object *obj,
> >                                 int *needs_clflush);
> >
> >   int __must_check i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
> > +int i915_gem_object_clear(struct drm_i915_gem_object *obj);
> >
> >   static inline int __sg_page_count(struct scatterlist *sg)
> >   {
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 9d2e6e3..d57e850 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -5244,3 +5244,82 @@ fail:
> >     drm_gem_object_unreference(&obj->base);
> >     return ERR_PTR(ret);
> >   }
> > +
> > +/**
> > + * i915_gem_object_clear() - Clear buffer object via CPU/GTT
> > + * @obj: Buffer object to be cleared
> > + *
> > + * Return: 0 - success, non-zero - failure
> > + */
> > +int i915_gem_object_clear(struct drm_i915_gem_object *obj)
> > +{
> > +   int ret, i;
> > +   char __iomem *base;
> > +   size_t size = obj->base.size;
> > +   struct drm_i915_private *i915 = to_i915(obj->base.dev);
> > +   struct drm_mm_node node;
> > +
> > +   WARN_ON(!mutex_is_locked(&obj->base.dev->struct_mutex));
> > +   ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE | PIN_NONBLOCK);
> > +   if (ret) {
> > +           memset(&node, 0, sizeof(node));
> > +           ret = drm_mm_insert_node_in_range_generic(&i915->gtt.base.mm,
> > +                                                     &node, 4096, 0,
> > +                                                     I915_CACHE_NONE, 0,
> > +                                                     i915->gtt.mappable_end,
> > +                                                     DRM_MM_SEARCH_DEFAULT,
> > +                                                     DRM_MM_CREATE_DEFAULT);
> > +           if (ret)
> > +                   goto out;
> > +
> > +           i915_gem_object_pin_pages(obj);
> > +   } else {
> > +           node.start = i915_gem_obj_ggtt_offset(obj);
> > +           node.allocated = false;
> > +   }
> > +
> > +   ret = i915_gem_object_put_fence(obj);
> > +   if (ret)
> > +           goto unpin;
> > +
> > +   if (node.allocated) {
> > +           for (i = 0; i < size/PAGE_SIZE; i++) {
> > +                   wmb();
> > +                   i915->gtt.base.insert_page(&i915->gtt.base,
> > +                                   i915_gem_object_get_dma_address(obj, i),
> > +                                   node.start,
> > +                                   I915_CACHE_NONE,
> > +                                   0);
> > +                   wmb();
> > +                   base = ioremap_wc(i915->gtt.mappable_base + node.start, 4096);
> > +                   memset_io(base, 0, 4096);
> > +                   iounmap(base);
> > +           }
> > +   } else {
> > +           /* Get the CPU virtual address of the buffer */
> > +           base = ioremap_wc(i915->gtt.mappable_base +
> > +                             node.start, size);
> > +           if (base == NULL) {
> > +                   DRM_ERROR("Mapping of gem object to CPU failed!\n");
> > +                   ret = -ENOSPC;
> > +                   goto unpin;
> > +           }
> > +
> > +           memset_io(base, 0, size);
> > +           iounmap(base);
> > +   }
> > +unpin:
> > +   if (node.allocated) {
> > +           wmb();
> > +           i915->gtt.base.clear_range(&i915->gtt.base,
> > +                           node.start, node.size,
> > +                           true);
> > +           drm_mm_remove_node(&node);
> > +           i915_gem_object_unpin_pages(obj);
> > +   }
> > +   else {
> > +           i915_gem_object_ggtt_unpin(obj);
> > +   }
> > +out:
> > +   return ret;
> > +}
> 
> This is effectively two functions interleaved, as shown by the repeated 
> if (node.allocated) tests. Would it not be clearer to have the mainline 
> function deal only with the GTT-pinned case, and a separate function for 
> the page-by-page version, called as a fallback if pinning fails?
> 
> int i915_gem_object_clear(struct drm_i915_gem_object *obj)
> {
>       int ret, i;
>       char __iomem *base;
>       size_t size = obj->base.size;
>       struct drm_i915_private *i915 = to_i915(obj->base.dev);
> 
>       WARN_ON(!mutex_is_locked(&obj->base.dev->struct_mutex));
>       ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE|PIN_NONBLOCK);
>       if (ret)
>               return __i915_obj_clear_by_pages(...);
> 
>       ... mainline (fast) code here ...
> 
>       return ret;
> }
> 
> static int __i915_obj_clear_by_pages(...)
> {
>       ... complicated page-by-page fallback code here ...
> }
> 
Separating out the page-by-page path is a good idea, so the code doesn't
get messy. Also, I liked Chris' suggestion not to use ioremap_wc(), as
mapping the whole object that way could easily exhaust kernel address space.

To keep it simpler and more robust, I would prefer to use only the
page-by-page path (no need to even try mapping the full object), using
io_mapping_map_wc(). Roughly along the lines of the sketch below.
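
(Just a rough sketch of the loop I have in mind, not final code: it assumes
the mappable aperture io_mapping is reachable as i915->gtt.mappable, as in
the existing pwrite fast path, and reuses the single scratch GTT page
reserved in "node" by the patch above; the helper name is only illustrative.)

static int __i915_gem_object_clear_by_pages(struct drm_i915_gem_object *obj,
					    struct drm_mm_node *node)
{
	struct drm_i915_private *i915 = to_i915(obj->base.dev);
	size_t size = obj->base.size;
	void __iomem *base;
	int i;

	for (i = 0; i < size / PAGE_SIZE; i++) {
		/* Point the scratch GTT page at the next backing page */
		wmb();
		i915->gtt.base.insert_page(&i915->gtt.base,
				i915_gem_object_get_dma_address(obj, i),
				node->start, I915_CACHE_NONE, 0);
		wmb();

		/* Map only this one aperture page; no large vmap needed */
		base = io_mapping_map_wc(i915->gtt.mappable, node->start);
		memset_io(base, 0, PAGE_SIZE);
		io_mapping_unmap(base);
	}

	return 0;
}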

Thanks,
Ankit

