On 31/03/2023 05:18, Ira Weiny wrote:
Zhao Liu wrote:
From: Zhao Liu <zhao1....@intel.com>

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the calls from
kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT it only
disables migration).

With kmap_local_page(), we can avoid the often unwanted side effects of
unnecessarily disabling page faults and preemption.
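
For illustration only (not part of this patch; the helper names below are
made up), the two mapping styles look like this side by side:

#include <linux/highmem.h>
#include <linux/string.h>

/* kmap_atomic(): disables page faults and, on !PREEMPT_RT, preemption. */
static void copy_from_page_atomic(struct page *page, void *dst, size_t len)
{
	void *src = kmap_atomic(page);

	memcpy(dst, src, len);
	kunmap_atomic(src);
}

/*
 * kmap_local_page(): the mapping is only valid in the current thread,
 * but page faults and preemption stay enabled.
 */
static void copy_from_page_local(struct page *page, void *dst, size_t len)
{
	void *src = kmap_local_page(page);

	memcpy(dst, src, len);
	kunmap_local(src);
}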

In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
kmap_atomic() in eb_relocate_entry(), and is unmapped by
kunmap_atomic() in reloc_cache_reset().

First off, thanks for the series and for sticking with this.  That said,
this patch kind of threw me for a loop because tracing the map/unmap calls
did not make sense to me.  See below.


And this mapping/unmapping occurs in two places: one is in
eb_relocate_vma(), and the other is in eb_relocate_vma_slow().

Neither eb_relocate_vma() nor eb_relocate_vma_slow() needs to disable
page faults or preemption around the above mapping/unmapping.

So they can simply use kmap_local_page() / kunmap_local(), which perform
the mapping / unmapping regardless of the context.

Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.we...@intel.com

v2: No code change since v1. Added a description of the motivation for
     using kmap_local_page() and Fabio's "Suggested-by" tag.

Suggested-by: Ira Weiny <ira.we...@intel.com>
Suggested-by: Fabio M. De Francesco <fmdefrance...@gmail.com>
Signed-off-by: Zhao Liu <zhao1....@intel.com>
---
Suggested-by credits:
   Ira: Referred to his task document and review comments.
   Fabio: Referred to his boilerplate commit message and his description
          of why kmap_local_page() should be preferred.
---
  drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
  1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 9dce2957b4e5..805565edd148 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1151,7 +1151,7 @@ static void reloc_cache_unmap(struct reloc_cache *cache)
        vaddr = unmask_page(cache->vaddr);
        if (cache->vaddr & KMAP)
-               kunmap_atomic(vaddr);
+               kunmap_local(vaddr);

In the cover letter you don't mention this unmap path.  Rather you mention
only reloc_cache_reset().

After digging into this and considering these are kmap_atomic() calls I
_think_ what you have is ok.  But I think I'd like to see the call paths
documented a bit more clearly.  Or perhaps cleaned up a lot.

For example I see the following call possibility from a user ioctl.  In
this trace I see two examples where something is unmapped first, and I
don't understand why that is required.  I would assume reloc_cache_unmap()
and reloc_kmap() are helpers called from somewhere else requiring a
remapping of the cache, but I don't see it.

reloc_cache_unmap() is called from eb_relocate_entry().

The confusing part, that the unmap appears first, is just because
reloc_cache is a stateful setup. The previous mapping is kept around until
reset (when the caller moves to a different parent object), and it is
unmapped/remapped once the relocation moves to a different page within
that object.
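
Roughly, the pattern looks like this (a simplified sketch rather than the
actual code; flag masking, cache flushing and the iomap branch are all
left out, and the cache is reduced to the two relevant fields):

#include <linux/highmem.h>
#include "i915_gem_object.h"	/* i915_gem_object_get_page() */

struct reloc_cache_sketch {
	void *vaddr;		/* currently mapped kernel address, or NULL */
	unsigned long page;	/* index of the mapped page within the object */
};

static void *reloc_kmap_sketch(struct reloc_cache_sketch *cache,
			       struct drm_i915_gem_object *obj,
			       unsigned long pageno)
{
	if (cache->vaddr && cache->page == pageno)
		return cache->vaddr;		/* same page: reuse mapping */

	if (cache->vaddr)
		kunmap_local(cache->vaddr);	/* new page: drop the old map */

	cache->vaddr = kmap_local_page(i915_gem_object_get_page(obj, pageno));
	cache->page = pageno;
	return cache->vaddr;
}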

However, I am unsure whether disabling page faulting is needed here or
not. Thomas, Matt, as the last ones to touch this area, perhaps you could
have a look? I notice we have a fallback iomap path which still uses
io_mapping_map_atomic_wc(). So if the kmap_atomic() to kmap_local_page()
conversion is safe, does the iomap side also need converting to
io_mapping_map_local_wc()? Or do they have separate requirements?
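
If the answer is that the local variants are fine here too, I would expect
the fallback branch to end up looking something like this (sketch only;
whether it is actually safe is exactly the question above):

#include <linux/io-mapping.h>

/* was: io_mapping_map_atomic_wc(iomap, offset) in the reloc_iomap() path */
static void __iomem *reloc_iomap_sketch(struct io_mapping *iomap,
					unsigned long offset)
{
	return io_mapping_map_local_wc(iomap, offset);
}

/* was: io_mapping_unmap_atomic(vaddr) in reloc_cache_reset()/unmap() */
static void reloc_iounmap_sketch(void __iomem *vaddr)
{
	io_mapping_unmap_local(vaddr);
}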

Regards,

Tvrtko


i915_gem_execbuffer2_ioctl()
  eb_relocate_parse()
    eb_relocate_parse_slow()
      eb_relocate_vma_slow()
        eb_relocate_entry()
                reloc_cache_unmap()
                        kunmap_atomic()  <=== HERE!
                reloc_cache_remap()
                        kmap_atomic()
                relocate_entry()
                        reloc_vaddr()
                                reloc_kmap()
                                        kunmap_atomic() <== HERE!
                                        kmap_atomic()

        reloc_cache_reset()
                kunmap_atomic()

Could these mappings be cleaned up a lot more?  Perhaps by removing some
of the helper functions which AFAICT are left over from older versions of
the code?

Also, as an aside, I think it is really bad that eb_relocate_entry()
returns negative errors in a u64.  Better to get the types right IMO.
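
To illustrate the point (not verbatim from the file): the negative errno
gets smuggled through the u64 return value and has to be recovered with a
signed cast at the call site, roughly:

#include <linux/errno.h>
#include <linux/types.h>

static u64 do_one_reloc_sketch(bool fail)
{
	if (fail)
		return (u64)-EINVAL;	/* negative errno stuffed into a u64 */

	return 0;			/* or a valid offset on success */
}

static int caller_sketch(void)
{
	u64 offset = do_one_reloc_sketch(true);

	if ((s64)offset < 0)		/* easy to forget the signed cast */
		return (int)offset;

	return 0;
}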

Thanks for the series!
Ira

        else
                io_mapping_unmap_atomic((void __iomem *)vaddr);
  }
@@ -1167,7 +1167,7 @@ static void reloc_cache_remap(struct reloc_cache *cache,
        if (cache->vaddr & KMAP) {
                struct page *page = i915_gem_object_get_page(obj, cache->page);
-               vaddr = kmap_atomic(page);
+               vaddr = kmap_local_page(page);
                cache->vaddr = unmask_flags(cache->vaddr) |
                        (unsigned long)vaddr;
        } else {
@@ -1197,7 +1197,7 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
                if (cache->vaddr & CLFLUSH_AFTER)
                        mb();
-               kunmap_atomic(vaddr);
+               kunmap_local(vaddr);
                i915_gem_object_finish_access(obj);
        } else {
                struct i915_ggtt *ggtt = cache_to_ggtt(cache);
@@ -1229,7 +1229,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
        struct page *page;
        if (cache->vaddr) {
-               kunmap_atomic(unmask_page(cache->vaddr));
+               kunmap_local(unmask_page(cache->vaddr));
        } else {
                unsigned int flushes;
                int err;
@@ -1251,7 +1251,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
        if (!obj->mm.dirty)
                set_page_dirty(page);
-       vaddr = kmap_atomic(page);
+       vaddr = kmap_local_page(page);
        cache->vaddr = unmask_flags(cache->vaddr) | (unsigned long)vaddr;
        cache->page = pageno;
--
2.34.1


