sonable to not handle huge zero folios differently
> to inserting PMDs mapping folios when there already is something mapped.
Yeah, this all sounds reasonable and I was never able to hit this path with the
RFC version of this series anyway. So I suspect it really is impossible to hit
and ther
On Fri, Jul 04, 2025 at 03:22:28PM +0200, David Hildenbrand wrote:
> On 25.06.25 11:03, David Hildenbrand wrote:
> > On 24.06.25 03:16, Alistair Popple wrote:
> > > On Tue, Jun 17, 2025 at 05:43:38PM +0200, David Hildenbrand wrote:
> > > > Let
On Tue, Jun 17, 2025 at 05:43:36PM +0200, David Hildenbrand wrote:
> Let's clean it all further up.
Looks good:
Reviewed-by: Alistair Popple
> Signed-off-by: David Hildenbrand
> ---
> mm/huge_memory.c | 36 +---
> 1 file changed, 13 inserti
whole struct down. It makes it very obvious what
elements insert_pmd() cares about (in this case one of about fourteen fields).
Anyway looks good, thanks:
Reviewed-by: Alistair Popple
> --
> Oscar Salvador
> SUSE Labs
On Tue, Jun 17, 2025 at 05:43:38PM +0200, David Hildenbrand wrote:
> Let's convert to vmf_insert_folio_pmd().
>
> In the unlikely case there is already something mapped, we'll now still
> call trace_dax_pmd_load_hole() and return VM_FAULT_NOPAGE.
>
> That should probably be fine, no need to add s
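A minimal sketch of the converted hole path described above, assuming vmf_insert_folio_pmd()'s in-tree signature; zero_folio, inode and entry come from the surrounding dax_pmd_load_hole() context (elided here), and the tracepoint argument list may differ from the real code:

/* sketch only: map the huge zero folio via the generic helper and
 * trace/return unconditionally, even if something was already mapped
 */
ret = vmf_insert_folio_pmd(vmf, zero_folio, /* write */ false);
trace_dax_pmd_load_hole(inode, vmf, zero_folio, *entry);
return ret;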
On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
> Marking PMDs that map "normal" refcounted folios as special is
> against our rules documented for vm_normal_page().
>
> Fortunately, there are not that many pmd_special() checks that can be
> misled, and most vm_normal_page_pmd
> 12/13 ndctl:dax / dm.sh FAIL 0.23s exit
> status 1
> 13/13 ndctl:dax / mmap.sh OK 437.86s
>
> So, no idea if this series breaks something, because the tests are rather
> unreliable. I have plenty of other debug setting
On Wed, Jun 11, 2025 at 02:06:52PM +0200, David Hildenbrand wrote:
> We set up the cache mode but ... don't forward the updated pgprot to
> insert_pfn_pud().
>
> Only a problem on x86-64 PAT when mapping PFNs using PUDs that
> require a special cachemode.
>
> Fix it by using the proper pgprot wher
On Wed, Jun 11, 2025 at 10:42:16AM +0200, Marek Szyprowski wrote:
> On 11.06.2025 10:23, David Hildenbrand wrote:
> > On 11.06.25 10:03, Marek Szyprowski wrote:
> >> On 11.06.2025 04:38, Alistair Popple wrote:
> >>> On Tue, Jun 10, 2025 at 06:18:09PM +0200, Mar
On Tue, Jun 10, 2025 at 06:18:09PM +0200, Marek Szyprowski wrote:
> Dear All,
>
> On 04.06.2025 05:21, Alistair Popple wrote:
> > The PFN_MAP flag is no longer used for anything, so remove it.
> > The PFN_SG_CHAIN and PFN_SG_LAST flags never appear to have been
> > us
The PFN_MAP flag is no longer used for anything, so remove it.
The PFN_SG_CHAIN and PFN_SG_LAST flags never appear to have been
used so also remove them. The last user of PFN_SPECIAL was removed
by 653d7825c149 ("dcssblk: mark DAX broken, remove FS_DAX_LIMITED
support").
Signed-off-by
On Tue, May 27, 2025 at 02:46:28PM -0700, Dan Williams wrote:
> Alistair Popple wrote:
> > Commit 6be3e21d25ca ("fs/dax: don't skip locked entries when scanning
> > entries") introduced a new function, wait_entry_unlocked_exclusive(),
> > which waits for
which is equivalent in
implementation to xas_pause() but does not advance the XArray state.
Fixes: 6be3e21d25ca ("fs/dax: don't skip locked entries when scanning entries")
Signed-off-by: Alistair Popple
Cc: Dan Williams
Cc: Alison Schofield
Cc: Matthew Wilcox (Oracle)
Cc: Balb
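A minimal sketch of the idea behind that helper, assuming current xarray internals (XAS_RESTART and the xa_node/xa_index fields); this is not the patch itself:

/* make the next lookup restart at the *same* index; xas_pause() would
 * also advance xas->xa_index past the current entry, which is what we
 * want to avoid here
 */
xas->xa_node = XAS_RESTART;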
On Fri, Apr 11, 2025 at 10:37:17AM +0200, David Hildenbrand wrote:
> (adding CC list again, because I assume it was dropped by accident)
Whoops. Thanks.
> > > diff --git a/fs/dax.c b/fs/dax.c
> > > index af5045b0f476e..676303419e9e8 100644
> > > --- a/fs/dax.c
> > > +++ b/fs/dax.c
> > > @@ -396,6
On Thu, Apr 10, 2025 at 01:14:42PM +0800, kernel test robot wrote:
>
>
> Hello,
>
> kernel test robot noticed
> "WARNING:at_mm/truncate.c:#truncate_folio_batch_exceptionals" on:
>
> commit: bde708f1a65d025c45575bfe1e7bf7bdf7e71e87 ("fs/dax: always remove DAX
> page-cache entries when breaking
On Thu, Feb 27, 2025 at 11:01:55AM +0100, Danilo Krummrich wrote:
> On Thu, Feb 27, 2025 at 11:25:55AM +1100, Alistair Popple wrote:
> > On Tue, Feb 25, 2025 at 12:04:35PM +0100, Danilo Krummrich wrote:
> > > On Tue, Feb 25, 2025 at 04:50:05PM +1100, Alistair Popple wrote:
On Tue, Feb 25, 2025 at 12:04:35PM +0100, Danilo Krummrich wrote:
> On Tue, Feb 25, 2025 at 04:50:05PM +1100, Alistair Popple wrote:
> > Kind of, but given the current state of build_assert's and the impossibility
> > of
> > debugging them should we avoid adding th
On Fri, Feb 21, 2025 at 04:58:59AM +0100, Miguel Ojeda wrote:
> Hi Alistair,
>
> On Fri, Feb 21, 2025 at 2:20 AM Alistair Popple wrote:
> >
> > Is this a known issue or limitation? Or is this a bug/rough edge that still
> > needs fixing? Or alternatively am I just d
On Thu, Dec 19, 2024 at 06:04:09PM +0100, Danilo Krummrich wrote:
> I/O memory is typically either mapped through direct calls to ioremap()
> or subsystem / bus specific ones such as pci_iomap().
>
> Even though subsystem / bus specific functions to map I/O memory are
> based on ioremap() / iounma
It's no longer used so remove it.
Signed-off-by: Alistair Popple
---
mm/memremap.c | 27 ---
1 file changed, 27 deletions(-)
diff --git a/mm/memremap.c b/mm/memremap.c
index d875534..e40672b 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -38,30 +38,6 @@ unsigned
behaviour as it will always be false.
Signed-off-by: Alistair Popple
---
fs/dax.c | 5 ++---
include/linux/huge_mm.h| 10 --
include/linux/pgtable.h| 2 +-
mm/hmm.c | 4 ++--
mm/huge_memory.c | 31 +--
mm
vmf_insert_mixed(). This is unnecessary as it is no longer checked, instead
relying on pfn_valid() to determine if there is an associated page or not.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
drivers/gpu/drm/gma500/fbdev.c | 2 +-
drivers/gpu/drm/omapdrm/omap_gem.c | 5
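A minimal sketch of the check core code can now make on its own instead of trusting a caller-supplied pfn_t flag; pfn_valid(), pfn_to_page() and pte_mkspecial() are existing helpers, the surrounding branch is illustrative only:

/* decide page-backed vs. raw-PFN handling from the pfn alone */
if (pfn_valid(pfn)) {
	struct page *page = pfn_to_page(pfn);

	/* a struct page exists: map it as a normal, refcounted page */
} else {
	/* no struct page: insert as a raw pfn (pte_mkspecial() style) */
}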
Now that DAX and all other reference counts to ZONE_DEVICE pages are
managed normally there is no need for the special devmap PTE/PMD/PUD
page table bits. So drop all references to these, freeing up a
software defined page table bit on architectures supporting it.
Signed-off-by: Alistair Popple
All PFN_* pfn_t flags have been removed. Therefore there is no longer
a need for the pfn_t type and all uses can be replaced with normal
pfns.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
arch/x86/mm/pat/memtype.c| 6 +-
drivers/dax/device.c
PFN_DEV no longer exists. This means no devmap PMDs or PUDs will be
created, so checking for them is redundant. Instead, mappings of pages that
would previously have returned true for pXd_devmap() will now return true for
pXd_trans_huge().
Signed-off-by: Alistair Popple
---
arch/powerpc/mm/book3s64
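A sketch of the kind of call-site simplification this implies, not taken from the patch itself; handle_huge() is a placeholder:

/* before: huge DAX mappings were only visible via pmd_devmap() */
if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
	return handle_huge(pmd);

/* after: with PFN_DEV gone, such mappings read as ordinary huge PMDs */
if (pmd_trans_huge(*pmd))
	return handle_huge(pmd);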
The only users of pmd_devmap were device dax and fs dax. The check for
pmd_devmap() in check_pmd_state() is therefore redundant as callers
explicitly check for is_zone_device_page(), so this check can be dropped.
Signed-off-by: Alistair Popple
---
mm/khugepaged.c | 2 --
1 file changed, 2
memmap so there is no need to hold a reference on the pgmap
data structure to ensure this.
Furthermore, mappings with PFN_DEV are no longer created, so this is
effectively dead code anyway and can be removed.
Signed-off-by: Alistair Popple
---
include/linux/huge_mm.h | 3 +-
mm/gup.c
't support pte_devmap
so those will continue to rely on pfn_valid() to determine if the page can
be mapped.
Signed-off-by: Alistair Popple
---
mm/hmm.c| 3 ---
mm/memory.c | 20 ++--
mm/vmscan.c | 2 +-
3 files changed, 3 insertions(+), 22 deletions(-)
diff --git a/mm/hmm.c b/mm/h
Previously DAX pages were skipped by the pagewalk code because pud_special() or
vm_normal_page{_pmd}() would be false for them. Now that DAX pages are
refcounted normally that is no longer the case, so add explicit checks to
skip them.
Signed-off-by: Alistair Popple
---
include/linux/memremap.h
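A minimal sketch of what such an explicit check can look like in a pagewalk callback; vm_normal_page(), ptep_get() and is_zone_device_page() are existing helpers, the callback itself is a placeholder:

static int walk_pte_entry(pte_t *pte, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	struct page *page = vm_normal_page(walk->vma, addr, ptep_get(pte));

	/* DAX pages are now "normal" pages, so filter device pages explicitly */
	if (page && is_zone_device_page(page))
		return 0;

	/* ... handle ordinary pages ... */
	return 0;
}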
The PFN_MAP flag is no longer used for anything, so remove it. The
PFN_SG_CHAIN and PFN_SG_LAST flags never appear to have been used so
also remove them.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
---
include/linux/pfn_t.h | 31 +++
mm
nel.org
Cc: nvd...@lists.linux.dev
Cc: linux-fsde...@vger.kernel.org
Cc: linux...@kvack.org
Cc: linux-e...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: jhubb...@nvidia.com
Cc: h...@lst.de
Cc: zhang.l...@gmail.com
Cc: de...@rivosinc.com
Cc: bj...@kernel.org
Cc: balb...@nvidia.com
Alistair Po
pXd_devmap to skip DAX pages
continue to do so by adding explicit checks of the VMA instead.
Signed-off-by: Alistair Popple
---
fs/userfaultfd.c | 2 +-
mm/hmm.c | 2 +-
mm/userfaultfd.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
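Roughly the shape of the VMA-based check described above; vma_is_dax() is an existing helper, and the before/after condition is illustrative rather than lifted from the diff:

/* before: per-entry test that depended on the devmap PMD bit */
if (pmd_devmap(*pmd))
	goto skip;

/* after: ask the VMA instead, since the devmap bit no longer exists */
if (vma_is_dax(vma))
	goto skip;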
All PFN_* pfn_t flags have been removed. Therefore there is no longer
a need for the pfn_t type and all uses can be replaced with normal
pfns.
Signed-off-by: Alistair Popple
---
I'm guessing people will want this split up into several patches for
merging/review. If so I will do that onc
None of the functionality in pfn_t.h is required so delete it.
Signed-off-by: Alistair Popple
---
include/linux/pfn.h | 10 +-
include/linux/pfn_t.h | 88 +
2 files changed, 98 deletions(-)
delete mode 100644 include/linux/pfn_t.h
diff --git a
unnecessary as it is no longer checked,
instead relying on pfn_valid() to determine if there is an associated
page or not.
Signed-off-by: Alistair Popple
---
drivers/gpu/drm/gma500/fbdev.c | 2 +-
drivers/gpu/drm/omapdrm/omap_gem.c | 5 ++---
drivers/s390/block/dcssblk.c | 3 +--
drivers
The PFN_MAP flag is no longer used for anything, so remove it. The
PFN_SG_CHAIN and PFN_SG_LAST flags never appear to have been used so
also remove them.
Signed-off-by: Alistair Popple
---
include/linux/pfn_t.h | 10 ++
tools/testing/nvdimm/test/iomap.c | 4
2 files
org
Cc: da...@redhat.com
Cc: linux-kernel@vger.kernel.org
Cc: nvd...@lists.linux.dev
Cc: linux-fsde...@vger.kernel.org
Cc: linux...@kvack.org
Cc: linux-e...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: jhubb...@nvidia.com
Cc: h...@lst.de
Alistair Popple (4):
mm: Remove PFN_MAP, PFN_SG
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> "Huang, Ying" writes:
>>>>
>>>>> Alistair Popple writes:
>>
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> "Huang, Ying" writes:
>>>>
>>>>> Alistair Popple writes:
>>>
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> Huang Ying writes:
>>>>
>>>>> Previously, a fixed abstract distance MEMTIER_DEF
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>> "Huang, Ying" writes:
>>>>
>>>>> Hi, Alistair,
>>>>>
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Hi, Alistair,
>>>
>>> Sorry for late response. Just come back from vacation.
>>
>> Ditto for this response :-)
>>
>> I
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> Huang Ying writes:
>>
>>> Previously, a fixed abstract distance MEMTIER_DEFAULT_DAX_ADISTANCE is
>>> used for slow memory type in kmem driver. This limits the usage of
>>> km
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> Huang Ying writes:
>>
>>> A memory tiering abstract distance calculation algorithm based on ACPI
>>> HMAT is implemented. The basic idea is as follows.
>>>
>>> The perform
"Huang, Ying" writes:
> Hi, Alistair,
>
> Sorry for late response. Just come back from vacation.
Ditto for this response :-)
I see Andrew has taken this into mm-unstable though, so my bad for not
getting around to following all this up sooner.
> Alistair Popple wri
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>> Alistair Popple writes:
>>>
>>>>>>> While other memory device drivers can use the general notifier chain
>>>>>
"Huang, Ying" writes:
> Alistair Popple writes:
>
>> "Huang, Ying" writes:
>>
>>>>> And, I don't think that we are forced to use the general notifier
>>>>> chain interface in all memory device drivers. If the memory
"Huang, Ying" writes:
>>> The other way (suggested by this series) is to make dax/kmem call a
>>> notifier chain, then CXL CDAT or ACPI HMAT can identify the type of
>>> device and calculate the distance if the type is correct for them. I
>>> don't think that it's good to make dax/kmem know
"Huang, Ying" writes:
> Hi, Alistair,
>
> Thanks a lot for comments!
>
> Alistair Popple writes:
>
>> Huang Ying writes:
>>
>>> The abstract distance may be calculated by various drivers, such as
>>> ACPI HMAT, CXL CDAT, etc. Whi
ut into the "kmem_memory_types" list and protected by
> kmem_memory_type_lock.
See below but I wonder if kmem_memory_types could be a common helper
rather than kdax specific?
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
> Cc: Wei Xu
> Cc: Alistair Pop
rate/complete.
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
> Cc: Wei Xu
> Cc: Alistair Popple
> Cc: Dan Williams
> Cc: Dave Hansen
> Cc: Davidlohr Bueso
> Cc: Johannes Weiner
> Cc: Jonathan Cameron
> Cc: Michal Hocko
>
refactor looks good and I have run the whole series on a system with
some hmat data so:
Reviewed-by: Alistair Popple
Tested-by: Alistair Popple
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
> Cc: Wei Xu
> Cc: Alistair Popple
> Cc: Dan Williams
> Cc: Dave
of
> algorithm implementations can be specified via
> priority (notifier_block.priority).
How/what decides the priority though? That seems like something better
decided by a device driver than the algorithm driver IMHO.
> Signed-off-by: "Huang, Ying"
> Cc: Aneesh Kumar K.V
Thanks for this Huang, I had been hoping to take a look at it this week
but have run out of time. I'm keen to do some testing with it as well.
Hopefully next week...
Huang Ying writes:
> We have the explicit memory tiers framework to manage systems with
> multiple types of memory, e.g., DRAM
On Friday, 16 April 2021 2:19:18 PM AEST Dan Williams wrote:
> The revoke_iomem() change seems like something that should be moved
> into a leaf helper and not called by __request_free_mem_region()
> directly.
Ok. I have split this up but left the call to revoke_iomem() in
__request_free_mem_regi
Refactor the portion of __request_region() done whilst holding the
resource_lock into a separate function to allow callers to hold the
lock.
Signed-off-by: Alistair Popple
---
kernel/resource.c | 52 +--
1 file changed, 32 insertions(+), 20 deletions
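The usual shape of this kind of locked/unlocked split, sketched with assumed names; __request_region()'s signature is the in-tree one, while __request_region_locked() and its elided body are placeholders and the real conflict handling is omitted:

/* body that must run under resource_lock, factored out */
static struct resource *__request_region_locked(struct resource *res,
			struct resource *parent, resource_size_t start,
			resource_size_t n, const char *name, int flags)
{
	lockdep_assert_held(&resource_lock);
	/* ... the search/insert previously inlined in __request_region() ... */
	return res;
}

struct resource *__request_region(struct resource *parent,
			resource_size_t start, resource_size_t n,
			const char *name, int flags)
{
	struct resource *res = alloc_resource(GFP_KERNEL);

	if (!res)
		return NULL;

	write_lock(&resource_lock);
	res = __request_region_locked(res, parent, start, n, name, flags);
	write_unlock(&resource_lock);

	return res;
}

This keeps the allocation (which may sleep) outside the lock while letting other callers invoke the locked helper with resource_lock already held.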
arn("Unaddressable device %s %pR conflicts with %pR",
conflict->name, conflict, res);
These unexpected failures can be corrected by holding resource_lock across
the two calls. This also requires memory allocation to be performed prior
to taking the lock.
Signed-
Introduce a version of region_intersects() that can be called with the
resource_lock already held. This is used in a future fix to
__request_free_mem_region().
Signed-off-by: Alistair Popple
---
kernel/resource.c | 52 ---
1 file changed, 31
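Conceptually this is the same locked/unlocked split; a sketch built around the __region_intersects() signature quoted later in the thread, with the actual intersection walk elided:

/* walk iomem_resource; caller must already hold resource_lock */
static int __region_intersects(resource_size_t start, size_t size,
			       unsigned long flags, unsigned long desc)
{
	/* ... existing intersection walk, unchanged ... */
	return REGION_DISJOINT;
}

int region_intersects(resource_size_t start, size_t size,
		      unsigned long flags, unsigned long desc)
{
	int ret;

	read_lock(&resource_lock);
	ret = __region_intersects(start, size, flags, desc);
	read_unlock(&resource_lock);

	return ret;
}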
alls resource code so cannot be called with the resource lock held.
Therefore call it only after dropping the lock.
Fixes: 4ef589dc9b10c ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Signed-off-by: Alistair Popple
Acked-by: Balbir Singh
Reported-by: kernel test robot
---
Chang
rather than
overload try_to_unmap_one() with unrelated behaviour, split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v8:
* Renamed try_to_munlock to page_mlock to better reflect what the
fun
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 +++
lib/test_hmm_uapi.h| 2 +
tools/testing
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gp
try_to_migrate() for PageAnon or try_to_unmap().
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Ralph Campbell
---
v5:
* Added comments about how PMD splitting works for migration vs.
unmapping
* Tightened up the flag check in try_to_migrate() to be explicit about
ecks the results of atomic GPU operations on a
SVM buffer whilst also writing to the same buffer from the CPU.
Alistair Popple (8):
mm: Remove special swap entry functions
mm/swapops: Rework swap entry manipulation code
mm/rmap: Split try_to_munlock from try_to_unmap
mm/rmap: Split migration
-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/swapops.h | 56 ++---
mm/debug_vm_pgtable.c | 12 -
mm/hmm.c| 2 +-
mm/huge_memory.c| 26
to proceed.
Signed-off-by: Alistair Popple
---
v7:
* Removed magic values for fault access levels
* Improved readability of fault comparison code
v4:
* Check that page table entries haven't changed before mapping on the
device
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h
pfn_swap_entry_to_page(). Also open-code the various entry_to_pfn()
functions as this results in shorter code that is easier to understand.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v7:
* Reworded commit message to include pfn_swap_entry_to_page
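An illustration of the open-coding the message refers to; pte_to_swp_entry(), migration_entry_to_page() (the old per-type helper) and pfn_swap_entry_to_page() (the new generic one) are real helpers, the surrounding lines are not from the patch:

swp_entry_t entry = pte_to_swp_entry(pte);
struct page *page;

/* before: one helper per entry type, e.g. */
page = migration_entry_to_page(entry);

/* after: a single generic helper for any pfn-carrying swap entry */
page = pfn_swap_entry_to_page(entry);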
original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
On Thursday, 1 April 2021 3:56:05 PM AEDT Muchun Song wrote:
> External email: Use caution opening links or attachments
>
>
> On Fri, Mar 26, 2021 at 9:22 AM Alistair Popple wrote:
> >
> > request_free_mem_region() is used to find an empty range of physical
>
On Wednesday, 31 March 2021 10:57:46 PM AEDT Jason Gunthorpe wrote:
> On Wed, Mar 31, 2021 at 03:15:47PM +1100, Alistair Popple wrote:
> > On Wednesday, 31 March 2021 2:56:38 PM AEDT John Hubbard wrote:
> > > On 3/30/21 3:56 PM, Alistair Popple wrote:
> > > ...
>
On Thursday, 1 April 2021 11:48:13 AM AEDT Jason Gunthorpe wrote:
> On Thu, Apr 01, 2021 at 11:45:57AM +1100, Alistair Popple wrote:
> > On Thursday, 1 April 2021 12:46:04 AM AEDT Jason Gunthorpe wrote:
> > > On Thu, Apr 01, 2021 at 12:27:52AM +1100, Alistair Popple wrote:
>
On Thursday, 1 April 2021 12:46:04 AM AEDT Jason Gunthorpe wrote:
> On Thu, Apr 01, 2021 at 12:27:52AM +1100, Alistair Popple wrote:
> > On Thursday, 1 April 2021 12:18:54 AM AEDT Jason Gunthorpe wrote:
> > > On Wed, Mar 31, 2021 at 11:59:28PM +1100, Alistair Popple wrote:
>
On Thursday, 1 April 2021 12:18:54 AM AEDT Jason Gunthorpe wrote:
> On Wed, Mar 31, 2021 at 11:59:28PM +1100, Alistair Popple wrote:
>
> > I guess that makes sense as the split could go either way at the
> > moment but I should add a check to make sure this isn't used with
On Wednesday, 31 March 2021 6:32:34 AM AEDT Jason Gunthorpe wrote:
> On Fri, Mar 26, 2021 at 11:08:02AM +1100, Alistair Popple wrote:
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 3a5705cfc891..33d11527ef77 100644
> > +++ b/mm/memory.c
> > @@ -781,6 +781,27 @@
On Tuesday, 30 March 2021 8:13:32 PM AEDT David Hildenbrand wrote:
> External email: Use caution opening links or attachments
>
>
> On 29.03.21 03:37, Alistair Popple wrote:
> > On Friday, 26 March 2021 7:57:51 PM AEDT David Hildenbrand wrote:
> >> On 26.03.21 0
On Wednesday, 31 March 2021 2:56:38 PM AEDT John Hubbard wrote:
> On 3/30/21 3:56 PM, Alistair Popple wrote:
> ...
> >> +1 for renaming "munlock*" items to "mlock*", where applicable. good grief.
> >
> > At least the situation was weird enough to pr
On Wednesday, 31 March 2021 9:43:19 AM AEDT John Hubbard wrote:
> On 3/30/21 3:24 PM, Jason Gunthorpe wrote:
> ...
> >> As far as I can tell this has always been called try_to_munlock() even though
> >> it appears to do the opposite.
> >
> > Maybe we should change it then?
> >
> >>> /**
> >>>
On Wednesday, 31 March 2021 9:09:30 AM AEDT Alistair Popple wrote:
> On Wednesday, 31 March 2021 5:49:03 AM AEDT Jason Gunthorpe wrote:
> > On Fri, Mar 26, 2021 at 11:08:00AM +1100, Alistair Popple wrote:
> > So what clears PG_mlocked on this call path?
>
> See munloc
On Wednesday, 31 March 2021 5:49:03 AM AEDT Jason Gunthorpe wrote:
> On Fri, Mar 26, 2021 at 11:08:00AM +1100, Alistair Popple wrote:
>
> > +static bool try_to_munlock_one(struct page *page, struct vm_area_struct *vma,
> > +				unsigned long address, void *arg)
On Tuesday, 30 March 2021 2:42:34 PM AEDT John Hubbard wrote:
> On 3/29/21 5:38 PM, Alistair Popple wrote:
> > request_free_mem_region() is used to find an empty range of physical
> > addresses for hotplugging ZONE_DEVICE memory. It does this by iterating
> > over the range
st_free_mem_region variant")
Fixes: 0092908d16c60 ("mm: factor out a devm_request_free_mem_region helper")
Fixes: 4ef589dc9b10c ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Signed-off-by: Alistair Popple
Acked-by: Balbir Singh
Reported-by: kernel test robot
---
> https://github.com/0day-ci/linux/commits/Alistair-Popple/kernel-resource-Fix-locking-in-request_free_mem_region/20210326-092150
> base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
a74e6a014c9d4d4161061f770c9b4f98372ac778
>
> in testcase: boot
>
> on test machine:
On Friday, 26 March 2021 4:15:36 PM AEDT Balbir Singh wrote:
> On Fri, Mar 26, 2021 at 12:20:35PM +1100, Alistair Popple wrote:
> > +static int __region_intersects(resource_size_t start, size_t size,
> > +unsigned long flags, unsigned long desc)
> >
On Friday, 26 March 2021 7:57:51 PM AEDT David Hildenbrand wrote:
> On 26.03.21 02:20, Alistair Popple wrote:
> > request_free_mem_region() is used to find an empty range of physical
> > addresses for hotplugging ZONE_DEVICE memory. It does this by iterating
> > over
st_free_mem_region variant")
Fixes: 0092908d16c60 ("mm: factor out a devm_request_free_mem_region helper")
Fixes: 4ef589dc9b10c ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Signed-off-by: Alistair Popple
---
v2:
- Added Fixes tag
---
kernel/resource.c | 146 +++
be held over the required calls.
Instead of creating another version of devm_request_mem_region() that
doesn't take the lock open-code it to allow the caller to pre-allocate
the required memory prior to taking the lock.
Signed-off-by: Alistair Popple
---
ke
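The general pattern being described, sketched with the allocation step made explicit; error handling and the devres bookkeeping of the real change are elided:

/* perform sleeping allocations up front, then take resource_lock once
 * and hold it across both the lookup and the insertion
 */
struct resource *res = alloc_resource(GFP_KERNEL);

if (!res)
	return ERR_PTR(-ENOMEM);

write_lock(&resource_lock);
/* ... search iomem_resource for a free range and insert 'res';
 *     no allocation is allowed while the lock is held ...
 */
write_unlock(&resource_lock);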
to proceed.
Signed-off-by: Alistair Popple
---
v7:
* Removed magic values for fault access levels
* Improved readability of fault comparison code
v4:
* Check that page table entries haven't changed before mapping on the
device
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h
-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/swapops.h | 56 ++---
mm/debug_vm_pgtable.c | 12 -
mm/hmm.c| 2 +-
mm/huge_memory.c| 26
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gp
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 +++
lib/test_hmm_uapi.h| 2 +
tools/testing
original
mapping. This results in MMU notifiers being called which a driver uses
to update access permissions such as revoking atomic access. After
notifiers have been called the device will no longer have exclusive
access to the region.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
pfn_swap_entry_to_page(). Also open-code the various entry_to_pfn()
functions as this results in shorter code that is easier to understand.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v7:
* Reworded commit message to include pfn_swap_entry_to_page
try_to_migrate() for PageAnon or try_to_unmap().
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Ralph Campbell
---
v5:
* Added comments about how PMD splitting works for migration vs.
unmapping
* Tightened up the flag check in try_to_migrate() to be explicit about
rather than
overload try_to_unmap_one() with unrelated behaviour, split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
v7:
* Added Christoph's Reviewed-by
v4:
* Removed redundant check for
upstream Mesa userspace with a simple
OpenCL test program which checks the results of atomic GPU operations on a
SVM buffer whilst also writing to the same buffer from the CPU.
Alistair Popple (8):
mm: Remove special swap entry functions
mm/swapops: Rework swap entry manipulation code
mm/rma
On Tuesday, 23 March 2021 9:26:43 PM AEDT David Hildenbrand wrote:
> On 20.03.21 10:36, Miaohe Lin wrote:
> > If the zone device page does not belong to un-addressable device memory,
> > the variable entry will be uninitialized and lead to indeterminate pte
> > entry ultimately. Fix this unexpectan
On Monday, 15 March 2021 6:42:45 PM AEDT Christoph Hellwig wrote:
> > +Not all devices support atomic access to system memory. To support atomic
> > +operations to a shared virtual memory page such a device needs access to that
> > +page which is exclusive of any userspace access from the CPU. The
On Monday, 15 March 2021 6:51:13 PM AEDT Christoph Hellwig wrote:
> > - /*XXX: atomic? */
> > - return (fa->access == 0 || fa->access == 3) -
> > - (fb->access == 0 || fb->access == 3);
> > + /* Atomic access (2) has highest priority */
> > + return (-1*(fa->access == 2) + (fa->acc
On Monday, 15 March 2021 6:27:57 PM AEDT Christoph Hellwig wrote:
> On Fri, Mar 12, 2021 at 07:38:44PM +1100, Alistair Popple wrote:
> > Remove the migration and device private entry_to_page() and
> > entry_to_pfn() inline functions and instead open code them directly.
> > Th
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 ++
lib/test_hmm_uapi.h| 2 +
tools/testing