t we're here, if you drop this hunk then
it will make merging easier, I think.
[1] https://lore.kernel.org/r/20210813044133.1536842-4-jhubb...@nvidia.com
thanks,
--
John Hubbard
NVIDIA
ory in vidmem.
So I think we don't want to rule out that behavior, right? Or is the
thinking more like, "you're lucky that this old non-ODP setup works at
all, and we'll make it work by routing through host/cpu memory, but it
will be slow"?
thanks,
--
John Hubbard
NVIDIA
block migration from happening, e.g. if the CPU touches it
later or something.
OK. I just want to avoid creating any API-level assumptions that dma_buf_pin()
necessarily implies or requires migrating to host memory.
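Just to make that point concrete, here is a rough sketch of a typical
importer-side flow (illustrative only: the ops/device names are placeholders,
and error paths plus dma_resv locking are elided):

    struct dma_buf_attachment *attach;
    struct sg_table *sgt;
    int ret;

    attach = dma_buf_dynamic_attach(dmabuf, dev, &my_importer_ops, NULL);
    if (IS_ERR(attach))
            return PTR_ERR(attach);

    /*
     * dma_buf_pin() only promises "this won't move while pinned". It says
     * nothing, at the API level, about *where* the exporter keeps the
     * backing memory (host vs. device).
     */
    ret = dma_buf_pin(attach);
    if (ret)
            return ret;

    sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);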
thanks,
--
John Hubbard
NVIDIA
't need to worry so much about
providing first-class support for non-ODP setups.
I've got to drag my brain into 2021+! :)
thanks,
--
John Hubbard
NVIDIA
unlock*" items to "mlock*", where applicable. good grief.
Although, it seems reasonable to tack such renaming patches onto the tail end
of this series. But whatever works.
thanks,
--
John Hubbard
NVIDIA
as there
is only one caller of try_to_munlock.
- Alistair
No objections here. :)
thanks,
--
John Hubbard
NVIDIA
On 3/30/21 8:56 PM, John Hubbard wrote:
On 3/30/21 3:56 PM, Alistair Popple wrote:
...
+1 for renaming "munlock*" items to "mlock*", where applicable. good grief.
At least the situation was weird enough to prompt further investigation :)
Renaming to mlock* doesn
dm_mr.close()
+
+
+def check_dmabuf_support(unit=0):
+    """
+    Check if dma-buf allocation is supported by the system.
+    Skip the test on failure.
+    """
+    device_num = 128 + unit
+    try:
+        DmaBuf(1, unit=unit)
unit?? Th
about the future of pinning to vidmem,
for this? It would allow a huge group of older GPUs and NICs and such to
do p2p with this approach, and it seems like a natural next step, right?
thanks,
--
John Hubbard
NVIDIA
way to use this solution here for peer-to-peer. So I'm glad to see that
so far you're not ruling out the pinning option.
thanks,
--
John Hubbard
NVIDIA
Obviously not worth spinning another version for that, as it is still readable
as-is. Just mentioning it for the sake of pointless perfectionism, and in case
someone ever wonders why it was missed during a review. :) Either way, feel free
to add:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NV
in, but not a refcount
pin.)
It's a nice clean design point that we need to preserve, and fortunately it
doesn't conflict with anything I'm seeing here. But I want to say this out
loud because I see some doubt about it creeping into the discussion.
thanks,
--
John Hubbard
NVIDIA
asily write programs that do a long series of atomic
operations.
Such a program would be a little weird, but it's hard to rule out.
- long term pin: the page cannot be moved, all migration must fail.
Also this will have an impact on COW behaviour for fork (but not sure
where those patches ar
ut we can forcefully break this whenever we feel like by revoking
the page, moving it, and then reinstating the gpu pte again and let it
continue.
Oh yes, that's true.
If that's not possible then what we need here instead is an mlock() type of
thing I think.
No need for that, then.
ng in
^FOLL_LONGTERM
ZONE_MOVEABLE before they're migrated) is still being worked on. So
not big benefits yet.
Yes. Great write-up, that's very clear, and it's exactly where we're at.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
Cc: John Hubbard
Signed-
spelling. "Coyyntext" just doesn't sound as good. :)
:param num: Size of initial collection
:return: A random legal value for MR flags
"""
thanks,
--
John Hubbard
NVIDIA
/O RESET_REG.
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
oliver.s...@intel.com
thanks,
--
John Hubbard
NVIDIA
age, get_futex_key requires a
* get_user_pages_fast_only implementation that can pin pages. Thus it's still
* useful to have gup_huge_pmd even if we can't operate on ptes.
*/
thanks,
--
John Hubbard
NVIDIA
ly, and I'm not
seeing a pud_mkhugespecial anywhere. So not sure this works, but probably
just me missing something again.
It means ioremap can't create an IO page PUD, it has to be broken up.
Does ioremap even create anything larger than PTEs?
From my reading, yes. See ioremap_try_hug
On 1/20/22 6:12 AM, Christoph Hellwig wrote:
On Tue, Jan 11, 2022 at 12:17:18AM -0800, John Hubbard wrote:
Zooming in on the pinning aspect for a moment: last time I attempted to
convert O_DIRECT callers from gup to pup, I recall wanting very much to
record, in each bio_vec, whether these pages
And of course, you're signing for another huge naming debate with Linus
if you go with the "cool" name here. :)
thanks,
--
John Hubbard
NVIDIA
range is now invalid, so poison its hmm pointer.
* Leave other range-> fields in place, for the caller's use.
*/
...or something like that?
> + range->valid = false;
> + memset(&range->hmm, POISON_INUSE, sizeof(range
ULL;
> - hmm->dead = true;
> - spin_unlock(&mm->page_table_lock);
> - hmm_put(hmm);
> - return;
> - }
> -
> - spin_unlock(&mm->page_table_lock);
> -}
> -
> static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
> {
> struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
>
Failed to find any problems with this. :)
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
general, holding
a lock during a driver callback usually leads to deadlocks.)
Ralph, is this the one? It's the only place in this patchset where I can
see a lock around a callback to driver code, that wasn't there before. So
I'm pretty sure it is the one...
thanks,
--
John Hubbard
NVIDIA
*kref)
> {
> struct hmm *hmm = container_of(kref, struct hmm, kref);
> - struct mm_struct *mm = hmm->mm;
> -
> - mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
>
> - spin_lock(&mm->page_table_lock);
> - if (mm->hmm == hmm)
> - mm->hmm = NULL;
> - spin_unlock(&mm->page_table_lock);
> -
> - mmdrop(hmm->mm);
> + mmu_notifier_unregister_no_release(&hmm->mmu_notifier, hmm->mm);
> mmu_notifier_call_srcu(&hmm->rcu, hmm_free_rcu);
> }
>
>
Yes.
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
@ -941,7 +941,7 @@ void hmm_range_unregister(struct hmm_range *range)
> return;
>
> mutex_lock(&hmm->lock);
> - list_del_rcu(&range->list);
> + list_del(&range->list);
> mutex_unlock(&hmm->lock);
>
>
exclusive(&mm->mmap_sem);
> +
> /* Sanity check */
> if (!mm || !mirror || !mirror->ops)
> return -EINVAL;
>
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
return -EFAULT;
> -
> /* Check if hmm_mm_destroy() was call. */
> - if (hmm->mm == NULL || hmm->dead) {
> - hmm_put(hmm);
> + if (hmm->mm == NULL || hmm->dead)
> return -EFAULT;
> - }
> +
> + range-&
;valid);
Just to ensure that I actually understand the model: I'm assuming that the
READ_ONCE is there solely to ensure that range->valid is read *after* the
wait_event_timeout() returns. Is that correct?
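In other words, a rough paraphrase of the pattern as I read it (not the
actual patch text):

    long timeout = msecs_to_jiffies(1000);   /* illustrative timeout */

    wait_event_timeout(hmm->wq, range->valid, timeout);
    /*
     * Re-read after the wait completes; READ_ONCE() keeps the compiler
     * from reusing a value of range->valid cached from before the wait.
     */
    return READ_ONCE(range->valid);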
> }
>
> /*
>
In any case, it looks good, so:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
- VM_BUG_ON(!hmm);
> + /* hmm is in progress to free */
And here.
> + if (!kref_get_unless_zero(&hmm->kref))
> + return;
>
> mutex_lock(&hmm->lock);
> hmm->notifiers--;
>
Elegant fix. Regardless of the above chatter I added, you can add:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
7 @@ long hmm_range_snapshot(struct hmm_range *range)
> struct vm_area_struct *vma;
> struct mm_walk mm_walk;
>
> - /* Check if hmm_mm_destroy() was call. */
> - if (hmm->mm == NULL || hmm->dead)
> - return -EFAULT;
> -
> + lockdep_assert_
em);
> -
> hmm_put(hmm);
> + memset(&mirror->hmm, POISON_INUSE, sizeof(mirror->hmm));
I hadn't thought of POISON_* for these types of cases, it's a
good technique to remember.
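A generic illustration of the idea, for my own notes (hypothetical names,
not this patch): after the last legitimate use of a back-pointer, scribble a
poison value over the field so that any stale access faults on an obviously
bogus address instead of quietly chasing freed memory.

    #include <linux/poison.h>
    #include <linux/string.h>

    struct foo_mirror {
            struct foo *foo;        /* back-pointer (hypothetical) */
    };

    static void foo_mirror_unregister(struct foo_mirror *mirror)
    {
            foo_put(mirror->foo);
            /* Poison the now-stale back-pointer. */
            memset(&mirror->foo, POISON_INUSE, sizeof(mirror->foo));
    }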
I noticed that this is now done outside of the lock, but that
follows directly from
On 6/7/19 5:34 AM, Jason Gunthorpe wrote:
> On Thu, Jun 06, 2019 at 07:29:08PM -0700, John Hubbard wrote:
>> On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
>>> From: Jason Gunthorpe
>> ...
>>> @@ -153,10 +158,14 @@ void hmm_mm_destroy(struct mm_struct *mm)
>>
has a valid hmm->mm.
>
> Allowing the return value of wait_event_timeout() (along with its internal
> barriers) to compute the result of the function.
>
> Signed-off-by: Jason Gunthorpe
> ---
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
On 6/7/19 5:34 AM, Jason Gunthorpe wrote:
> On Thu, Jun 06, 2019 at 07:29:08PM -0700, John Hubbard wrote:
>> On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
>>> From: Jason Gunthorpe
>> ...
>>> diff --git a/mm/hmm.c b/mm/hmm.c
>>> index 8e7403f081f44a
we need to fix
> hmm_vma_fault() and all its varients. Otherwise the driver will be
> broken.
>
> I'm hoping to first define what this locking should be (see other
> emails to Ralph) then, ideally, see if we can extend mmu notifiers to
> get it dire
w that we've simplified the API for it.
>
> Signed-off-by: Christoph Hellwig
> ---
> include/linux/hmm.h | 3 ---
> mm/hmm.c| 54 -
> 2 files changed, 57 deletions(-)
>
No objections here, good cleanup.
Reviewe
On 6/13/19 2:43 AM, Christoph Hellwig wrote:
> noveau is currently using this through an odd hmm wrapper, and I plan
"nouveau"
> to switch it to the real thing later in this series.
>
> Signed-off-by: Christoph Hellwig
> ---
Reviewed-by: John Hubbard
thanks
mutex_lock(&drm->dmem->mutex);
> continue;
> }
>
>
The above comment is about pre-existing potential problems, but your patch
itself
looks correct, so:
Reviewed-by: John Hubbard
thanks,
--
John Hubbard
NVIDIA
re-added as part of a future patchset to use that
kind of memory, but yes, we should not hesitate to clean house at this
point, and delete unused code.
thanks,
--
John Hubbard
NVIDIA
>
>>
>> So, yes, we really don't want any distro or something to turn this on
>> until i
a good housecleaning here.
(As to the history: I know that there was some early "HMM dummy device"
testing when the HMM code was much younger, but such testing has long since
been superseded by more elaborate testing with real drivers.)
Reviewed-by: John Hubbard
thanks,
--
John Hubb
that line was unnecessary. I see from git history that it was
originally being set to NULL from within __put_devmap_managed_page(), and then
in commit 2fa147bdbf672c53386a8f5f2c7fe358004c3ef8, Dan moved it out of there,
and stashed it specifically here. But it appears to
is enabled, then say Y.
config MIGRATE_VMA_HELPER
-	bool
+	bool "migrate_vma() helper routine"
+	help
+	  Provides a migrate_vma() routine that GPUs and other
+	  device drivers may need.

config DEV_PAGEMAP_OPS
	bool
thanks,
--
John Hubb
));
> - if (!devmem->resource)
> - return ERR_PTR(-ENOMEM);
> - break;
> - }
> - if (!devmem->resource)
> - return ERR_PTR(-ERANGE);
> -
> - devmem->resource->desc = IORES_DESC_DEVICE_PRIVA
On 6/14/19 10:39 AM, Ralph Campbell wrote:
> On 6/13/19 5:49 PM, John Hubbard wrote:
>> On 6/13/19 5:11 PM, Ralph Campbell wrote:
...
>> Actually, the pre-existing code is a little concerning. Your change preserves
>> the behavior, but it seems questionable to be doing a "
On 6/19/19 12:27 PM, Jason Gunthorpe wrote:
> On Thu, Jun 13, 2019 at 06:23:04PM -0700, John Hubbard wrote:
>> On 6/13/19 5:43 PM, Ira Weiny wrote:
>>> On Thu, Jun 13, 2019 at 07:58:29PM +, Jason Gunthorpe wrote:
>>>> On Thu, Jun 13, 2019 at 12:53:02
On 6/25/19 10:45 PM, Michal Hocko wrote:
> On Tue 25-06-19 20:15:28, John Hubbard wrote:
>> On 6/19/19 12:27 PM, Jason Gunthorpe wrote:
>>> On Thu, Jun 13, 2019 at 06:23:04PM -0700, John Hubbard wrote:
>>>> On 6/13/19 5:43 PM, Ira Weiny wrote:
>>>>> On
From: John Hubbard
While converting call sites to use put_user_page*() [1], quite a few
places ended up needing a single-page routine to put and dirty a
page.
Provide put_user_page_dirty() and put_user_page_dirty_lock(),
and use them in a few places: net/xdp, drm/via/, drivers/infiniband.
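Roughly speaking (a sketch of a typical call site, not a quote from the
patch), this turns:

    set_page_dirty_lock(page);
    put_user_page(page);

into:

    put_user_page_dirty_lock(page);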
Cc
From: John Hubbard
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().
This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeh
From: John Hubbard
Hi,
Here is the first small batch of call site conversions for put_page()
to put_user_page().
This batch includes some, but not all of the places that benefit from the
two new put_user_page_dirty*() helper functions. (The ordering of call site
conversion patch submission
s, as I can't see how a reserved
> page can end up here. So IMHO the above snippled should really look
> something like this:
>
> put_user_pages(vsg->pages[i], vsg->num_pages,
> vsg->direction == DMA_FROM_DEVICE);
>
> in the end.
>
Agreed.
thanks,
--
John Hubbard
NVIDIA
On 7/21/19 9:30 PM, john.hubb...@gmail.com wrote:
> From: John Hubbard
>
> While converting call sites to use put_user_page*() [1], quite a few
> places ended up needing a single-page routine to put and dirty a
> page.
>
> Provide put_user_page_dirty() and put_user_page_di
On 7/22/19 12:07 PM, Matthew Wilcox wrote:
> On Mon, Jul 22, 2019 at 11:53:54AM -0700, John Hubbard wrote:
>> On 7/22/19 2:33 AM, Christoph Hellwig wrote:
>>> On Sun, Jul 21, 2019 at 09:30:10PM -0700, john.hubb...@gmail.com wrote:
>>>>for (
From: John Hubbard
Add a more capable variation of put_user_pages() to the
API set, and call it from the simple ones.
The new __put_user_pages() takes an enum that handles the various
combinations of needing to call set_page_dirty() or
set_page_dirty_lock(), before calling put_user_page().
Cc
From: John Hubbard
As discussed in [1] just now, this adds a more capable variation of
put_user_pages() to the API set, and uses it to simplify both the
main implementation, and (especially) the call sites.
Thanks to Christoph for the simplifying ideas, and Matthew for (again)
recommending an
On 7/22/19 5:25 PM, Ira Weiny wrote:
On Mon, Jul 22, 2019 at 03:34:15PM -0700, john.hubb...@gmail.com wrote:
From: John Hubbard
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().
This is
" really means, "call set_page_dirty()": */
void __put_user_pages_unlocked(struct page **pages, unsigned long npages,
bool dirty);
?
thanks,
--
John Hubbard
NVIDIA
On 7/23/19 11:06 AM, Ira Weiny wrote:
> On Mon, Jul 22, 2019 at 09:41:34PM -0700, John Hubbard wrote:
>> On 7/22/19 5:25 PM, Ira Weiny wrote:
>>> On Mon, Jul 22, 2019 at 03:34:15PM -0700, john.hubb...@gmail.com wrote:
...
>> Obviously, this stuff is all subject to a certai
From: John Hubbard
Changes since v1:
* Instead of providing __put_user_pages(), add an argument to
put_user_pages_dirty_lock(), and delete put_user_pages_dirty().
This is based on the following points:
1. Lots of call sites become simpler if a bool is passed
into put_user_page
From: John Hubbard
Provide a more capable variation of put_user_pages_dirty_lock(),
and delete put_user_pages_dirty(). This is based on the
following:
1. Lots of call sites become simpler if a bool is passed
into put_user_page*(), instead of making the call site
choose which put_user_page
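To make point 1 concrete, a sketch of a typical call-site simplification
(the exact condition varies per driver; "writable && dirty" here is just
illustrative):

    /* Before: the call site has to pick a variant: */
    if (writable && dirty)
            put_user_pages_dirty_lock(pages, npages);
    else
            put_user_pages(pages, npages);

    /* After: the decision folds into a bool argument: */
    put_user_pages_dirty_lock(pages, npages, writable && dirty);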
On 7/23/19 6:26 PM, john.hubb...@gmail.com wrote:
> From: John Hubbard
...
> + * 2) This code sees the page as clean, so it calls
> + * set_page_dirty(). The page stays dirty, despite being
> + * written back, so it gets written back
From: John Hubbard
Hi,
I apologize for the extra emails (v2 was sent pretty recently), but I
didn't want to leave a known-broken version sitting out there, creating
problems.
Changes since v2:
* Critical bug fix: remove a stray "break;" from the new routine.
Changes since v1
From: John Hubbard
Changes since v4:
* Christoph Hellwig's review applied: deleted siw_free_plist() and
__qib_release_user_pages(), now that put_user_pages_dirty_lock() does
what those routines were doing.
* Applied Bjorn's ACK for net/xdp, and Christoph's Reviewed-by
From: John Hubbard
Hi,
These are best characterized as miscellaneous conversions: many (not all)
call sites that don't involve biovec or iov_iter, nor mm/. It also leaves
out a few call sites that require some more work. These are mostly pretty
simple ones.
It's probably best to s
On 8/1/19 7:16 PM, john.hubb...@gmail.com wrote:
> From: John Hubbard
>
> Hi,
>
> These are best characterized as miscellaneous conversions: many (not all)
> call sites that don't involve biovec or iov_iter, nor mm/. It also leaves
> out a few call sites that require