CMOS,
periodically, and during an orderly shutdown.
Hope that adds some clarity.
thanks,
John Hubbard
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
the lock prefix. The function above does not use the
lock prefix, so it is not atomic.
thanks,
John Hubbard
On 06/20/2018 05:08 AM, Jan Kara wrote:
> On Tue 19-06-18 11:11:48, John Hubbard wrote:
>> On 06/19/2018 03:41 AM, Jan Kara wrote:
>>> On Tue 19-06-18 02:02:55, Matthew Wilcox wrote:
>>>> On Tue, Jun 19, 2018 at 10:29:49AM +0200, Jan Kara wrote:
[...]
>>>
From: John Hubbard
KASAN reports a use-after-free during startup, in mei_cl_write:
BUG: KASAN: use-after-free in mei_cl_write+0x601/0x870 [mei]
(drivers/misc/mei/client.c:1770)
This is caused by commit 98e70866aacb ("mei: add support for variable
length mei headers."), whi
On 04/11/2018 08:37 AM, Jann Horn wrote:
> On Wed, Apr 11, 2018 at 2:04 PM, wrote:
>> From: Michal Hocko
>>
>> 4.17+ kernels offer a new MAP_FIXED_NOREPLACE flag which allows the caller to
>> atomically probe for a given address range.
>>
>> [wording heav
ection. Maybe if you break up the sentence, and
possibly omit non-MAP_FIXED discussion, it will help.
> +and take appropriate action if the kernel places the new mapping at a
> +different address.
> .TP
> .BR MAP_FIXED_NOREPLACE " (since Linux 4.17)"
> .\" commit a4ff8e8620d3f4f50ac4b41e8067b7d395056843
>
thanks,
--
John Hubbard
NVIDIA
On 04/12/2018 11:49 AM, Jann Horn wrote:
> On Thu, Apr 12, 2018 at 8:37 PM, Michael Kerrisk (man-pages)
> wrote:
>> Hi John,
>>
>> On 12 April 2018 at 20:33, John Hubbard wrote:
>>> On 04/12/2018 08:39 AM, Jann Horn wrote:
>>>> Clarify that MAP_FIXED
On 04/12/2018 12:18 PM, Jann Horn wrote:
> On Thu, Apr 12, 2018 at 8:59 PM, John Hubbard wrote:
>> On 04/12/2018 11:49 AM, Jann Horn wrote:
>>> On Thu, Apr 12, 2018 at 8:37 PM, Michael Kerrisk (man-pages)
>>> wrote:
>>>> Hi John,
>>>>
>>>
distinction, i.e. remove HMM_PFN_EMPTY flag and merge now
> duplicate hmm_vma_walk_hole() and hmm_vma_walk_clear() functions.
>
> Signed-off-by: Jérôme Glisse
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> include/linux
Mark Hairgrove
> Cc: John Hubbard
> ---
> include/linux/hmm.h | 4 ++--
> mm/hmm.c            | 2 +-
> 2 files changed, 3 insertions(+), 3 deletions(-)
Seems entirely harmless. :)
Reviewed-by: John Hubbard
>
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
>
Mark Hairgrove
> Cc: John Hubbard
> ---
> mm/hmm.c | 16
> 1 file changed, 8 insertions(+), 8 deletions(-)
Reviewed-by: John Hubbard
>
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 857eec622c98..3a708f500b80 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @
ct. With
> this, device drivers can get pfns that match their own private encoding
> out of HMM without having to do any conversion.
>
> Signed-off-by: Jérôme Glisse
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hub
.kernel.org
> Cc: Evgeny Baskakov
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> include/linux/hmm.h | 10 ++
> mm/hmm.c            | 18 +-
> 2 files changed, 27 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/hmm.h b/include/lin
e
> refcount we have on it.
>
> Signed-off-by: Jérôme Glisse
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> mm/hmm.c | 19 +++
> 1 file changed, 19 insertions(+)
>
> diff --git a/mm/hmm.c b/mm/hmm.c
>
ôme Glisse
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> include/linux/hmm.h | 130
> +---
> mm/hmm.c| 85 +++---
> 2 files cha
Cc: Ralph Campbell
> Cc: Mark Hairgrove
> Cc: John Hubbard
> ---
> mm/hmm.c | 174
> +--
> 1 file changed, 102 insertions(+), 72 deletions(-)
>
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 52cdceb35733..dc
distinction, i.e. remove HMM_PFN_EMPTY flag and merge now
> duplicate hmm_vma_walk_hole() and hmm_vma_walk_clear() functions.
>
> Changed since v1:
> - Improved comments
>
> Signed-off-by: Jérôme Glisse
> Cc: Evgeny Baskakov
> Cc: Ralph Campbell
> Cc: Mark Hairgrove
>
On 03/21/2018 11:03 AM, Jerome Glisse wrote:
> On Tue, Mar 20, 2018 at 09:14:34PM -0700, John Hubbard wrote:
>> On 03/19/2018 07:00 PM, jgli...@redhat.com wrote:
>>> From: Ralph Campbell
>> Hi Jerome,
>>
>> This presents a deadlock problem (details bel
On 03/21/2018 08:08 AM, Jerome Glisse wrote:
> On Tue, Mar 20, 2018 at 10:07:29PM -0700, John Hubbard wrote:
>> On 03/19/2018 07:00 PM, jgli...@redhat.com wrote:
>>> From: Jérôme Glisse
>>> +static int hmm_vma_handle_pmd(struct mm_walk *walk,
>>> +
On 03/21/2018 03:46 PM, Jerome Glisse wrote:
> On Wed, Mar 21, 2018 at 03:16:04PM -0700, John Hubbard wrote:
>> On 03/21/2018 11:03 AM, Jerome Glisse wrote:
>>> On Tue, Mar 20, 2018 at 09:14:34PM -0700, John Hubbard wrote:
>>>> On 03/19/2018 07:00 PM, jgli...@redha
On 03/21/2018 07:48 AM, Jerome Glisse wrote:
> On Tue, Mar 20, 2018 at 10:24:34PM -0700, John Hubbard wrote:
>> On 03/19/2018 07:00 PM, jgli...@redhat.com wrote:
>>> From: Jérôme Glisse
>>>
>>
>>
>>
>>> @@ -438,7 +423,7 @@ static int hmm
On 03/21/2018 08:52 AM, Jerome Glisse wrote:
> On Tue, Mar 20, 2018 at 09:39:27PM -0700, John Hubbard wrote:
>> On 03/19/2018 07:00 PM, jgli...@redhat.com wrote:
>>> From: Jérôme Glisse
>
> [...]
>
>>
>> Let's just keep it simple, and go back to
g set outside of locks, so now there is another race with
another hmm_mirror_register...
I'll take a moment and draft up what I have in mind here, which is a more
symmetrical locking scheme for these routines.
thanks,
--
John Hubbard
NVIDIA
On 03/21/2018 04:37 PM, Jerome Glisse wrote:
> On Wed, Mar 21, 2018 at 04:10:32PM -0700, John Hubbard wrote:
>> On 03/21/2018 03:46 PM, Jerome Glisse wrote:
>>> On Wed, Mar 21, 2018 at 03:16:04PM -0700, John Hubbard wrote:
>>>> On 03/21/2018 11:03 AM, Jerome Glis
ase callback. This
> allow the release callback to wait on any pending fault handler
> without deadlock.
>
> Signed-off-by: Ralph Campbell
> Signed-off-by: Jérôme Glisse
> Cc: Evgeny Baskakov
> Cc: Mark Hairgrove
> Cc: John Hubbard
> -
On 03/21/2018 04:41 PM, Jerome Glisse wrote:
> On Wed, Mar 21, 2018 at 04:22:49PM -0700, John Hubbard wrote:
>> On 03/21/2018 11:16 AM, jgli...@redhat.com wrote:
>>> From: Jérôme Glisse
>>>
>>> This code was lost in translat
On 03/22/2018 04:37 PM, Jerome Glisse wrote:
> On Thu, Mar 22, 2018 at 03:47:16PM -0700, John Hubbard wrote:
>> On 03/21/2018 04:41 PM, Jerome Glisse wrote:
>>> On Wed, Mar 21, 2018 at 04:22:49PM -0700, John Hubbard wrote:
>>>> On 03/21/2018 11:16 AM, jgli...@redhat
On 03/22/2018 05:50 PM, Jerome Glisse wrote:
> On Thu, Mar 22, 2018 at 05:13:14PM -0700, John Hubbard wrote:
>> On 03/22/2018 04:37 PM, Jerome Glisse wrote:
>>> On Thu, Mar 22, 2018 at 03:47:16PM -0700, John Hubbard wrote:
>>>> On 03/21/2018 04:41 PM, Jerome Glis
From: John Hubbard
Hi,
This short series prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2].
I'd like to get the first two patches into the -mm tree.
Patch 1, although not technically critical to do now, is still nice to have,
because
From: John Hubbard
An upcoming patch requires a way to operate on each page that
any of the get_user_pages_*() variants returns.
In preparation for that, consolidate the error handling for
__get_user_pages(). This provides a single location (the "out:" label)
for operating on the col
From: John Hubbard
For code that retains pages via get_user_pages*(),
release those pages via the new release_user_pages(),
instead of calling put_page().
This prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2].
[1] https://lwn.net/Articles
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also adds release_user_pages(), a drop-in replacement for
release_pages(). This is intended to be
From: John Hubbard
For code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(),
instead of put_page().
This prepares for eventually fixing the problem described
in [1], and is following a plan listed in [2].
[1] https://lwn.net/Articles/753027/
On 9/28/18 8:29 AM, Jerome Glisse wrote:
> On Thu, Sep 27, 2018 at 10:39:45PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> Hi,
>>
>> This short series prepares for eventually fixing the problem described
>> in [1], and is following a pl
On 9/28/18 2:49 PM, Jerome Glisse wrote:
> On Fri, Sep 28, 2018 at 12:06:12PM -0700, John Hubbard wrote:
>> On 9/28/18 8:29 AM, Jerome Glisse wrote:
>>> On Thu, Sep 27, 2018 at 10:39:45PM -0700, john.hubb...@gmail.com wrote:
>>>> From: John Hubbard
[...]
>>>
On 9/28/18 8:39 AM, Jason Gunthorpe wrote:
> On Thu, Sep 27, 2018 at 10:39:47PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
[...]
>>
>> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
>> index a41792dbae1f..9430d697c
Here's my main concern:
>
Hi Tom,
Thanks again for looking at this!
> On 11/10/2018 3:50 AM, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>> ...
>> --
>> WITHOUT the patch:
>> -
From: John Hubbard
Hi,
Keith Busch and Dan Williams noticed that this patch
(which was part of my RFC[1] for the get_user_pages + DMA
fix) also fixes a bug. Accordingly, I'm adjusting
the changelog and posting this as its own patch.
[1] https://lkml.kernel.org/r/20181110085041.100
From: John Hubbard
Commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap while pinning pages")
attempted to operate on each page that get_user_pages had retrieved. In
order to do that, it created a common exit point from the routine.
However, one case was missed, which this patch fixes
On 11/21/18 8:49 AM, Tom Talpey wrote:
> On 11/21/2018 1:09 AM, John Hubbard wrote:
>> On 11/19/18 10:57 AM, Tom Talpey wrote:
>>> ~14000 4KB read IOPS is really, really low for an NVMe disk.
>>
>> Yes, but Jan Kara's original config file for fio is *intended*
to pin the gate page, as part of this call",
rather than the generic case of crossing a vma boundary. (I think there's a fine
point that I must be overlooking.) But it's still a valid case, either way.
--
thanks,
John Hubbard
NVIDIA
On 11/27/18 5:21 PM, Tom Talpey wrote:
> On 11/21/2018 5:06 PM, John Hubbard wrote:
>> On 11/21/18 8:49 AM, Tom Talpey wrote:
>>> On 11/21/2018 1:09 AM, John Hubbard wrote:
>>>> On 11/19/18 10:57 AM, Tom Talpey wrote:
[...]
>>>
>>> What I'd
On 11/28/18 5:59 AM, Tom Talpey wrote:
> On 11/27/2018 9:52 PM, John Hubbard wrote:
>> On 11/27/18 5:21 PM, Tom Talpey wrote:
>>> On 11/21/2018 5:06 PM, John Hubbard wrote:
>>>> On 11/21/18 8:49 AM, Tom Talpey wrote:
>>>>> On 11/21/2018 1:09 AM, John Hu
On 11/29/18 6:18 PM, Tom Talpey wrote:
> On 11/29/2018 8:39 PM, John Hubbard wrote:
>> On 11/28/18 5:59 AM, Tom Talpey wrote:
>>> On 11/27/2018 9:52 PM, John Hubbard wrote:
>>>> On 11/27/18 5:21 PM, Tom Talpey wrote:
>>>>> On 11/21/2018 5:06 PM, John H
On 11/29/18 6:30 PM, Tom Talpey wrote:
> On 11/29/2018 9:21 PM, John Hubbard wrote:
>> On 11/29/18 6:18 PM, Tom Talpey wrote:
>>> On 11/29/2018 8:39 PM, John Hubbard wrote:
>>>> On 11/28/18 5:59 AM, Tom Talpey wrote:
>>>>> On 11/27/2018 9:52 PM, John H
On 11/12/18 8:14 AM, Dan Williams wrote:
On Mon, Nov 12, 2018 at 7:45 AM Keith Busch wrote:
On Sat, Nov 10, 2018 at 12:50:36AM -0800, john.hubb...@gmail.com wrote:
From: John Hubbard
An upcoming patch wants to be able to operate on each page that
get_user_pages has retrieved. In order to
On 12/4/18 12:28 PM, Dan Williams wrote:
> On Mon, Dec 3, 2018 at 4:17 PM wrote:
>>
>> From: John Hubbard
>>
>> Introduces put_user_page(), which simply calls put_page().
>> This provides a way to update all get_user_pages*() callers,
>> so that they ca
On 12/4/18 3:03 PM, Dan Williams wrote:
> On Tue, Dec 4, 2018 at 1:56 PM John Hubbard wrote:
>>
>> On 12/4/18 12:28 PM, Dan Williams wrote:
>>> On Mon, Dec 3, 2018 at 4:17 PM wrote:
>>>>
>>>> From: John Hubbard
>>>>
>>>&
On 12/4/18 4:40 PM, Dan Williams wrote:
> On Tue, Dec 4, 2018 at 4:37 PM Jerome Glisse wrote:
>>
>> On Tue, Dec 04, 2018 at 03:03:02PM -0800, Dan Williams wrote:
>>> On Tue, Dec 4, 2018 at 1:56 PM John Hubbard wrote:
>>>>
>>>> On 12/4/18 12:28 PM,
information, which I
should have added in this cover letter. Here's a start:
https://lore.kernel.org/r/20181110085041.10071-1-jhubb...@nvidia.com
...and it looks like this small patch series is not going to work out--I'm
going to have to fall back to another RFC spin. So I'll be sure to include
you and everyone on that. Hope that helps.
thanks,
--
John Hubbard
NVIDIA
On 12/3/18 11:53 PM, Mike Rapoport wrote:
> Hi John,
>
> Thanks for having documentation as a part of the patch. Some kernel-doc
> nits below.
>
> On Mon, Dec 03, 2018 at 04:17:19PM -0800, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> Introduces
On 12/4/18 5:44 PM, Jerome Glisse wrote:
> On Tue, Dec 04, 2018 at 05:15:19PM -0800, Matthew Wilcox wrote:
>> On Tue, Dec 04, 2018 at 04:58:01PM -0800, John Hubbard wrote:
>>> On 12/4/18 3:03 PM, Dan Williams wrote:
>>>> Except the LRU fields are already in us
On 12/4/18 5:57 PM, John Hubbard wrote:
> On 12/4/18 5:44 PM, Jerome Glisse wrote:
>> On Tue, Dec 04, 2018 at 05:15:19PM -0800, Matthew Wilcox wrote:
>>> On Tue, Dec 04, 2018 at 04:58:01PM -0800, John Hubbard wrote:
>>>> On 12/4/18 3:03 PM, Dan Williams wrote:
&
On 12/7/18 11:16 AM, Jerome Glisse wrote:
> On Thu, Dec 06, 2018 at 06:45:49PM -0800, John Hubbard wrote:
>> On 12/4/18 5:57 PM, John Hubbard wrote:
>>> On 12/4/18 5:44 PM, Jerome Glisse wrote:
>>>> On Tue, Dec 04, 2018 at 05:15:19PM -0800, Matthew Wilcox wrote:
>&
From: John Hubbard
Hi,
Summary: I'd like these two patches to go into the next convenient cycle.
I *think* that means 4.21.
Details
At the Linux Plumbers Conference, we talked about this approach [1], and
the primary lingering concern was over performance. Tom Talpey helped me
through a
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also introduces put_user_pages(), and a few dirty/locked variations,
as a replacement for
From: John Hubbard
For infiniband code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(), or
put_user_pages*(), instead of put_page()
This is a tiny part of the second step of fixing the problem described
in [1]. The steps are:
1) Provide put_user_page
From: John Hubbard
Hi, here is fodder for conversation during LPC.
Changes since v1:
a) Uses a simpler set/clear/test bit approach in the page flags.
b) Added some initial performance results in the cover letter here, below.
c) Rebased to latest linux.git
d) Puts pages back on the LRU when
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also introduces put_user_pages(), and a few dirty/locked variations,
as a replacement for
From: John Hubbard
An upcoming patch wants to be able to operate on each page that
get_user_pages has retrieved. In order to do that, it's best to
have a common exit point from the routine. Most of this has been
taken care of by commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap whi
From: John Hubbard
For infiniband code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(), or
put_user_pages*(), instead of put_page()
This is a tiny part of the second step of fixing the problem described
in [1]. The steps are:
1) Provide put_user_page
From: John Hubbard
Add two struct page fields that, combined, are unioned with
struct page->lru. There is no change in the size of
struct page. These new fields are for type safety and clarity.
Also add page flag accessors to test, set and clear the new
page->dma_pinned_flags field.
Th
From: John Hubbard
This patch sets and restores the new page->dma_pinned_flags and
page->dma_pinned_count fields, but does not actually use them for
anything yet.
In order to use these fields at all, the page must be removed from
any LRU list that it's on. The patch also adds some
From: John Hubbard
The page->dma_pinned_flags and _count fields require
lock protection. A lock at approximately the granularity
of the zone_lru_lock is called for, but adding to the
locking contention of zone_lru_lock is undesirable,
because that is a pre-existing hot spot. Fortunately,
th
From: John Hubbard
This fixes a few problems that came up when using devices (NICs, GPUs,
for example) that want to have direct access to a chunk of system (CPU)
memory, so that they can DMA to/from that memory. Problems [1] come up
if that memory is backed by persistent storage; for example
From: John Hubbard
The page->dma_pinned_flags and _count fields require
lock protection. A lock at approximately the granularity
of the zone_lru_lock is called for, but adding to the
locking contention of zone_lru_lock is undesirable,
because that is a pre-existing hot spot. Fortunately,
th
From: John Hubbard
This patch sets and restores the new page->dma_pinned_flags and
page->dma_pinned_count fields, but does not actually use them for
anything yet.
In order to use these fields at all, the page must be removed from
any LRU list that it's on. The patch also adds some
From: John Hubbard
An upcoming patch requires a way to operate on each page that
any of the get_user_pages_*() variants returns.
In preparation for that, consolidate the error handling for
__get_user_pages(). This provides a single location (the "out:" label)
for operating on the col
From: John Hubbard
Update page_mkclean(), page_mkclean's callers, and try_to_unmap(), so that
there is a choice: in some cases, skip dma-pinned pages. In other cases
(sync_mode == WB_SYNC_ALL), wait for those pages to become unpinned.
This fixes some problems that came up when using de
From: John Hubbard
Add a sync_mode parameter to clear_page_dirty_for_io(), to specify the
writeback sync mode, and also pass in the appropriate value
(WB_SYNC_NONE or WB_SYNC_ALL), from each filesystem location that calls
it. This will be used in subsequent patches, to allow page_mkclean() to
From: John Hubbard
Add two struct page fields that, combined, are unioned with
struct page->lru. There is no change in the size of
struct page. These new fields are for type safety and clarity.
Also add page flag accessors to test, set and clear the new
page->dma_pinned_flags field.
Th
ed to the wrong git tree, please drop us a note to
> help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/mm-fs-gup-don-t-unmap-or-drop-filesystem-buffers/20180702-090125
> config: x86_64-randconfig-x010-201826 (attached as .config)
>
ng git tree, please drop us a note to
> help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/mm-fs-gup-don-t-unmap-or-drop-filesystem-buffers/20180702-090125
> config: i386-randconfig-x075-201826 (attached as .config)
> compiler:
ed to the wrong git tree, please drop us a note to
> help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/mm-fs-gup-don-t-unmap-or-drop-filesystem-buffers/20180702-090125
> config: x86_64-randconfig-x010-201826 (attached as .config)
>
On 07/01/2018 05:56 PM, john.hubb...@gmail.com wrote:
> From: John Hubbard
>
There were some typos in patches #4 and #5, which I've fixed locally.
Let me know if anyone would like me to repost with those right away, otherwise
I'll wait for other review besides the kbuild test r
On 07/01/2018 10:52 PM, Leon Romanovsky wrote:
> On Thu, Jun 28, 2018 at 11:17:43AM +0200, Jan Kara wrote:
>> On Wed 27-06-18 19:42:01, John Hubbard wrote:
>>> On 06/27/2018 10:02 AM, Jan Kara wrote:
>>>> On Wed 27-06-18 08:57:18, Jason Gunthorpe wrote:
>>>
On 07/01/2018 11:34 PM, Leon Romanovsky wrote:
> On Sun, Jul 01, 2018 at 11:10:04PM -0700, John Hubbard wrote:
>> On 07/01/2018 10:52 PM, Leon Romanovsky wrote:
>>> On Thu, Jun 28, 2018 at 11:17:43AM +0200, Jan Kara wrote:
>>>> On Wed 27-06-18 19:42:01, John Hubbard
On 07/02/2018 02:53 AM, Jan Kara wrote:
> On Sun 01-07-18 17:56:53, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
> ...
>
>> @@ -904,12 +907,24 @@ static inline void get_page(struct page *page)
>> */
>> VM_BUG_ON_PAGE(page_ref_count(
is for each page table this is mapped in? Also the
> locking is IMHO going to hurt a lot and we need to avoid it.
>
> What I think needs to happen is that in page_mkclean(), after you've
> cleared all the page tables, you check PageDmaPinned() and wait if needed.
> Page cannot
On 07/02/2018 03:17 AM, Jan Kara wrote:
> On Sun 01-07-18 17:56:49, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> An upcoming patch requires a way to operate on each page that
>> any of the get_user_pages_*() variants returns.
>>
>> In prep
On 07/02/2018 05:08 PM, Christopher Lameter wrote:
> On Mon, 2 Jul 2018, John Hubbard wrote:
>
>>>
>>> These two are just wrong. You cannot make any page reference for
>>> PageDmaPinned() account against a pin count. First, it is just conceptually
>>>
From: John Hubbard
Introduces put_user_page(), which simply calls put_page().
This provides a safe way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().
Also adds release_user_pages(), a drop-in replacement for
release_pages(). This is intended
From: John Hubbard
Hi,
With respect to tracking get_user_pages*() pages with page->dma_pinned*
fields [1], I spent a few days retrofitting most of the get_user_pages*()
call sites, by adding calls to a new put_user_page() function, in place
of put_page(), where appropriate. This will work,
From: John Hubbard
For code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(),
instead of put_page().
Also: rename release_user_pages(), to avoid a naming
conflict with the new external function of the same name.
CC: Al Viro
Signed-off-by: John Hubbard
'release_user_pages'
> static void release_user_pages(struct page **pages, int pages_count,
> ^~
Yes. Patches #1 and #2 need to be combined here. I'll do that in the next
version, which will probably include several of the easier put_user_page()
conversions, as well.
thanks,
--
John Hubbard
NVIDIA
On 06/25/2018 08:21 AM, Jan Kara wrote:
> On Thu 21-06-18 18:30:36, Jan Kara wrote:
>> On Wed 20-06-18 15:55:41, John Hubbard wrote:
>>> On 06/20/2018 05:08 AM, Jan Kara wrote:
>>>> On Tue 19-06-18 11:11:48, John Hubbard wrote:
>>>>> On 06/19/2018 03:
On 06/25/2018 08:21 AM, Jan Kara wrote:
> On Thu 21-06-18 18:30:36, Jan Kara wrote:
>> On Wed 20-06-18 15:55:41, John Hubbard wrote:
>>> On 06/20/2018 05:08 AM, Jan Kara wrote:
>>>> On Tue 19-06-18 11:11:48, John Hubbard wrote:
>>>>> On 06/19/2018 03:
page_mkclean_one
try_to_unmap_one
At the moment, they are both just doing an evil little early-out:
if (PageDmaPinned(page))
return false;
...but we talked about maybe waiting for the condition to clear, instead?
Thoughts?
And if so, does it sound reasonable to refactor wait_on_page_bit_common(),
so that it learns how to wait for a bit that, while inside struct page, is
not within page->flags?
thanks,
--
John Hubbard
NVIDIA
map_managed_key.
put_page
put_devmap_managed_page
devmap_managed_key
__put_devmap_managed_page
So if the goal is to restore put_page() to be effectively EXPORT_SYMBOL
again, then I think there would also need to be either a non-inlined
wrapper for devmap_managed_key (awkward for a static key), or else make
it EXPORT_SYMBOL, or maybe something else that's less obvious to me at the
moment.
thanks,
--
John Hubbard
NVIDIA
On 06/15/2018 10:22 PM, Dan Williams wrote:
> On Fri, Jun 15, 2018 at 9:43 PM, John Hubbard wrote:
>> On 06/13/2018 12:51 PM, Dan Williams wrote:
>>> [ adding Andrew, Christoph, and linux-mm ]
>>>
>>> On Wed, Jun 13, 2018 at 12:33 PM, Joe Gorse wrote:
[snip]
>
From: John Hubbard
This fixes a few problems that come up when using devices (NICs, GPUs,
for example) that want to have direct access to a chunk of system (CPU)
memory, so that they can DMA to/from that memory. Problems [1] come up
if that memory is backed by persistent storage; for example
From: John Hubbard
In preparation for a subsequent patch, consolidate the error handling
for __get_user_pages(). This provides a single location (the "out:" label)
for operating on the collected set of pages that are about to be returned.
As long as we are already touching every use o
From: John Hubbard
Hi,
I'm including people who have been talking about this. This is in one sense
a medium-term work around, because there is a plan to talk about more
extensive fixes at the upcoming Linux Plumbers Conference. I am seeing
several customer bugs, though, and I really want t
On 06/17/2018 01:10 PM, Dan Williams wrote:
> On Sun, Jun 17, 2018 at 1:04 PM, Jason Gunthorpe wrote:
>> On Sun, Jun 17, 2018 at 12:53:04PM -0700, Dan Williams wrote:
diff --git a/mm/rmap.c b/mm/rmap.c
index 6db729dc4c50..37576f0a4645 100644
+++ b/mm/rmap.c
@@ -1360,6 +1360,8 @
On 06/17/2018 12:53 PM, Dan Williams wrote:
> [..]
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 6db729dc4c50..37576f0a4645 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1360,6 +1360,8 @@ static bool try_to_unmap_one(struct page *page, struct
>> vm_area_struct *vma,
>>
ng time, but on the other hand--you get
behavior that the hardware cannot otherwise do: access to non-pinned memory.
I know this was brought up before. Definitely would like to hear more
opinions and brainstorming here.
thanks,
--
John Hubbard
NVIDIA
Hi Christoph,
Thanks for looking at this...
On 06/18/2018 12:56 AM, Christoph Hellwig wrote:
> On Sat, Jun 16, 2018 at 06:25:10PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard
>>
>> This fixes a few problems that come up when using devices (NICs, GPUs,
>>
On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
> On Sun, Jun 17, 2018 at 01:28:18PM -0700, John Hubbard wrote:
>> Yes. However, my thinking was: get_user_pages() can become a way to indicate
>> that
>> these pages are going to be treated specially. In particular, the call
On 06/18/2018 10:56 AM, Dan Williams wrote:
> On Mon, Jun 18, 2018 at 10:50 AM, John Hubbard wrote:
>> On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
>>> On Sun, Jun 17, 2018 at 01:28:18PM -0700, John Hubbard wrote:
>>>> Yes. However, my thinking was: get_us
On 06/18/2018 12:21 PM, Dan Williams wrote:
> On Mon, Jun 18, 2018 at 11:14 AM, John Hubbard wrote:
>> On 06/18/2018 10:56 AM, Dan Williams wrote:
>>> On Mon, Jun 18, 2018 at 10:50 AM, John Hubbard wrote:
>>>> On 06/18/2018 01:12 AM, Christoph Hellwig wrote:
>&