On Fri, May 8, 2020 at 1:14 PM Sean Christopherson wrote:
>
> On Fri, May 08, 2020 at 11:24:25AM -0700, Jon Cargille wrote:
> > From: Peter Feiner
> >
> > Optimization for avoiding lookups in mmu_page_hash. When there's a
> > single direct root, a shadow page h
Commit-ID: 301d328a6f8b53bb86c5ecf72db7bc178bcf1999
Gitweb: https://git.kernel.org/tip/301d328a6f8b53bb86c5ecf72db7bc178bcf1999
Author: Peter Feiner
AuthorDate: Wed, 1 Aug 2018 11:06:57 -0700
Committer: Thomas Gleixner
CommitDate: Fri, 3 Aug 2018 12:36:23 +0200
x86/cpufeatures: Add
On Tue, Sep 12, 2017 at 12:55 PM, Paolo Bonzini wrote:
> On 12/09/2017 18:48, Peter Feiner wrote:
>>>>
>>>> Because update_permission_bitmask is actually the top item in the profile
>>>> for nested vmexits, this speeds up an L2->L1 vmexit by about t
On Mon, Aug 28, 2017 at 12:42 PM, Jim Mattson wrote:
>
> Looks okay to me, but I'm hoping Peter will chime in.
Sorry, this slipped by. Busy couple of weeks!
>
>
> Reviewed-by: Jim Mattson
>
> On Thu, Aug 24, 2017 at 8:56 AM, Paolo Bonzini wrote:
> > update_permission_bitmask currently does a 1
Commit-ID: 956959f6b7a982b2e789a7a8fa1de437074a5eb9
Gitweb: http://git.kernel.org/tip/956959f6b7a982b2e789a7a8fa1de437074a5eb9
Author: Peter Feiner
AuthorDate: Wed, 4 Nov 2015 09:21:46 -0800
Committer: Arnaldo Carvalho de Melo
CommitDate: Thu, 5 Nov 2015 12:47:51 -0300
perf trace: Fix
file versus live).
Signed-off-by: Peter Feiner
---
tools/perf/Documentation/perf-trace.txt | 1 -
1 file changed, 1 deletion(-)
diff --git a/tools/perf/Documentation/perf-trace.txt b/tools/perf/Documentation/perf-trace.txt
index 7ea0786..13293de 100644
--- a/tools/perf/Documentation/perf
On Fri, Oct 31, 2014 at 11:29:49AM +0800, zhanghailiang wrote:
> Agreed, but for doing a live memory snapshot (the VM is running when we do the
> snapshot), we have to do this (block the write action), because we have to save
> the page before it is dirtied by the write. This is the difference, compare
ble to block indefinitely by being allowed to release the mmap_sem).
>
> Signed-off-by: Andrea Arcangeli
Reviewed-by: Peter Feiner
> Signed-off-by: Andrea Arcangeli
Reviewed-by: Peter Feiner
On Wed, Oct 29, 2014 at 05:35:18PM +0100, Andrea Arcangeli wrote:
> This allows the get_user_pages_fast slow path to release the mmap_sem
> before blocking.
>
> Signed-off-by: Andrea Arcangeli
Reviewed-by: Peter Feiner
On Wed, Oct 29, 2014 at 05:35:19PM +0100, Andrea Arcangeli wrote:
> This allows those get_user_pages calls to pass FAULT_FLAG_ALLOW_RETRY
> to the page fault in order to release the mmap_sem during the I/O.
>
> Signed-off-by: Andrea Arcangeli
Reviewed-by: Peter Feiner
> diff -
On Thu, Oct 30, 2014 at 07:31:48PM +0800, zhanghailiang wrote:
> On 2014/10/30 1:46, Andrea Arcangeli wrote:
> >On Mon, Oct 27, 2014 at 05:32:51PM +0800, zhanghailiang wrote:
> >>I want to confirm one thing:
> >>Can we support distinguishing between writing and reading memory for
> >>userfault?
>
On Tue, Oct 07, 2014 at 05:52:47PM +0200, Andrea Arcangeli wrote:
> I probably grossly overestimated the benefits of resolving the
> userfault with a zerocopy page move, sorry. [...]
For posterity, I think it's worth noting that the most expensive aspect of a TLB
shootdown is the interprocessor interr
On Fri, Sep 26, 2014 at 04:33:26PM -0400, Naoya Horiguchi wrote:
> Could you test and merge the following change?
Many apologies for the late reply! Your email was in my spam folder :-( I see
that Andrew has already merged the patch, so we're in good shape!
Thanks for fixing this bug, Naoya!
On Wed, Oct 01, 2014 at 10:56:35AM +0200, Andrea Arcangeli wrote:
> +static inline long __get_user_pages_locked(struct task_struct *tsk,
> +                                            struct mm_struct *mm,
> +                                            unsigned long start,
> +
On Thu, Sep 25, 2014 at 05:47:30PM +1000, Stephen Rothwell wrote:
> Hi Andrew,
>
> Today's linux-next merge of the akpm-current tree got a conflict in
> include/asm-generic/pgtable.h between commit b766eafe6828 ("PCI: Add
> pci_remap_iospace() to map bus I/O resources") from the pci tree and
> com
ls/testing/selftests/vm/map_hugetlb
tools/testing/selftests/vm/thuge-gen
tools/testing/selftests/mount/unprivileged-remount-test
in the list of untracked files.
Signed-off-by: Peter Feiner
---
v1 -> v2:
* added changelog blurb
* added mount/.gitignore for unprivileged
Gets rid of this error when running 'make clean' in the selftests
directory:
make[1]: Entering directory `.../tools/testing/selftests/user'
make[1]: *** No rule to make target `clean'. Stop.
Signed-off-by: Peter Feiner
---
v1 -> v2:
Separated this from the p
Now make -jN builds and runs selftests in parallel. Also, if one
selftest fails to build or run, make will return an error, whereas
before the error was ignored.
Signed-off-by: Peter Feiner
---
v1 -> v2:
Moved fix for missing 'make clean' target into separate
patc
A couple of small patches to make working with selftests easier.
v1 -> v2:
Addressed Shuah's comments.
v2 -> v3:
Forgot Signed-off-by footer.
Peter Feiner (3):
tools: add .gitignore entries for selftests
tools: adding clean target to user selft
Gets rid of this error when running 'make clean' in the selftests
directory:
make[1]: Entering directory `.../tools/testing/selftests/user'
make[1]: *** No rule to make target `clean'. Stop.
---
v1 -> v2:
Separated this from the parallel build patch.
---
tools/testing/selftests/user/Ma
ls/testing/selftests/vm/map_hugetlb
tools/testing/selftests/vm/thuge-gen
tools/testing/selftests/mount/unprivileged-remount-test
in the list of untracked files.
Signed-off-by: Peter Feiner
---
v1 -> v2:
* added changelog blurb
* added mount/.gitignore for unprivileged
A couple of small patches to make working with selftests easier.
v1 -> v2:
Addressed Shuah's comments.
Peter Feiner (3):
tools: add .gitignore entries for selftests
tools: adding clean target to user selftest
tools: parallel selftests building & running
tools/testi
Now make -jN builds and runs selftests in parallel. Also, if one
selftest fails to build or run, make will return an error, whereas
before the error was ignored.
Signed-off-by: Peter Feiner
---
v1 -> v2:
Moved fix for missing 'make clean' target into separate
patc
Now make -jN builds and runs selftests in parallel. Also, if one
selftest fails to build or run, make will return an error, whereas
before the error was ignored.
Also added missing clean target to user/Makefile so 'make clean' doesn't fail.
Signed-off-by: Peter Feiner
---
No arguments given after printf format string with "%s" conversion.
Signed-off-by: Peter Feiner
---
There are a couple of patches floating around for this one. I'm just
including it so somebody that applies this patch series doesn't get a
broken build :-)
---
tools/tes
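For reference, here is a tiny, purely illustrative example of the class of warning
described above (a "%s" conversion with no matching argument) and the obvious fix.
It is not the actual memfd_test code; the message text and variable name are made up.

#include <stdio.h>

int main(void)
{
        const char *name = "memfd_test";

        /* printf("error closing %s\n");        -Wformat: missing argument */
        printf("error closing %s\n", name);  /* fixed: supply the string */
        return 0;
}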
Signed-off-by: Peter Feiner
---
tools/testing/selftests/breakpoints/.gitignore | 1 +
tools/testing/selftests/efivarfs/.gitignore    | 2 ++
tools/testing/selftests/ptrace/.gitignore      | 1 +
tools/testing/selftests/timers/.gitignore      | 1 +
tools/testing/selftests/vm/.gitignore
A couple of small patches to make working with selftests easier.
Peter Feiner (3):
tools: add .gitignore entries for selftests
tools: fix warning in memfd_test
tools: parallel selftests building & running
tools/testing/selftests/Makefile
86/include/asm/pgtable.h with __must_check and rebuilt.
Signed-off-by: Peter Feiner
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index adeac30..fc46934 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1125,7 +1125,
neglected to observe the start of VMAs returned by find_vma.
Tested:
Wrote a selftest that creates a PMD-sized VMA then unmaps the first
page and asserts that the page is not softdirty. I'm going to send the
pagemap selftest in a later commit.
Signed-off-by: Peter Feiner
---
v1
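For context, a minimal userspace sketch of one plausible reading of the selftest
described above might look like the following. It is not the actual selftest; it
assumes x86-64 conventions (2 MiB PMDs), CONFIG_MEM_SOFT_DIRTY, and the documented
/proc/self/pagemap layout where soft-dirty is bit 55 of each 64-bit entry.

#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMD_SIZE (2UL << 20)            /* assumption: x86-64 PMD size */

static int soft_dirty(void *addr)
{
        uint64_t entry;
        int fd = open("/proc/self/pagemap", O_RDONLY);
        off_t off = ((uintptr_t)addr / getpagesize()) * sizeof(entry);

        assert(fd >= 0);
        assert(pread(fd, &entry, sizeof(entry), off) == sizeof(entry));
        close(fd);
        return (entry >> 55) & 1;       /* bit 55: pte is soft-dirty */
}

int main(void)
{
        char *m = mmap(NULL, PMD_SIZE, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

        assert(m != MAP_FAILED);
        assert(munmap(m, getpagesize()) == 0);  /* leave a hole before the VMA */
        /* The hole belongs to no VMA, so it must not be reported soft-dirty,
         * even though the VMA right after it has VM_SOFTDIRTY set. */
        assert(!soft_dirty(m));
        return 0;
}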
Used the program given above. I'm going to include this code in
a selftest in the future.
Signed-off-by: Peter Feiner
---
v1 -> v2:
Restructured patch to make logic more clear.
---
fs/proc/task_mmu.c | 61 +++---
1 file changed
On Wed, Sep 10, 2014 at 04:36:28PM -0700, Andrew Morton wrote:
> On Wed, 10 Sep 2014 16:24:46 -0700 Peter Feiner wrote:
> > @@ -1048,32 +1048,51 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > + while (1) {
> > + u
Used the program given above. I'm going to include this code in
a selftest in the future.
Signed-off-by: Peter Feiner
---
fs/proc/task_mmu.c | 61 +++---
1 file changed, 40 insertions(+), 21 deletions(-)
diff --git a/fs/proc/task_mm
neglected to observe the start of VMAs returned by find_vma.
Tested:
Wrote a selftest that creates a PMD-sized VMA then unmaps the first
page and asserts that the page is not softdirty. I'm going to send the
pagemap selftest in a later commit.
Signed-off-by: Peter Feiner
---
fs/proc/task_
enabling and disabling write notifications with
care, this patch fixes a bug in mprotect where vm_page_prot bits set
by drivers were zapped on mprotect. An analogous bug was fixed in mmap
by c9d0bf241451a3ab7d02e1652c22b80cd7d93e8f.
Reported-by: Peter Feiner
Suggested-by: Kirill A. Shutemov
Sign
On Thu, Sep 04, 2014 at 09:43:11AM -0700, Peter Feiner wrote:
> On Mon, Aug 25, 2014 at 09:45:34PM -0700, Hugh Dickins wrote:
> > That sets me wondering: have you placed the VM_SOFTDIRTY check in the
> > right place in this series of tests?
> >
> > I think, once pgprot
On Mon, Aug 25, 2014 at 09:45:34PM -0700, Hugh Dickins wrote:
> On Sun, 24 Aug 2014, Peter Feiner wrote:
> > With this patch, write notifications are enabled when VM_SOFTDIRTY is
> > cleared. Furthermore, to avoid unnecessary faults, write
> > notifications are disabled when
ct of enabling and disabling write notifications with
care, this patch fixes a bug in mprotect where vm_page_prot bits set
by drivers were zapped on mprotect. An analogous bug was fixed in mmap
by c9d0bf241451a3ab7d02e1652c22b80cd7d93e8f.
Reported-by: Peter Feiner
Suggested-by: Kirill A. Shutem
ct of enabling and disabling write notifications with
care, this patch fixes a bug in mprotect where vm_page_prot bits set
by drivers were zapped on mprotect. An analogous bug was fixed in mmap
by c9d0bf241451a3ab7d02e1652c22b80cd7d93e8f.
Reported-by: Peter Feiner
Suggested-by: Kirill A. Shutemov
For VMAs that don't want write notifications, PTEs created for read
faults have their write bit set. If the read fault happens after
VM_SOFTDIRTY is cleared, then the PTE's softdirty bit will remain
clear after subsequent writes.
Here's a simple code snippet to demonstrate the bug:
char* m = mm
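The snippet is cut off in the archive; a self-contained userspace sketch of the same
reproduction could look like this. It assumes a MAP_SHARED anonymous mapping as the
"no write notifications" VMA, that writing "4" to /proc/self/clear_refs clears
soft-dirty state, and that soft-dirty is bit 55 of each /proc/self/pagemap entry
(CONFIG_MEM_SOFT_DIRTY required); the soft_dirty() helper is illustrative.

#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static int soft_dirty(void *addr)
{
        uint64_t entry;
        int fd = open("/proc/self/pagemap", O_RDONLY);
        off_t off = ((uintptr_t)addr / getpagesize()) * sizeof(entry);

        assert(fd >= 0);
        assert(pread(fd, &entry, sizeof(entry), off) == sizeof(entry));
        close(fd);
        return (entry >> 55) & 1;               /* bit 55: pte is soft-dirty */
}

int main(void)
{
        int fd;
        /* A shared anonymous VMA does not want write notifications. */
        char *m = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_SHARED, -1, 0);

        assert(m != MAP_FAILED);

        fd = open("/proc/self/clear_refs", O_WRONLY);
        assert(fd >= 0);
        assert(write(fd, "4", 1) == 1);         /* clear soft-dirty state */
        close(fd);

        assert(*m == '\0');                     /* read fault installs a writable PTE */
        assert(!soft_dirty(m));
        *m = 'x';                               /* should dirty the page */
        assert(soft_dirty(m));                  /* fails on kernels with the bug */
        return 0;
}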
On Sun, Aug 24, 2014 at 02:50:58AM +0300, Kirill A. Shutemov wrote:
> One more case to consider: mprotect() which doesn't trigger successful
> vma_merge() will not set VM_SOFTDIRTY and will not enable write-protect on
> the vma.
>
> It's probably better to take VM_SOFTDIRTY into account in
> vma_w
On Sun, Aug 24, 2014 at 02:00:11AM +0300, Kirill A. Shutemov wrote:
> On Sat, Aug 23, 2014 at 06:11:59PM -0400, Peter Feiner wrote:
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index dfc791c..f1a5382 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/pr
We don't want to zap special page protection bits on mprotect.
Analogous to the bug fixed in c9d0bf241451a3ab7d02e1652c22b80cd7d93e8f
where vm_page_prot bits set by drivers were zapped when write
notifications were enabled on new VMAs.
Signed-off-by: Peter Feiner
---
mm/mprotect.c | 2
m = 'x'; /* should dirty the page */
assert(soft_dirty(x)); /* fails */
With this patch, write notifications are enabled when VM_SOFTDIRTY is
cleared. Furthermore, to avoid faults, write notifications are
disabled when VM_SOFTDIRTY is reset.
Signed-off-by: Peter Feiner
Replace logic that has been factored out into a utility method.
Signed-off-by: Peter Feiner
---
mm/mmap.c | 16 ++--
1 file changed, 2 insertions(+), 14 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index abcac32..c18c49a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1618,20 +1618,8
Here's the new patch that uses Kirill's approach of setting write
notifications on the VMA. I also included write notification cleanups and
fixes per our discussion.
Peter Feiner (3):
mm: softdirty: enable write notifications on VMAs after VM_SOFTDIRTY
cleared
mm: mprotect
On Fri, Aug 22, 2014 at 12:51:47AM +0300, Kirill A. Shutemov wrote:
> > > One thing: there could be (I haven't checked) complications on
> > > vma_merge(): since vm_flags are identical it assumes that it can reuse
> > > vma->vm_page_prot of expanded vma. But VM_SOFTDIRTY is excluded from
> > > vm_f
On Fri, Aug 22, 2014 at 12:39:42AM +0300, Kirill A. Shutemov wrote:
> On Fri, Aug 22, 2014 at 12:51:15AM +0400, Cyrill Gorcunov wrote:
>
> Looks good to me.
>
> Would you mind applying the same pgprot_modify() approach to
> clear_refs_write(), testing it, and posting the patch?
>
> Feel free to use my
On Thu, Aug 21, 2014 at 02:45:43AM +0300, Kirill A. Shutemov wrote:
> On Wed, Aug 20, 2014 at 05:46:22PM -0400, Peter Feiner wrote:
> It basically means VM_SOFTDIRTY requires writenotify on the vma.
>
> What about the patch below? Untested. And it seems it'll introduce a bug similar
m = 'x'; /* should dirty the page */
assert(soft_dirty(x)); /* fails */
With this patch, new PTEs created for read faults are write protected
if the VMA has VM_SOFTDIRTY clear.
Signed-off-by: Peter Feiner
---
mm/memory.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/memory.c
On Fri, Aug 01, 2014 at 03:20:42PM -0400, Naoya Horiguchi wrote:
> Page table walker has the information of the current vma in mm_walk, so
> we don't have to call find_vma() in each pagemap_hugetlb_range() call.
You could also get rid of a bunch of code in pagemap_pte_range:
---
fs/proc/task_mmu
, I found that a VMA
that covered a PMD's worth of address space was big enough.
This patch adds the necessary VMA lookup to the PTE hole callback in
/proc/pid/pagemap's page walk and sets soft-dirty according to the
VMAs' VM_SOFTDIRTY flag.
Signed-off-by: Pe