On 3/31/21 11:43 AM, Randy Dunlap wrote:
> On 3/30/21 2:35 PM, Anthony Yznaga wrote:
>> Preserved-across-kexec memory or PKRAM is a method for saving memory
>> pages of the currently executing kernel and restoring them after kexec
>> boot into a new one. This can be utilized
Reduce LRU lock contention when inserting shmem pages by staging pages
to be added to the same LRU and adding them en masse.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index c3fa72061d8a
skipping them when other reserved pages are initialized and
initializing them later with a separate kernel thread.
Signed-off-by: Anthony Yznaga
---
arch/x86/mm/init_64.c | 1 -
include/linux/mm.h| 2 +-
mm/memblock.c | 11 +--
mm/page_alloc.c | 55
, add them
to a local xarray, export the xarray node, and then take the lock on the
page cache xarray and insert the node.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 162 ++---
1 file changed, 156 insertions(+), 6 deletions(-)
diff --git
it.
Signed-off-by: Anthony Yznaga
---
Documentation/core-api/xarray.rst | 8 +++
include/linux/xarray.h| 2 +
lib/test_xarray.c | 45 +
lib/xarray.c | 100 ++
4 files changed, 155 insertions
and the next aligned page will not fit on the pkram_link
page.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index b63b2a3958e7..3f43809c8a85 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
@@ -911,9
, but performance when multiple threads are adding to the same pagecache
is improved by calling a new shmem_add_to_page_cache_fast() function that
does not check for conflicts and drops the xarray lock before updating stats.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 123
index of the first page are provided to the caller.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 4
mm/pkram.c| 46 ++
2 files changed, 50 insertions(+)
diff --git a/include/linux/pkram.h b/include/linux/pkram.h
index
. For now only unevictable pages are supported.
Signed-off-by: Anthony Yznaga
---
include/linux/swap.h | 13
mm/swap.c| 86
2 files changed, 99 insertions(+)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index
the shmem_inode_info lock
and prepare for future optimizations, introduce shmem_insert_pages()
which allows a caller to pass an array of pages to be inserted into a
shmem segment.
Signed-off-by: Anthony Yznaga
---
include/linux/shmem_fs.h | 3 +-
mm/shmem.c | 93
Make use of new interfaces for loading and inserting preserved pages
into a shmem file in bulk.
Signed-off-by: Anthony Yznaga
---
mm/shmem_pkram.c | 23 +--
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/mm/shmem_pkram.c b/mm/shmem_pkram.c
index 354c2b58962c
Work around the limitation that shmem pages must be in memory in order
to be preserved by preventing them from being swapped out in the first
place. Do this by marking shmem pages associated with a PKRAM node
as unevictable.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 2 ++
1 file changed
: Anthony Yznaga
---
mm/shmem_pkram.c | 94 +---
1 file changed, 89 insertions(+), 5 deletions(-)
diff --git a/mm/shmem_pkram.c b/mm/shmem_pkram.c
index e52722b3a709..354c2b58962c 100644
--- a/mm/shmem_pkram.c
+++ b/mm/shmem_pkram.c
@@ -115,6 +115,7
To support deferred initialization of page structs for preserved pages,
separate memblocks containing preserved pages by setting a new flag
when adding them to the memblock reserved list.
Signed-off-by: Anthony Yznaga
---
include/linux/memblock.h | 6 ++
mm/pkram.c | 2 ++
2
Add and remove pkram_link pages from a pkram_obj atomically to prepare
for multithreading.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 39 ---
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 08144c18d425
The kdump kernel should not preserve or restore pages.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 8700fd77dc67..aea069cc49be 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
@@ -1,4 +1,5
In order to facilitate fast initialization of page structs for
preserved pages, memblocks with preserved pages must not cross
numa node boundaries and must have a node id assigned to them.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 9 +
1 file changed, 9 insertions(+)
diff --git a
ff-by: Anthony Yznaga
---
mm/shmem.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 8dfe80aeee97..44cc158ab34d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -671,7 +671,7 @@ static inline bool is_huge_enabled(struct shmem_sb_info
*s
that does not overlap any preserved range,
and populate it with a new, merged regions array.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 241 +
1 file changed, 241 insertions(+)
diff --git a/mm/pkram.c b/mm/pkram.c
index 4cfa236a4126
When pages have been saved to be preserved by the current boot, set
a magic number on the super block to be validated by the next kernel.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/mm/pkram.c b/mm/pkram.c
index dab6657080bf
To prepare for multithreading the work to preserve a shmem file,
divide the work into subranges of the total index range of the file.
The chunk size is a rather arbitrary 256k indices.
Signed-off-by: Anthony Yznaga
---
mm/shmem_pkram.c | 64
.
Signed-off-by: Anthony Yznaga
---
arch/x86/include/asm/numa.h | 4
arch/x86/mm/numa.c | 32
2 files changed, 24 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/numa.h b/arch/x86/include/asm/numa.h
index e3bae2b60a0d..632b5b6d8cb3
implified: it supports only regular files in the root directory without
multiple hard links, and it does not save swapped-out files and aborts if
any are found. However, it can be elaborated to fully support tmpfs.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/shmem_
-off-by: Anthony Yznaga
---
include/linux/shmem_fs.h | 3 ++
mm/shmem.c | 77
2 files changed, 80 insertions(+)
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d82b6f396588..3f0dd95efd46 100644
--- a/include
results in the need
for far fewer page table pages. As is, this breaks AMD SEV-ES, which expects
the mappings to be 2M. This could possibly be fixed by updating the split
code to split 1GB pages, if there aren't any other issues with using 1GB
mappings.
Signed-off-by: Anthony Yznaga
---
arch/x86
Explicitly specify the mm to pass to shmem_insert_page() when
the pkram_stream is initialized rather than use the mm of the
current thread. This will allow for multiple kernel threads to
target the same mm when inserting pages in parallel.
Signed-off-by: Anthony Yznaga
---
mm/shmem_pkram.c | 6
-off-by: Anthony Yznaga
---
kernel/kexec_core.c | 3 +++
kernel/kexec_file.c | 5 +
2 files changed, 8 insertions(+)
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index a0b6780740c8..fda4abb865ff 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -37,6 +37,7 @@
#include
not prepared to avoid possible conflicts with
changes in a newer kernel and to avoid having to allocate a contiguous
range larger than a page.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 183 ++---
1 file changed, 176 insertions(+), 7
To free all space utilized for preserved memory, one can write 0 to
/sys/kernel/pkram. This will destroy all PKRAM nodes that are not
currently being read or written.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 39 ++-
1
Avoid regions of memory that contain preserved pages when computing
slots used to select where to put the decompressed kernel.
Signed-off-by: Anthony Yznaga
---
arch/x86/boot/compressed/Makefile | 3 ++
arch/x86/boot/compressed/kaslr.c | 10 +++-
arch/x86/boot/compressed/misc.h | 10
existing architecture definitions for
building a memory mapping pagetable except that a bitmap is used to
represent the presence or absence of preserved pages at the PTE level.
Signed-off-by: Anthony Yznaga
---
mm/Makefile | 2 +-
mm/pkram.c | 30 +++-
mm/pkram_pagetable.c
When a kernel is loaded for kexec the address ranges where the kexec
segments will be copied to may conflict with pages already set to be
preserved. Provide a way to determine if preserved pages exist in a
specified range.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 ++
mm
reboot.
Not yet handled is the case where pages have been preserved before a
kexec kernel is loaded. This will be covered by a later patch.
Signed-off-by: Anthony Yznaga
---
kernel/kexec.c | 9 +
kernel/kexec_file.c | 10 ++
2 files changed, 19 insertions(+)
diff --git a
Free the pages used to pass the preserved ranges to the new boot.
Signed-off-by: Anthony Yznaga
---
arch/x86/mm/init_64.c | 1 +
include/linux/pkram.h | 2 ++
mm/pkram.c| 20
3 files changed, 23 insertions(+)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm
Since page structs are used for linking PKRAM nodes and cleared
on boot, organize all PKRAM nodes into a list singly-linked by pfns
before reboot to facilitate restoring the node list in the new kernel.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 50
pkram_finish_load()
function must be called to free the node. Nodes are also deleted when a
save operation is discarded, i.e. pkram_discard_save() is called instead
of pkram_finish_save().
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 8 ++-
mm/pkram.c
() is called to free the object. Objects
are also deleted when a save operation is discarded.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 ++
mm/pkram.c| 72 ---
2 files changed, 70 insertions(+), 4 deletions(-)
diff
This patch adds the ability to save arbitrary byte streams to a
PKRAM object using pkram_write() to be restored later using pkram_read().
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 11 +
mm/pkram.c| 123
_files/kvmforum2019/66/VMM-fast-restart_kvmforum2019.pdf
[3] https://www.youtube.com/watch?v=pBsHnf93tcQ
https://static.sched.com/hosted_files/kvmforum2020/10/Device-Keepalive-State-KVMForum2020.pdf
Anthony Yznaga (43):
mm: add PKRAM API stubs and Kconfig
mm: PKRAM: implement node load and save funct
)
/* free the object */
pkram_finish_load_obj()
/* free the node */
pkram_finish_load()
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 47 +
mm/Kconfig| 9 +++
mm/Makefile | 1 +
mm/pkram.c| 179
freed
after the last user puts it.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 42 +++-
mm/pkram.c| 282 +-
2 files changed, 317 insertions(+), 7 deletions(-)
diff --git a/include/linux
pkram' boot param. For that purpose, the pkram super
block pfn is exported via /sys/kernel/pkram. If none is passed, any
preserved memory will not be kept, and a new super block will be
allocated.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
mm/pkr
Support preserving a transparent hugepage by recording the page order and
a flag indicating it is a THP. Use these values when the page is
restored to reconstruct the THP.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 20
1 file changed, 16 insertions(+), 4 deletions
Keep preserved pages from being recycled during boot by adding them
to the memblock reserved list during early boot. If memory reservation
fails (e.g. a region has already been reserved), all preserved pages
are dropped.
Signed-off-by: Anthony Yznaga
---
arch/x86/kernel/setup.c | 3 ++
arch
: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 +
mm/pkram.c| 205 ++
2 files changed, 207 insertions(+)
diff --git a/include/linux/pkram.h b/include/linux/pkram.h
index c2099a4f2004..97a7c2ac44a9 100644
On 7/29/20 6:52 AM, Kirill A. Shutemov wrote:
> On Mon, Jul 27, 2020 at 10:11:25AM -0700, Anthony Yznaga wrote:
>> A vma with the VM_EXEC_KEEP flag is preserved across exec. For anonymous
>> vmas only. For safety, overlap with fixed address VMAs created in the new
>> mm
On 7/28/20 6:38 AM, ebied...@xmission.com wrote:
> Anthony Yznaga writes:
>
>> A vma with the VM_EXEC_KEEP flag is preserved across exec. For anonymous
>> vmas only. For safety, overlap with fixed address VMAs created in the new
>> mm during exec (e.g. the stack and e
On 7/28/20 4:34 AM, Kirill Tkhai wrote:
> On 27.07.2020 20:11, Anthony Yznaga wrote:
>> This patchset adds support for preserving an anonymous memory range across
>> exec(3) using a new madvise MADV_DOEXEC argument. The primary benefit for
>> sharing memory in this man
reverts to MADV_DONTEXEC.
Signed-off-by: Steve Sistare
Signed-off-by: Anthony Yznaga
---
include/uapi/asm-generic/mman-common.h | 3 +++
mm/madvise.c | 25 +
2 files changed, 28 insertions(+)
diff --git a/include/uapi/asm-generic/mman-common.h
no conflicts. Comments welcome.)
Signed-off-by: Steve Sistare
Signed-off-by: Anthony Yznaga
---
arch/x86/Kconfig | 1 +
fs/exec.c | 20
include/linux/mm.h | 5 +
kernel/fork.c | 2 +-
mm/mmap.c | 47
et up. Patch 1 re-introduces the use of
MAP_FIXED_NOREPLACE to load ELF binaries that addresses the previous issues
and could be considered on its own.
Patches 3, 4, and 5 introduce the feature and an opt-in method for its use
using an ELF note.
Anthony Yznaga (5):
elf: reintroduce
nue to map at a
system-selected address in the mmap region.
Signed-off-by: Anthony Yznaga
---
fs/binfmt_elf.c | 112
1 file changed, 64 insertions(+), 48 deletions(-)
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 9fe3b51c116a..6445
Don't copy preserved VMAs to the binary being exec'd unless the binary has
a "preserved-mem-ok" ELF note.
Signed-off-by: Anthony Yznaga
---
fs/binfmt_elf.c | 84 +
fs/exec.c | 17 +-
include/
there is no vma between the vma passed to it and the
address to expand to, so check before calling it.
Signed-off-by: Anthony Yznaga
---
fs/exec.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/exec.c b/fs/exec.c
index e6e8a9a70327..262112e5f9f8 100644
--- a/fs/exec.c
won't need updating. I can't find any reason for this not to
be the case, but I've taken a more cautious approach to fixing this by
dividing things into three patches.
Anthony Yznaga (3):
KVM: x86: remove unnecessary rmap walk of read-only memslots
KVM: x86: avoid unneces
There's no write access to remove. An existing memslot cannot be updated
to set or clear KVM_MEM_READONLY, and any mappings established in a newly
created or moved read-only memslot will already be read-only.
Signed-off-by: Anthony Yznaga
---
arch/x86/kvm/x86.c | 6 ++
1 file chang
properties of the new slot.
Signed-off-by: Anthony Yznaga
---
arch/x86/kvm/x86.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 23fd888e52ee..d211c8ced6bb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10138,7 +10138,7
Consolidate the code and correct the comments to show that the actions
taken to update existing mappings to disable or enable dirty logging
are not necessary when creating, moving, or deleting a memslot.
Signed-off-by: Anthony Yznaga
---
arch/x86/kvm/x86.c | 104
On 5/11/20 6:57 AM, Mike Rapoport wrote:
> On Wed, May 06, 2020 at 05:41:40PM -0700, Anthony Yznaga wrote:
>> The size of the memblock reserved array may be increased while preserved
>> pages are being reserved. When this happens, preserved pages that have
>> not yet been
On 5/7/20 10:51 AM, Kees Cook wrote:
> On Wed, May 06, 2020 at 05:41:47PM -0700, Anthony Yznaga wrote:
>> Avoid regions of memory that contain preserved pages when computing
>> slots used to select where to put the decompressed kernel.
> This is changing the slot-walki
On 5/7/20 9:30 AM, Randy Dunlap wrote:
> On 5/6/20 5:42 PM, Anthony Yznaga wrote:
>> Improve performance by multithreading the work to preserve and restore
>> shmem pages.
>>
>> Add 'pkram_max_threads=' kernel option to specify the maximum number
>> o
of a file to a pkram_obj
until no more chunks are available.
When restoring pages each thread loads pages using a copy of a
pkram_stream initialized by pkram_prepare_load_obj(). Under the hood
each thread ends up fetching and operating on pkram_link pages.
Signed-off-by: Anth
.
Signed-off-by: Anthony Yznaga
---
arch/x86/include/asm/numa.h | 4
arch/x86/mm/numa.c | 32
2 files changed, 24 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/numa.h b/arch/x86/include/asm/numa.h
index bbfde3d2662f..f9e05f4eb1c6
To support deferred initialization of page structs for preserved pages,
separate memblocks containing preserved pages by setting a new flag
when adding them to the memblock reserved list.
Signed-off-by: Anthony Yznaga
---
include/linux/memblock.h | 7 +++
mm/memblock.c| 8
implified: it supports only regular files in the root directory without
multiple hard links, and it does not save swapped-out files and aborts if
any are found. However, it can be elaborated to fully support tmpfs.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram
Add and remove pkram_link pages from a pkram_obj atomically to prepare
for multithreading.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 5f4e4d12865f..042c14dedc25
When a kernel is loaded for kexec the address ranges where the kexec
segments will be copied to may conflict with pages already set to be
preserved. Provide a way to determine if preserved pages exist in a
specified range.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 ++
mm
it.
Signed-off-by: Anthony Yznaga
---
Documentation/core-api/xarray.rst | 8 +++
include/linux/xarray.h| 2 +
lib/test_xarray.c | 45 +
lib/xarray.c | 100 ++
4 files changed, 155 insertions
. For now only unevictable pages are supported.
Signed-off-by: Anthony Yznaga
---
include/linux/swap.h | 11 ++
mm/swap.c| 101 +++
2 files changed, 112 insertions(+)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index
ff-by: Anthony Yznaga
---
mm/shmem.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 13475073fb52..1f3b43b8fa34 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -693,6 +693,7 @@ int shmem_insert_page(struct mm_struct *mm, struct inode
*
and the next aligned page will not fit on the pkram_link
page.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index ef092aa5ce7a..416c3ca4411b 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
pages in the array are provided
to the caller.
pkram_finish_load_pages()
Called when no more pages will be loaded from the pkram_obj.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 6 +++
mm/pkram.c| 106 ++
2 files
puts it.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 5 ++
mm/pkram.c| 219 +-
2 files changed, 221 insertions(+), 3 deletions(-)
diff --git a/include/linux/pkram.h b/include/linux/pkram.h
Support preserving a transparent hugepage by recording the page order and
a flag indicating it is a THP. Use these values when the page is
restored to reconstruct the THP.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 ++
mm/pkram.c| 20
2 files
, add them
to a local xarray, export the xarray node, and then take the lock on the
page cache xarray and insert the node.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 145 ++---
1 file changed, 138 insertions(+), 7 deletions(-)
diff --git
Reduce LRU lock contention when inserting shmem pages by staging pages
to be added to the same LRU and adding them en masse.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index ca5edf580f24
allocated from as will be seen in a later patch.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 15 +
mm/Makefile | 2 +-
mm/pkram_pagetable.c | 169 ++
3 files changed, 185 insertions(+), 1 deletion(-)
create mode
: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 +
mm/pkram.c| 210 ++
2 files changed, 212 insertions(+)
diff --git a/include/linux/pkram.h b/include/linux/pkram.h
index 409022e1472f..1ba48442ef8e 100644
, but performance when multiple threads are adding to the same pagecache
is improved by calling a new shmem_add_to_page_cache_fast() function that
does not check for conflicts and drops the xarray lock before updating stats.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 95
by the contiguous ranges present rather than a page at a time.
Signed-off-by: Anthony Yznaga
---
arch/x86/kernel/setup.c | 3 +
arch/x86/mm/init_64.c | 2 +
include/linux/pkram.h | 8 +++
mm/pkram.c | 179 +++-
4 files changed
the shmem_inode_info lock
and prepare for future optimizations, introduce shmem_insert_pages()
which allows a caller to pass an array of pages to be inserted into a
shmem segment.
Signed-off-by: Anthony Yznaga
---
include/linux/shmem_fs.h | 3 +-
mm/shmem.c | 114
Make use of new interfaces for loading and inserting preserved pages
into a shmem file in bulk.
Signed-off-by: Anthony Yznaga
---
mm/shmem_pkram.c | 23 +--
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/mm/shmem_pkram.c b/mm/shmem_pkram.c
index 4992b6c3e54e
() is called to free the object. Objects
are also deleted when a save operation is discarded.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 1 +
mm/pkram.c| 77 ---
2 files changed, 74 insertions(+), 4 deletions(-)
diff
skipping them when other reserved pages are initialized and
initializing them later with a separate kernel thread.
Signed-off-by: Anthony Yznaga
---
arch/x86/mm/init_64.c | 1 -
include/linux/mm.h| 2 +-
mm/memblock.c | 10 --
mm/page_alloc.c | 52
it initializes pkram_stream
with the index range of the next available range of pages to save.
find_get_pages_range() can then be used to get the pages in the range.
When no more index ranges are available, pkram_prepare_save_chunk()
returns -ENODATA.
Signed-off-by: Anthony Yznaga
---
include
In order to facilitate fast initialization of page structs for
preserved pages, memblocks with preserved pages must not cross
numa node boundaries and must have a node id assigned to them.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 10 ++
1 file changed, 10 insertions(+)
diff --git
To support deferred initialization of page structs for preserved
pages, add an iterator of the memblock reserved list that can select or
exclude ranges based on memblock flags.
Signed-off-by: Anthony Yznaga
---
include/linux/memblock.h | 10 ++
mm/memblock.c| 51
Add a pointer to the pagetable to the pkram_super_block page.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 20 +---
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 5a7b8f61a55d..54b2779d0813 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
general both metadata pages and data pages must be added to the
pagetable. A mapping for a metadata page can be added when the page is
allocated, but there is an exception: for the pagetable pages themselves
mappings are added after they are allocated to avoid recursion.
Signed-off-by: Anthony
Avoid regions of memory that contain preserved pages when computing
slots used to select where to put the decompressed kernel.
Signed-off-by: Anthony Yznaga
---
arch/x86/boot/compressed/Makefile | 3 +
arch/x86/boot/compressed/kaslr.c | 67 ++
arch/x86/boot/compressed/misc.h | 19
-off-by: Anthony Yznaga
---
kernel/kexec_core.c | 3 +++
kernel/kexec_file.c | 5 +
2 files changed, 8 insertions(+)
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c19c0dad1ebe..8c24b546352e 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -37,6 +37,7 @@
#include
node/object */
pkram_load_page()[,...] /* for page stream, or */
pkram_read()[,...] /* ... for byte stream */
/* free the object */
pkram_finish_load_obj()
/* free the node */
pkram_finish_load()
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
-off-by: Anthony Yznaga
---
include/linux/shmem_fs.h | 3 ++
mm/shmem.c | 95
2 files changed, 98 insertions(+)
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 7a35a6901221..688b92cd4ec7 100644
--- a/include
Work around the limitation that shmem pages must be in memory in order
to be preserved by preventing them from being swapped out in the first
place. Do this by marking shmem pages associated with a PKRAM node
as unevictable.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 2 ++
1 file changed, 2
pkram' boot param. For that purpose, the pkram super
block pfn is exported via /sys/kernel/pkram. If none is passed, any
preserved memory will not be kept, and a new super block will be
allocated.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
mm/pk
Explicitly specify the mm to pass to shmem_insert_page() when
the pkram_stream is initialized rather than use the mm of the
current thread. This will allow for multiple kernel threads to
target the same mm when inserting pages in parallel.
Signed-off-by: Anthony Yznaga
---
include/linux
The kdump kernel should not preserve or restore pages.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 95e691382721..4d4d836fea53 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
@@ -1,4 +1,5
To free all space utilized for preserved memory, one can write 0 to
/sys/kernel/pkram. This will destroy all PKRAM nodes that are not
currently being read or written.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 39 ++-
1
pkram_finish_load()
function must be called to free the node. Nodes are also deleted when a
save operation is discarded, i.e. pkram_discard_save() is called instead
of pkram_finish_save().
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 7 ++-
mm/pkram.c
around
memblock_find_in_range() walks the preserved pages pagetable to find
sufficiently sized ranges without preserved pages and passes them to
memblock_find_in_range().
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 3 +++
mm/memblock.c | 15 +--
mm/pkram.c