On 26/09/2019 15:26, Wei Liu wrote:
On Thu, Sep 26, 2019 at 10:46:34AM +0100, hong...@amazon.com wrote:
From: Hongyan Xia <hong...@amazon.com>
Signed-off-by: Hongyan Xia <hong...@amazon.com>
---
xen/arch/x86/setup.c | 4 ++--
xen/common/page_alloc.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index e964c032f6..3dc2fad987 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1367,7 +1367,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
if ( map_e < end )
{
- map_pages_to_xen((unsigned long)__va(map_e), maddr_to_mfn(map_e),
+ map_pages_to_xen((unsigned long)__va(map_e), INVALID_MFN,
PFN_DOWN(end - map_e), PAGE_HYPERVISOR);
Why don't you just remove the calls to map_pages_to_xen?
My intention is to pre-populate the range so that we don't have to allocate
page tables later when xenheap allocations are mapped in. But of course, if
there is superpage merging or shattering, page tables will be removed or
allocated anyway, so the pre-population buys little. I will remove the calls
in the next revision.
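
Roughly, the idea was the following (a sketch only -- the helper name is
hypothetical, and it assumes map_pages_to_xen() with INVALID_MFN merely
allocates the intermediate page tables without installing present leaf
entries):

    /*
     * Hypothetical helper (sketch only): pre-populate the page table
     * hierarchy covering the direct-map VA range [s, e).  "Mapping"
     * INVALID_MFN is assumed to allocate the intermediate tables while
     * leaving the L1 entries non-present, so that later xenheap
     * allocations in this range only need to fill in leaf entries.
     */
    static void __init prepopulate_directmap_range(paddr_t s, paddr_t e)
    {
        map_pages_to_xen((unsigned long)__va(s), INVALID_MFN,
                         PFN_DOWN(e - s), PAGE_HYPERVISOR);
    }

As said, superpage merging or shattering can free or allocate those tables
again afterwards, which is why the calls will be dropped.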
init_boot_pages(map_e, end);
map_e = end;
@@ -1382,7 +1382,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
}
if ( s < map_s )
{
- map_pages_to_xen((unsigned long)__va(s), maddr_to_mfn(s),
+ map_pages_to_xen((unsigned long)__va(s), INVALID_MFN,
PFN_DOWN(map_s - s), PAGE_HYPERVISOR);
init_boot_pages(s, map_s);
}
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index a00db4c0d9..deeeac065c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2157,7 +2157,7 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
map_pages_to_xen((unsigned long)ret, page_to_mfn(pg),
1UL << order, PAGE_HYPERVISOR);
- return page_to_virt(pg);
+ return ret;
This hunk is a fix to a previous patch. It doesn't belong here.
Noted.
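
For completeness, a simplified sketch of alloc_xenheap_pages() with that
hunk applied (error handling and memflags processing elided; not the
verbatim Xen code), showing why the VA that was actually mapped has to be
the one returned:

    void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
    {
        struct page_info *pg = alloc_domheap_pages(NULL, order, memflags);
        void *ret;

        if ( pg == NULL )
            return NULL;

        /*
         * At this point in the series 'ret' still equals
         * page_to_virt(pg); the assumption is that later patches make
         * it come from a separate VA allocator instead, at which point
         * recomputing the address via the direct map would be wrong.
         */
        ret = page_to_virt(pg);

        map_pages_to_xen((unsigned long)ret, page_to_mfn(pg),
                         1UL << order, PAGE_HYPERVISOR);

        return ret;    /* was: return page_to_virt(pg); */
    }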
Hongyan