Hi Hongyan,
On 26/04/2021 10:41, Xia, Hongyan wrote:
On Sun, 2021-04-25 at 21:13 +0100, Julien Grall wrote:
From: Wei Liu <wei.l...@citrix.com>
The basic idea is like Persistent Kernel Map (PKMAP) in Linux. We
pre-populate all the relevant page tables before the system is fully
set up.
We will need it on Arm in order to rework the arm64 version of
xenheap_setup_mappings() as we may need to use pages allocated from
the boot allocator before they are effectively mapped.
This infrastructure is not lock-protected, therefore it can only be used
before smpboot. After smpboot, map_domain_page() has to be used.
This is based on the x86 version [1] that was originally implemented
by Wei Liu.
Take the opportunity to switch the parameter attr from unsigned to
unsigned int.
[1] <e92da4ad6015b6089737fcccba3ec1d6424649a5.1588278317.git.hongy...@amazon.com>
Signed-off-by: Wei Liu <wei.l...@citrix.com>
Signed-off-by: Hongyan Xia <hongy...@amazon.com>
[julien: Adapted for Arm]
Signed-off-by: Julien Grall <jgr...@amazon.com>
[...]
diff --git a/xen/arch/arm/pmap.c b/xen/arch/arm/pmap.c
new file mode 100644
index 000000000000..702b1bde982d
--- /dev/null
+++ b/xen/arch/arm/pmap.c
@@ -0,0 +1,101 @@
+#include <xen/init.h>
+#include <xen/mm.h>
+
+#include <asm/bitops.h>
+#include <asm/flushtlb.h>
+#include <asm/pmap.h>
+
+/*
+ * To be able to use FIXMAP_PMAP_BEGIN.
+ * XXX: move the fixmap definition into a separate header
+ */
+#include <xen/acpi.h>
+
+/*
+ * Simple mapping infrastructure to map / unmap pages in fixed map.
+ * This is used to set up the page table for mapcache, which is used
+ * by map domain page infrastructure.
+ *
+ * This structure is not protected by any locks, so it must not be used
+ * after smp bring-up.
+ */
+
+/* Bitmap to track which slot is used */
+static unsigned long __initdata inuse;
+
+/* XXX: Find a header to declare it */
+extern lpae_t xen_fixmap[LPAE_ENTRIES];
+
+void *__init pmap_map(mfn_t mfn)
+{
+    unsigned long flags;
+    unsigned int idx;
+    vaddr_t linear;
+    unsigned int slot;
+    lpae_t *entry, pte;
+
+    BUILD_BUG_ON(sizeof(inuse) * BITS_PER_LONG < NUM_FIX_PMAP);
This seems wrong to me. It should multiply by something like
BITS_PER_BYTE, since sizeof() counts bytes rather than bits.
Good spot! I have updated my tree.
I noticed this line was already present before the Arm version, so it is
probably my fault :(, and it also needs to be fixed there.
This should be taken care of, as the next version will create the pmap in
common code :).
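For reference, the corrected check would read something like:

    BUILD_BUG_ON(sizeof(inuse) * BITS_PER_BYTE < NUM_FIX_PMAP);

so that the comparison is against the number of bits actually available
in the bitmap.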
+
+    ASSERT(system_state < SYS_STATE_smp_boot);
+
+    local_irq_save(flags);
+
+    idx = find_first_zero_bit(&inuse, NUM_FIX_PMAP);
+    if ( idx == NUM_FIX_PMAP )
+        panic("Out of PMAP slots\n");
+
+    __set_bit(idx, &inuse);
+
+    slot = idx + FIXMAP_PMAP_BEGIN;
+    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
+
From here...
+    linear = FIXMAP_ADDR(slot);
+    /*
+     * We cannot use set_fixmap() here. We use PMAP when there is no direct
+     * map, so map_pages_to_xen() called by set_fixmap() needs to map pages
+     * on demand, which then calls pmap() again, resulting in a loop. Modify
+     * the PTEs directly instead. The same is true for pmap_unmap().
+     */
+    entry = &xen_fixmap[third_table_offset(linear)];
+
+    ASSERT(!lpae_is_valid(*entry));
+
+    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
+    pte.pt.table = 1;
+    write_pte(entry, pte);
+
...to here, I wonder if we can move this chunk into arch code (something
like void *arch_write_pmap_slot(slot)). Such an arch function hides how
the fixmap is handled and how the page table entry is written, and the
rest can just be common.
This is similar to what I had in mind. Let me give it a try for the next
version.
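Something like the following rough sketch, reusing the code above (the
name and signature are just your suggestion, nothing settled yet):

    /* Hypothetical arch hook: map mfn at the given fixmap slot and
     * return the linear address of the mapping. */
    void *arch_write_pmap_slot(unsigned int slot, mfn_t mfn)
    {
        vaddr_t linear = FIXMAP_ADDR(slot);
        lpae_t *entry = &xen_fixmap[third_table_offset(linear)];
        lpae_t pte;

        ASSERT(!lpae_is_valid(*entry));

        pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
        pte.pt.table = 1;
        write_pte(entry, pte);

        return (void *)linear;
    }

The slot allocation and the inuse bitmap would then stay in common code.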
+    local_irq_restore(flags);
+
+    return (void *)linear;
+}
+
+void __init pmap_unmap(const void *p)
+{
+    unsigned long flags;
+    unsigned int idx;
+    lpae_t *entry;
+    lpae_t pte = { 0 };
+    unsigned int slot = third_table_offset((vaddr_t)p);
+
+    ASSERT(system_state < SYS_STATE_smp_boot);
+    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
+
+    idx = slot - FIXMAP_PMAP_BEGIN;
+    local_irq_save(flags);
+
+    __clear_bit(idx, &inuse);
+    entry = &xen_fixmap[third_table_offset((vaddr_t)p)];
+    write_pte(entry, pte);
+    flush_xen_tlb_range_va_local((vaddr_t)p, PAGE_SIZE);
And the same for the above: something like arch_clear_pmap(void *), with
the rest moved into common code.
From a quick glance, I don't think x86 and Arm share any useful TLB
flush helpers, so the TLB flush should probably be behind the arch as well.
We could potentially define flush_tlb_one_local() on Arm. But I am not
sure this is worth it, because the page table manipulation mainly
happens in arch code so far. Although, this might change in the future.
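A rough sketch of the unmap side, again reusing the code above (name per
your suggestion, with the TLB flush kept behind the arch):

    /* Hypothetical arch hook: tear down the PMAP mapping at p,
     * including the local TLB flush. */
    void arch_clear_pmap(void *p)
    {
        lpae_t pte = { 0 };
        lpae_t *entry = &xen_fixmap[third_table_offset((vaddr_t)p)];

        write_pte(entry, pte);
        flush_xen_tlb_range_va_local((vaddr_t)p, PAGE_SIZE);
    }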
+
+    local_irq_restore(flags);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
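As a usage sketch (illustrative only, not part of the patch): given an
mfn_t mfn for a freshly allocated page-table frame, an early-boot caller
would do something like:

    lpae_t *table = pmap_map(mfn);   /* temporary fixmap-backed mapping */
    /* ... initialise the page table entries ... */
    pmap_unmap(table);               /* release the slot and flush the TLB */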
[...]
diff --git a/xen/include/asm-arm/pmap.h b/xen/include/asm-arm/pmap.h
new file mode 100644
index 000000000000..8e1dce93f8e4
--- /dev/null
+++ b/xen/include/asm-arm/pmap.h
@@ -0,0 +1,10 @@
+#ifndef __ASM_PMAP_H__
+#define __ARM_PMAP_H__
This line doesn't seem to match the #ifndef, but if the functions are
moved to common, this header can be moved to common as well.
Stefano pointed out the same. I have fixed it in my tree, but I will
likely rename the guard as the header will be moved to common.
Cheers,
--
Julien Grall