On 07/04/2013 07:54:12 AM, Kevin Hao wrote:
Since a relocatable kernel can be loaded anywhere, there is no fixed
relation between the kernel start address and memstart_addr, so we
can't calculate memstart_addr from the kernel start address. We also
can't defer the relocation until after we get the real memstart_addr
from the device tree, because by then it is too late. So introduce a
new function we can use to get the address and size of the first
memblock at a very early stage (before machine_init).
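
The intended call site (added later in this series) would look roughly
like the sketch below; relocate_init() and the details here are
illustrative only, not part of this patch:

	/* Early relocation fixup, running long before machine_init */
	notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
	{
		phys_addr_t size;

		kernstart_addr = start;

		/* Pull the base/size of the first memory node out of the FDT */
		early_get_first_memblock_info(__va(dt_ptr), &size);

		/*
		 * Derive memstart_addr from the device tree instead of from
		 * the kernel start address (assuming the first memblock is
		 * size-aligned).
		 */
		memstart_addr = start & ~(size - 1);
	}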

Signed-off-by: Kevin Hao <haoke...@gmail.com>
---
A new patch in v2.

 arch/powerpc/kernel/prom.c | 24 ++++++++++++++++++++++++
 include/linux/of_fdt.h     |  1 +
 2 files changed, 25 insertions(+)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index eb23ac9..9a69d2d 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -753,6 +753,30 @@ void __init early_init_devtree(void *params)
        DBG(" <- early_init_devtree()\n");
 }

+#ifdef CONFIG_RELOCATABLE
+/*
+ * This function runs before early_init_devtree, so we have to
+ * initialize initial_boot_params here. Since early_init_dt_scan_memory_ppc
+ * will be executed again in early_init_devtree, we have to reinitialize
+ * the memblock data before returning.
+ */
+void __init early_get_first_memblock_info(void *params, phys_addr_t *size)
+{
+       /* Setup flat device-tree pointer */
+       initial_boot_params = params;
+
+       /* Scan memory nodes and rebuild MEMBLOCKs */
+       of_scan_flat_dt(early_init_dt_scan_root, NULL);
+       of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
+
+       if (size)
+               *size = first_memblock_size;
+
+       /* Undo what early_init_dt_scan_memory_ppc does to memblock */
+       memblock_reinit();
+}
+#endif
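
memblock_reinit() is added elsewhere in this series and isn't shown in
this hunk; presumably it just puts memblock back into its pristine empty
state, something along these lines (a guess at the shape, not the actual
patch):

	/* Hypothetical sketch of memblock_reinit(), in mm/memblock.c */
	static void __init memblock_reset_type(struct memblock_type *type)
	{
		/* Back to the initial single empty dummy region */
		memset(type->regions, 0, type->cnt * sizeof(type->regions[0]));
		type->cnt = 1;
		type->total_size = 0;
	}

	void __init memblock_reinit(void)
	{
		memblock_reset_type(&memblock.memory);
		memblock_reset_type(&memblock.reserved);
	}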

Wouldn't it be simpler to set a flag so that early_init_dt_add_memory_arch() doesn't mess with memblocks on the first pass?
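
Something like the sketch below, say (the flag name is illustrative):

	/* Skip memblock updates while scanning for the first memblock only */
	static int add_mem_to_memblock = 1;

	void __init early_init_dt_add_memory_arch(u64 base, u64 size)
	{
		/* ... existing base/size sanity checks ... */
		if (add_mem_to_memblock)
			memblock_add(base, size);
	}

	void __init early_get_first_memblock_info(void *params, phys_addr_t *size)
	{
		/* Setup flat device-tree pointer */
		initial_boot_params = params;

		/* Scan memory nodes without letting them touch memblock */
		add_mem_to_memblock = 0;
		of_scan_flat_dt(early_init_dt_scan_root, NULL);
		of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
		add_mem_to_memblock = 1;

		if (size)
			*size = first_memblock_size;
	}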

-Scott