Wed, 20 Feb 2019, 22:14 Julien Grall <julien.gr...@arm.com>:

> Hi Amit,


Hi Julien, Amit,

Sorry for the formatting, I'm writing from my mobile.

If I am not mistaken, the difference between the BSP and mainline device trees is
in the reserved-memory area: the BSP device tree (1) contains reserved-memory
regions, but the mainline one (2) doesn't.
From the log you provided, I see that Xen is trying to copy the device tree to
an address located in a reserved area (0x58000000). FYI, we always remove these
reserved-memory nodes from the device tree; maybe that's why we didn't hit this
issue. Julien, what do you think, could this be the reason?

(1)
https://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas-bsp.git/tree/arch/arm64/boot/dts/renesas/r8a7795-h3ulcb.dts?h=v4.14.75-ltsi/rcar-3.9.3.rc1

(2)
https://elixir.bootlin.com/linux/v5.0-rc7/source/arch/arm64/boot/dts/renesas/r8a7795-h3ulcb.dts


> Thank you for the report.
>
> On 2/19/19 4:46 PM, Amit Tomer wrote:
> > (XEN) CPU7 MIDR (0x410fd034) does not match boot CPU MIDR (0x411fd073),
> > (XEN) disable cpu (see big.LITTLE.txt under docs/).
> > (XEN) CPU7 never came online
> > (XEN) Failed to bring up CPU 7 (error -5)
> > (XEN) Brought up 4 CPUs
> > (XEN) P2M: 44-bit IPA with 44-bit PA and 8-bit VMID
> > (XEN) P2M: 4 levels with order-0 root, VTCR 0x80043594
> > (XEN) I/O virtualisation disabled
> > (XEN) build-id: 74f80103afa98953c029eea87d69696bcd5ef69d
> > (XEN) alternatives: Patching with alt table 00000000002abba8 ->
> 00000000002ac1f0
> > (XEN) CPU0 will call ARM_SMCCC_ARCH_WORKAROUND_1 on exception entry
> > (XEN) CPU2 will call ARM_SMCCC_ARCH_WORKAROUND_1 on exception entry
> > (XEN) CPU3 will call ARM_SMCCC_ARCH_WORKAROUND_1 on exception entry
> > (XEN) CPU1 will call ARM_SMCCC_ARCH_WORKAROUND_1 on exception entry
> > (XEN) *** LOADING DOMAIN 0 ***
> > (XEN) Loading Domd0 kernel from boot module @ 000000007a000000
> > (XEN) Allocating 1:1 mappings totalling 512MB for dom0:
> > (XEN) BANK[0] 0x00000050000000-0x00000070000000 (512MB)
> > (XEN) Grant table range: 0x00000048000000-0x00000048040000
> > (XEN) Allocating PPI 16 for event channel interrupt
> > (XEN) Loading zImage from 000000007a000000 to
> 0000000050080000-0000000051880000
> > (XEN) Loading dom0 DTB to 0x0000000058000000-0x0000000058010a48
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Unable to copy the DTB to dom0 memory (left = 68168 bytes)
> > (XEN) ****************************************
>
> This is a bit odd. The function copy_to_guest_phys_flush_dcache can
> only fail when the P2M entry is invalid or it is not a RAM page.
>
> From the log, it can't even copy the first page. However, the address
> seems to belong to RAM (see the BANK[0] message). Would you mind
> applying the following patch and sending the log?
>
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index d9836779d1..08b9cd2c44 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1805,6 +1805,8 @@ static void __init dtb_load(struct kernel_info
> *kinfo)
>      printk("Loading dom0 DTB to 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
>             kinfo->dtb_paddr, kinfo->dtb_paddr +
> fdt_totalsize(kinfo->fdt));
>
> +    dump_p2m_lookup(kinfo->d, kinfo->dtb_paddr);
> +
>      left = copy_to_guest_phys_flush_dcache(kinfo->d, kinfo->dtb_paddr,
>                                             kinfo->fdt,
>                                             fdt_totalsize(kinfo->fdt));
>
> Cheers,
>
> --
> Julien Grall
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel