On Thu, Oct 26, 2023 at 03:09:04PM +0300, Xenia Ragiadakou wrote:
> On 26/10/23 14:41, Jan Beulich wrote:
> > On 26.10.2023 12:31, Andrew Cooper wrote:
> > > On 26/10/2023 9:34 am, Xenia Ragiadakou wrote:
> > > > On 26/10/23 10:35, Jan Beulich wrote:
> > > > > On 26.10.2023 08:45, Xenia Ragiadakou wrote:
> > > > > > Given that start < kernel_end and end > kernel_start, the logic that
> > > > > > determines the best placement for the dom0 initrd and metadata does
> > > > > > not take into account the two cases below:
> > > > > > (1) start > kernel_start && end > kernel_end
> > > > > > (2) start < kernel_start && end < kernel_end
> > > > > > 
> > > > > > In case (1), the evaluation will result in end = kernel_start,
> > > > > > i.e. end < start, and will load the initrd in the middle of the
> > > > > > kernel.
> > > > > > In case (2), the evaluation will result in start = kernel_end,
> > > > > > i.e. end < start, and will load the initrd at kernel_end, which is
> > > > > > outside the memory region under evaluation.
> > > > > I agree there is a problem if the kernel range overlaps but is not 
> > > > > fully
> > > > > contained in the E820 range under inspection. I'd like to ask though
> > > > > under what conditions that can happen, as it seems suspicious for the
> > > > > kernel range to span multiple E820 ranges.
> > > > We tried to boot Zephyr as pvh dom0 and its load address was under 1MB.
> > > > 
> > > > I know ... that maybe shouldn't have been permitted at all, but
> > > > nevertheless we hit this issue.
> > > 
> > > Zephyr is linked to run at 4k.  That's what the ELF Headers say, and the
> > > entrypoint is not position-independent.
> > Very interesting. What size is their kernel? And, Xenia, can you provide
> > the E820 map that you were finding the collision with?
> 
> Sure.
> 
> Xen-e820 RAM map:
> 
>  [0000000000000000, 000000000009fbff] (usable)
>  [000000000009fc00, 000000000009ffff] (reserved)
>  [00000000000f0000, 00000000000fffff] (reserved)
>  [0000000000100000, 000000007ffdefff] (usable)
>  [000000007ffdf000, 000000007fffffff] (reserved)
>  [00000000b0000000, 00000000bfffffff] (reserved)
>  [00000000fed1c000, 00000000fed1ffff] (reserved)
>  [00000000fffc0000, 00000000ffffffff] (reserved)
>  [0000000100000000, 000000027fffffff] (usable)
> 
> (XEN) ELF: phdr: paddr=0x1000 memsz=0x8000
> (XEN) ELF: phdr: paddr=0x100000 memsz=0x28a90
> (XEN) ELF: phdr: paddr=0x128aa0 memsz=0x7560
> (XEN) ELF: memory: 0x1000 -> 0x130000

Interesting. So far we have accommodated program headers whose physical
addresses describe a mostly contiguous region, and the assumption was
that it would all fit into a single RAM region.

If we have to support ELFs with such scattered load regions, we should
start using a rangeset or similar in find_memory() in order to get a
clear picture of the available memory ranges suitable for loading the
kernel metadata.
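A minimal standalone sketch of that idea, not using Xen's actual rangeset API (the types and helpers below are made up for illustration): seed the set with the usable E820 regions, subtract every loaded phdr range, and search what remains.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy fixed-size range set, [s, e) semantics.  Purely illustrative of
 * how find_memory() could track free memory; Xen's real rangeset is a
 * linked-list implementation with locking.
 */
#define MAX_RANGES 16

struct range { uint64_t s, e; };

struct rset {
    unsigned int nr;
    struct range r[MAX_RANGES];
};

static void rset_add(struct rset *rs, uint64_t s, uint64_t e)
{
    assert(rs->nr < MAX_RANGES);
    rs->r[rs->nr++] = (struct range){ s, e };
}

/* Remove [s, e) from every range, possibly splitting a range in two. */
static void rset_remove(struct rset *rs, uint64_t s, uint64_t e)
{
    for ( unsigned int i = 0; i < rs->nr; i++ )
    {
        struct range *c = &rs->r[i];

        if ( e <= c->s || s >= c->e )
            continue;                   /* no overlap */
        if ( s > c->s && e < c->e )
        {                               /* hole in the middle: split */
            rset_add(rs, e, c->e);
            c->e = s;
        }
        else if ( s <= c->s && e >= c->e )
            c->e = c->s;                /* fully covered: empty it */
        else if ( s <= c->s )
            c->s = e;                   /* clip the front */
        else
            c->e = s;                   /* clip the tail */
    }
}

/* First remaining range of at least size bytes, or UINT64_MAX. */
static uint64_t rset_find(const struct rset *rs, uint64_t size)
{
    for ( unsigned int i = 0; i < rs->nr; i++ )
        if ( rs->r[i].e - rs->r[i].s >= size )
            return rs->r[i].s;
    return UINT64_MAX;
}
```

Fed with the usable E820 regions and the three Zephyr phdr ranges from the log above, this would correctly report the gap at 0x130000 instead of relying on the single-region assumption.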

Thanks, Roger.
