On 08.04.2025 15:46, Oleksii Kurochko wrote:
> On 4/8/25 2:02 PM, Jan Beulich wrote:
>> On 08.04.2025 13:51, Oleksii Kurochko wrote:
>>> On 4/7/25 12:09 PM, Jan Beulich wrote:
>>>> On 04.04.2025 18:04, Oleksii Kurochko wrote:
>>>>> --- a/xen/arch/riscv/include/asm/mm.h
>>>>> +++ b/xen/arch/riscv/include/asm/mm.h
>>>>> @@ -43,13 +43,19 @@ static inline void *maddr_to_virt(paddr_t ma)
>>>>>     */
>>>>>    static inline unsigned long virt_to_maddr(unsigned long va)
>>>>>    {
>>>>> +    const unsigned int vpn1_shift = PAGETABLE_ORDER + PAGE_SHIFT;
>>>>> +    const unsigned long va_vpn = va >> vpn1_shift;
>>>>> +    const unsigned long xen_virt_start_vpn =
>>>>> +        _AC(XEN_VIRT_START, UL) >> vpn1_shift;
>>>>> +    const unsigned long xen_virt_end_vpn =
>>>>> +        xen_virt_start_vpn + ((XEN_VIRT_SIZE >> vpn1_shift) - 1);
>>>>> +
>>>>>        if ((va >= DIRECTMAP_VIRT_START) &&
>>>>>            (va <= DIRECTMAP_VIRT_END))
>>>>>            return directmapoff_to_maddr(va - directmap_virt_start);
>>>>>    
>>>>> -    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(2));
>>>>> -    ASSERT((va >> (PAGETABLE_ORDER + PAGE_SHIFT)) ==
>>>>> -           (_AC(XEN_VIRT_START, UL) >> (PAGETABLE_ORDER + PAGE_SHIFT)));
>>>>> +    BUILD_BUG_ON(XEN_VIRT_SIZE > GB(1));
>>>>> +    ASSERT((va_vpn >= xen_virt_start_vpn) && (va_vpn <= xen_virt_end_vpn));
>>>> Not all of the range is backed by memory, and for the excess space the
>>>> translation is therefore (likely) wrong. Which would better be caught by
>>>> the assertion?
>>> Does "backed" here mean that the memory is actually mapped?
>>>
>>> IIUC, only the range [XEN_VIRT_START, XEN_VIRT_START + xen_phys_size] needs
>>> to be checked, where xen_phys_size = (unsigned long)_end - (unsigned long)_start.
>>>
>>> Did I understand you correctly?
>> I think so, yes. Depending on what you (intend to) do to .init.* at the
>> end of boot, that range may later also want excluding.
> 
> I planned to release everything between __init_begin and __init_end in the 
> following way:
>    destroy_xen_mappings((unsigned long)__init_begin, (unsigned long)__init_end);
> 
> So yes, then I think I have to come up with a new ASSERT, add an
> is_init_memory_freed variable, and if is_init_memory_freed is true also check
> that `va` isn't from the .init.* range.
> 
> But I'm not quite sure that the mapping for .got* should be destroyed after
> the end of boot. (Now it is part of the [__init_begin, __init_end] range.)

Isn't this a non-issue considering

ASSERT(!SIZEOF(.got),      ".got non-empty")
ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")

near the bottom of xen.lds.S?
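And for the .init.* part, a rough sketch of what the extra guard could look
like (is_init_memory_freed being your yet-to-be-introduced flag, and the exact
shape of course being up to you):

    if ( is_init_memory_freed )
        ASSERT((va < (unsigned long)__init_begin) ||
               (va >= (unsigned long)__init_end));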

>>>>> --- a/xen/arch/riscv/mm.c
>>>>> +++ b/xen/arch/riscv/mm.c
>>>>> @@ -31,20 +31,27 @@ unsigned long __ro_after_init phys_offset; /* = load_start - XEN_VIRT_START */
>>>>>    #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
>>>>>    
>>>>>    /*
>>>>> - * It is expected that Xen won't be more then 2 MB.
>>>>> + * It is expected that Xen won't be larger than XEN_VIRT_SIZE.
>>>>>     * The check in xen.lds.S guarantees that.
>>>>> - * At least 3 page tables (in case of Sv39 ) are needed to cover 2 MB.
>>>>> - * One for each page level table with PAGE_SIZE = 4 Kb.
>>>>>     *
>>>>> - * One L0 page table can cover 2 MB(512 entries of one page table * PAGE_SIZE).
>>>>> + * The root page table is shared with the initial mapping and is declared
>>>>> + * separately (see stage1_pgtbl_root).
>>>>>     *
>>>>> - * It might be needed one more page table in case when Xen load address
>>>>> - * isn't 2 MB aligned.
>>>>> + * The number of page tables between the root page table and the L0 page
>>>>> + * table (in the case of Sv39 this covers the L1 table):
>>>>> + *   (CONFIG_PAGING_LEVELS - 2) are needed for the identity mapping and
>>>>> + *   the same number are needed for Xen.
>>>>>     *
>>>>> - * CONFIG_PAGING_LEVELS page tables are needed for the identity mapping,
>>>>> - * except that the root page table is shared with the initial mapping
>>>>> + * The number of L0 page tables:
>>>>> + *   (512 entries of one L0 page table cover 2MB == 1 << XEN_PT_LEVEL_SHIFT(1))
>>>>> + *   XEN_VIRT_SIZE >> XEN_PT_LEVEL_SHIFT(1) are needed for Xen and
>>>>> + *   one L0 is needed for the identity mapping.
>>>>> + *
>>>>> + *   One more page table might be needed in case the Xen load
>>>>> + *   address isn't 2 MB aligned.
>>>> Shouldn't we guarantee that?
>>> I think it's sufficient to guarantee 4KB alignment.
>>>
>>> The only real benefit I see in enforcing larger alignment is that it likely
>>> enables the use of superpages for mapping, which would reduce TLB pressure.
>>> But perhaps I'm missing something?
>> No, it's indeed mainly that.
> 
> But then the linker address and the load address should both be aligned to a
> 2MB or 1GB boundary. This likely isn't an issue at all, but could it be a
> problem if we require 1GB alignment for the load address? In that case, might
> it be difficult for the platform to find a suitable place in memory to load
> Xen for some reason? (I don't think so, but maybe I'm missing something.)

Why would the load address need to be 1Gb aligned? That (as well as 2Mb)
alignment matters only once you set up paging?
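To put it another way: all that a 2Mb (or 1Gb) mapping requires is that load
and link addresses be congruent modulo the mapping size. Just as a sketch (the
helper name and its placement are made up here):

    /* size must be a power of two (e.g. 2Mb or 1Gb). */
    static inline bool can_use_superpage(paddr_t load_start, vaddr_t link_start,
                                         unsigned long size)
    {
        return !((load_start ^ link_start) & (size - 1));
    }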

> These changes should probably be part of a separate patch, as currently
> setup_initial_mapping() only works with 4KB mappings.

That's fine; it's just that - as said - the calculation of how many page tables
you may need has to cover the worst case.
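Just to spell out that worst case (a sketch only, not the exact macro from the
patch; the name is made up):

    /*
     * Besides the shared root table:
     * - (CONFIG_PAGING_LEVELS - 2) intermediate tables for Xen plus the same
     *   number for the identity mapping,
     * - XEN_VIRT_SIZE >> XEN_PT_LEVEL_SHIFT(1) L0 tables for Xen, one L0 for
     *   the identity mapping, and one more L0 in case the load address isn't
     *   2 MB aligned.
     */
    #define PGTBL_INITIAL_COUNT_SKETCH                    \
        (2 * (CONFIG_PAGING_LEVELS - 2) +                 \
         (XEN_VIRT_SIZE >> XEN_PT_LEVEL_SHIFT(1)) + 1 + 1)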

Jan
