On 26.07.2023 20:39, Oleksii wrote:
> On Wed, 2023-07-26 at 17:59 +0200, Jan Beulich wrote:
>> On 26.07.2023 17:54, Oleksii wrote:
>>> On Wed, 2023-07-26 at 17:00 +0200, Jan Beulich wrote:
>>>> On 26.07.2023 15:12, Oleksii wrote:
>>>>> On Wed, 2023-07-26 at 13:58 +0200, Jan Beulich wrote:
>>>>>> On 26.07.2023 13:23, Oleksii wrote:
>>>>>>> I would like to ask for advice on whether it would be easier
>>>>>>> and less bug-prone ( when removing the identity mapping of the
>>>>>>> whole of Xen ) to have a separate identity section that won't
>>>>>>> be larger than PAGE_SIZE.
>>>>>>
>>>>>> I'm afraid you can't safely do this in C, or at least not
>>>>>> without further checking on what the compiler actually did.
>>>>>>
>>>>>>> @@ -264,6 +268,19 @@ void __init enable_mmu(void)
>>>>>>>                RV_STAGE1_MODE << SATP_MODE_SHIFT);
>>>>>>>  }
>>>>>>>  
>>>>>>> +void __attribute__((naked)) __section(".ident") turn_on_mmu(unsigned long ra)
>>>>>>
>>>>>> Did you read what gcc doc says about "naked"? Extended asm()
>>>>>> isn't supported there. Since ...
>>>>>>
>>>>>>> +{
>>>>>>> +    /* Ensure page table writes precede loading the SATP */
>>>>>>> +    sfence_vma();
>>>>>>> +
>>>>>>> +    /* Enable the MMU and load the new pagetable for Xen */
>>>>>>> +    csr_write(CSR_SATP,
>>>>>>> +              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
>>>>>>> +              RV_STAGE1_MODE << SATP_MODE_SHIFT);
>>>>>>> +
>>>>>>> +    asm volatile( "jr %0\n" : : "r"(ra) );
>>>>>>> +}
>>>>>>
>>>>>> ... none of this really requires C, I think we're at the point
>>>>>> where (iirc) Andrew's and my suggestion wants following, moving
>>>>>> this to assembly code (at which point it doesn't need to be a
>>>>>> separate function). You can still build page tables in C, of
>>>>>> course. (Likely you then also won't need a separate section; some
>>>>>> minimal alignment guarantees ought to suffice to make sure the
>>>>>> critical code is confined to a single page.)
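
A minimal sketch of what such an assembly variant might look like,
assuming the macros CSR_SATP, PAGE_SHIFT, RV_STAGE1_MODE and
SATP_MODE_SHIFT expand to numeric constants for the assembler, and
that the return target is passed in a0 per the calling convention
(this is a sketch of the suggestion, not a tested patch):

        .balign 64
    GLOBAL(turn_on_mmu)
        sfence.vma                      /* page table writes must precede the SATP write */
        la      t0, stage1_pgtbl_root
        srli    t0, t0, PAGE_SHIFT      /* PFN of the root page table */
        li      t1, RV_STAGE1_MODE << SATP_MODE_SHIFT
        or      t0, t0, t1
        csrw    CSR_SATP, t0            /* MMU is on from here */
        jr      a0                      /* continue at the passed (virtual) address */
    ENDPROC(turn_on_mmu)
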
>>>>>
>>>>> Thanks. I'll move all of this to assembly code.
>>>>> Regarding alignment, it seems alignment is needed at both the
>>>>> start and the end of the function:
>>>>>     .balign PAGE_SIZE
>>>>>     GLOBAL(turn_on_mmu)
>>>>>         ...
>>>>>     .balign PAGE_SIZE
>>>>>     ENDPROC(turn_on_mmu)
>>>>>
>>>>> Does a better way exist?
>>>>
>>>> The function is only going to be a handful of instructions. Its
>>>> alignment doesn't need to be larger than the next power of 2. I
>>>> expect you'll be good with 64-byte alignment. (In no case do you
>>>> need to align the end of the function: Putting other stuff there
>>>> is not a problem at all.) What you want in any event is a
>>>> build-time check that the within-a-page constraint is met.
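
Such a build-time check might look like this in xen.lds.S (a sketch;
turn_on_mmu_end is an assumed symbol marking the end of the function):

    /*
     * turn_on_mmu is 64-byte aligned, so staying within 64 bytes
     * guarantees it cannot cross a 4K page boundary.
     */
    ASSERT(turn_on_mmu_end - turn_on_mmu <= 64,
           "turn_on_mmu doesn't fit its alignment block")
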
>>> But shouldn't the address be aligned to a boundary equal to the
>>> page size?
>>>
>>> According to the RISC-V privileged spec:
>>> Any level of PTE may be a leaf PTE, so in addition to 4 KiB pages,
>>> Sv39 supports 2 MiB megapages and 1 GiB gigapages, each of which
>>> must be virtually and physically aligned to a boundary equal to its
>>> size. A page-fault exception is raised if the physical address is
>>> insufficiently aligned.
>>
>> You'd simply map the page containing the chunk, i.e. masking off the
>> low 12 bits. If far enough away from the Xen virtual range, you could
>> as well map a 2M page masking off the low 21 bits, or a 1G page with
>> the low 30 bits of the address cleared.
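
In C the three options are just different masks on the function's
address, e.g. (a sketch showing only the mask computation, not the
mapping call itself):

    unsigned long va = (unsigned long)turn_on_mmu;

    unsigned long base_4k = va & ~((1UL << 12) - 1); /* 4K page containing the code */
    unsigned long base_2m = va & ~((1UL << 21) - 1); /* 2M megapage base */
    unsigned long base_1g = va & ~((1UL << 30) - 1); /* 1G gigapage base */
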
> Agree, then it will work.
> 
> But it still isn't clear what to do if turn_on_mmu becomes bigger
> than 64 bytes ( ASSERT( (turn_on_mmu_end - turn_on_mmu) <= 64 )
> somewhere in xen.lds.S ). Right now the turn_on_mmu() function is
> 0x22 bytes, so 64-byte alignment is enough ( we are sure we don't
> cross a 4k boundary ). But if the size becomes more than 64 bytes
> then the alignment needs to be changed to 0x128.
> Am I right?

Well, to 128 (without 0x), but yes. That function isn't very likely to
change much, though.
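
One way to keep the alignment and the size check from drifting apart
(a sketch; TURN_ON_MMU_ALIGN is a made-up name for a constant shared
between the assembly file and xen.lds.S, both of which are
preprocessed):

    /* in the assembly file */
        .balign TURN_ON_MMU_ALIGN
    GLOBAL(turn_on_mmu)

    /* in xen.lds.S */
    ASSERT(turn_on_mmu_end - turn_on_mmu <= TURN_ON_MMU_ALIGN,
           "turn_on_mmu exceeds its alignment block")
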

Jan
