On 29/02/16 12:59, Balbir Singh wrote:
>
> On 26/02/16 14:53, Aneesh Kumar K.V wrote:
>> This enables us to share the same page table code for
>> both radix and hash. Radix use a hardware defined big endian
>                              ^uses
>> page table
>>
>> Asm -> C conversion makes it simpler to build code for both little
>> and big endian page table.
>> Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
>> ---
>> Note:
>> Any suggestion on how we can do that pte update better so that we can build
>> a LE and BE page table kernel will be helpful.
> Ideally this should not break software compatibility for VM migration, but it 
> might be worth testing: basically a hypervisor with BE page tables migrating an 
> older software-endian kernel instance. Also check for any tools that work off 
> saved dump images containing PTE entries - crash/kdump/etc.
>>  arch/powerpc/include/asm/book3s/64/hash.h   |  75 ++++++++++++--------
>>  arch/powerpc/include/asm/kvm_book3s_64.h    |  12 ++--
>>  arch/powerpc/include/asm/page.h             |   4 ++
>>  arch/powerpc/include/asm/pgtable-be-types.h | 104 ++++++++++++++++++++++++++++
>>  arch/powerpc/mm/hash64_4k.c                 |   6 +-
>>  arch/powerpc/mm/hash64_64k.c                |  11 +--
>>  arch/powerpc/mm/hugepage-hash64.c           |   5 +-
>>  arch/powerpc/mm/hugetlbpage-hash64.c        |   5 +-
>>  arch/powerpc/mm/pgtable-hash64.c            |  42 +++++------
>>  9 files changed, 197 insertions(+), 67 deletions(-)
>>  create mode 100644 arch/powerpc/include/asm/pgtable-be-types.h
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
>> index 9b451cb8294a..9153bda5f395 100644
>> --- a/arch/powerpc/include/asm/book3s/64/hash.h
>> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
>> @@ -1,6 +1,9 @@
>>  #ifndef _ASM_POWERPC_BOOK3S_64_HASH_H
>>  #define _ASM_POWERPC_BOOK3S_64_HASH_H
>>  #ifdef __KERNEL__
>> +#ifndef __ASSEMBLY__
>> +#include <asm/cmpxchg.h>
>> +#endif
> Do we still need PTE_ATOMIC_UPDATE as 1 after these changes?
>>  
>>  /*
>>   * Common bits between 4K and 64K pages in a linux-style PTE.
>> @@ -249,27 +252,35 @@ static inline unsigned long pte_update(struct mm_struct *mm,
>>                                     unsigned long set,
>>                                     int huge)
>>  {
>> -    unsigned long old, tmp;
>> -
>> -    __asm__ __volatile__(
>> -    "1:     ldarx   %0,0,%3         # pte_update\n\
>> -    andi.   %1,%0,%6\n\
>> -    bne-    1b \n\
>> -    andc    %1,%0,%4 \n\
>> -    or      %1,%1,%7\n\
>> -    stdcx.  %1,0,%3 \n\
>> -    bne-    1b"
>> -    : "=&r" (old), "=&r" (tmp), "=m" (*ptep)
>> -    : "r" (ptep), "r" (clr), "m" (*ptep), "i" (_PAGE_BUSY), "r" (set)
>> -    : "cc" );
>> +    pte_t pte;
>> +    unsigned long old_pte, new_pte;
>> +
>> +    do {
>> +reload:
>> +            pte = READ_ONCE(*ptep);
>> +            old_pte = pte_val(pte);
>> +
>> +            /* If PTE busy, retry */
>> +            if (unlikely(old_pte & _PAGE_BUSY))
>> +                    goto reload;
> A loop within a loop? gotos to upward labels can be ugly..
>
>     do {
>         pte = READ_ONCE(*ptep);
>         old_pte = pte_val(pte);
>
>         while (unlikely(old_pte & _PAGE_BUSY)) {
>             cpu_relax(); /* Do we need this? */
>             pte = READ_ONCE(*ptep);
>             old_pte = pte_val(pte);
>         }
>
> The above four lines can be abstracted further to loop_while_page_busy() if 
> required :)
>> +            /*
>> +             * Try to lock the PTE, add ACCESSED and DIRTY if it was
>> +             * a write access. Since this is 4K insert of 64K page size
>> +             * also add _PAGE_COMBO
>> +             */
>> +            new_pte = (old_pte | set) & ~clr;
>> +
>> +    } while (cpu_to_be64(old_pte) != __cmpxchg_u64((unsigned long *)ptep,
>> +                                               cpu_to_be64(old_pte),
>> +                                               cpu_to_be64(new_pte)));
>>
Another minor nit-pick

(I presume that is the case, but anyway)
Can you check whether the compiler is optimizing this such that
cpu_to_be64(old_pte) and cpu_to_be64(new_pte) are each called just once?


<snip>
Balbir Singh.
_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
