Re: [PATCH 1/1] powerpc/40x: Add new PPC440EPx based board HCU5 of Netstal Maschinen
On Fri, Oct 9, 2009 at 2:11 AM, Niklaus Giger wrote:
> Adds support for a HCU5 PPC405EPx based board from Netstal Maschinen AG.
>
> Signed-off-by: Niklaus Giger
> ---
>  arch/powerpc/boot/dts/hcu5.dts          |  254 +++
>  arch/powerpc/configs/44x/hcu5_defconfig | 1166 +++

Do you really need your own defconfig?  Can you instead add support for your
board to ppc44x_defconfig?

Josh is the maintainer here, so it is his decision, but on 5200 stuff I've
been pushing back on adding yet-another-board-specific-defconfig when there
isn't a really compelling reason to do so.  It adds a lot of lines for not a
lot of benefit.  Besides, if you add your board to the ppc44x_defconfig, you
gain the advantage of your code getting compiled by others more frequently.

g.
Fail building 2.6.31.x for CHRP using gcc-4.4.x
Hi, we are having trouble building kernel 2.6.31.x (and also 2.6.32-rc) for
CHRP platforms using gcc-4.4.x; the error the compilation gives us is:

  CC      arch/powerpc/sysdev/indirect_pci.o
  CC      arch/powerpc/sysdev/i8259.o
  LD      arch/powerpc/sysdev/built-in.o
  CC      arch/powerpc/platforms/chrp/setup.o
cc1: warnings being treated as errors
arch/powerpc/platforms/chrp/setup.c: In function 'chrp_event_scan':
arch/powerpc/platforms/chrp/setup.c:378: error: the frame size of 1040 bytes is larger than 1024 bytes
make[2]: *** [arch/powerpc/platforms/chrp/setup.o] Error 1
make[1]: *** [arch/powerpc/platforms/chrp] Error 2
make: *** [arch/powerpc/platforms] Error

The same kernel, with the same configuration, builds fine using gcc-4.3.x.

Regards,
Giuseppe
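For reference, a rough illustration of what usually triggers this check: the
kernel passes -Wframe-larger-than=${CONFIG_FRAME_WARN} (1024 by default on
32-bit) via cc-option, and the powerpc -Werror turns the warning into the
hard error above (hence the CONFIG_PPC_DISABLE_WERROR workaround suggested in
the reply further down).  gcc-4.3 does not know that option, so cc-option
drops it, which is presumably why 4.3 builds cleanly.  The function and
buffer below are invented for the example; this is not the actual
chrp_event_scan() code.

#include <string.h>

/* A large automatic variable lands on the stack and blows the frame limit. */
static void event_scan_example(void)
{
	unsigned char log[1024];	/* ~1 KiB of stack in a single frame */

	memset(log, 0, sizeof(log));	/* ... fill and process log ... */
}

/* One common fix: take the buffer off the stack.  'static' is only safe if
 * the function can never run concurrently; otherwise allocate it instead. */
static void event_scan_example_fixed(void)
{
	static unsigned char log[1024];

	memset(log, 0, sizeof(log));	/* ... fill and process log ... */
}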
Re: Accessing flash directly from User Space [SOLVED]
> mmio[0] = address;
> mmio[1] = data;
> mb();

eieio is enough here.

> mmio[3] |= 0x01; /* This triggers an operation -> address=data */
> /* probably also need an mb() here, if the following code
>  * depends on the operation to be triggered. */

No, a sync does not guarantee the device has seen the store yet; you need
something specific to the device to guarantee this.  Usually a load (from
the same register!) followed by code that makes sure the load has finished
is sufficient (and necessary).

>>> hmm, the mmio[0] and mmio[1] are written in order I hope?
>>
>> We do not care in this example, as the write to [3] is what triggers the
>> device operation.  We only care that [0] and [1] are set when [3] is
>> written, not in what order [0] and [1] are written.
>
> In this example yes, I was wondering in general.

The writes will not be reordered, but they can be combined, unless you put
an eieio (or sync) in between.

> So what does guarded memory mapping on ppc mean really?  If I need that
> much mb(), guarded does not seem to do much.

Loosely speaking, guarded means no prefetch is done.

Segher
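Putting the advice from this thread together, here is a minimal user-space
sketch of the resulting access pattern.  It assumes the register layout from
the example above (address at [0], data at [1], trigger at [3]) and a
volatile pointer obtained by mmap()ing the device; the twi/isync pair is the
usual idiom for making sure the read-back has completed before continuing.
It is an illustration, not a driver.

#include <stdint.h>

static inline void eieio(void)
{
	__asm__ __volatile__("eieio" : : : "memory");
}

static void flash_op(volatile uint32_t *mmio, uint32_t addr, uint32_t data)
{
	uint32_t tmp;

	mmio[0] = addr;		/* order between [0] and [1] doesn't matter...  */
	mmio[1] = data;
	eieio();		/* ...but both must reach the device before [3] */

	mmio[3] |= 0x01;	/* trigger the operation                        */
	eieio();

	tmp = mmio[3];		/* read back from the same register so the      */
				/* triggering store is known to have arrived    */
	__asm__ __volatile__("twi 0,%0,0; isync" : : "r" (tmp));
				/* ...and make sure that load itself finished   */
}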
Re: Fail building 2.6.31.x for CHRP using gcc-4.4.x
Hi Giuseppe,

On Sun, 1 Nov 2009 10:17:00 +0100 Giuseppe Coviello wrote:
>
> Hi, we are having trouble building kernel 2.6.31.x (and also 2.6.32-rc) for
> CHRP platforms using gcc-4.4.x; the error the compilation gives us is:
>
>   CC      arch/powerpc/sysdev/indirect_pci.o
>   CC      arch/powerpc/sysdev/i8259.o
>   LD      arch/powerpc/sysdev/built-in.o
>   CC      arch/powerpc/platforms/chrp/setup.o
> cc1: warnings being treated as errors
> arch/powerpc/platforms/chrp/setup.c: In function 'chrp_event_scan':
> arch/powerpc/platforms/chrp/setup.c:378: error: the frame size of 1040 bytes is larger than 1024 bytes
> make[2]: *** [arch/powerpc/platforms/chrp/setup.o] Error 1
> make[1]: *** [arch/powerpc/platforms/chrp] Error 2
> make: *** [arch/powerpc/platforms] Error
>
> The same kernel, with the same configuration, builds fine using gcc-4.3.x.

For now, just enable CONFIG_PPC_DISABLE_WERROR.

--
Cheers,
Stephen Rothwell  s...@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/
Re: [PATCH 08/27] Add SLB switching code for entry/exit
> This is the really low level of guest entry/exit code.
>
> Book3s_64 has an SLB, which stores all ESID -> VSID mappings we're
> currently aware of.
>
> The segments in the guest differ from the ones on the host, so we need
> to switch the SLB to tell the MMU that we're in a new context.
>
> So we store a shadow of the guest's SLB in the PACA, switch to that on
> entry and only restore bolted entries on exit, leaving the rest to the
> Linux SLB fault handler.
>
> That way we get a really clean way of switching the SLB.
>
> Signed-off-by: Alexander Graf
> ---
>  arch/powerpc/kvm/book3s_64_slb.S |  277 ++
>  1 files changed, 277 insertions(+), 0 deletions(-)
>  create mode 100644 arch/powerpc/kvm/book3s_64_slb.S
>
> diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
> new file mode 100644
> index 000..00a8367
> --- /dev/null
> +++ b/arch/powerpc/kvm/book3s_64_slb.S
> @@ -0,0 +1,277 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License, version 2, as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
> + *
> + * Copyright SUSE Linux Products GmbH 2009
> + *
> + * Authors: Alexander Graf
> + */
> +
> +/*****************************************************************************
> + *                                                                           *
> + *                                Entry code                                 *
> + *                                                                           *
> + *****************************************************************************/
> +
> +.global kvmppc_handler_trampoline_enter
> +kvmppc_handler_trampoline_enter:
> +
> +	/* Required state:
> +	 *
> +	 * MSR = ~IR|DR
> +	 * R13 = PACA
> +	 * R9 = guest IP
> +	 * R10 = guest MSR
> +	 * R11 = free
> +	 * R12 = free
> +	 * PACA[PACA_EXMC + EX_R9] = guest R9
> +	 * PACA[PACA_EXMC + EX_R10] = guest R10
> +	 * PACA[PACA_EXMC + EX_R11] = guest R11
> +	 * PACA[PACA_EXMC + EX_R12] = guest R12
> +	 * PACA[PACA_EXMC + EX_R13] = guest R13
> +	 * PACA[PACA_EXMC + EX_CCR] = guest CR
> +	 * PACA[PACA_EXMC + EX_R3] = guest XER
> +	 */
> +
> +	mtsrr0	r9
> +	mtsrr1	r10
> +
> +	mtspr	SPRN_SPRG_SCRATCH0, r0
> +
> +	/* Remove LPAR shadow entries */
> +
> +#if SLB_NUM_BOLTED == 3

You could alternatively check the persistent entry in the slb_shadow
buffer.  This would give you a run-time check.  Not sure what's best
though.

> +
> +	ld	r12, PACA_SLBSHADOWPTR(r13)
> +	ld	r10, 0x10(r12)
> +	ld	r11, 0x18(r12)

Can you define something in asm-offsets.c for these magic constants 0x10
and 0x18?  Similarly below.

> +	/* Invalid? Skip. */
> +	rldicl. r0, r10, 37, 63
> +	beq	slb_entry_skip_1
> +	xoris	r9, r10, slb_esi...@h
> +	std	r9, 0x10(r12)
> +slb_entry_skip_1:
> +	ld	r9, 0x20(r12)
> +	/* Invalid? Skip. */
> +	rldicl. r0, r9, 37, 63
> +	beq	slb_entry_skip_2
> +	xoris	r9, r9, slb_esi...@h
> +	std	r9, 0x20(r12)
> +slb_entry_skip_2:
> +	ld	r9, 0x30(r12)
> +	/* Invalid? Skip. */
> +	rldicl. r0, r9, 37, 63
> +	beq	slb_entry_skip_3
> +	xoris	r9, r9, slb_esi...@h
> +	std	r9, 0x30(r12)

Can these 3 be made into a macro?
> +slb_entry_skip_3:
> +
> +#else
> +#error unknown number of bolted entries
> +#endif
> +
> +	/* Flush SLB */
> +
> +	slbia
> +
> +	/* r0 = esid & ESID_MASK */
> +	rldicr	r10, r10, 0, 35
> +	/* r0 |= CLASS_BIT(VSID) */
> +	rldic	r12, r11, 56 - 36, 36
> +	or	r10, r10, r12
> +	slbie	r10
> +
> +	isync
> +
> +	/* Fill SLB with our shadow */
> +
> +	lbz	r12, PACA_KVM_SLB_MAX(r13)
> +	mulli	r12, r12, 16
> +	addi	r12, r12, PACA_KVM_SLB
> +	add	r12, r12, r13
> +
> +	/* for (r11 = kvm_slb; r11 < kvm_slb + kvm_slb_size; r11+=slb_entry) */
> +	li	r11, PACA_KVM_SLB
> +	add	r11, r11, r13
> +
> +slb_loop_enter:
> +
> +	ld	r10, 0(r11)
> +
> +	rldicl. r0, r10, 37, 63
> +	beq	slb_loop_enter_skip
> +
> +	ld	r9, 8(r11)
> +	slbmte	r9, r10

If you're updating the first 3 SLB entries, you need to make sure the SLB
shadow is updated at the same time.  (BTW, dumb question: can we run this
under PHYP?)

> +
> +slb_loop_enter_skip:
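As a rough sketch of the asm-offsets.c suggestion above: the layout comes
from the existing struct slb_shadow in asm/lppaca.h, so entries like the
following would generate the 0x10/0x18/0x20/0x30 constants used in the
assembly.  The symbol names here are made up for illustration; whatever
ends up being merged may well use different ones.

/* arch/powerpc/kernel/asm-offsets.c (excerpt, illustrative only) */
#include <linux/stddef.h>
#include <linux/kbuild.h>
#include <asm/lppaca.h>

int main(void)
{
	DEFINE(SLBSHADOW_SAVEAREA_ESID0,
	       offsetof(struct slb_shadow, save_area[0].esid)); /* 0x10 */
	DEFINE(SLBSHADOW_SAVEAREA_VSID0,
	       offsetof(struct slb_shadow, save_area[0].vsid)); /* 0x18 */
	DEFINE(SLBSHADOW_SAVEAREA_ESID1,
	       offsetof(struct slb_shadow, save_area[1].esid)); /* 0x20 */
	DEFINE(SLBSHADOW_SAVEAREA_ESID2,
	       offsetof(struct slb_shadow, save_area[2].esid)); /* 0x30 */
	return 0;
}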
Re: [PATCH 11/27] Add book3s_64 Host MMU handling
> +static void invalidate_pte(struct hpte_cache *pte)
> +{
> +	dprintk_mmu("KVM: Flushing SPT %d: 0x%llx (0x%llx) -> 0x%llx\n",
> +		    i, pte->pte.eaddr, pte->pte.vpage, pte->host_va);
> +
> +	ppc_md.hpte_invalidate(pte->slot, pte->host_va,
> +			       MMU_PAGE_4K, MMU_SEGSIZE_256M,
> +			       false);

Are we assuming 256M segments here (and elsewhere)?

> +static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
> +{
> +	int i;
> +	int max_slb_size = 64;
> +	int found_inval = -1;
> +	int r;
> +
> +	if (!get_paca()->kvm_slb_max)
> +		get_paca()->kvm_slb_max = 1;
> +
> +	/* Are we overwriting? */
> +	for (i = 1; i < get_paca()->kvm_slb_max; i++) {
> +		if (!(get_paca()->kvm_slb[i].esid & SLB_ESID_V))
> +			found_inval = i;
> +		else if ((get_paca()->kvm_slb[i].esid & ESID_MASK) == esid)
> +			return i;
> +	}
> +
> +	/* Found a spare entry that was invalidated before */
> +	if (found_inval > 0)
> +		return found_inval;
> +
> +	/* No spare invalid entry, so create one */
> +
> +	if (mmu_slb_size < 64)
> +		max_slb_size = mmu_slb_size;

Can we just use the global mmu_slb_size and eliminate max_slb_size?

Mikey
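A minimal sketch of that simplification, assuming max_slb_size is only used
to cap the shadow SLB when a new entry is created.  The quoted hunk stops
before that point, so the tail of the function below is a guess, not the
posted code.

/* Illustrative only: bound everything with the global mmu_slb_size
 * (declared in asm/mmu-hash64.h) and drop the local max_slb_size copy. */
static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
{
	int i;
	int found_inval = -1;

	if (!get_paca()->kvm_slb_max)
		get_paca()->kvm_slb_max = 1;

	/* Are we overwriting? */
	for (i = 1; i < get_paca()->kvm_slb_max; i++) {
		if (!(get_paca()->kvm_slb[i].esid & SLB_ESID_V))
			found_inval = i;
		else if ((get_paca()->kvm_slb[i].esid & ESID_MASK) == esid)
			return i;
	}

	/* Found a spare entry that was invalidated before */
	if (found_inval > 0)
		return found_inval;

	/* No spare invalid entry: take a new slot, but never index past the
	 * real SLB size (guessed tail, see note above). */
	if (get_paca()->kvm_slb_max < mmu_slb_size)
		return get_paca()->kvm_slb_max++;

	/* Shadow SLB full; what to do here depends on the rest of the
	 * series (not quoted), so this sketch just reuses the last slot. */
	return mmu_slb_size - 1;
}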
[PATCH] powerpc: Avoid giving out RTC dates below EPOCH
Doing so causes xtime to be negative, which crashes the timekeeping code in
funny ways when doing suspend/resume.

Signed-off-by: Benjamin Herrenschmidt
---

diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 92dc844..6a7ce0e 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -777,7 +777,7 @@ int update_persistent_clock(struct timespec now)
 	return ppc_md.set_rtc_time(&tm);
 }
 
-void read_persistent_clock(struct timespec *ts)
+static void __read_persistent_clock(struct timespec *ts)
 {
 	struct rtc_time tm;
 	static int first = 1;
@@ -800,10 +800,23 @@ void read_persistent_clock(struct timespec *ts)
 		return;
 	}
 	ppc_md.get_rtc_time(&tm);
+
 	ts->tv_sec = mktime(tm.tm_year+1900, tm.tm_mon+1, tm.tm_mday,
 			    tm.tm_hour, tm.tm_min, tm.tm_sec);
 }
 
+void read_persistent_clock(struct timespec *ts)
+{
+	__read_persistent_clock(&ts);
+
+	/* Sanitize it in case real time clock is set below EPOCH */
+	if (ts->tv_sec < 0) {
+		ts->tv_sec = 0;
+		ts->tv_nsec = 0;
+	}
+
+}
+
 /* clocksource code */
 static cycle_t rtc_read(struct clocksource *cs)
 {
Re: [PATCH] powerpc: Avoid giving out RTC dates below EPOCH
On Mon, 2009-11-02 at 16:11 +1100, Benjamin Herrenschmidt wrote:
> Doing so causes xtime to be negative, which crashes the timekeeping code
> in funny ways when doing suspend/resume.
>
> Signed-off-by: Benjamin Herrenschmidt
> ---
> +void read_persistent_clock(struct timespec *ts)
> +{
> +	__read_persistent_clock(&ts);

Should read:

+	__read_persistent_clock(ts);

Forgot a quilt refresh ;-)

Cheers,
Ben.

> +	/* Sanitize it in case real time clock is set below EPOCH */
> +	if (ts->tv_sec < 0) {
> +		ts->tv_sec = 0;
> +		ts->tv_nsec = 0;
> +	}
> +
> +}
> +
>  /* clocksource code */
>  static cycle_t rtc_read(struct clocksource *cs)
>  {