On Tue, 2015-10-06 at 11:25 -0700, Laura Abbott wrote:
> On 10/05/2015 08:35 PM, Michael Ellerman wrote:
> > On Fri, 2015-10-02 at 08:43 -0700, Laura Abbott wrote:
> >> Hi,
> >>
> >> We received a report (https://bugzilla.redhat.com/show_bug.cgi?id=1267395)
> >> of bad assembly when compiling
From: Christophe Lombard
The scheduled process area is currently allocated before assigning the
correct maximum number of processes to the AFU, which means we only ever
allocate a fixed number of pages for the scheduled process area. This
will limit us to 958 processes with 2 x 64K pages. If we try to
On Wed, 2015-10-07 at 14:51 +1100, Ian Munsie wrote:
> The explanation probably still needs to be expanded more (e.g. this
> could cause a crash for an AFU that supports more than about a thousand
> processes) - see my other email in reply to v1 for more, but I'm happy
> for this to go in as is (bu
The explanation probably still needs to be expanded more (e.g. this
could cause a crash for an AFU that supports more than about a thousand
processes) - see my other email in reply to v1 for more, but I'm happy
for this to go in as is (but ultimately that's mpe's call).
It should also be CCd to st
From: Tiejun Chen
Allow KEXEC for book3e, and bypass or convert non-book3e stuff
in kexec code.
Signed-off-by: Tiejun Chen
[scottw...@freescale.com: move code to minimize diff, and cleanup]
Signed-off-by: Scott Wood
---
arch/powerpc/Kconfig | 2 +-
arch/powerpc/kernel/machi
book3e_secondary_core_init will only create a TLB entry if r4 = 0,
so set r4 to 0 before calling it.
Signed-off-by: Scott Wood
---
arch/powerpc/kernel/misc_64.S | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index 246ad8c..ddbc535 100644
--- a/arch/po
VIRT_PHYS_OFFSET is not correct on book3e-64, because
it does not account for CONFIG_RELOCATABLE other than via the
32-bit-only virt_phys_offset.
book3e-64 can (and if the comment about a GCC miscompilation is still
relevant, should) use the normal ppc64 __va/__pa.
At this point, only boo
The SMP release mechanism for FSL book3e is different from the one used
when booting normally on hardware. In theory we could simulate the normal spin
table mechanism, but not at the addresses U-Boot put in the device tree
-- so there'd need to be even more communication between the kernel and
kexec to set tha
From: Tiejun Chen
book3e has no real (MMU-off) mode, so we have to create an identity TLB
mapping to make sure we can access the real physical address.
Signed-off-by: Tiejun Chen
[scottwood: cleanup, and split off some changes]
Signed-off-by: Scott Wood
---
arch/powerpc/kernel/misc_64.S | 52 +++
This limit only makes sense on book3s, and on book3e it can cause
problems with kdump if we don't have any memory under 256 MiB.
Signed-off-by: Scott Wood
---
arch/powerpc/kernel/paca.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/paca.c b/arch/pow
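A sketch of the idea only, not the actual diff (names are assumed): the
"must live below 256 MiB" cap is a book3s real-mode constraint, so it is
skipped on book3e, where a kdump kernel may have no memory that low.

static u64 paca_alloc_limit(void)
{
#ifdef CONFIG_PPC_BOOK3S
        /* book3s accesses the paca in real mode, so keep it low. */
        return min(0x10000000ULL, ppc64_rma_size);
#else
        /* book3e has no such constraint; use the whole first region. */
        return ppc64_rma_size;
#endif
}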
While book3e doesn't have "real mode", we still want to wait for
all the non-crash cpus to complete their shutdown.
Signed-off-by: Scott Wood
---
arch/powerpc/kernel/crash.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/crash.c b/arch/powerpc/kerne
From: Tiejun Chen
book3e is different from book3s, since book3s includes the exception
vector code in head_64.S, as it relies on absolute addressing,
which is only possible within that compilation unit. So we have
to get that label address via the GOT.
And when booting a relocated kernel, we should reset ip
From: Tiejun Chen
Convert r4/r5, not r6, to a virtual address when calling
copy_and_flush. Otherwise, since r3 is already virtual and copy_and_flush
tries to access r3+r6, PAGE_OFFSET gets added twice.
This isn't normally seen because on book3e we normally enter with
the kernel at zero and thus skip
From: Tiejun Chen
Rename 'interrupt_end_book3e' to '__end_interrupts' so that the symbol
can be used by both book3s and book3e.
Signed-off-by: Tiejun Chen
[scottwood: edit changelog]
Signed-off-by: Scott Wood
---
arch/powerpc/kernel/exceptions-64e.S | 8
1 file changed, 4 insertions(
The new kernel will be expecting secondary threads to be disabled,
not spinning.
Signed-off-by: Scott Wood
---
v2: minor cleanup
arch/powerpc/kernel/head_64.S | 16 ++
arch/powerpc/platforms/85xx/smp.c | 46 +++
2 files changed, 62 insertions(
From: Tiejun Chen
Unlike 32-bit 85xx kexec, we don't do a core reset.
Signed-off-by: Tiejun Chen
[scottwood: edit changelog, and cleanup]
Signed-off-by: Scott Wood
---
arch/powerpc/platforms/85xx/smp.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/arch/powerpc/platforms/85x
This is required for kdump to work when loaded at an address that
does not fall within the first TLB entry -- which can easily happen
because while the lower limit is enforced via reserved memory, which
doesn't affect how much is mapped, the upper limit is enforced via a
different mechanism that
Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries to
be loaded at once. This avoids the need to keep the translation that
code is executing from in the same TLB entry in the final TLB
configuration as during early boot, which in turn is helpful for
relocatable kernels (e.g. kdump)
Otherwise, because the top end of the crash kernel is treated as the
absolute top of memory rather than the beginning of a reserved region,
in-flight DMA from the previous kernel that targets areas above the
crash kernel can trigger a storm of PCI errors. We only do this for
kdump, not normal kexe
85xx currently uses the generic timebase sync mechanism when
CONFIG_KEXEC is enabled, because 32-bit 85xx kexec support does a hard
reset of each core. 64-bit 85xx kexec does not do this, so we neither
need nor want this (nor is the generic timebase sync code built on
ppc64).
FWIW, I don't like t
Problems have been observed in coreint (EPR) mode if interrupts are
left pending (due to the lack of device quiescence with kdump) after
a delivery attempt to a CPU was blocked by MSR[EE]
-- interrupts no longer get reliably delivered in the new kernel. I
tried various ways of f
This allows SMP kernels to work as kdump crash kernels. While crash
kernels don't really need to be SMP, this prevents things from breaking
if a user does it anyway (which is not something you want to only find
out once the main kernel has crashed in the field, especially if
whether it works or no
This patchset adds support for kexec and kdump to e5500 and e6500 based
systems running 64-bit kernels. It depends on the kexec-tools patch
http://patchwork.ozlabs.org/patch/527050/ ("ppc64: Add a flag to tell the
kernel it's booting from kexec").
Scott Wood (12):
powerpc/fsl-booke-64: Allow bo
Excerpts from Michael Ellerman's message of 2015-10-06 17:19:02 +1100:
> On Fri, 2015-10-02 at 16:01 +0200, Christophe Lombard wrote:
> > This moves the initialisation of the num_procs to before the SPA
> > allocation.
>
> Why? What does it fix? I can't tell from the diff or the change log.
This
Currently, we rely on the existence of struct pci_driver::err_handler
to judge if the corresponding PCI device should be unplugged during
EEH recovery (the partial hotplug case). However, that check is not
precise enough: some device drivers implement only part of the EEH
error handlers, just to collect diag-data. Tha
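A sketch of a more precise test than a bare NULL check on err_handler
(helper name assumed, not the actual patch): verify that the callbacks
the recovery path relies on are really implemented before deciding the
device can stay bound.

static bool eeh_driver_supports_recovery(struct pci_driver *drv)
{
        /* struct pci_error_handlers and its members come from <linux/pci.h> */
        return drv && drv->err_handler &&
               drv->err_handler->error_detected &&
               drv->err_handler->slot_reset &&
               drv->err_handler->resume;
}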
When either or both of the two flags below are set in the PE
state, the PE's IO path is regarded as enabled: EEH_STATE_MMIO_ACTIVE
or EEH_STATE_MMIO_ENABLED.
Signed-off-by: Gavin Shan
---
arch/powerpc/kernel/eeh.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/power
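A minimal sketch of the check described above (the flag values here are
illustrative; the real ones live in arch/powerpc/include/asm/eeh.h):

#define EEH_STATE_MMIO_ACTIVE   (1 << 0)        /* assumed value */
#define EEH_STATE_MMIO_ENABLED  (1 << 1)        /* assumed value */

static int eeh_io_path_enabled(int state)
{
        /* IO path is regarded as enabled if either MMIO flag is set. */
        return (state & (EEH_STATE_MMIO_ACTIVE | EEH_STATE_MMIO_ENABLED)) != 0;
}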
This cleans up pseries_eeh_get_state(), no functional changes:
* Return EEH_STATE_NOT_SUPPORT early when the 2nd RTAS output
argument is zero to avoid nested if statements.
* Skip clearing bits in the PE state represented by variable
"result" to simplify the code.
Signed-off-by: G
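The shape of the early-return cleanup, as a sketch (function and array
names are assumptions, not the real pseries code):

static int pe_state_from_rtas_outputs(const int *rets)
{
        if (rets[1] == 0)
                /* State not available: bail out early instead of
                 * nesting the rest of the logic inside an else branch. */
                return EEH_STATE_NOT_SUPPORT;

        /* ... map rets[0] directly onto EEH_STATE_* bits, without first
         * setting and then clearing bits in a "result" variable ... */
        return 0;
}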
On a fenced PHB, the error handlers in the drivers of its subordinate
devices could return PCI_ERS_RESULT_CAN_RECOVER, indicating that no
reset will be issued during the recovery. That conflicts with the fact
that a fenced PHB cannot be recovered without a reset.
This limits the return value from the error
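A sketch of that limiting step (helper name assumed; pci_ers_result_t
and the PCI_ERS_RESULT_* values come from <linux/pci.h>):

static pci_ers_result_t fenced_phb_result(pci_ers_result_t r)
{
        /* A fenced PHB cannot be recovered without a reset, so a driver's
         * "can recover" answer is raised to "needs reset". */
        return (r == PCI_ERS_RESULT_CAN_RECOVER) ? PCI_ERS_RESULT_NEED_RESET : r;
}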
On the PowerNV platform, the PE is kept in a frozen state until the PE
reset is completed, to avoid recursive EEH errors caused by MMIO
access during the EEH reset. The PE's frozen state is
cleared after the BARs of the PCI devices included in the PE are restored
and enabled. However, we needn't clear the
On Fri, 2015-10-02 at 20:07 +1000, Alexey Kardashevskiy wrote:
> On 08/19/2015 12:01 PM, Wei Yang wrote:
> > In the original design, it tries to group VFs to enable more VFs in the
> > system, when the VF BAR is bigger than 64MB. This design has a flaw in which one
> > error on a VF will int
Powerpc provides hcall events that also provide insight into guest
behaviour. Enhance perf kvm stat to record and analyze hcall events.
- To trace hcall events:
perf kvm stat record
- To show the results:
perf kvm stat report --event=hcall
The result shows the number of hypervisor call
perf kvm can be used to analyze guest exit reasons. This support already
exists on x86, so port it to powerpc.
- To trace KVM events:
perf kvm stat record
If many guests are running, we can track a specific guest by using
--pid, as in: perf kvm stat record --pid
- To see the
This patch removes the "const" qualifier from kvm_events_tp declaration
to account for the fact that some architectures may need to update this
variable dynamically. For instance, powerpc will need to update this
variable dynamically depending on the machine type.
Signed-off-by: Hemant Kumar
---
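Illustrative only, not the actual perf source: with the const qualifier
gone, arch code can fill or repoint the table at runtime.

/* Before (sketch): the tracepoint table is fixed at build time. */
/* static const char * const kvm_events_tp[] = { "kvm:kvm_entry", "kvm:kvm_exit", NULL }; */

/* After (sketch): a mutable table the architecture can populate. */
const char *kvm_events_tp[16];

static void setup_book3s_events(void)           /* name assumed */
{
        kvm_events_tp[0] = "kvm_hv:kvm_guest_exit";     /* tracepoint name assumed */
        kvm_events_tp[1] = NULL;
}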
It's better to remove the dependency on uapi/kvm_perf.h to allow dynamic
discovery of kvm events (if it's needed). To do this, some extern
variables have been introduced with which we can keep the generic
functions generic.
Signed-off-by: Hemant Kumar
---
Changelog:
v8 to v9:
- Removed the macro de
On Wed, 2015-09-30 at 16:45 -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Sep 30, 2015 at 09:09:09PM +0200, Jiri Olsa escreveu:
> > On Wed, Sep 30, 2015 at 11:28:36AM -0700, Sukadev Bhattiprolu wrote:
> > > From e29a7236122c4d807ec9ebc721b5d7d75c8d Mon Sep 17 00:00:00 2001
> > > From: Sukade
On Wed, 2015-07-08 at 13:49 +1000, Samuel Mendoza-Jonas wrote:
> On 08/07/15 13:37, Scott Wood wrote:
> > On Wed, 2015-07-08 at 13:29 +1000, Samuel Mendoza-Jonas wrote:
> > > Older big-endian ppc64 kernels don't include the FIXUP_ENDIAN check,
> > > meaning if we kexec from a little-endian kernel t
It needs to know this because the SMP release mechanism for Freescale
book3e is different from the one used when booting normally on hardware. In theory
we could simulate the normal spin table mechanism, but not (easily) at
the addresses U-Boot put in the device tree -- so there'd need to be
even more communic
Commit a304e2d82a8c3 ("ppc64: purgatory: Reset primary cpu endian to
big-endian") changed bctr to rfid. rfid is book3s-only and will cause a
fatal exception on book3e.
Purgatory is an isolated environment which makes importing information
about the subarch awkward, so instead rely on the fact that
Produce a warning-free build on ppc64 (at least, when built as 64-bit
userspace -- if a 64-bit binary for ppc64 is a requirement, why is -m64
set only on purgatory?). Mostly unused (or write-only) variable
warnings, but also one nasty one where reserve() was used without a
prototype, causing long
On Tue, 2015-10-06 at 22:30 +0200, christophe leroy wrote:
> Le 06/10/2015 18:46, Scott Wood a écrit :
> > On Tue, 2015-10-06 at 15:35 +0200, Christophe Leroy wrote:
> > > Le 29/09/2015 00:07, Scott Wood a écrit :
> > > > On Tue, Sep 22, 2015 at 06:50:29PM +0200, Christophe Leroy wrote:
> > > > > W
Le 06/10/2015 18:46, Scott Wood a écrit :
On Tue, 2015-10-06 at 15:35 +0200, Christophe Leroy wrote:
Le 29/09/2015 00:07, Scott Wood a écrit :
On Tue, Sep 22, 2015 at 06:50:29PM +0200, Christophe Leroy wrote:
We are spending between 40 and 160 cycles with a mean of 65 cycles in
the TLB handl
This updates the powerpc code to use the CONFIG_GENERIC_CMDLINE
option.
Cc: xe-ker...@external.cisco.com
Cc: Daniel Walker
Signed-off-by: Daniel Walker
---
arch/powerpc/Kconfig | 23 +--
arch/powerpc/kernel/prom.c | 4
arch/powerpc/kernel/prom_init.c |
On 10/05/2015 08:35 PM, Michael Ellerman wrote:
On Fri, 2015-10-02 at 08:43 -0700, Laura Abbott wrote:
Hi,
We received a report (https://bugzilla.redhat.com/show_bug.cgi?id=1267395)
of bad assembly when compiling on powerpc with little endian
...
After some discussion with the binutils fol
On Tue, 2015-10-06 at 16:35 +0200, Christophe Leroy wrote:
> Le 29/09/2015 02:03, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:50:58PM +0200, Christophe Leroy wrote:
> > > Move 8xx SPRN defines into reg_8xx.h and add some missing ones
> > >
> > > Signed-off-by: Christophe Leroy
> > > ---
>
On Tue, 2015-10-06 at 16:12 +0200, Christophe Leroy wrote:
> Le 29/09/2015 02:00, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:50:54PM +0200, Christophe Leroy wrote:
> > > We are spending between 40 and 160 cycles with a mean of 65 cycles
> > > in the TLB handling routines (measured with mft
On Tue, 2015-10-06 at 15:35 +0200, Christophe Leroy wrote:
> Le 29/09/2015 00:07, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:50:29PM +0200, Christophe Leroy wrote:
> > > We are spending between 40 and 160 cycles with a mean of 65 cycles in
> > > the TLB handling routines (measured with mft
On Tue, 2015-10-06 at 15:35 +0200, Christophe Leroy wrote:
> Le 29/09/2015 00:07, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:50:29PM +0200, Christophe Leroy wrote:
> > > We are spending between 40 and 160 cycles with a mean of 65 cycles in
> > > the TLB handling routines (measured with mft
As preparation for eliminating the indirect access to the various
global cpu_*_bits bitmaps via the pointer variables cpu_*_mask, rename
the cpu_online_mask member of struct fadump_crash_info_header to
simply online_mask, thus allowing cpu_online_mask to become a macro.
Acked-by: Michael Ellerman
v2: fix build failure on ppc, add acks.
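A sketch of the rename (the struct layout here is illustrative, not the
real fadump header): the member must stop reusing the global's name so
that cpu_online_mask can later become a macro.

struct fadump_crash_info_header {
        /* ... other fields ... */
        struct cpumask online_mask;     /* was: cpu_online_mask */
};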
The four cpumasks cpu_{possible,online,present,active}_bits are
exposed readonly via the corresponding const variables
cpu_xyz_mask. But they are also accessible for arbitrary writing via
the exposed functions set_cpu_xyz. There's quite a bit of code
through
On Tue, 2015-10-06 at 16:10 +0200, Christophe Leroy wrote:
> Le 29/09/2015 01:58, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:50:50PM +0200, Christophe Leroy wrote:
> > > On recent kernels, with some debug options like for instance
> > > CONFIG_LOCKDEP, the BSS requires more than 8M memory,
On Tue, 2015-10-06 at 16:02 +0200, Christophe Leroy wrote:
> Le 29/09/2015 01:47, Scott Wood a écrit :
> > On Tue, Sep 22, 2015 at 06:50:42PM +0200, Christophe Leroy wrote:
> > > x_mapped_by_bats() and x_mapped_by_tlbcam() serve the same kind of
> > > purpose, so lets group them into a single funct
Le 29/09/2015 01:58, Scott Wood a écrit :
On Tue, Sep 22, 2015 at 06:50:50PM +0200, Christophe Leroy wrote:
On recent kernels, with some debug options like for instance
CONFIG_LOCKDEP, the BSS requires more than 8M memory, although
the kernel code fits in the first 8M.
Today, it is necessary
Le 29/09/2015 02:03, Scott Wood a écrit :
On Tue, Sep 22, 2015 at 06:50:58PM +0200, Christophe Leroy wrote:
Move 8xx SPRN defines into reg_8xx.h and add some missing ones
Signed-off-by: Christophe Leroy
---
No change in v2
Why are they being moved? Why are they being separated from the bit
This moves the initialisation of the num_procs to before the SPA
allocation.
The field 'num_procs' of the structure cxl_afu is not updated to the
right value (maximum number of processes that can be supported by
the AFU) when the pages are allocated (i.e. when cxl_alloc_spa() is called).
The numbe
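An ordering sketch only (function name and the source of the limit are
assumed): the maximum process count has to be known before the scheduled
process area is sized and allocated.

static int afu_activate(struct cxl_afu *afu, int max_procs)
{
        afu->num_procs = max_procs;     /* set the real limit first... */
        return cxl_alloc_spa(afu);      /* ...so the SPA is sized for it */
}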
Le 29/09/2015 02:00, Scott Wood a écrit :
On Tue, Sep 22, 2015 at 06:50:54PM +0200, Christophe Leroy wrote:
We are spending between 40 and 160 cycles with a mean of 65 cycles
in the TLB handling routines (measured with mftbl), so make it simpler
although it adds one instruction
Signed-off
Le 29/09/2015 01:47, Scott Wood a écrit :
On Tue, Sep 22, 2015 at 06:50:42PM +0200, Christophe Leroy wrote:
x_mapped_by_bats() and x_mapped_by_tlbcam() serve the same kind of
purpose, so let's group them into a single function.
Signed-off-by: Christophe Leroy
---
No change in v2
arch/power
Le 29/09/2015 01:41, Scott Wood a écrit :
On Tue, Sep 22, 2015 at 06:50:40PM +0200, Christophe Leroy wrote:
iounmap() cannot vunmap() an area mapped by TLBCAMs either
Signed-off-by: Christophe Leroy
---
No change in v2
arch/powerpc/mm/pgtable_32.c | 4 +++-
1 file changed, 3 insertions(+), 1
Le 29/09/2015 00:07, Scott Wood a écrit :
On Tue, Sep 22, 2015 at 06:50:29PM +0200, Christophe Leroy wrote:
We are spending between 40 and 160 cycles with a mean of 65 cycles in
the TLB handling routines (measured with mftbl), so make it simpler
although it adds one instruction.
Signed-of
On Tue, 2015-10-06 at 12:40 +0300, Denis Kirjanov wrote:
> On 10/6/15, Michael Ellerman wrote:
> > Does anyone build their kernels using CROSS32_COMPILE ?
>
> I didn't even know that such macro exists..
Good, I want to remove it :)
cheers
On 10/06/2015 03:55 PM, Michael Ellerman wrote:
On Sun, 2015-09-27 at 23:59 +0530, Raghavendra K T wrote:
Problem description:
Powerpc has sparse node numbering, i.e. on a 4 node system nodes are
numbered (possibly) as 0,1,16,17. At a lower level, the chipid obtained
from the device tree is natura
On Fri, 2015-21-08 at 04:24:27 UTC, Sam bobroff wrote:
> The paca display is already more than 24 lines, which can be problematic
> if you have an old school 80x24 terminal, or more likely you are on a
> virtual terminal which does not scroll for whatever reason.
>
> This patch adds a new command
On 06/10/15 12:05, Michael Ellerman wrote:
> On Mon, 2015-09-21 at 12:07 +0200, Thomas Huth wrote:
>> On 21/09/15 09:18, Michael Ellerman wrote:
>>> On Fri, 2015-09-18 at 16:17 +0200, Thomas Huth wrote:
It looks somewhat weird that you can enable TUNE_CELL on little
endian systems, so let
On 10/06/2015 03:47 PM, Michael Ellerman wrote:
On Sun, 2015-27-09 at 18:29:09 UTC, Raghavendra K T wrote:
We access numa_cpu_lookup_table array directly in all the places
to read/update numa cpu lookup information. Instead use a helper
function to update.
This is helpful in changing the way nu
On Sun, 2015-09-27 at 23:59 +0530, Raghavendra K T wrote:
> Problem description:
> Powerpc has sparse node numbering, i.e. on a 4 node system nodes are
> numbered (possibly) as 0,1,16,17. At a lower level, the chipid obtained
> from the device tree is naturally (directly) mapped to the nid.
>
> Potentia
On Sun, 2015-27-09 at 18:29:09 UTC, Raghavendra K T wrote:
> We access numa_cpu_lookup_table array directly in all the places
> to read/update numa cpu lookup information. Instead use a helper
> function to update.
>
> This is helpful in changing the way numa<-->cpu mapping in single
> place when
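A minimal sketch of such a helper (name and signature assumed): if every
writer goes through one function, the cpu<->node mapping scheme can later
be changed in a single place.

static inline void update_numa_cpu_lookup_table(unsigned int cpu, int node)
{
        numa_cpu_lookup_table[cpu] = node;
}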
On Mon, 2015-09-21 at 12:07 +0200, Thomas Huth wrote:
> On 21/09/15 09:18, Michael Ellerman wrote:
> > On Fri, 2015-09-18 at 16:17 +0200, Thomas Huth wrote:
> >> It looks somewhat weird that you can enable TUNE_CELL on little
> >> endian systems, so let's disable this option with CPU_LITTLE_ENDIAN.
Do we need a function here or can we just have an IOMMU_PAGE_SHIFT define
with an #ifndef in common code?
Also not all architectures use dma-mapping-common.h yet, so you either
need to update all of those as well, or just add the #ifndef directly
to linux/dma-mapping.h.
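A sketch of the suggestion (the fallback value is a placeholder
assumption): common code supplies a default that an architecture's
headers may override beforehand.

#ifndef IOMMU_PAGE_SHIFT
#define IOMMU_PAGE_SHIFT        PAGE_SHIFT      /* arch headers may define their own */
#endif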
On Fri, 2015-02-10 at 14:33:48 UTC, "Aneesh Kumar K.V" wrote:
> This avoid errors like
>
> unsigned int usize = 1 << 30;
> int size = 1 << 30;
> unsigned long addr = 64UL << 30;
>
> value = _ALIGN_DOWN(addr, usize); -> 0
> value = _ALIGN_DOWN(addr, size);
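The effect is easy to reproduce in userspace; the macros below are
sketches of the old style (mask built in the alignment's type) and of a
fixed variant (mask forced into the address's type), not the kernel's
exact definitions.

#include <stdio.h>

#define ALIGN_DOWN_OLD(addr, size)      ((addr) & ~((size) - 1))
#define ALIGN_DOWN_NEW(addr, size)      ((addr) & ~((typeof(addr))(size) - 1))

int main(void)
{
        unsigned int  usize = 1U << 30;
        int           size  = 1 << 30;
        unsigned long addr  = 64UL << 30;

        /* ~(usize - 1) is a 32-bit unsigned value, zero-extended to 64 bits,
         * so the upper address bits are silently cleared: prints 0. */
        printf("old, unsigned size: %#lx\n", ALIGN_DOWN_OLD(addr, usize));

        /* ~(size - 1) is a negative int, sign-extended to 64 bits, so this
         * variant happens to give the expected result. */
        printf("old, signed size:   %#lx\n", ALIGN_DOWN_OLD(addr, size));

        /* Building the mask in the address's type works in both cases. */
        printf("new, unsigned size: %#lx\n", ALIGN_DOWN_NEW(addr, usize));
        return 0;
}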
On 10/6/15, Michael Ellerman wrote:
> Does anyone build their kernels using CROSS32_COMPILE ?
I didn't even know that such a macro exists.
>
> cheers
The field 'num_procs' of the structure cxl_afu is not updated to the
right value (maximum number of processes that can be supported by
the AFU) when the pages are allocated (i.e. when cxl_alloc_spa() is called).
The number of allocated pages depends on the max number of processes.
Thanks
On 06