On 24/05/14 01:15, Anshuman Khandual wrote:
> This patch enables get and set of transactional memory related register
> sets through PTRACE_GETREGSET/PTRACE_SETREGSET interface by implementing
> four new powerpc specific register sets i.e REGSET_TM_SPR, REGSET_TM_CGPR,
> REGSET_TM_CFPR, REGSET_CVMX
On 24/05/14 01:15, Anshuman Khandual wrote:
> This patch series adds five new ELF core note sections which can be
> used with existing ptrace request PTRACE_GETREGSET/SETREGSET for accessing
> various transactional memory and miscellaneous register sets on PowerPC
> platform. Please find a te
On the powernv platform, a resource's position in M64 implies the PE# the resource
belongs to. In some cases a resource must be adjusted to place it at the correct
position in M64.
This patch introduces a function to shift the 'real' VF BAR address according to
an offset.
Signed-off-by:
When the IOV BARs are big, each of them is covered by four M64 windows. This leads to
several VF PEs sitting in one PE in terms of M64.
This patch groups VF PEs according to the M64 allocation.
Signed-off-by: Wei Yang
---
arch/powerpc/include/asm/pci-bridge.h |2 +-
arch/powerpc/platforms/powernv/pci-io
VFs are created when the PCI device is enabled.
This patch tries its best to assign maximum resources and PEs for VFs when the PCI
device is enabled: enough M64 is assigned to cover the IOV BAR, the IOV BAR is
shifted to match the PE# indicated by M64, and the VF's pdn->pdev and pdn->pe_number
are fixed up.
Signed-off-by: Wei
The M64 aperture size is limited on PHB3. When the IOV BAR is too big, it will
exceed this limit and fail to be assigned.
This patch introduces a different expansion based on the IOV BAR size:
if the IOV BAR size is smaller than 64M, expand to total_pe;
if the IOV BAR size is bigger than 64M, round up to a power of two.
This patch implements pcibios_sriov_resource_alignment() on the powernv
platform.
Signed-off-by: Wei Yang
---
arch/powerpc/include/asm/machdep.h|3 +++
arch/powerpc/kernel/pci-common.c | 14 ++
arch/powerpc/platforms/powernv/pci-ioda.c | 18 ++
The PowerNV platform will support dynamic PE allocation and deallocation.
This patch adds a function to release the resources related to a PE. It also
fixes a bug: when it is the root bus, there is no bridge associated with it.
Signed-off-by: Wei Yang
---
arch/powerpc/platforms/powernv/pci-ioda.c | 90
On PHB3, VF resources will be covered by the M64 BAR to get better PE isolation.
Usually the total_pe number is different from total_VFs, which will lead to
a conflict between MMIO space and the PE number.
This patch expands the VF resource size to reserve resources for total_pe VFs,
which p
Currently the iommu_table of a PE is a static field. This causes a problem when
iommu_free_table() is called.
This patch allocates the iommu_table dynamically.
Signed-off-by: Wei Yang
---
arch/powerpc/include/asm/iommu.h |3 +++
arch/powerpc/platforms/powernv/pci-ioda.c | 26 ++
On the powernv platform, the IOV BAR size is adjusted to meet the hardware's
alignment requirement. As a result, the VF resource size needs to be
retrieved from hardware directly.
This patch adds this flag for the IOV BAR on the powernv platform.
Signed-off-by: Wei Yang
---
arch/powerpc/platforms/p
From: Gavin Shan
The PCI config accessors rely on device node. Unfortunately, VFs
don't have corresponding device nodes. So we have to switch to
pci_dn for PCI config access.
Signed-off-by: Gavin Shan
---
arch/powerpc/platforms/powernv/eeh-powernv.c | 24 -
arch/powerpc/platforms/pow
From: Gavin Shan
pci_dn is the extension of the PCI device node and is created from the
device node. Unfortunately, VFs are enabled dynamically by the
PF's driver and don't have corresponding device nodes or
pci_dn. The patch refactors pci_dn to support VFs:
* pci_dn is organized as a hiera
When a driver removes a pci_dev, it calls pcibios_disable_device(), which is
platform dependent. This gives flexibility to platforms.
This patch defines this weak function for the powerpc architecture.
Signed-off-by: Wei Yang
---
arch/powerpc/include/asm/machdep.h |5 -
arch/powerpc/kernel/p
If we're going to reassign resources with the flag PCI_REASSIGN_ALL_RSRC, all
resources will be cleaned out during device header fixup time and then get
reassigned by the PCI core. However, the VF resources won't be reassigned and
thus we shouldn't clean them out.
This patch adds a condition: if the pci_
At the resource sizing/assigning stage, resources are divided into two lists,
the requested list and the additional list, while the alignment of the additional
IOV BAR is not taken into account in the sizing and assigning procedure.
This is reasonable in the original implementation, since the IOV BAR's alignment is
mostly
When implementing SR-IOV on the PowerNV platform, some resource reservation is
needed for VFs, which don't exist at the boot stage. To match
resources with VFs, the code needs to get the VF's BDF in advance.
This patch exports the interface to retrieve the VF's BDF:
* Make the
The SR-IOV resource alignment is designed to be the individual size of an SR-IOV
resource. This works fine for many platforms, but on the powernv platform it needs
some change.
The original alignment works since, at the sizing and assigning stage, the
requirement comes from an individual VF's resource size inste
The current implementation calculates the VF BAR size by dividing the total size of
the IOV BAR by the total VF number. This won't work on the PowerNV platform because
we're going to expand the IOV BAR size for finer alignment.
The patch enforces getting the IOV BAR size from hardware and then calculates
the VF BAR size base
This patch set enables SR-IOV on POWER8.
The general idea is to put each VF into an individual PE and allocate the required
resources like DMA/MSI.
One special thing about VF PEs is that we use the M64BT to cover the IOV BAR. The
M64BT is a hardware unit on the POWER platform that maps MMIO addresses to PEs. By
using the M64BT, we
On 07/23/2014 01:36 PM, Gavin Shan wrote:
> On Wed, Jul 23, 2014 at 01:05:49PM +1000, Alexey Kardashevskiy wrote:
>> Signed-off-by: Alexey Kardashevskiy
>> ---
>> arch/powerpc/kvm/book3s_64_vio.c | 35 ++-
>> 1 file changed, 34 insertions(+), 1 deletion(-)
>>
>> diff
This patch enables support for hardware instruction breakpoints on POWER8 with
the help of a new register called the CIABR (Completed Instruction Address
Breakpoint Register). With this patch, a single hardware instruction breakpoint
can be added and cleared during any active xmon debug session. This ha
From: Andy Fleming
The general idea is that each core will release all of its
threads into the secondary thread startup code, which will
eventually wait in the secondary core holding area, for the
appropriate bit in the PACA to be set. The kick_cpu function
pointer will set that bit in the PACA,
This ensures that all MSR definitions are consistently unsigned long,
and that MSR_CM does not become 0x8000 (this is usually
harmless because MSR is 32-bit on booke and is mainly noticeable when
debugging, but still I'd rather avoid it).
Signed-off-by: Scott Wood
---
arch/powerpc/in
On book3e, the guest's last instruction is read on the exit path using the
dedicated load external pid (lwepx) instruction. This load operation may fail
due to TLB eviction and execute-but-not-read entries.
This patch lays down the path for an alternative solution for reading the guest's
last instruction, by allow
On book3e, KVM uses the dedicated load external pid (lwepx) instruction to read
the guest's last instruction on the exit path. lwepx exceptions (DTLB_MISS, DSI
and LRAT), generated by loading a guest address, need to be handled by KVM.
These exceptions are generated in a substituted guest translation contex
In the context of replacing kvmppc_ld() function calls with a version of
kvmppc_get_last_inst() which is allowed to fail, Alex Graf suggested this:
"If we get EMULATE_AGAIN, we just have to make sure we go back into the guest.
No need to inject an ISI into the guest - it'll do that all by itself.
With
Add missing defines MAS0_GET_TLBSEL() and MAS1_GET_TSIZE() for Book3E.
Signed-off-by: Mihai Caraman
---
v6-v2:
- no change
arch/powerpc/include/asm/mmu-book3e.h | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/mmu-book3e.h
b/arch/powerpc/inc
Commit 1d628af7 "add load inst fixup" made an attempt to handle
failures generated by reading the guest's current instruction. The fixup
code that was added works by chance, hiding the real issue.
The load external pid (lwepx) instruction, used by KVM to read guest
instructions, is executed in a subs
Read the guest's last instruction with kvmppc_get_last_inst(), allowing the
function to fail in order to emulate again. On the bookehv architecture, search
for the physical address and kmap it, instead of using the Load External PID
(lwepx) instruction. This fixes an infinite loop caused by lwepx's data TLB miss
e
Resending because the original email was rejected by some gateways.
Add RapidIO DMA interface routines that directly use a reference to the mport
device object and/or the target device destination ID as parameters.
This allows RapidIO DMA transfer requests to be performed by modules that do not
have an acces
On Fri, Jul 11, 2014 at 02:09:39PM +0200, Christophe Leroy wrote:
> Here is a pre-patch for the support of the SEC ENGINE of MPC88x/MPC82xx
> I have tried to make use of defines in order to keep a single driver for the
> two
> TALITOS variants as suggested by Kim, but I'm not too happy about the q
Vasant Hegde writes:
> We can continue to read the error log (up to MAX size) even if
> we get the elog size more than MAX size. Hence change BUG_ON to
> WARN_ON.
>
> Also updated error message.
>
> Reported-by: Gopesh Kumar Chaudhary
> Signed-off-by: Vasant Hegde
> Signed-off-by: Ananth N Mavin
Vasant Hegde writes:
> PowerNV platform is capable of capturing host memory region when system
> crashes (because of host/firmware). We have new OPAL API to register
> memory region to be capture when system crashes.
>
> This patch adds support for new API and also registers kernel log
> buffer.
Vasant Hegde writes:
> Presently we only support initiating Service Processor dump from host.
> Hence update sysfs message. Also update couple of other error/info
> messages.
>
> Signed-off-by: Vasant Hegde
Acked-by: Stewart Smith
From: Nicolin Chen
The previous enable flow:
1. Enable TE&RE (SAI starts to consume the Tx FIFO and feed the Rx FIFO)
2. Mask the IRQ of Tx/Rx to enable its interrupt.
3. Enable the DMA request of Tx/Rx.
As this flow enables the DMA request later than TE/RE, the Tx FIFO would
easily be emptied into underrun whi
From: Nicolin Chen
The TE/RE bit of T/RCSR will remain set until the current frame has physically
finished. The FIFO reset operation should wait until this bit is totally cleared,
rather than ignoring its status, which might cause TE/RE disabling to fail.
This patch adds a delay and timeout to wait for its comp
From: Nicolin Chen
For trigger start, we don't need to check whether it's the first or the second
time TE/RE are enabled. It doesn't hurt to enable them either way, and doing so
reduces the race condition in TE/RE enabling.
For trigger stop, we will definitely clear the FRDE of the current direction.
Th
This series of patches focuses on fixing issues inside fsl_sai_trigger().
Nicolin Chen (3):
ASoC: fsl_sai: Reduce race condition during TE/RE enabling
ASoC: fsl_sai: Don't reset FIFO until TE/RE bit is unset
ASoC: fsl_sai: Improve enable flow in fsl_sai_trigger()
sound/soc/fsl/fsl_sai.c | 40
Power8 has a new register (MMCR2), which contains individual freeze bits
for each counter. This is an improvement on previous chips as it means
we can have multiple events on the PMU at the same time with different
exclude_{user,kernel,hv} settings. Previously we had to ensure all
events on the PMU
To support per-event exclude settings on Power8 we need access to the
struct perf_events in compute_mmcr().
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/perf_event_server.h | 3 ++-
arch/powerpc/perf/core-book3s.c | 2 +-
arch/powerpc/perf/power4-pmu.c
Because we reuse cpuhw->mmcr on each call to compute_mmcr(), there's a
risk that we could forget to set one of the values and use whatever
value was in there previously.
Currently all the implementations are careful to set all the values, but
it's safer to clear them all before we call compute_mmcr
On Wed, Jul 23, 2014 at 11:07:46AM +0100, Mark Brown wrote:
> On Wed, Jul 23, 2014 at 05:52:32PM +0800, Nicolin Chen wrote:
>
> > I found this two patches are merged into for-next branch, although I haven't
> > got the 'applied' email.
>
> > Is that possible for you to drop this one? If not, I'l
On Wed, Jul 23, 2014 at 05:52:32PM +0800, Nicolin Chen wrote:
> I found this two patches are merged into for-next branch, although I haven't
> got the 'applied' email.
> Is that possible for you to drop this one? If not, I'll send another patch
> to fix this.
Please send a patch, I'd already ap
> > Right. With this do you acknowledge that v5 (definitely overwritten
> approach)
> > is ok?
>
> I think I'm starting to understand your logic of v5. You write
> fetch_failed into *inst unswapped if the fetch failed.
"v5
- don't swap when load fails" :)
>
> I think that's ok, but I definite
Sir,
I found that these two patches were merged into the for-next branch, although I
haven't got the 'applied' email.
Is it possible for you to drop this one? If not, I'll send another patch
to fix this.
Thank you,
Nicolin
On Fri, Jul 18, 2014 at 06:18:12PM +0800, Nicolin Chen wrote:
> Mark,
>
>
On 07/19/2014 12:40 AM, Scott Wood wrote:
From: Andy Fleming
The general idea is that each core will release all of its
threads into the secondary thread startup code, which will
eventually wait in the secondary core holding area, for the
appropriate bit in the PACA to be set. The kick_cpu func
Platforms like IBM Power Systems support service-processor-assisted dump.
They provide an interface to add memory regions to be captured when the
system crashes.
During initialization/runtime we can add a kernel memory region
to be collected.
Presently we don't have a way to get the log buffer base addr
The PowerNV platform is capable of capturing host memory regions when the system
crashes (because of the host/firmware). We have a new OPAL API to register a
memory region to be captured when the system crashes.
This patch adds support for the new API and also registers the kernel log buffer.
Signed-off-by: Vasant Hegde
---
Presently we only support initiating a Service Processor dump from the host.
Hence update the sysfs message. Also update a couple of other error/info
messages.
Signed-off-by: Vasant Hegde
---
arch/powerpc/platforms/powernv/opal-dump.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
d
We can continue to read the error log (up to the MAX size) even if
the reported elog size is more than the MAX size. Hence change the BUG_ON to a
WARN_ON.
Also update the error message.
Reported-by: Gopesh Kumar Chaudhary
Signed-off-by: Vasant Hegde
Signed-off-by: Ananth N Mavinakayanahalli
Acked-by: Deepthi Dharwa
On 23.07.2014 at 10:24, "mihai.cara...@freescale.com" wrote:
>> -Original Message-
>> From: kvm-ppc-ow...@vger.kernel.org [mailto:kvm-ppc-
>> ow...@vger.kernel.org] On Behalf Of Alexander Graf
>> Sent: Wednesday, July 23, 2014 12:21 AM
>> To: Caraman Mihai Claudiu-B02008
>> Cc: kvm-..
> -Original Message-
> From: kvm-ppc-ow...@vger.kernel.org [mailto:kvm-ppc-
> ow...@vger.kernel.org] On Behalf Of Alexander Graf
> Sent: Wednesday, July 23, 2014 12:21 AM
> To: Caraman Mihai Claudiu-B02008
> Cc: kvm-...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> k...@vger.kernel.org
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/pmu/Makefile | 2 +-
.../selftests/powerpc/pmu/per_event_excludes.c | 114 +
2 files changed, 115 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/powerpc/pmu/per_event_excl
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/pmu/lib.c | 48 +++
tools/testing/selftests/powerpc/pmu/lib.h | 1 +
2 files changed, 49 insertions(+)
diff --git a/tools/testing/selftests/powerpc/pmu/lib.c
b/tools/testing/selftests/powerpc/pmu/li
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/pmu/ebb/Makefile | 3 +-
.../powerpc/pmu/ebb/cycles_with_mmcr2_test.c | 91 ++
2 files changed, 93 insertions(+), 1 deletion(-)
create mode 100644
tools/testing/selftests/powerpc/pmu/ebb/cycles_with
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/pmu/ebb/ebb.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/powerpc/pmu/ebb/ebb.c
b/tools/testing/selftests/powerpc/pmu/ebb/ebb.c
index b7ee607c0fca..d7a72ce696b5 100644
-
Although we expect some small discrepancies for very large counts, we
seem to be able to count up to 64 billion instructions without too much
skew, so do so.
Also switch to using decimals for the instruction counts. This just
makes it easier to visually compare the expected vs actual values, as
we
Have a task eat some CPU while we are counting instructions, to create
some scheduler pressure. The idea is to try to unearth any counting bugs
we have that only appear while context switching is happening.
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/pmu/Makefile
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/pmu/Makefile | 2 +-
tools/testing/selftests/powerpc/pmu/l3_bank_test.c | 48 ++
2 files changed, 49 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/powerpc/pmu/l3_bank_test.c
There is at least one bug in core_busy_loop(): we use r0, but it's
not in the clobber list. It seems we were getting away with this, but
that was luck.
It's also fishy to be touching the stack, even if we do it below the
stack pointer. It seems we get away with it, but looking at the
generated code
start and end should be unsigned long.
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/pmu/lib.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/powerpc/pmu/lib.c
b/tools/testing/selftests/powerpc/pmu/lib.c
index 0f6a4731d546..11e2
Currently we ignore errors from our sub Makefiles. We inherited that
from the top-level selftests Makefile which aims to build and run as
many tests as possible and damn the torpedoes.
For the powerpc tests we'd instead like any errors to fail the build, so
we can automatically catch build failure
In the recent commit b50a6c584bb4 "Clear MMCR2 when enabling PMU", I
screwed up the handling of MMCR2 for tasks using EBB.
We must make sure we set MMCR2 *before* ebb_switch_in(), otherwise we
overwrite the value of MMCR2 that userspace may have written. That
potentially breaks a task that uses EB