At the moment we allow bypassing DMA ops only when we can do this for
the entire RAM. However there are configs with mixed type memory
where we could still allow bypassing IOMMU in most cases;
POWERPC with persistent memory is one example.
This adds an arch hook to determine where bypass can still
This allows mixing direct DMA (to/from RAM) and
IOMMU (to/from persistent memory) on the PPC64/pseries
platform.
This replaces https://lkml.org/lkml/2020/10/27/418
which replaces https://lkml.org/lkml/2020/10/20/1085
This is based on sha1
4525c8781ec0 Linus Torvalds "scsi: qla2xxx: remove incor
So far we have been using huge DMA windows to map all the RAM available.
The RAM is normally mapped to the VM address space contiguously, and
there is always a reasonable upper limit for possible future hot plugged
RAM which makes it easy to map all RAM via IOMMU.
Now there is persistent memory ("
On Oct 28 2020, Michael Ellerman wrote:
> What config and compiler are you using?
gcc 4.9.
Andreas.
--
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
Commit 7053f80d9696 ("powerpc/64: Prevent stack protection in early boot")
introduced a couple of uses of __attribute__((optimize)) with function
scope, to disable the stack protector in some early boot code.
Unfortunately, and this is documented in the GCC man pages [0], overriding
function attri
On Wed, 28 Oct 2020 at 09:04, Ard Biesheuvel wrote:
>
> Commit 7053f80d9696 ("powerpc/64: Prevent stack protection in early boot")
> introduced a couple of uses of __attribute__((optimize)) with function
> scope, to disable the stack protector in some early boot code.
>
> Unfortunately, and this i
I noticed that the iounmap() of msgr_block_addr before returning from
mpic_msgr_probe() in the error handling case is missing. So use
devm_ioremap() instead of plain ioremap() when remapping the message
register block, so the mapping will be automatically released on
probe failure.
Signed-off-by: Qinglang
On Tue, Oct 27, 2020 at 10:44:21PM +, Edgecombe, Rick P wrote:
> On Tue, 2020-10-27 at 10:49 +0200, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 06:57:32PM +, Edgecombe, Rick P wrote:
> > > On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote:
> > > > On Mon, Oct 26, 2020 at 12:38:32A
On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
> On 27.10.20 09:38, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
> >
> > > Beyond whatever you are seeing, for the latter case of new things
> > > getting introduced to an interface w
On 28.10.20 12:09, Mike Rapoport wrote:
On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
On 27.10.20 09:38, Mike Rapoport wrote:
On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
Beyond whatever you are seeing, for the latter case of new things
getting intro
On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote:
> On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
> > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
> > > On Mon, Oct 26, 2020 at 01:13:52AM +, Edgecombe, Rick P wrote:
> > > > On Sun, 2020-10-25 at 12:1
On Wed, Oct 28, 2020 at 11:20:12AM +, Will Deacon wrote:
> On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote:
> > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote:
> > > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
> > > > On Mon, Oct 26, 2020 at 01:13:52
Thanks for your contribution, unfortunately we've found some issues.
Your patch was successfully applied on branch powerpc/merge
(8cb17737940b156329cb5210669b9c9b23f4dd56)
The test build-ppc64le reported the following: Build failed!
Full log:
https://openpower.xyz/job/snowpatch/job/snowpatch-
From: "Steven Rostedt (VMware)"
If a ftrace callback does not supply its own recursion protection and
does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
make a helper trampoline to do so before calling the callback instead of
just calling the callback directly.
The default
On Wed, Oct 28, 2020 at 12:17:35PM +0100, David Hildenbrand wrote:
> On 28.10.20 12:09, Mike Rapoport wrote:
> > On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
> > > On 27.10.20 09:38, Mike Rapoport wrote:
> > > > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P wrote
is_kvm_guest() will be reused in a subsequent patch in a new avatar. Hence
rename is_kvm_guest() to check_kvm_guest(). No additional changes.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlst
Currently, vcpu_is_preempted will return the yield_count for
shared_processor. On a PowerVM LPAR, Phyp schedules at the SMT8 core boundary,
i.e. all CPUs belonging to a core are either group scheduled in or group
scheduled out. This can be used to better predict non-preempted CPUs on
PowerVM shared LPARs
Only code/declaration movement, in anticipation of doing a kvm-aware
vcpu_is_preempted. No additional changes.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin Schneider
If it's a shared LPAR but not a KVM guest, then see if the vCPU is
related to the calling vCPU. On PowerVM, only cores can be preempted.
So if one vCPU is in a non-preempted state, we can infer that all other
vCPUs sharing the same core are in a non-preempted state as well.
Signed-off-by: Srikar Dronamraju
Introduce a static branch that would be set during boot if the OS
happens to be a KVM guest. Subsequent checks to see if we are on KVM
will rely on this static branch. This static branch would be used in
vcpu_is_preempted in a subsequent patch.
Signed-off-by: Srikar Dronamraju
Cc: linuxppc-dev
C
From: Mauro Carvalho Chehab
Several entries in the stable ABI files won't parse if we pass
them directly to the ReST output.
Adjust them in order to allow adding their contents as-is to
the stable ABI book.
Signed-off-by: Mauro Carvalho Chehab
Signed-off-by: Mauro Carvalho Chehab
---
Docume
Lockdep complains about a possible deadlock below in
eeh_addr_cache_show(), because it acquires a lock with IRQs enabled
while eeh_addr_cache_insert_dev() needs to acquire the same lock with
IRQs disabled. Let's just make eeh_addr_cache_show() acquire the lock
with IRQs disabled as well.
CPU
On Wed, Oct 28, 2020 at 05:55:23PM +1100, Alexey Kardashevskiy wrote:
>
> It is passing an address of the end of the mapped area so passing a page
> struct means passing page and offset which is an extra parameter and we do
> not want to do anything with the page in those hooks anyway so I'd keep
On Wed, Oct 28, 2020 at 06:00:29PM +1100, Alexey Kardashevskiy wrote:
> At the moment we allow bypassing DMA ops only when we can do this for
> the entire RAM. However there are configs with mixed type memory
> where we could still allow bypassing IOMMU in most cases;
> POWERPC with persistent memo
The call to rcu_cpu_starting() in start_secondary() is not early enough
in the CPU-hotplug onlining process, which results in lockdep splats as
follows:
WARNING: suspicious RCU usage
-
kernel/locking/lockdep.c:3497 RCU-list traversed in non-reader section!!
other i
On Wed, 2020-10-28 at 13:09 +0200, Mike Rapoport wrote:
> On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
> > On 27.10.20 09:38, Mike Rapoport wrote:
> > > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P
> > > wrote:
> > >
> > > > Beyond whatever you are seeing, for
Here's another batch of DWC PCI host refactoring. This series primarily
moves more of the MSI, link up, and resource handling to the core
code.
No doubt I've probably broken something. Please test. A git branch is
here[1].
Rob
[1] git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git
pci
No other host driver sets the PCI_MSI_FLAGS_ENABLE bit, so it must not
be necessary. If it is, a comment is needed.
Cc: Richard Zhu
Cc: Lucas Stach
Cc: Lorenzo Pieralisi
Cc: Bjorn Helgaas
Cc: Shawn Guo
Cc: Sascha Hauer
Cc: Pengutronix Kernel Team
Cc: Fabio Estevam
Cc: NXP Linux Team
Signe
The ATU offset should be a register range in DT called 'atu', not driver
match data. Any future platforms with a different ATU offset should add
it to their DT.
This is also in preparation to do DBI resource setup in the core DWC
code, so let's move setting atu_base later in intel_pcie_rc_setup().
Most DWC drivers use the common register resource names "dbi", "dbi2", and
"addr_space", so let's move their setup into the DWC common code.
This means 'dbi_base' in particular is set up later, but it looks like no
drivers touch DBI registers before dw_pcie_host_init or dw_pcie_ep_init.
Cc: Kishon
Remove some of the pointless levels of functions that just wrap or group
a series of other functions.
Cc: Lorenzo Pieralisi
Cc: Bjorn Helgaas
Signed-off-by: Rob Herring
---
drivers/pci/controller/dwc/pcie-intel-gw.c | 47 --
1 file changed, 16 insertions(+), 31 deletions(-)
The Layerscape driver clears the ATU registers which may have been
configured by the bootloader. Any driver could have the same issue
and doing it for all drivers doesn't hurt, so let's move it into the
common DWC code.
Cc: Minghuan Lian
Cc: Mingkai Hu
Cc: Roy Zang
Cc: Lorenzo Pieralisi
Cc: Bj
The dra7xx MSI irq_chip implementation is identical to the default DWC one.
The only difference is the interrupt handler as the MSI interrupt is muxed
with other interrupts, but that doesn't affect the irq_chip part of it.
Cc: Kishon Vijay Abraham I
Cc: Lorenzo Pieralisi
Cc: Bjorn Helgaas
Cc: l
There's no reason for the .set_num_vectors() host op. Drivers needing a
non-default value can just initialize pcie_port.num_vectors directly.
Cc: Jingoo Han
Cc: Gustavo Pimentel
Cc: Lorenzo Pieralisi
Cc: Bjorn Helgaas
Cc: Thierry Reding
Cc: Jonathan Hunter
Cc: linux-te...@vger.kernel.org
Sig
Platforms using the built-in DWC MSI controller all have a dedicated
interrupt named "msi" or at index 0, so let's move setting up the
interrupt to the common DWC code.
spear13xx and dra7xx are the 2 oddballs with muxed interrupts, so
we need to prevent configuring the MSI interrupt by setting
There are 3 possible MSI implementations for the DWC host. The first is
using the built-in DWC MSI controller. The 2nd is a custom MSI
controller as part of the PCI host (keystone only). The 3rd is an
external MSI controller (typically GICv3 ITS). Currently, the last 2
are distinguished with a .msi
All the DWC drivers do link setup and checks at roughly the same time.
Let's use the existing .start_link() hook (currently only used in EP
mode) and move the link handling to the core code.
The behavior for a link down was inconsistent as some drivers would fail
probe in that case while others su
The host drivers which call dw_pcie_msi_init() are all the ones using
the built-in MSI controller, so let's move it into the common DWC code.
Cc: Kishon Vijay Abraham I
Cc: Lorenzo Pieralisi
Cc: Bjorn Helgaas
Cc: Jingoo Han
Cc: Kukjin Kim
Cc: Krzysztof Kozlowski
Cc: Richard Zhu
Cc: Lucas St
Many calls to dw_pcie_host_init() are in a wrapper function with
nothing else now. Let's remove the pointless extra layer.
Cc: Richard Zhu
Cc: Lucas Stach
Cc: Lorenzo Pieralisi
Cc: Bjorn Helgaas
Cc: Shawn Guo
Cc: Sascha Hauer
Cc: Pengutronix Kernel Team
Cc: Fabio Estevam
Cc: NXP Linux Team
All RC complex drivers must call dw_pcie_setup_rc(). The ordering of the
call shouldn't be too important other than being after any RC resets.
There's a few calls of dw_pcie_setup_rc() left as drivers implementing
suspend/resume need it.
Cc: Kishon Vijay Abraham I
Cc: Lorenzo Pieralisi
Cc: Bjor
On Wed, Oct 28, 2020 at 03:23:18PM +0100, Mauro Carvalho Chehab wrote:
> diff --git a/Documentation/ABI/testing/sysfs-uevent
> b/Documentation/ABI/testing/sysfs-uevent
> index aa39f8d7bcdf..d0893dad3f38 100644
> --- a/Documentation/ABI/testing/sysfs-uevent
> +++ b/Documentation/ABI/testing/sysfs-
On Wed, Oct 28, 2020 at 02:23:34PM -0400, Qian Cai wrote:
> The call to rcu_cpu_starting() in start_secondary() is not early enough
> in the CPU-hotplug onlining process, which results in lockdep splats as
> follows:
>
> WARNING: suspicious RCU usage
> -
> kernel/loc
On Wed, 2020-10-28 at 13:30 +0200, Mike Rapoport wrote:
> On Wed, Oct 28, 2020 at 11:20:12AM +, Will Deacon wrote:
> > On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote:
> > > On Mon, Oct 26, 2020 at 06:05:30PM +, Edgecombe, Rick P
> > > wrote:
> > > > On Mon, 2020-10-26 at 11:0
On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> + if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> + unsigned long addr = (unsigned
> long)page_address(page);
> + int ret;
> +
> + if (enable)
> + ret = set_direct_map
On 29/10/2020 04:21, Christoph Hellwig wrote:
On Wed, Oct 28, 2020 at 05:55:23PM +1100, Alexey Kardashevskiy wrote:
It is passing an address of the end of the mapped area so passing a page
struct means passing page and offset which is an extra parameter and we do
not want to do anything with
On 29/10/2020 04:22, Christoph Hellwig wrote:
On Wed, Oct 28, 2020 at 06:00:29PM +1100, Alexey Kardashevskiy wrote:
At the moment we allow bypassing DMA ops only when we can do this for
the entire RAM. However there are configs with mixed type memory
where we could still allow bypassing IOMMU
On 10/28/20 8:35 AM, Srikar Dronamraju wrote:
Currently, vcpu_is_preempted will return the yield_count for
shared_processor. On a PowerVM LPAR, Phyp schedules at SMT8 core boundary
i.e all CPUs belonging to a core are either group scheduled in or group
scheduled out. This can be used to better pr
Qian Cai writes:
> The call to rcu_cpu_starting() in start_secondary() is not early enough
> in the CPU-hotplug onlining process, which results in lockdep splats as
> follows:
Since when?
What kernel version?
I haven't seen this running CPU hotplug tests with PROVE_LOCKING=y on
v5.10-rc1. Am I m
Rob Herring writes:
> No other host driver sets the PCI_MSI_FLAGS_ENABLE bit, so it must not
> be necessary. If it is, a comment is needed.
Yeah, but git blame directly points to:
75cb8d20c112 ("PCI: imx: Enable MSI from downstream components")
Which has a pretty long explanation. The relevan
On Thu, Oct 29, 2020 at 11:09:07AM +1100, Michael Ellerman wrote:
> Qian Cai writes:
> > The call to rcu_cpu_starting() in start_secondary() is not early enough
> > in the CPU-hotplug onlining process, which results in lockdep splats as
> > follows:
>
> Since when?
> What kernel version?
>
> I h
Alexey Kardashevskiy writes:
> diff --git a/arch/powerpc/platforms/pseries/iommu.c
> b/arch/powerpc/platforms/pseries/iommu.c
> index e4198700ed1a..91112e748491 100644
> --- a/arch/powerpc/platforms/pseries/iommu.c
> +++ b/arch/powerpc/platforms/pseries/iommu.c
> @@ -,11 +1112,13 @@ static vo
On 29/10/2020 11:40, Michael Ellerman wrote:
Alexey Kardashevskiy writes:
diff --git a/arch/powerpc/platforms/pseries/iommu.c
b/arch/powerpc/platforms/pseries/iommu.c
index e4198700ed1a..91112e748491 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/io
On 10/27/20 10:21 PM, Michael Ellerman wrote:
> Tyrel Datwyler writes:
>> After a loss of transport due to an adapter migration or crash/disconnect from
>> the host partner there is a tiny window where we can race adjusting the
>> request_limit of the adapter. The request limit is atomically inc/de
This allows mixing direct DMA (to/from RAM) and
IOMMU (to/from persistent memory) on the PPC64/pseries
platform.
This replaces https://lkml.org/lkml/2020/10/28/929
which replaces https://lkml.org/lkml/2020/10/27/418
which replaces https://lkml.org/lkml/2020/10/20/1085
This is based on sha1
45
So far we have been using huge DMA windows to map all the RAM available.
The RAM is normally mapped to the VM address space contiguously, and
there is always a reasonable upper limit for possible future hot plugged
RAM which makes it easy to map all RAM via IOMMU.
Now there is persistent memory ("
At the moment we allow bypassing DMA ops only when we can do this for
the entire RAM. However there are configs with mixed type memory
where we could still allow bypassing IOMMU in most cases;
POWERPC with persistent memory is one example.
This adds an arch hook to determine where bypass can still
On 28/10/2020 03:09, Marc Zyngier wrote:
Hi Alexey,
On 2020-10-27 09:06, Alexey Kardashevskiy wrote:
PCI devices share 4 legacy INTx interrupts from the same PCI host bridge.
Device drivers map/unmap hardware interrupts via irq_create_mapping()/
irq_dispose_mapping(). The problem with that t
On Thu, Oct 29, 2020 at 2:27 AM Qian Cai wrote:
>
> Lockdep complains about a possible deadlock below in
> eeh_addr_cache_show(), because it is acquiring a lock with IRQ enabled,
> but eeh_addr_cache_insert_dev() needs to acquire the same lock with IRQ
> disabled. Let's just make eeh_addr_cache_show