On Mon, 11 Jan 2016, Aneesh Kumar K.V wrote:
> Hugh Dickins writes:
> > On Mon, 11 Jan 2016, Aneesh Kumar K.V wrote:
> >> Hugh Dickins writes:
> >>
> >> > Swapoff after swapping hangs on the G5, when CONFIG_CHECKPOINT_RESTORE=y
> >> > but CONFIG_MEM_SOFT_DIRTY is not set. That's because the non-zero
Hugh Dickins writes:
> On Mon, 11 Jan 2016, Aneesh Kumar K.V wrote:
>> Hugh Dickins writes:
>>
>> > Swapoff after swapping hangs on the G5, when CONFIG_CHECKPOINT_RESTORE=y
>> > but CONFIG_MEM_SOFT_DIRTY is not set. That's because the non-zero
>> > _PAGE_SWP_SOFT_DIRTY bit, added by CONFIG_HAVE_ARCH_SOFT_DIRTY=y, is not
On Mon, 11 Jan 2016, Aneesh Kumar K.V wrote:
> "Aneesh Kumar K.V" writes:
> > Hugh Dickins writes:
> >
> >> Swapoff after swapping hangs on the G5. That's because the _PAGE_PTE
> >> bit, added by set_pte_at(), is not expected by swapoff: so swap ptes
> >> cannot be recognized.
> >>
> >> I'm not
On Mon, 11 Jan 2016, Aneesh Kumar K.V wrote:
> Hugh Dickins writes:
>
> > Swapoff after swapping hangs on the G5, when CONFIG_CHECKPOINT_RESTORE=y
> > but CONFIG_MEM_SOFT_DIRTY is not set. That's because the non-zero
> > _PAGE_SWP_SOFT_DIRTY bit, added by CONFIG_HAVE_ARCH_SOFT_DIRTY=y, is not
>
The perf infrastructure uses a bit mask to find out the valid
registers to display. Define a register mask for the supported
registers defined in asm/perf_regs.h. The bit positions also
correspond to the register IDs, which are used by the perf
infrastructure to fetch the register values. CONFIG_HAVE_PERF_REGS enabl
The enum definition assigns an 'id' to each register in "struct pt_regs"
of arch/powerpc. The order of these values in the enum definition is
based on the corresponding macros in arch/powerpc/include/uapi/asm/ptrace.h.
Signed-off-by: Anju T
Reviewed-by: Madhavan Srinivasan
---
arch/powerpc/i
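For orientation, the shape of what is being described is roughly the
following; this is a sketch only, and the exact register names, their
order and PERF_REG_POWERPC_MAX come from the patch itself.

/* Sketch: one enum id per register, in ptrace.h order, plus a mask of
 * the sampleable bits. Names here are illustrative. */
enum perf_event_powerpc_regs {
	PERF_REG_POWERPC_R0,
	PERF_REG_POWERPC_R1,
	/* ... remaining GPRs and special registers, in ptrace.h order ... */
	PERF_REG_POWERPC_NIP,
	PERF_REG_POWERPC_MAX,
};

/* Bit N of the mask <=> register id N is available for sampling. */
#define PERF_REG_MASK	((1ULL << PERF_REG_POWERPC_MAX) - 1)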
This short patch series adds the ability to sample the interrupted
machine state for each hardware sample.
To test this patchset,
Eg:
$ perf record -I? # list supported registers
output:
available registers: gpr0 gpr1 gpr2 gpr3 gpr4 gpr5 gpr6 gpr7 gpr8 gpr9 gpr10
gpr11 gpr12 gpr13 gpr14
Map ID values to the corresponding register names. These names are then
displayed when the user runs perf record with the -I option,
followed by perf report/script with the -D option.
To test this patchset,
Eg:
$ perf record -I ls # record machine state at interrupt
$ perf script -D # read the perf
From: Madhavan Srinivasan
Add a sample_reg_mask array with the pt_regs registers.
This is needed for printing the supported regs (-I? option).
Signed-off-by: Madhavan Srinivasan
---
tools/perf/arch/powerpc/util/Build | 1 +
tools/perf/arch/powerpc/util/perf_regs.c | 48 ++
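Again as a sketch (the real table is the 48-line perf_regs.c added
above): perf's generic SMPL_REG/SMPL_REG_END helpers map each register
id to a printable name, which is what -I? lists. The entries below are
illustrative, not the full set.

const struct sample_reg sample_reg_mask[] = {
	/* struct sample_reg, SMPL_REG and SMPL_REG_END come from perf's
	 * util/perf_regs.h; the ids from the new uapi asm/perf_regs.h. */
	SMPL_REG(gpr0, PERF_REG_POWERPC_R0),
	SMPL_REG(gpr1, PERF_REG_POWERPC_R1),
	/* ... one entry per supported register ... */
	SMPL_REG_END
};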
P8+ hardware reports all errors on PE#0. This patch ensures PE#0 is
not assigned to NPU devices so that it can be used for EEH.
Signed-off-by: Alistair Popple
---
arch/powerpc/platforms/powernv/pci-ioda.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/plat
The P8+ hardware supports four partitionable endpoints (PEs); however,
the hardware reports all errors as occurring on PE#0. This means we
need to reserve this PE for error handling (EEH) and not assign it to
a NPU device, implying that some devices will need to share PEs.
This patch changes the PE
"Aneesh Kumar K.V" writes:
> Hugh Dickins writes:
>
>> Swapoff after swapping hangs on the G5. That's because the _PAGE_PTE
>> bit, added by set_pte_at(), is not expected by swapoff: so swap ptes
>> cannot be recognized.
>>
>> I'm not sure whether a swap pte should or should not have _PAGE_PTE
The hypervisor needs to know a guest is capable of using the HPT resizing
PAPR extension in order to take full advantage of it for memory hotplug.
If the hypervisor knows the guest is HPT resize aware, it can size the
initial HPT based on the initial guest RAM size, relying on the guest to
resize
This patch adds a special file /sys/kernel/debug/powerpc/pft-size
which can be used to view the current size of the hash page table (as
a bit shift) and to trigger a resize of the hash table on PAPR guests.
Signed-off-by: David Gibson
---
arch/powerpc/platforms/pseries/lpar.c | 26 ++
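The usual way such a debugfs knob gets wired up looks something like the
sketch below; pseries_lpar_resize_hpt() is a placeholder name for
whatever resize entry point the 26-line lpar.c hunk above actually adds.

#include <linux/debugfs.h>
#include <linux/fs.h>		/* DEFINE_SIMPLE_ATTRIBUTE */
#include <asm/debug.h>		/* powerpc_debugfs_root */
#include <asm/machdep.h>
#include <asm/mmu.h>		/* ppc64_pft_size */

static int pft_size_get(void *data, u64 *val)
{
	*val = ppc64_pft_size;	/* current HPT size, as a bit shift */
	return 0;
}

static int pft_size_set(void *data, u64 val)
{
	/* placeholder: whatever H_RESIZE_HPT_* sequence the patch uses */
	return pseries_lpar_resize_hpt(val);
}

DEFINE_SIMPLE_ATTRIBUTE(fops_pft_size, pft_size_get, pft_size_set, "%llu\n");

static int __init hpt_debugfs_init(void)
{
	debugfs_create_file("pft-size", 0600, powerpc_debugfs_root,
			    NULL, &fops_pft_size);
	return 0;
}
machine_device_initcall(pseries, hpt_debugfs_init);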
This adds the hypercall numbers and wrapper functions for the hash page
table resizing hypercalls.
These are experimental "platform specific" values for now, until we have a
formal PAPR update.
It also adds a new firmware feature flag to track the presence of the
HPT resizing calls.
Signed-off-b
I've discussed with Paul and Ben previously the possibility of
extending PAPR to allow changing the size of a running guest's hash
page table (HPT). This would allow for much more flexible memory
hotplug, since the HPT wouldn't have to be sized in advance for the
maximum possible memory size of th
This adds support for using experimental hypercalls to change the size
of the main hash page table while running as a PAPR guest. For now these
hypercalls are only in experimental qemu versions.
The interface is two part: first H_RESIZE_HPT_PREPARE is used to allocate
and prepare the new hash tab
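A simplified view of that two-phase interface, for orientation only:
the opcode values below are stand-ins (the cover letter notes the real
numbers are experimental, pending a PAPR update), and real code needs
proper locking and rehashing around the commit.

#include <linux/errno.h>
#include <asm/hvcall.h>		/* plpar_hcall_norets, H_SUCCESS, H_BUSY */

/* Placeholder opcodes for illustration only -- not the patch's values. */
#define H_RESIZE_HPT_PREPARE	0xfff0
#define H_RESIZE_HPT_COMMIT	0xfff4

static int resize_hpt(unsigned long new_shift)
{
	long rc;

	/* Phase 1: the hypervisor allocates and prepares the new HPT;
	 * busy return codes mean "call again later". */
	do {
		rc = plpar_hcall_norets(H_RESIZE_HPT_PREPARE, 0, new_shift);
	} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
	if (rc != H_SUCCESS)
		return -EIO;

	/* Phase 2: switch to the new HPT (the guest must quiesce its own
	 * HPT updates around this in the real implementation). */
	rc = plpar_hcall_norets(H_RESIZE_HPT_COMMIT, 0, new_shift);
	return rc == H_SUCCESS ? 0 : -EIO;
}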
On Mon, 11 Jan 2016, Aneesh Kumar K.V wrote:
> Hugh Dickins writes:
>
> > Swapoff after swapping hangs on the G5. That's because the _PAGE_PTE
> > bit, added by set_pte_at(), is not expected by swapoff: so swap ptes
> > cannot be recognized.
> >
> > I'm not sure whether a swap pte should or shou
Hugh Dickins writes:
> Swapoff after swapping hangs on the G5, when CONFIG_CHECKPOINT_RESTORE=y
> but CONFIG_MEM_SOFT_DIRTY is not set. That's because the non-zero
> _PAGE_SWP_SOFT_DIRTY bit, added by CONFIG_HAVE_ARCH_SOFT_DIRTY=y, is not
> discounted when CONFIG_MEM_SOFT_DIRTY is not set: so sw
Hugh Dickins writes:
> Both s390 and powerpc have hit the issue of swapoff hanging, when
> CONFIG_HAVE_ARCH_SOFT_DIRTY and CONFIG_MEM_SOFT_DIRTY ifdefs were
> not quite as x86_64 had them. I think it would be much clearer if
> HAVE_ARCH_SOFT_DIRTY was just a Kconfig option set by architectures
>
Hugh Dickins writes:
> Swapoff after swapping hangs on the G5. That's because the _PAGE_PTE
> bit, added by set_pte_at(), is not expected by swapoff: so swap ptes
> cannot be recognized.
>
> I'm not sure whether a swap pte should or should not have _PAGE_PTE set:
> this patch assumes not, and fi
On Fri, 2016-01-08 at 17:50 -0500, Steven Rostedt wrote:
> On Wed, 16 Dec 2015 12:24:19 -0500
> Steven Rostedt wrote:
> > On Wed, 09 Dec 2015 12:03:05 +1100
> > Michael Ellerman wrote:
> > > > > Should I take this via powerpc or do you want it to go in via tracing?
> > > >
> > > > You can take it
On Fri, 2016-01-08 at 09:45 -0800, dwal...@fifo99.com wrote:
> Hi,
>
> A powerpc machine I'm working on has this problem where the
> simple_alloc_init() area is trampling the initrd. The two are placed fairly
> close together.
Which machine / platform?
> I have a fix for this proposed to add a s
In order to support Power9 we need two new HWCAP bits. We are merging
these ahead of the cputable entry so that glibc can start referring to
them.
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/uapi/asm/cputable.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/inclu
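For context, a HWCAP2 addition of this kind looks like the following;
the names and values shown are indicative only, the authoritative ones
being the two lines the patch adds.

/* arch/powerpc/include/uapi/asm/cputable.h -- indicative sketch only */
#define PPC_FEATURE2_ARCH_3_00		0x00800000 /* ISA 3.00 (Power9) */
#define PPC_FEATURE2_HAS_IEEE128	0x00400000 /* VSX IEEE binary128 */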
Daniel Axtens writes:
> As sparse suggests, these should be made static.
>
> Signed-off-by: Daniel Axtens
Reviewed-by: Stewart Smith
--
Stewart Smith
OPAL Architect, IBM.
Russell Currey writes:
> PCI in powernv now supports quite a bit more than p5ioc2, so remove the
> outdated comment.
>
> Signed-off-by: Russell Currey
Acked-by: Stewart Smith
--
Stewart Smith
OPAL Architect, IBM.
On Mon, 2016-01-11 at 09:13 +1100, Julian Calaby wrote:
> On Mon, Jan 11, 2016 at 6:31 AM, Michael S. Tsirkin wrote:
> > Add virt_ barriers to list of barriers to check for
> > presence of a comment.
[]
> > diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
[]
> > @@ -5133,7 +5133,8 @@ sub
Hi Michael,
On Mon, Jan 11, 2016 at 6:31 AM, Michael S. Tsirkin wrote:
> Add virt_ barriers to list of barriers to check for
> presence of a comment.
>
> Signed-off-by: Michael S. Tsirkin
> ---
> scripts/checkpatch.pl | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/sc
Maciej S. Szmigiero wrote:
There is no guarantee that, on fsl_ssi module load,
the SSI registers will have their power-on-reset values.
In fact, if the driver is reloaded, the registers will hold
whatever values they were previously set to.
This fixes a hard lockup on fsl_ssi module reload,
at least in
Maciej S. Szmigiero wrote:
Mark some registers precious since their
reads have side effects (like clearing flags).
Signed-off-by: Maciej S. Szmigiero
Acked-by: Timur Tabi
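With regmap, "precious" is just a predicate in the regmap_config; a
minimal sketch, where the register list is illustrative rather than the
full set the patch marks:

#include <linux/regmap.h>
#include "fsl_ssi.h"		/* CCSR_SSI_* register offsets */

static bool fsl_ssi_precious_reg(struct device *dev, unsigned int reg)
{
	switch (reg) {
	case CCSR_SSI_SISR:	/* reading clears interrupt status flags */
		return true;
	default:
		return false;
	}
}

static const struct regmap_config fsl_ssi_regconfig = {
	.reg_bits	= 32,
	.val_bits	= 32,
	.reg_stride	= 4,
	.precious_reg	= fsl_ssi_precious_reg,
};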
Maciej S. Szmigiero wrote:
+ regmap_write(regs, CCSR_SSI_SACNT,
+ ssi_private->regcache_sacnt);
So I'm not familiar with all of the regcache features, but I understand
this patch. I was wondering if it makes sense to write the same exact
value that was read previo
Add virt_ barriers to list of barriers to check for
presence of a comment.
Signed-off-by: Michael S. Tsirkin
---
scripts/checkpatch.pl | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 15cfca4..4466579 100755
--- a/scripts/c
Introduction of __smp barriers cleans up a bunch of duplicate code, but
it gives people an additional handle onto a "new" set of barriers - just
because they're prefixed with __* unfortunately doesn't stop anyone from
using them (as happened with other arch stuff before).
Add a checkpatch test so it
SMP-only barriers were missing in checkpatch.pl
Refactor code slightly to make adding more variants easier.
Signed-off-by: Michael S. Tsirkin
---
scripts/checkpatch.pl | 22 +-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/chec
As part of memory barrier cleanup, this patchset
extends checkpatch to make it easier to stop
incorrect memory barrier usage.
This replaces the checkpatch patches in my series
"arch: barrier cleanup + barriers for virt"
and will be included in the next version of the series.
changes from v2
On Sun, Jan 10, 2016 at 07:17:31AM -0800, Joe Perches wrote:
> On Sun, 2016-01-10 at 07:07 -0800, Joe Perches wrote:
> > On Sun, 2016-01-10 at 13:56 +0200, Michael S. Tsirkin wrote:
> > > SMP-only barriers were missing in checkpatch.pl
> > >
> > > Refactor code slightly to make adding more variant
On Sun, Jan 10, 2016 at 07:07:05AM -0800, Joe Perches wrote:
> On Sun, 2016-01-10 at 13:56 +0200, Michael S. Tsirkin wrote:
> > SMP-only barriers were missing in checkpatch.pl
> >
> > Refactor code slightly to make adding more variants easier.
> []
> > diff --git a/scripts/checkpatch.pl b/scripts/
On Sun, 2016-01-10 at 07:07 -0800, Joe Perches wrote:
> On Sun, 2016-01-10 at 13:56 +0200, Michael S. Tsirkin wrote:
> > SMP-only barriers were missing in checkpatch.pl
> >
> > Refactor code slightly to make adding more variants easier.
> []
> > diff --git a/scripts/checkpatch.pl b/scripts/checkpa
On Sun, 2016-01-10 at 13:57 +0200, Michael S. Tsirkin wrote:
> Introduction of __smp barriers cleans up a bunch of duplicate code, but
> it gives people an additional handle onto a "new" set of barriers - just
> because they're prefixed with __* unfortunately doesn't stop anyone from
> using it (as
On Sun, 2016-01-10 at 13:56 +0200, Michael S. Tsirkin wrote:
> SMP-only barriers were missing in checkpatch.pl
>
> Refactor code slightly to make adding more variants easier.
[]
> diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
[]
> @@ -5116,7 +5116,25 @@ sub process {
>
As per: lkml.kernel.org/r/20150921112252.3c2937e1@mschwide
atomics imply a barrier on s390, so s390 should change
smp_mb__before_atomic and smp_mb__after_atomic to barrier() instead of
smp_mb() and hence should not use the generic versions.
Suggested-by: Peter Zijlstra
Suggested-by: Martin Schwidefsky
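In code terms the change is tiny; something along these lines in the
s390 barrier header (a sketch, with the exact spelling depending on
whether the __smp_ variants from this series are already in place):

/* s390 atomics are serializing, so no extra barrier is required around
 * them -- a compiler barrier is enough. */
#define __smp_mb__before_atomic()	barrier()
#define __smp_mb__after_atomic()	barrier()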
The s390 kernel is SMP to 99.99%; we just didn't bother with a
non-smp variant for the memory-barriers. If the generic header
is used, we'd get the non-smp version for free. It will save a
small amount of text space for CONFIG_SMP=n.
Suggested-by: Martin Schwidefsky
Signed-off-by: Michael S. Tsirk
drivers/xen/events/events_fifo.c uses rmb() to communicate with the
other side.
For guests compiled with CONFIG_SMP, smp_rmb would be sufficient, so
rmb() here is only needed if a non-SMP guest runs on an SMP host.
Switch to the virt_rmb barrier which serves this exact purpose.
Pull in asm/barri
include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.
For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.
Switch to virt_xxx barriers which serve this ex
drivers/xen/xenbus/xenbus_comms.c uses
full memory barriers to communicate with the other side.
For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.
Switch to virt_xxx barriers which serve this
Add virt_ barriers to list of barriers to check for
presence of a comment.
Signed-off-by: Michael S. Tsirkin
---
scripts/checkpatch.pl | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index a96adcb..5ca272b 100755
--- a/scripts/c
Introduction of __smp barriers cleans up a bunch of duplicate code, but
it gives people an additional handle onto a "new" set of barriers - just
because they're prefixed with __* unfortunately doesn't stop anyone from
using them (as happened with other arch stuff before).
Add a checkpatch test so it
SMP-only barriers were missing in checkpatch.pl
Refactor code slightly to make adding more variants easier.
Signed-off-by: Michael S. Tsirkin
---
scripts/checkpatch.pl | 20 +++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkp
We need a full barrier after writing out the event index; using
virt_store_mb there seems better than open-coding. As usual, we need a
wrapper to account for strong barriers.
It's tempting to use this in vhost as well, for that, we'll
need a variant of smp_store_mb that works on __user pointers.
Sig
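The wrapper being referred to plausibly looks like this (a sketch, not
the literal patch): it pairs the store and the full barrier, honouring
virtio's weak_barriers flag.

#include <linux/types.h>
#include <linux/compiler.h>	/* WRITE_ONCE */
#include <linux/virtio_types.h>	/* __virtio16 */
#include <asm/barrier.h>	/* virt_store_mb, mb */

static inline void virtio_store_mb(bool weak_barriers,
				   __virtio16 *p, __virtio16 v)
{
	if (weak_barriers) {
		/* talking to the "other side" of shared memory: the new
		 * virt_ barriers are exactly what's needed */
		virt_store_mb(*p, v);
	} else {
		WRITE_ONCE(*p, v);
		mb();		/* mandatory barrier for non-weak setups */
	}
}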
Looks like future sh variants will support a 4-byte cas which will be
used to implement 1 and 2 byte xchg.
This is exactly what we do for llsc now; move the portable part of the
code into a separate header so it's easy to reuse.
Suggested-by: Rich Felker
Signed-off-by: Michael S. Tsirkin
---
This completes the xchg implementation for sh architecture. Note: The
llsc variant is tricky since this only supports 4 byte atomics; the
existing implementation of 1 byte xchg is wrong: we need to do a 4 byte
cmpxchg and retry if any bytes changed meanwhile.
Write this in C for clarity.
Suggest
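The retry loop described above, written generically (a sketch:
__cmpxchg_u32() stands in for whatever 4-byte cmpxchg primitive the sh
code uses, and the shift shown assumes little-endian byte numbering):

#include <linux/types.h>

/* assumed primitive: atomically replace *p with new if *p == old,
 * returning the value that was actually observed in *p */
extern u32 __cmpxchg_u32(volatile u32 *p, u32 old, u32 new);

static inline u8 xchg_u8_via_u32(volatile u8 *ptr, u8 newval)
{
	volatile u32 *word = (volatile u32 *)((unsigned long)ptr & ~3UL);
	unsigned int shift = ((unsigned long)ptr & 3) * 8;	/* LE layout */
	u32 mask = 0xffU << shift;
	u32 old_word, new_word;

	do {
		old_word = *word;
		new_word = (old_word & ~mask) | ((u32)newval << shift);
		/* if any byte of the word changed meanwhile, try again */
	} while (__cmpxchg_u32(word, old_word, new_word) != old_word);

	return (old_word & mask) >> shift;
}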
virtio ring uses smp_wmb on SMP and wmb on !SMP,
the reason for the latter being that it might be
talking to another kernel on the same SMP machine.
This is exactly what virt_xxx barriers do,
so switch to these instead of homegrown ifdef hacks.
Cc: Peter Zijlstra
Cc: Alexander Duyck
Signed-off-b
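Paraphrasing the change (not the literal diff), the homegrown ifdef and
its replacement look roughly like this:

#include <asm/barrier.h>

/* old: pick the barrier by hand, since a !SMP guest may still share
 * memory with an SMP host */
static inline void virtio_wmb_old(bool weak_barriers)
{
#ifdef CONFIG_SMP
	if (weak_barriers)
		smp_wmb();
	else
		wmb();
#else
	wmb();
#endif
}

/* new: virt_wmb() already encodes "order against the other side of the
 * shared memory, regardless of CONFIG_SMP", so the ifdef disappears */
static inline void virtio_wmb_new(bool weak_barriers)
{
	if (weak_barriers)
		virt_wmb();
	else
		wmb();
}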
This reverts commit 9e1a27ea42691429e31f158cce6fc61bc79bb2e9.
While that commit optimizes !CONFIG_SMP, it mixes
up DMA and SMP concepts, making the code hard
to figure out.
A better way to optimize this is with the new __smp_XXX
barriers.
As a first step, go back to full rmb/wmb barriers
for !SMP.
Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support. This is an artifact of
interfacing with an SMP host while running a UP kernel. Using mandatory
barriers for this use-case would be possible but is often suboptimal.
This defines __smp_xxx barriers for x86,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd Bergmann
---
arch/x86/include/asm/barrier.h | 31 ---
1 file cha
This defines __smp_xxx barriers for xtensa,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd Bergmann
---
arch/xtensa/include/asm/barrier.h | 4 ++--
1 file changed, 2 insertions(+),
This defines __smp_xxx barriers for tile,
for use by virtualization.
Some smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Note: for 32 bit, keep smp_mb__after_atomic around since it's faster
than the generic implementation.
Signed-off-by: Michael S. Tsirkin
This defines __smp_xxx barriers for sparc,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd Bergmann
Acked-by: David S. Miller
---
arch/sparc/include/asm/barrier_64.h | 8
The sh variant of smp_store_mb() calls xchg() on !SMP, which is stronger than
implied by both the name and the documentation.
Define __smp_store_mb instead: code in asm-generic/barrier.h
will then define smp_store_mb correctly depending on
CONFIG_SMP.
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd
This defines __smp_xxx barriers for s390,
for use by virtualization.
Some smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
unconditionally on this architecture.
Signed-off-by: Michael S. Tsirkin
A
This defines __smp_xxx barriers for mips,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Note: the only exception is smp_mb__before_llsc which is mips-specific.
We define both the __smp_mb__before_llsc variant (for use in
asm/barrie
This defines __smp_xxx barriers for metag,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Note: as __smp_XX macros should not depend on CONFIG_SMP, they cannot
use the existing fence() macro since that is defined differently betwee
This defines __smp_xxx barriers for ia64,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
This reduces the amount of arch-specific boiler-plate code.
Signed-off-by: Michael S. Tsirkin
Acked-by: Tony Luck
Acked-by: Arnd Bergmann
-
This defines __smp_xxx barriers for blackfin,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd Bergmann
---
arch/blackfin/include/asm/barrier.h | 4 ++--
1 file changed, 2 insertions
This defines __smp_xxx barriers for arm,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
This reduces the amount of arch-specific boiler-plate code.
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd Bergmann
Acked-by: Russell King
This defines __smp_xxx barriers for arm64,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
Note: arm64 does not support !SMP config,
so smp_xxx and __smp_xxx are always equivalent.
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd B
This defines __smp_xxx barriers for powerpc
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barriers.h
This reduces the amount of arch-specific boiler-plate code.
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd Bergmann
Acked-by: Boqun Feng
On !SMP, most architectures define their
barriers as compiler barriers.
On SMP, most need an actual barrier.
Make it possible to remove the code duplication for
!SMP by defining low-level __smp_xxx barriers
which do not depend on the value of SMP, then
use them from asm-generic conditionally.
Bes
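In outline, the asm-generic pattern being introduced is (simplified):

/* asm-generic/barrier.h, simplified: the arch provides __smp_mb() etc.
 * unconditionally; the generic header decides what smp_mb() maps to. */
#ifndef __smp_mb
#define __smp_mb()	mb()
#endif

#ifdef CONFIG_SMP
#define smp_mb()	__smp_mb()
#else
#define smp_mb()	barrier()	/* UP: compiler barrier is enough */
#endif

/* same scheme for smp_rmb/smp_wmb/smp_store_mb/..., and the virt_*
 * barriers map to the __smp_* forms independently of CONFIG_SMP */
#define virt_mb()	__smp_mb()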
As on most architectures, on x86 read_barrier_depends and
smp_read_barrier_depends are empty. Drop the local definitions and pull
the generic ones from asm-generic/barrier.h instead: they are identical.
This is in preparation to refactoring this code area.
Signed-off-by: Michael S. Tsirkin
Acke
On x86/um CONFIG_SMP is never defined. As a result, several macros
match the asm-generic variant exactly. Drop the local definitions and
pull in asm-generic/barrier.h instead.
This is in preparation to refactoring this code area.
Signed-off-by: Michael S. Tsirkin
Acked-by: Arnd Bergmann
Acked-
On mips dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
smp_read_barrier_depends, smp_store_release and smp_load_acquire match
the asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation to refactoring this code area.
Signe
On metag dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
smp_read_barrier_depends, smp_store_release and smp_load_acquire match
the asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation to refactoring this code area.
Sign
On arm64 nop, read_barrier_depends, smp_read_barrier_depends
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation to refactoring this code area.
Signed-of
On arm smp_store_mb, read_barrier_depends, smp_read_barrier_depends,
smp_store_release, smp_load_acquire, smp_mb__before_atomic and
smp_mb__after_atomic match the asm-generic variants exactly. Drop the
local definitions and pull in asm-generic/barrier.h instead.
This is in preparation to refactori
On sparc 64 bit dma_rmb, dma_wmb, smp_store_mb, smp_mb, smp_rmb,
smp_wmb, read_barrier_depends and smp_read_barrier_depends match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
nop uses __asm__ __volatile but is otherwise identical to
the ge
On s390 read_barrier_depends, smp_read_barrier_depends
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation to refactoring this code area.
Signed-off-by:
On powerpc read_barrier_depends, smp_read_barrier_depends
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation to refactoring this code area.
Signed-off-b
On ia64 smp_rmb, smp_wmb, read_barrier_depends, smp_read_barrier_depends
and smp_store_mb() match the asm-generic variants exactly. Drop the
local definitions and pull in asm-generic/barrier.h instead.
This is in preparation to refactoring this code area.
Signed-off-by: Michael S. Tsirkin
Acked-
asm-generic/barrier.h defines a nop() macro.
To be able to use this header on ia64, we shouldn't
call local functions/variables nop().
There's one instance where this breaks on ia64:
rename the function to iosapic_nop to avoid the conflict.
Signed-off-by: Michael S. Tsirkin
Acked-by: Tony Luck
Allow architectures to override smp_store_release
and smp_load_acquire by guarding the defines
in asm-generic/barrier.h with ifndef directives.
This is in preparation to reusing asm-generic/barrier.h
on architectures which have their own definition
of these macros.
Signed-off-by: Michael S. Tsirk
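The guarding amounts to something like this (simplified; the real header
also keeps its compile-time type checks):

#ifndef smp_store_release
#define smp_store_release(p, v)					\
do {								\
	smp_mb();						\
	WRITE_ONCE(*(p), (v));					\
} while (0)
#endif

#ifndef smp_load_acquire
#define smp_load_acquire(p)					\
({								\
	typeof(*(p)) ___p1 = READ_ONCE(*(p));			\
	smp_mb();						\
	___p1;							\
})
#endif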
From: Davidlohr Bueso
With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()")
it was made clear that the context of this call (and thus set_mb)
is strictly for CPU ordering, as opposed to IO. As such all archs
should use the smp variant of mb(), respecting the semantics and
sa
Changes since v2:
- extended checkpatch tests for barriers, and added patches
teaching it to warn about incorrect usage of barriers
(__smp_xxx barriers are for use by asm-generic code only),
should help prevent misuse by arch code
to address comments by Russe
On Sat, Jan 09, 2016 at 04:59:42PM -0800, Hugh Dickins wrote:
> Both s390 and powerpc have hit the issue of swapoff hanging, when
> CONFIG_HAVE_ARCH_SOFT_DIRTY and CONFIG_MEM_SOFT_DIRTY ifdefs were
> not quite as x86_64 had them. I think it would be much clearer if
> HAVE_ARCH_SOFT_DIRTY was just
On Sat, Jan 09, 2016 at 04:54:59PM -0800, Hugh Dickins wrote:
> Swapoff after swapping hangs on the G5, when CONFIG_CHECKPOINT_RESTORE=y
> but CONFIG_MEM_SOFT_DIRTY is not set. That's because the non-zero
> _PAGE_SWP_SOFT_DIRTY bit, added by CONFIG_HAVE_ARCH_SOFT_DIRTY=y, is not
> discounted when
Add virt_ barriers to list of barriers to check for
presence of a comment.
Signed-off-by: Michael S. Tsirkin
---
scripts/checkpatch.pl | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index a96adcb..5ca272b 100755
--- a/scripts/c
Introduction of __smp barriers cleans up a bunch of duplicate code, but
it gives people an additional handle onto a "new" set of barriers - just
because they're prefixed with __* unfortunately doesn't stop anyone from
using them (as happened with other arch stuff before).
Add a checkpatch test so it
SMP-only barriers were missing in checkpatch.pl
Refactor code slightly to make adding more variants easier.
Signed-off-by: Michael S. Tsirkin
---
scripts/checkpatch.pl | 20 +++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkp
As part of memory barrier cleanup, this patchset
extends checkpatch to make it easier to stop
incorrect memory barrier usage.
This applies on top of my series
"arch: barrier cleanup + barriers for virt"
and will be included in the next version of the series.
Changes from v2:
catch o
On Mon, Jan 04, 2016 at 02:15:50PM -0800, Joe Perches wrote:
> On Mon, 2016-01-04 at 22:45 +0200, Michael S. Tsirkin wrote:
> > On Mon, Jan 04, 2016 at 08:07:40AM -0800, Joe Perches wrote:
> > > On Mon, 2016-01-04 at 13:36 +0200, Michael S. Tsirkin wrote:
> > > > SMP-only barriers were missing in c
- Original Message -
> From: "Raghavendra K T"
> To: "Jan Stancek"
> Cc: linuxppc-dev@lists.ozlabs.org, vdavy...@parallels.com,
> b...@kernel.crashing.org, pau...@samba.org,
> m...@ellerman.id.au, an...@samba.org, n...@linux.vnet.ibm.com,
> gk...@linux.vnet.ibm.com, "grant likely"
>