Hi,
This patchset is a first step towards removing stop_machine() from the
CPU hotplug offline path. It introduces a set of APIs (as a replacement for
preempt_disable()/preempt_enable()) to synchronize with CPU hotplug from
atomic contexts.
The motivation behind getting rid of stop_machine() is to avoid the latency
spikes and system-wide disruption it causes by interrupting every online CPU.
The current CPU offline code uses stop_machine() internally, and disabling
preemption prevents stop_machine() from taking effect, which, as a side
effect, also prevents CPUs from going offline.
There are places where this side-effect of preempt_disable() (or equivalent)
is used to synchronize with CPU hotplug, and all such places need to be
converted to the new APIs.
We have quite a few APIs now which help synchronize with CPU hotplug.
Among them, get/put_online_cpus() is the oldest and the most well-known,
so no problems there. By extension, it's easy to comprehend the new
set: get/put_online_cpus_atomic().
But there is yet another set, which might appear tempting to use instead:
preempt_disable()/preempt_enable(). The documentation patch below explains
why that will no longer be a safe way to synchronize with CPU hotplug.
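To make the conversion concrete, here is a minimal sketch of the pattern
(only the get/put_online_cpus_atomic() names come from this series; the
function and do_something_on() are hypothetical stand-ins):

  #include <linux/cpu.h>
  #include <linux/cpumask.h>

  /* Hypothetical atomic-context hotplug reader */
  static void poke_all_online_cpus(void)
  {
          unsigned int cpu;

          /*
           * Previously, preempt_disable() kept these CPUs online only as
           * a side effect of stop_machine(). Taking an explicit reference
           * keeps the code correct once stop_machine() is removed.
           */
          get_online_cpus_atomic();
          for_each_online_cpu(cpu)
                  do_something_on(cpu);   /* hypothetical per-CPU work */
          put_online_cpus_atomic();
  }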
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
So add documentation to recommend using the new get/put_online_cpus_atomic()
APIs to prevent CPUs from going offline, while invoking from atomic context.
Add a debugging infrastructure to warn if an atomic hotplug reader has not
invoked get_online_cpus_atomic() before traversing/accessing the
cpu_online_mask. Encapsulate these checks under a new debug config option
DEBUG_HOTPLUG_CPU.
This debugging infrastructure proves useful in the tree-wide conversion to
the new APIs, by helping catch the call sites we miss.
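For illustration, this is the kind of access the checks are meant to flag
(a sketch; the function is made up):

  #include <linux/cpumask.h>

  /* BAD: reads cpu_online_mask from atomic context without having
   * invoked get_online_cpus_atomic(); DEBUG_HOTPLUG_CPU would warn. */
  static bool peer_cpu_online(unsigned int cpu)
  {
          return cpu_online(cpu);         /* racy vs. CPU offline */
  }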
When bringing a secondary CPU online, the task running on the CPU coming up
sets itself in the cpu_online_mask. This is safe even though this task is not
the hotplug writer task.
But it is kinda hard to teach this to the CPU hotplug debug infrastructure,
and if we get it wrong, we risk making the debug checks emit false warnings
on every CPU bringup.
Now that we have a debug infrastructure in place to detect cases where
get/put_online_cpus_atomic() had to be used, add these checks at the
right spots to help catch places where we missed converting to the new
APIs.
Cc: Rusty Russell
Cc: Alex Shi
Cc: KOSAKI Motohiro
Cc: Tejun Heo
Cc: Andrew Morton
Now that we have all the pieces of the CPU hotplug debug infrastructure
in place, expose the feature by growing a new Kconfig option,
CONFIG_DEBUG_HOTPLUG_CPU.
Cc: Andrew Morton
Cc: "Paul E. McKenney"
Cc: Akinobu Mita
Cc: Catalin Marinas
Cc: Michel Lespinasse
Signed-off-by: Srivatsa S. Bhat
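A sketch of what the option could look like (the exact dependencies and
help text here are assumptions, not the actual patch):

  config DEBUG_HOTPLUG_CPU
          bool "Debug CPU hotplug synchronization"
          depends on HOTPLUG_CPU && DEBUG_KERNEL
          help
            Warn when the cpu_online_mask is traversed or accessed from
            atomic context without get_online_cpus_atomic() protection.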
Convert the macros in the CPU hotplug code to static inline C functions.
Cc: Thomas Gleixner
Cc: Andrew Morton
Cc: Tejun Heo
Cc: "Rafael J. Wysocki"
Signed-off-by: Srivatsa S. Bhat
---
include/linux/cpu.h |    9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
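The diff body is not reproduced above, but the benefit of the conversion is
easy to illustrate (the hook below is a made-up example, not one of the
actual macros): a static inline function is type-checked by the compiler
and evaluates its arguments exactly once, at no runtime cost.

  /* Before: a macro offers no type checking */
  #define example_hotplug_hook(cpu)       do { (void)(cpu); } while (0)

  /* After: the compiler checks the argument type on every call */
  static inline void example_hotplug_hook(unsigned int cpu)
  {
          (void)cpu;
  }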
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Andrew Morton
Cc: Wang
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
Cc: Peter Zijlstra
We need not use the raw_spin_lock_irqsave/restore primitives because
all CPU_DYING notifiers run with interrupts disabled. So just use
raw_spin_lock/unlock.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Srivatsa S. Bhat
---
kernel/sched/core.c |    4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
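A sketch of the change (rq and cpu_rq() are the names used in the
scheduler; the notifier body is elided):

  /* Inside a CPU_DYING notifier, interrupts are already disabled,
   * so the _irqsave/_irqrestore variants are redundant. */
  static int sched_cpu_dying_example(unsigned int cpu)
  {
          struct rq *rq = cpu_rq(cpu);

          raw_spin_lock(&rq->lock);      /* was: raw_spin_lock_irqsave() */
          /* ... migrate work off the dying CPU ... */
          raw_spin_unlock(&rq->lock);    /* was: raw_spin_unlock_irqrestore() */
          return 0;
  }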
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Signed-off-by: Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ingo Molnar
Cc: Peter Zijlstra
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
In RCU code, rcu_implicit_dynticks_qs() checks if a CPU is offline,
while being protected by a spinlock. Use the get/put_online_cpus_atomic()
APIs to prevent the CPU from going offline while that check runs.
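The shape of the fix, sketched (the RCU specifics are elided;
report_quiescent_state() is a hypothetical stand-in):

  /* Pin hotplug state before sampling the CPU's offline-ness under
   * the node lock, so the answer cannot change mid-check. */
  get_online_cpus_atomic();
  raw_spin_lock_irqsave(&rnp->lock, flags);
  if (cpu_is_offline(rdp->cpu))
          report_quiescent_state(rnp, rdp);
  raw_spin_unlock_irqrestore(&rnp->lock, flags);
  put_online_cpus_atomic();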
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Signed-off-by: Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: John Stultz
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Frederic Weisbecker
Cc
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Signed-off-by: Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Jens Axboe
Signed-off-by: Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: "David S. Miller"
Cc:
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Al Viro
Signed-off-by: Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Hoang-Nam Nguyen
Cc: Christoph Raisch
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Robert Love
Cc: "James
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Greg Kroah-Hartman
Cc:
The CPU_DYING notifier modifies the per-cpu pointer pmu->box, and this can
race with functions such as uncore_pmu_to_box() and uncore_pci_remove() when
we remove stop_machine() from the CPU offline path. So protect them using
get/put_online_cpus_atomic().
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc
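The race and the fix, sketched (the pmu->box access is modeled loosely on
the uncore code; use_box() is made up):

  /* Pin hotplug so the CPU_DYING notifier cannot modify pmu->box
   * while we dereference the per-cpu pointer. */
  get_online_cpus_atomic();
  box = *per_cpu_ptr(pmu->box, cpu);
  if (box)
          use_box(box);
  put_online_cpus_atomic();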
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Gleb Natapov
Cc: Paolo Bonzini
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Gleb Natapov
Cc: Paolo
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Konrad Rzeszutek Wilk
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Also, remove the non-ASCII character present in this file.
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Mike Frysinger
Cc: Bob Liu
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Mikael Starvik
Cc: Jesper Nilsson
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Richard Kuo
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Tony Luck
Cc: Fenghua Yu
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Tony Luck
Cc: Fenghua Yu
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Hirokazu Takata
Cc: li
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Ralf Baechle
Cc: David
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: David Howells
Cc: Koichi Yasutake
The function migrate_irqs() is called with interrupts disabled,
and hence it's not safe to do GFP_KERNEL allocations inside it,
because they can sleep. So change the gfp mask to GFP_ATOMIC.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Ian Munsie
Cc: Steven Rostedt
Cc: Michael Ellerman
Cc:
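A sketch of the class of fix (the cpumask allocation is illustrative, not
the exact hunk from migrate_irqs()):

  cpumask_var_t mask;

  /* Interrupts are disabled here, so a sleeping allocation is illegal;
   * GFP_ATOMIC allocates (or fails) without sleeping. */
  if (!alloc_cpumask_var(&mask, GFP_ATOMIC))      /* was: GFP_KERNEL */
          return;
  /* ... use mask ... */
  free_cpumask_var(mask);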
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Benjamin Herrenschmidt
Bringing a secondary CPU online is a special case in which accessing
the cpu_online_mask is safe, even though the task doing it (which is
running on the CPU coming online) is not the hotplug writer.
It is a little hard to teach this to the debugging checks under
CONFIG_DEBUG_HOTPLUG_CPU. But luckily powerpc does this in one
well-defined spot, so annotate it there to avoid false warnings.
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Paul Mundt
Cc: Thomas Gleixner
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: Chris Metcalf
Signed-off-by: Srivatsa S. Bhat
Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
Cc: "David S. Miller"
Cc:
On Sun, Jun 23, 2013 at 6:45 AM, Srivatsa S. Bhat
wrote:
> Once stop_machine() is gone from the CPU offline path, we won't be able
> to depend on disabling preemption to prevent CPUs from going offline
> from under us.
>
> Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
> offline, while invoking from atomic context.
On Sun, Jun 23, 2013 at 07:13:33PM +0530, Srivatsa S. Bhat wrote:
> Once stop_machine() is gone from the CPU offline path, we won't be able
> to depend on disabling preemption to prevent CPUs from going offline
> from under us.
>
> Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
> offline, while invoking from atomic context.
On 06/23/2013 11:20 PM, Matt Turner wrote:
> On Sun, Jun 23, 2013 at 6:45 AM, Srivatsa S. Bhat
> wrote:
>> Once stop_machine() is gone from the CPU offline path, we won't be able
>> to depend on disabling preemption to prevent CPUs from going offline
>> from under us.
>>
>> Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
>> offline, while invoking from atomic context.
On 06/23/2013 11:47 PM, Greg Kroah-Hartman wrote:
> On Sun, Jun 23, 2013 at 07:13:33PM +0530, Srivatsa S. Bhat wrote:
>> Once stop_machine() is gone from the CPU offline path, we won't be able
>> to depend on disabling preemption to prevent CPUs from going offline
>> from under us.
>>
>> Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
>> offline, while invoking from atomic context.
On 06/23/2013 08:38 PM, Sergei Shtylyov wrote:
> Hello.
>
> On 23-06-2013 17:39, Srivatsa S. Bhat wrote:
>
>> Now that we have all the pieces of the CPU hotplug debug infrastructure
>> in place, expose the feature by growing a new Kconfig option,
>> CONFIG_DEBUG_HOTPLUG_CPU.
>
>> Cc: Andrew Morton
On Mon, 2013-06-24 at 00:25 +0530, Srivatsa S. Bhat wrote:
> On 06/23/2013 11:47 PM, Greg Kroah-Hartman wrote:
> > On Sun, Jun 23, 2013 at 07:13:33PM +0530, Srivatsa S. Bhat wrote:
[]
> >> diff --git a/drivers/staging/octeon/ethernet-rx.c
> >> b/drivers/staging/octeon/ethernet-rx.c
[]
> Honestly,
Hello.
On 23-06-2013 17:39, Srivatsa S. Bhat wrote:
Now that we have all the pieces of the CPU hotplug debug infrastructure
in place, expose the feature by growing a new Kconfig option,
CONFIG_DEBUG_HOTPLUG_CPU.
Cc: Andrew Morton
Cc: "Paul E. McKenney"
Cc: Akinobu Mita
Cc: Catalin Marinas
On Jun 22, 2013, at 5:00 PM, Benjamin Herrenschmidt
wrote:
> On Fri, 2013-06-21 at 10:14 -0700, Peter LaDow wrote:
>>
> Afaik e300 is slightly out of order, maybe it's missing a memory barrier
> somewhere. One thing to try is to add some to the dma_map/unmap ops.
>
> Also audit the driver
On Sun, 2013-06-23 at 17:56 -0700, Peter LaDow wrote:
>
> On Jun 22, 2013, at 5:00 PM, Benjamin Herrenschmidt
> wrote:
>
> > On Fri, 2013-06-21 at 10:14 -0700, Peter LaDow wrote:
> >>
> > Afaik e300 is slightly out of order, maybe it's missing a memory
> > barrier
> > somewhere. One thing to try is to add some to the dma_map/unmap ops.
> Enable PSTORE in pseries_defconfig
Please add a "why" to your changelogs eg. "Now we have pstore support for
nvram on pseries, enable it in the default config"
"Why" you are changing something is more important than "what", since
you can always determine "what" is being changed, by looking at the patch
itself.
On Fri, 2013-06-21 at 09:54 -0700, Guenter Roeck wrote:
> > v2: Ensure that PCI bus fixup code has been executed before calling
> > device setup code.
> >
> Hi Ben,
>
> any comments/feedback on this approach ?
>
> It is much less invasive than before and should address your concerns.
And a
>
>
> On Jun 23, 2013, at 6:16 PM, Benjamin Herrenschmidt
> wrote:
>> Also dbl check that the MMU is indeed mapping all these pages with the
>> "M" bit.
>
> Just to be clear, do you mean the e1000 registers in PCI space? Or the RAM
> pages?
>
> Thanks,
> Pete
On Sun, 2013-06-23 at 20:47 -0700, Peter LaDow wrote:
> >
> >
> > On Jun 23, 2013, at 6:16 PM, Benjamin Herrenschmidt
> > wrote:
> >> Also dbl check that the MMU is indeed mapping all these pages with the
> >> "M" bit.
> >
> > Just to be clear, do you mean the e1000 registers in PCI space? Or
> > the RAM pages?
On Sat, Jun 22, 2013 at 08:28:06AM -0600, Alex Williamson wrote:
> On Sat, 2013-06-22 at 22:03 +1000, David Gibson wrote:
> > On Thu, Jun 20, 2013 at 08:55:13AM -0600, Alex Williamson wrote:
> > > On Thu, 2013-06-20 at 18:48 +1000, Alexey Kardashevskiy wrote:
> > > > On 06/20/2013 05:47 PM, Benjamin Herrenschmidt wrote:
On Sun, Jun 23, 2013 at 09:28:13AM +1000, Benjamin Herrenschmidt wrote:
> On Sat, 2013-06-22 at 22:03 +1000, David Gibson wrote:
> > I think the interface should not take the group fd, but the container
> > fd. Holding a reference to *that* would keep the necessary things
> > around. But more to
On Mon, 2013-06-24 at 13:54 +1000, David Gibson wrote:
> > DDW means an API by which the guest can request the creation of
> > additional iommus for a given device (typically, in addition to the
> > default smallish 32-bit one using 4k pages, the guest can request
> > a larger window in 64-bit space).
On Mon, Jun 24, 2013 at 11:49:49AM +1000, Benjamin Herrenschmidt wrote:
> On Fri, 2013-06-21 at 09:54 -0700, Guenter Roeck wrote:
>
> > > v2: Ensure that PCI bus fixup code has been executed before calling
> > > device setup code.
> > >
> > Hi Ben,
> >
> > any comments/feedback on this approach?
On Mon, 2013-06-24 at 13:52 +1000, David Gibson wrote:
> On Sat, Jun 22, 2013 at 08:28:06AM -0600, Alex Williamson wrote:
> > On Sat, 2013-06-22 at 22:03 +1000, David Gibson wrote:
> > > On Thu, Jun 20, 2013 at 08:55:13AM -0600, Alex Williamson wrote:
> > > > On Thu, 2013-06-20 at 18:48 +1000, Alexey Kardashevskiy wrote:
In 9422de3 "powerpc: Hardware breakpoints rewrite to handle non DABR breakpoint
registers" we changed the way we mark extraneous irqs with this:
- info->extraneous_interrupt = !((bp->attr.bp_addr <= dar) &&
- (dar - bp->attr.bp_addr < bp->attr.bp_len));
+ if (!((b
The smallest match region for both the DABR and DAWR is 8 bytes, so the
kernel needs to filter matches when users want to look at regions smaller than
this.
Currently we set the length of PPC_BREAKPOINT_MODE_EXACT breakpoints to 8.
This is wrong, as in exact mode we should only match on 1 address, hence
the length should be 1.
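In code form, the filtering mirrors the condition quoted in the previous
patch (the flag name is assumed from the breakpoint rewrite mentioned
earlier):

  /* The hardware matched somewhere in its 8-byte window; only report
   * the hit if the faulting address falls inside the user's range. */
  if (!((bp->attr.bp_addr <= dar) &&
        (dar - bp->attr.bp_addr < bp->attr.bp_len)))
          info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;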
The patchset takes care of compressing oops messages while writing to NVRAM,
so that more oops data can be captured in the given space.
big_oops_buf (2.22 * oops_data_sz) is allocated for compression.
oops_data_sz is the size of the oops partition less the size of the
header added by pstore. Pstore will internally call kmsg_dump() to
collect the oops messages.
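The 2.22 factor comes from assuming the oops text compresses to at most
~45% of its original size; the sizing then looks roughly like this (a
sketch using the names from the description):

  /* Capture up to 100/45 = ~2.22x the oops partition's worth of text,
   * assuming deflate shrinks it to at most 45% of its size. */
  big_oops_buf_sz = (oops_data_sz * 100) / 45;
  big_oops_buf = kmalloc(big_oops_buf_sz, GFP_KERNEL);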
nvram_compress() and zip_oops() are used by the nvram_pstore_write
API to compress oops messages, hence re-organise the functions
accordingly to avoid forward declarations.
Signed-off-by: Aruna Balakrishnaiah
---
arch/powerpc/platforms/pseries/nvram.c | 104
1 fil
pstore_get_header_size will return the size of the header added by pstore
while logging messages to the registered buffer.
Signed-off-by: Aruna Balakrishnaiah
---
fs/pstore/platform.c   |    7 ++++++-
include/linux/pstore.h |    6 ++++++
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
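A hedged sketch of how a backend might use the helper (the call site and
variable names are assumptions):

  /* Leave room for the header pstore will prepend, so the raw
   * (uncompressed) fallback text still fits in the partition. */
  size_t hsize = pstore_get_header_size();
  size_t room  = oops_data_sz - hsize;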
The patch set supports compression of oops messages while writing to NVRAM,
this helps in capturing more of oops data to lnx,oops-log. The pstore file
for oops messages will be in decompressed format making it readable.
In case compression fails, the patch takes care of copying the header added
by pstore along with the uncompressed oops data, so the report is still
captured.
Since now we have pstore support for nvram in pseries, enable it
in the default config. With this config option enabled, pstore
infrastructure will be used to read/write the messages from/to nvram.
Signed-off-by: Aruna Balakrishnaiah
---
arch/powerpc/configs/pseries_defconfig |    1 +
1 file changed, 1 insertion(+)
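The change itself amounts to a single line in the defconfig (symbol name
per the pstore Kconfig):

  CONFIG_PSTORE=y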
Hi Michael,
On Monday 24 June 2013 06:51 AM, Michael Neuling wrote:
Enable PSTORE in pseries_defconfig
Please add a "why" to your changelogs eg. "Now we have pstore support for
nvram on pseries, enable it in the default config"
"Why" you are changing something is more important than "what", si
On Sun, Jun 23, 2013 at 07:15:39PM +0530, Srivatsa S. Bhat wrote:
> Once stop_machine() is gone from the CPU offline path, we won't be able
> to depend on disabling preemption to prevent CPUs from going offline
> from under us.
>
> Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
> offline, while invoking from atomic context.
Since now we have pstore support for nvram in pseries, enable it
in the default config. With this config option enabled, pstore
infrastructure will be used to read/write the messages from/to nvram.
Signed-off-by: Aruna Balakrishnaiah
---
v3:
Move pstore config to right place
v2: