> On Thu, 27 Nov 2014, David Hildenbrand wrote:
> > > OTOH, there is no reason why we need to disable preemption over that
> > > page_fault_disabled() region. There are code paths which really do
> > > not require preemption to be disabled for that.
> > >
> > We have that separated in preempt-rt for obvious reasons
This patch enables support for hardware instruction breakpoints in
xmon on the POWER8 platform with the help of a new register called the
CIABR (Completed Instruction Address Breakpoint Register). With this
patch, a single hardware instruction breakpoint can be added and
cleared during any active xmon d
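A minimal sketch of how such a CIABR breakpoint could be armed and cleared, assuming the SPRN_CIABR definition and mtspr() helper from arch/powerpc; the helper names and the privilege-match encoding below are illustrative, not the actual patch:

	/*
	 * Hedged sketch: arm/disarm a CIABR instruction breakpoint.
	 * The low two bits of CIABR select the privilege level to
	 * match; the exact value used here is an assumption.
	 */
	static void xmon_set_insn_bpt(unsigned long addr)
	{
		/* instruction addresses are word aligned, so the low
		 * bits are free to carry the match mode */
		mtspr(SPRN_CIABR, (addr & ~0x3UL) | 0x3);
	}

	static void xmon_clear_insn_bpt(void)
	{
		mtspr(SPRN_CIABR, 0);	/* all-zero disables matching */
	}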
On Fri, 2014-11-28 at 08:45 +0530, Madhavan Srinivasan wrote:
> > Can't we just unconditionally clear it as long as we do that after we've
> > saved it ? In that case, it's just a matter for the fixup code to check
> > the saved version rather than the actual CR..
> >
> I use CR bit setting in the
On Friday 28 November 2014 06:26 AM, Benjamin Herrenschmidt wrote:
> On Thu, 2014-11-27 at 17:48 +0530, Madhavan Srinivasan wrote:
>> This patch creates the infrastructure to handle the CR-based
>> local_* atomic operations. Local atomic operations are fast
>> and highly reentrant per-CPU counters
On Friday 28 November 2014 07:28 AM, Benjamin Herrenschmidt wrote:
> On Thu, 2014-11-27 at 10:56 -0600, Segher Boessenkool wrote:
>> On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
>>> Here is the design of this patch. Since local_* operations
>>> only need to be atomic with respect to
On Thursday 27 November 2014 10:26 PM, Segher Boessenkool wrote:
> On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
>> Here is the design of this patch. Since local_* operations
>> only need to be atomic with respect to interrupts (IIUC), the patch uses
>> one of the Condition Register (CR
On Thu, 2014-11-27 at 14:50 -0600, Segher Boessenkool wrote:
> On Thu, Nov 27, 2014 at 11:41:40AM -0600, Peter Bergner wrote:
> > On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote:
> > > On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
> > > > Nope, you don't get a SIGILL when executing 64-bit instructions in 32-bit mode
On Thu, 2014-11-27 at 10:56 -0600, Segher Boessenkool wrote:
> On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
> > Here is the design of this patch. Since local_* operations
> > only need to be atomic with respect to interrupts (IIUC), the patch uses
> > one of the Condition Register (CR)
On Thu, 2014-11-27 at 10:28 +0100, Greg Kurz wrote:
> On Thu, 27 Nov 2014 10:39:23 +1100
> Benjamin Herrenschmidt wrote:
>
> > On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> > > The first argument to vphn_unpack_associativity() is a const long *,
> > > but the parsing code expects __be64
On Fri, 2014-11-28 at 11:11 +1100, Michael Ellerman wrote:
> On Wed, 2014-26-11 at 04:10:04 UTC, Benjamin Herrenschmidt wrote:
> > This adds files in debugfs that can be used to retrieve the
> > OPALv3 firmware "live binary traces" which can then be parsed
> > using a userspace tool.
> >
> > Mostly from Rusty with some updates by myself (BenH)
On Thu, 2014-11-27 at 17:48 +0530, Madhavan Srinivasan wrote:
> This patch creates the infrastructure to handle the CR-based
> local_* atomic operations. Local atomic operations are fast
> and highly reentrant per-CPU counters, used for percpu
> variable updates. Local atomic operations only guarantee
> variable modification atomicity wrt the CPU which owns the data.
On Thu, 2014-11-27 at 13:46 +0530, Anshuman Khandual wrote:
> On 11/26/2014 01:55 PM, Michael Ellerman wrote:
> > Something like this, untested:
>
> Yeah, it is working on LPAR and also on bare metal. The new patch
> will use some of your suggested code, so can I add your
> Signed-off-by?
On Wed, 2014-26-11 at 04:10:04 UTC, Benjamin Herrenschmidt wrote:
> This adds files in debugfs that can be used to retrieve the
> OPALv3 firmware "live binary traces" which can then be parsed
> using a userspace tool.
>
> Mostly from Rusty with some updates by myself (BenH)
>
> Signed-off-by: Rusty Russell
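As a rough illustration of the debugfs plumbing such a patch needs, a sketch follows; the file name, fops, and buffer variables are assumptions, not the actual OPAL code:

	/*
	 * Hedged sketch: expose a firmware trace buffer through a
	 * read-only debugfs file. Names and the buffer source are
	 * illustrative.
	 */
	#include <linux/debugfs.h>
	#include <linux/fs.h>
	#include <linux/init.h>

	static const char *trace_buf;	/* mapped firmware trace buffer */
	static size_t trace_len;

	static ssize_t trace_read(struct file *file, char __user *buf,
				  size_t count, loff_t *ppos)
	{
		return simple_read_from_buffer(buf, count, ppos,
					       trace_buf, trace_len);
	}

	static const struct file_operations trace_fops = {
		.read = trace_read,
	};

	static int __init opal_trace_init(void)
	{
		/* shows up as /sys/kernel/debug/opal-trace */
		debugfs_create_file("opal-trace", 0400, NULL, NULL,
				    &trace_fops);
		return 0;
	}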
On Tue, Nov 25, 2014 at 04:47:58PM +0530, Shreyas B. Prabhu wrote:
[snip]
> +2:
> + /* Sleep or winkle */
> + li r7,1
> + mfspr r8,SPRN_PIR
> + /*
> + * The last 3 bits of PIR represent the thread id of a CPU
> + * in POWER8. This will need adjusting for POWER7.
>
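In C terms, the derivation the comment describes is simply the following (a sketch using the kernel's mfspr() accessor; POWER8 runs 8 threads per core):

	/* Hedged sketch of the thread-id derivation described above. */
	unsigned long pir = mfspr(SPRN_PIR);
	unsigned int thread_id = pir & 0x7;	/* POWER8: low 3 bits */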
On Thu, 27 Nov 2014, David Hildenbrand wrote:
> > OTOH, there is no reason why we need to disable preemption over that
> > page_fault_disabled() region. There are code paths which really do
> > not require preemption to be disabled for that.
> >
> > We have that separated in preempt-rt for obvious reasons
On Thu, Nov 27, 2014 at 11:41:40AM -0600, Peter Bergner wrote:
> On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote:
> > On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
> > > Nope, you don't get a SIGILL when executing 64-bit instructions in
> > > 32-bit mode, so it'll happily just execute the instruction
Segher Boessenkool writes:
> On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
>> On Thu, 2014-11-27 at 09:38 +1100, Michael Ellerman wrote:
>> > On Thu, 2014-11-27 at 08:11 +1100, Anton Blanchard wrote:
>> > > I used some 64-bit instructions when adding the 32-bit getcpu VDSO
>> > >
On Thu, Nov 27, 2014 at 07:08:42PM +0100, David Hildenbrand wrote:
> > > > -
> > > > - __might_sleep(__FILE__, __LINE__, 0);
> > > > + if (unlikely(!pagefault_disabled()))
> > > > + __might_sleep(__FILE__, __LINE__, 0);
> > > >
> >
> > Same here: so maybe make might_fault a wrapper around __might_fault as well.
> > > -
> > > - __might_sleep(__FILE__, __LINE__, 0);
> > > + if (unlikely(!pagefault_disabled()))
> > > + __might_sleep(__FILE__, __LINE__, 0);
> > >
>
> Same here: so maybe make might_fault a wrapper
> around __might_fault as well.
Yes, I also noticed that. It was part of the origin
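A minimal sketch of the wrapper being suggested, using the same __might_sleep() call as the diff above (simplified; not the final patch):

	/*
	 * Hedged sketch: might_fault() skips the sleep check whenever
	 * page faults are disabled, so pagefault_disable() regions no
	 * longer trigger false "sleep in atomic" warnings.
	 */
	void might_fault(void)
	{
		if (unlikely(pagefault_disabled()))
			return;
		__might_sleep(__FILE__, __LINE__, 0);
	}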
On Thu, 2014-11-27 at 10:08 -0600, Segher Boessenkool wrote:
> On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
> > Nope, you don't get a SIGILL when executing 64-bit instructions in
> > 32-bit mode, so it'll happily just execute the instruction, doing
> > a full 64-bit compare. I'm
On Thu, Nov 27, 2014 at 07:24:49PM +0200, Michael S. Tsirkin wrote:
> On Thu, Nov 27, 2014 at 06:10:17PM +0100, David Hildenbrand wrote:
> > Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks
> > for all user access code (that uses might_fault()).
> >
> > The reason was to disable false "sleep in atomic" warnings
On Thu, Nov 27, 2014 at 06:10:17PM +0100, David Hildenbrand wrote:
> Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks
> for all user access code (that uses might_fault()).
>
> The reason was to disable false "sleep in atomic" warnings in the following
> scenario:
>
Commit 662bbcb2747c2422cf98d3d97619509379eee466 removed might_sleep() checks
for all user access code (that uses might_fault()).
The reason was to disable false "sleep in atomic" warnings in the following
scenario:
pagefault_disable();
rc = copy_to_user(...);
pagefault_enable();
Simple prototype to enable might_sleep() checks in might_fault(), avoiding false
positives for scenarios involving explicit pagefault_disable().
So this should work:
spin_lock(&lock); /* also works if the lock is left away */
pagefault_disable();
rc = copy_to_user(...);
pagefault_enable();
Let's track the levels of pagefault_disable() calls in a separate part of the
preempt counter. Also update the regular preempt counter to keep the existing
pagefault infrastructure working (can be disentangled and cleaned up later).
This change is needed to detect whether we are running in a simple
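A sketch of the counting scheme this describes; the bit position and masks are assumptions for illustration, not the layout used by the series:

	/*
	 * Hedged sketch: track pagefault_disable() nesting in its own
	 * field of the preempt counter, while still bumping the regular
	 * preempt count so the existing infrastructure keeps working.
	 */
	#define PAGEFAULT_SHIFT   24	/* assumed free bits */
	#define PAGEFAULT_OFFSET  (1UL << PAGEFAULT_SHIFT)
	#define PAGEFAULT_MASK    (0xffUL << PAGEFAULT_SHIFT)

	static inline void pagefault_disable(void)
	{
		preempt_count_add(PAGEFAULT_OFFSET + PREEMPT_OFFSET);
		barrier();	/* order against the fault handler */
	}

	static inline void pagefault_enable(void)
	{
		barrier();
		preempt_count_sub(PAGEFAULT_OFFSET + PREEMPT_OFFSET);
	}

	static inline bool pagefault_disabled(void)
	{
		return (preempt_count() & PAGEFAULT_MASK) != 0;
	}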
On Thu, Nov 27, 2014 at 05:48:40PM +0530, Madhavan Srinivasan wrote:
> Here is the design of this patch. Since local_* operations
> only need to be atomic with respect to interrupts (IIUC), the patch uses
> one of the Condition Register (CR) fields as a flag variable. When
> entering the local_*, specific bi
> From: David Hildenbrand [mailto:d...@linux.vnet.ibm.com]
> > > From: David Hildenbrand
> > > ...
> > > > It might not be optimal, but keeping a separate counter for
> > > > pagefault_disable() as part of the preemption counter seems to be
> > > > the only doable thing right now.
From: David Hildenbrand [mailto:d...@linux.vnet.ibm.com]
> > From: David Hildenbrand
> > ...
> > > It might not be optimal, but keeping a separate counter for
> > > pagefault_disable() as part of the preemption counter seems to be the only
> > > doable thing right now. I am not sure if a c
On Wed, Nov 26, 2014 at 05:50:27PM -0600, Peter Bergner wrote:
> On Thu, 2014-11-27 at 09:38 +1100, Michael Ellerman wrote:
> > On Thu, 2014-11-27 at 08:11 +1100, Anton Blanchard wrote:
> > > I used some 64-bit instructions when adding the 32-bit getcpu VDSO
> > > function. Fix it.
> >
> > Ouch. T
> From: David Hildenbrand
> ...
> > It might not be optimal, but keeping a separate counter for
> > pagefault_disable() as part of the preemption counter seems to be the only
> > doable thing right now. I am not sure if a completely separated counter is
> > even possible, increasing t
From: David Hildenbrand
...
> It might not be optimal, but keeping a separate counter for
> pagefault_disable() as part of the preemption counter seems to be the only
> doable thing right now. I am not sure if a completely separated counter is
> even possible, increasing the size of thr
> OTOH, there is no reason why we need to disable preemption over that
> page_fault_disabled() region. There are code paths which really do
> not require preemption to be disabled for that.
>
> We have that separated in preempt-rt for obvious reasons and IIRC
> Peter Zijlstra tried to disentangle it in
On Thu, 27 Nov 2014, Heiko Carstens wrote:
> On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote:
> > > Code like
> > > spin_lock(&lock);
> > > if (copy_to_user(...))
> > > rc = ...
> > > spin_unlock(&lock);
> > > really *should* generate warnings like it did before.
>
Scott,
On 26 November 2014 at 23:21, Scott Wood wrote:
> On Wed, 2014-11-26 at 15:17 +0100, Alessio Igor Bogani wrote:
>> + board_soc: soc: soc@ffe0 {
>
> There's no need for two labels on the same node.
I'll remove the board_soc label.
[...]
>> + eeprom-vpd@54 {
>> +
From: Madhavan Srinivasan
> This patchset creates the infrastructure to handle the CR-based
> local_* atomic operations. Local atomic operations are fast
> and highly reentrant per-CPU counters, used for percpu
> variable updates. Local atomic operations only guarantee
> variable modification atomicity wrt the CPU which owns the data.
This patch rewrites the current local_* functions into CR5-based ones.
The base flow for each function is:
{
set cr5(eq)
load
..
store
clear cr5(eq)
}
The above set of instructions is followed by a fixup section which points
to the entry of the function in case of an interrupt.
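As inline assembly, one such CR5-based operation could look like the sketch below; this is illustrative only, and omits the fixup-table entry that lets the interrupt path restart an interrupted sequence:

	/*
	 * Hedged sketch of the flow above, not the actual patch: the
	 * cr5 eq bit is set for the duration of the load/store so the
	 * interrupt-return fixup code can detect an interrupted
	 * sequence and branch back to the start.
	 */
	static inline void cr5_local_inc(long *v)
	{
		long t;

		__asm__ __volatile__(
	"	crset	4*cr5+eq\n"	/* enter critical sequence */
	"	ldx	%0,0,%1\n"
	"	addi	%0,%0,1\n"
	"	stdx	%0,0,%1\n"
	"	crclr	4*cr5+eq"	/* sequence complete */
		: "=&r" (t)
		: "r" (v)
		: "memory", "cr5");
	}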
This patchset creates the infrastructure to handle the CR-based
local_* atomic operations. Local atomic operations are fast
and highly reentrant per-CPU counters, used for percpu
variable updates. Local atomic operations only guarantee
variable modification atomicity wrt the CPU which owns the
data.
This patch creates the infrastructure to handle the CR-based
local_* atomic operations. Local atomic operations are fast
and highly reentrant per-CPU counters, used for percpu
variable updates. Local atomic operations only guarantee
variable modification atomicity wrt the CPU which owns the
data.
> On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote:
> > > Code like
> > > spin_lock(&lock);
> > > if (copy_to_user(...))
> > > rc = ...
> > > spin_unlock(&lock);
> > > really *should* generate warnings like it did before.
> > >
> > > And *only* code like
> > > sp
On Thu, Nov 27, 2014 at 09:03:01AM +0100, David Hildenbrand wrote:
> > Code like
> > spin_lock(&lock);
> > if (copy_to_user(...))
> > rc = ...
> > spin_unlock(&lock);
> > really *should* generate warnings like it did before.
> >
> > And *only* code like
> > spin_lock(&l
On Thu, 27 Nov 2014 10:39:23 +1100
Benjamin Herrenschmidt wrote:
> On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> > The first argument to vphn_unpack_associativity() is a const long *, but the
> > parsing code actually expects __be64 values. This is inconsistent. We should
> > either pass
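For illustration, the kind of prototype and explicit conversion being argued for; the function and variable names here are assumptions:

	/*
	 * Hedged sketch: take __be64 so the endianness contract is
	 * explicit, and convert with be64_to_cpu() before parsing.
	 */
	#include <linux/types.h>
	#include <asm/byteorder.h>

	static void vphn_parse(const __be64 *packed)
	{
		u64 first = be64_to_cpu(packed[0]);	/* host-endian now */
		/* ... walk the remaining packed words ... */
	}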
On 11/26/2014 01:55 PM, Michael Ellerman wrote:
> On Tue, 2014-25-11 at 10:08:48 UTC, Anshuman Khandual wrote:
>> This patch enables support for hardware instruction breakpoints
>> on POWER8 with the help of a new register, the CIABR (Completed
>> Instruction Address Breakpoint Register). With this patc
> Code like
> spin_lock(&lock);
> if (copy_to_user(...))
> rc = ...
> spin_unlock(&lock);
> really *should* generate warnings like it did before.
>
> And *only* code like
> spin_lock(&lock);
Is only code like this valid, or is it also valid with the spin_lock() dropped?
(
43 matches