On Tue, 1 Aug 2017 16:32:03 -0700
"Paul E. McKenney" wrote:
> On Tue, Aug 01, 2017 at 04:16:54PM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 01, 2017 at 06:23:09AM -0700, Paul E. McKenney wrote:
> > > On Tue, Aug 01, 2017 at 12:22:03PM +0200, Peter Zijlstra wrote:
> > >
> > > [ . . . ]
> > >
On Tue, Aug 01, 2017 at 04:16:54PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 01, 2017 at 06:23:09AM -0700, Paul E. McKenney wrote:
> > On Tue, Aug 01, 2017 at 12:22:03PM +0200, Peter Zijlstra wrote:
> >
> > [ . . . ]
> >
> > > As to scheduler IPIs, those are limited to the CPUs the user is limited
> > > to and are rate limited by the wakeup-latency of the tasks.
On Tue, Aug 01, 2017 at 06:23:09AM -0700, Paul E. McKenney wrote:
> On Tue, Aug 01, 2017 at 12:22:03PM +0200, Peter Zijlstra wrote:
>
> [ . . . ]
>
> > As to scheduler IPIs, those are limited to the CPUs the user is limited
> > to and are rate limited by the wakeup-latency of the tasks. After all,
> > all the time a task is runnable but not running, wakeups are no-ops.
On Tue, Aug 01, 2017 at 12:22:03PM +0200, Peter Zijlstra wrote:
[ . . . ]
> As to scheduler IPIs, those are limited to the CPUs the user is limited
> to and are rate limited by the wakeup-latency of the tasks. After all,
> all the time a task is runnable but not running, wakeups are no-ops.
Can'
On Tue, 1 Aug 2017 13:00:23 +0200
Peter Zijlstra wrote:
> On Tue, Aug 01, 2017 at 08:39:28PM +1000, Nicholas Piggin wrote:
> > Right, I just don't see what real problem this opens up that you don't
> > already have when you are not hard partitioned, therefore it doesn't
> > make sense to add a s
On Tue, Aug 01, 2017 at 08:39:28PM +1000, Nicholas Piggin wrote:
> On Tue, 1 Aug 2017 12:22:03 +0200
> Peter Zijlstra wrote:
> > But you're bouncing the rq->lock around the system at fairly high rates.
> > For big enough systems this is enough to severely hurt things.
>
> If you already have sch
On Tue, Aug 01, 2017 at 01:32:43PM +0300, Avi Kivity wrote:
> I hate to propose a way to make this more complicated, but this could be
> fixed by a process first declaring its intent to use expedited process-wide
> membarrier; if it does, then every context switch updates a process-wide
> cpumask i
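The preview cuts off, but the mechanism being proposed can be sketched. In the sketch below the mm field ->membarrier_expedited and the helper mm_runners() are hypothetical names invented for illustration, not anything from a posted patch: the process opts in once, after which the context-switch path keeps a per-mm mask of the CPUs currently running that mm, so the expedited barrier only has to IPI those.

#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <linux/mm_types.h>

/*
 * Hypothetical sketch of the opt-in scheme: mm_runners(mm) would return
 * a cpumask stored in the mm, and ->membarrier_expedited would be set
 * by the "declare intent" registration call.
 */
static inline void membarrier_switch_mm_sketch(struct mm_struct *prev,
					       struct mm_struct *next,
					       int cpu)
{
	if (prev == next)
		return;
	if (prev && atomic_read(&prev->membarrier_expedited))
		cpumask_clear_cpu(cpu, mm_runners(prev));
	if (next && atomic_read(&next->membarrier_expedited))
		cpumask_set_cpu(cpu, mm_runners(next));
}

The expedited command would then IPI only mm_runners(current->mm), at the cost of two cpumask updates on every context switch of an opted-in process, which is presumably why it needs the explicit opt-in.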
On Tue, 1 Aug 2017 12:22:03 +0200
Peter Zijlstra wrote:
> On Tue, Aug 01, 2017 at 07:57:17PM +1000, Nicholas Piggin wrote:
> > On Tue, 1 Aug 2017 10:12:30 +0200
> > Peter Zijlstra wrote:
> >
> > > On Tue, Aug 01, 2017 at 12:00:47PM +1000, Nicholas Piggin wrote:
> > > > Thanks for this, I'll take a look.
On 08/01/2017 01:22 PM, Peter Zijlstra wrote:
If mm cpumask is used, I think it's okay. You can cause a quite similar
kind of iteration over CPUs and lots of IPIs, TLB flushes, etc. using
munmap/mprotect/etc., or context switch IPIs, etc. Are we reaching the
stage where we're controlling those ki
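For reference, the kind of pre-existing IPI traffic being alluded to is reachable from unprivileged userspace today. A minimal illustration, assuming the usual Linux behaviour that a permission-reducing mprotect() must IPI the CPUs tracked for the mm to shoot down stale TLB entries:

#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>

#define NTHREADS 8

/* Keep a TLB entry for the page live on whatever CPU runs us. */
static void *reader(void *p)
{
	for (;;)
		(void)*(volatile char *)p;
	return NULL;
}

int main(void)
{
	char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	pthread_t t[NTHREADS];

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, reader, buf);
	for (;;) {
		/* Dropping write permission forces a TLB shootdown IPI
		 * to every CPU that may cache the old entry. */
		mprotect(buf, 4096, PROT_READ);
		mprotect(buf, 4096, PROT_READ | PROT_WRITE);
	}
}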
On Tue, Aug 01, 2017 at 07:57:17PM +1000, Nicholas Piggin wrote:
> On Tue, 1 Aug 2017 10:12:30 +0200
> Peter Zijlstra wrote:
>
> > On Tue, Aug 01, 2017 at 12:00:47PM +1000, Nicholas Piggin wrote:
> > > Thanks for this, I'll take a look. This should be a good start as a stress
> > > test, but I'd also be interested in some application.
On Tue, 1 Aug 2017 10:12:30 +0200
Peter Zijlstra wrote:
> On Tue, Aug 01, 2017 at 12:00:47PM +1000, Nicholas Piggin wrote:
> > Thanks for this, I'll take a look. This should be a good start as a stress
> > test, but I'd also be interested in some application. The reason being that
> > for example using runqueue locks may give reasonable maximum throughput
On Tue, Aug 01, 2017 at 12:00:47PM +1000, Nicholas Piggin wrote:
> Thanks for this, I'll take a look. This should be a good start as a stress
> test, but I'd also be interested in some application. The reason being that
> for example using runqueue locks may give reasonable maximum throughput
> numbers
On Tue, 1 Aug 2017 01:33:09 +0000 (UTC)
Mathieu Desnoyers wrote:
> - On Jul 31, 2017, at 8:35 PM, Nicholas Piggin npig...@gmail.com wrote:
>
> > On Mon, 31 Jul 2017 23:20:59 +1000
> > Michael Ellerman wrote:
> >
> >> Peter Zijlstra writes:
> >>
> >> > On Fri, Jul 28, 2017 at 10:55:32AM +0200, Peter Zijlstra wrote:
- On Jul 31, 2017, at 8:35 PM, Nicholas Piggin npig...@gmail.com wrote:
> On Mon, 31 Jul 2017 23:20:59 +1000
> Michael Ellerman wrote:
>
>> Peter Zijlstra writes:
>>
>> > On Fri, Jul 28, 2017 at 10:55:32AM +0200, Peter Zijlstra wrote:
>> >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
On Mon, 31 Jul 2017 23:20:59 +1000
Michael Ellerman wrote:
> Peter Zijlstra writes:
>
> > On Fri, Jul 28, 2017 at 10:55:32AM +0200, Peter Zijlstra wrote:
> >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> >> index e9785f7aed75..33f34a201255 100644
> >> --- a/kernel/sched/core.c
> >
- On Jul 28, 2017, at 9:58 PM, Nicholas Piggin npig...@gmail.com wrote:
> On Fri, 28 Jul 2017 17:06:53 +0000 (UTC)
> Mathieu Desnoyers wrote:
>
>> - On Jul 28, 2017, at 12:46 PM, Peter Zijlstra pet...@infradead.org
>> wrote:
>>
>> > On Fri, Jul 28, 2017 at 03:38:15PM +0000, Mathieu Desnoyers wrote:
On Mon, Jul 31, 2017 at 11:20:59PM +1000, Michael Ellerman wrote:
> Peter Zijlstra writes:
> > In fact, I'm fairly sure its only PPC.
> >
> > Because only ARM64 and PPC actually implement ACQUIRE/RELEASE with
> > anything other than smp_mb() (for now, RISC-V is in this same boat and
> > MIPS could
Peter Zijlstra writes:
> On Fri, Jul 28, 2017 at 10:55:32AM +0200, Peter Zijlstra wrote:
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index e9785f7aed75..33f34a201255 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2641,8 +2641,18 @@ static struct rq *finish_task_switch(struct task_struct *prev)
On Sat, Jul 29, 2017 at 07:48:56PM +1000, Nicholas Piggin wrote:
> On Sat, 29 Jul 2017 19:45:43 +1000
> Nicholas Piggin wrote:
>
> > hmm, we might be able to restrict iteration
> > to mm_cpumask(current->mm), no?
>
> Oh that's been discussed too. I'll read back over it too.
Right, the main problem
On Sat, 29 Jul 2017 19:45:43 +1000
Nicholas Piggin wrote:
> hmm, we might be able to restrict iteration
> to mm_cpumask(current->mm), no?
Oh that's been discussed too. I'll read back over it too.
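A sketch of that restriction, with the caveat that is the crux of the back-and-forth: how faithfully mm_cpumask() is maintained is architecture-specific (on several architectures bits are set but never cleared), so the mask can over-approximate the CPUs actually running the mm:

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

static void ipi_mb(void *info)
{
	smp_mb();	/* the point of the IPI: a full barrier on that CPU */
}

/*
 * Sketch only: IPI just the CPUs recorded in mm_cpumask() instead of
 * walking every online CPU's runqueue.  Trades precision (spurious
 * IPIs to CPUs that merely ran the mm at some point) for a much
 * cheaper walk.
 */
static void membarrier_expedited_mm_cpumask_sketch(struct mm_struct *mm)
{
	preempt_disable();
	smp_call_function_many(mm_cpumask(mm), ipi_mb, NULL, 1);
	preempt_enable();
	smp_mb();	/* pair with the barriers executed in the IPIs */
}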
On Sat, 29 Jul 2017 11:23:33 +0200
Peter Zijlstra wrote:
> On Sat, Jul 29, 2017 at 11:58:40AM +1000, Nicholas Piggin wrote:
> > I haven't had time to read the thread and understand exactly why you need
> > this extra barrier, I'll do it next week. Thanks for cc'ing us on it.
>
> Bottom of here:
> https://lkml.kernel.org/r/20170727135610.jwjfvyuacqzj5...@hirez.programming.kicks-ass.net
On Sat, Jul 29, 2017 at 11:58:40AM +1000, Nicholas Piggin wrote:
> I haven't had time to read the thread and understand exactly why you need
> this extra barrier, I'll do it next week. Thanks for cc'ing us on it.
Bottom of here:
https://lkml.kernel.org/r/20170727135610.jwjfvyuacqzj5...@hirez.programming.kicks-ass.net
On Fri, 28 Jul 2017 17:06:53 +0000 (UTC)
Mathieu Desnoyers wrote:
> - On Jul 28, 2017, at 12:46 PM, Peter Zijlstra pet...@infradead.org wrote:
>
> > On Fri, Jul 28, 2017 at 03:38:15PM +, Mathieu Desnoyers wrote:
> >> > Which only leaves PPC stranded.. but the 'good' news is that mpe says
- On Jul 28, 2017, at 12:46 PM, Peter Zijlstra pet...@infradead.org wrote:
> On Fri, Jul 28, 2017 at 03:38:15PM +, Mathieu Desnoyers wrote:
>> > Which only leaves PPC stranded.. but the 'good' news is that mpe says
>> > they'll probably need a barrier in switch_mm() in any case.
>>
>> As I pointed out in my other email, I plan to do this:
On Fri, Jul 28, 2017 at 03:38:15PM +0000, Mathieu Desnoyers wrote:
> > Which only leaves PPC stranded.. but the 'good' news is that mpe says
> > they'll probably need a barrier in switch_mm() in any case.
>
> As I pointed out in my other email, I plan to do this:
>
> --- a/kernel/sched/core.c
> +
- On Jul 28, 2017, at 7:57 AM, Peter Zijlstra pet...@infradead.org wrote:
> On Fri, Jul 28, 2017 at 10:55:32AM +0200, Peter Zijlstra wrote:
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index e9785f7aed75..33f34a201255 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
- On Jul 28, 2017, at 4:55 AM, Peter Zijlstra pet...@infradead.org wrote:
> On Thu, Jul 27, 2017 at 05:13:14PM -0400, Mathieu Desnoyers wrote:
>> +static void membarrier_expedited_mb_after_set_current(struct mm_struct *mm,
>> +		struct mm_struct *oldmm)
>
> That is a bit of a mouth-full...
On Fri, Jul 28, 2017 at 10:55:32AM +0200, Peter Zijlstra wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e9785f7aed75..33f34a201255 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2641,8 +2641,18 @@ static struct rq *finish_task_switch(struct task_struct *prev)
On Thu, Jul 27, 2017 at 05:13:14PM -0400, Mathieu Desnoyers wrote:
> +static void membarrier_expedited_mb_after_set_current(struct mm_struct *mm,
> + struct mm_struct *oldmm)
That is a bit of a mouth-full...
> +{
> + if (!IS_ENABLED(CONFIG_MEMBARRIER))
> + return;
> +
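The preview truncates the helper's body. Going by the changelog, which only needs a full barrier when switching between processes with different mm, the remainder presumably reduces to something like this sketch (a hedged reconstruction, not the actual patch text):

static void membarrier_expedited_mb_after_set_current(struct mm_struct *mm,
						      struct mm_struct *oldmm)
{
	if (!IS_ENABLED(CONFIG_MEMBARRIER))
		return;
	if (likely(mm == oldmm))
		return;		/* same address space, no barrier needed */
	smp_mb();
}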
On Thu, Jul 27, 2017 at 10:41:25PM +, Mathieu Desnoyers wrote:
> - On Jul 27, 2017, at 6:13 PM, Paul E. McKenney
> paul...@linux.vnet.ibm.com wrote:
>
> > On Thu, Jul 27, 2017 at 05:13:14PM -0400, Mathieu Desnoyers wrote:
> >> Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs using cpumask built
> >> from all runqueues for which current thread's mm is the same as the
> >> thread calling sys_membarrier.
- On Jul 27, 2017, at 6:13 PM, Paul E. McKenney paul...@linux.vnet.ibm.com
wrote:
> On Thu, Jul 27, 2017 at 05:13:14PM -0400, Mathieu Desnoyers wrote:
>> Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs using cpumask built
>> from all runqueues for which current thread's mm is the same as the
>> thread calling sys_membarrier.
On Thu, Jul 27, 2017 at 05:13:14PM -0400, Mathieu Desnoyers wrote:
> Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs using cpumask built
> from all runqueues for which current thread's mm is the same as the
> thread calling sys_membarrier.
>
> Scheduler-wise, it requires that we add a memory barrier after context
> switching between processes (which have different mm).
Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs using cpumask built
from all runqueues for which current thread's mm is the same as the
thread calling sys_membarrier.
Scheduler-wise, it requires that we add a memory barrier after context
switching between processes (which have different mm).
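To make the mechanism concrete, here is a minimal sketch of the command as the changelog describes it: walk the runqueues, collect the CPUs whose current task shares the caller's mm, then IPI them so each executes smp_mb(). This illustrates the changelog rather than reproducing the patch; registration, error paths and the scheduler-side barrier are elided.

static void ipi_mb(void *info)
{
	smp_mb();	/* IPIs should already serialize, but be explicit */
}

static int membarrier_private_expedited_sketch(void)
{
	struct mm_struct *mm = current->mm;
	cpumask_var_t tmpmask;
	int cpu;

	if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
		return -ENOMEM;

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		struct task_struct *p;

		if (cpu == raw_smp_processor_id())
			continue;	/* caller issues its own smp_mb() below */
		rcu_read_lock();
		p = READ_ONCE(cpu_rq(cpu)->curr);
		if (p && p->mm == mm)
			__cpumask_set_cpu(cpu, tmpmask);
		rcu_read_unlock();
	}
	preempt_disable();
	smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
	preempt_enable();
	cpus_read_unlock();
	free_cpumask_var(tmpmask);

	smp_mb();	/* order the caller against the barriers in the IPIs */
	return 0;
}

Userspace pairs this with a plain compiler barrier on its fast path, which is the attraction of the whole scheme: the hot side pays nothing and the caller of sys_membarrier() pays for everyone.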