On Mon, Oct 14, 2013 at 02:23:55AM -0700, Paul E. McKenney wrote:
> On Mon, Oct 14, 2013 at 11:05:08AM +0200, Peter Zijlstra wrote:
> > On Sat, Oct 12, 2013 at 07:06:56PM +0200, Oleg Nesterov wrote:
> > > it even disables irqs, so this should always imply rcu_read_lock() with
> > > any implementation,
On Mon, Oct 14, 2013 at 11:05:08AM +0200, Peter Zijlstra wrote:
> On Sat, Oct 12, 2013 at 07:06:56PM +0200, Oleg Nesterov wrote:
> > it even disables irqs, so this should always imply rcu_read_lock() with
> > any implementation,
>
> Not so; I could make an RCU implementation that drives the state machine
> from rcu_read_unlock().
On Sat, Oct 12, 2013 at 07:06:56PM +0200, Oleg Nesterov wrote:
> it even disables irqs, so this should always imply rcu_read_lock() with
> any implementation,
Not so; I could make an RCU implementation that drives the state machine
from rcu_read_unlock(). Such an implementation doesn't need the
i
On 10/11, Peter Zijlstra wrote:
>
> On Fri, Oct 11, 2013 at 08:25:07PM +0200, Oleg Nesterov wrote:
> > On 10/11, Peter Zijlstra wrote:
> > >
> > > As a penance I'll start by removing all get_online_cpus() usage from the
> > > scheduler.
> >
> > I only looked at the change in setaffinity,
> >
> > >
On Fri, Oct 11, 2013 at 08:25:07PM +0200, Oleg Nesterov wrote:
> On 10/11, Peter Zijlstra wrote:
> >
> > As a penance I'll start by removing all get_online_cpus() usage from the
> > scheduler.
>
> I only looked at the change in setaffinity,
>
> > @@ -3706,7 +3707,6 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
On 10/11, Peter Zijlstra wrote:
>
> As a penance I'll start by removing all get_online_cpus() usage from the
> scheduler.
I only looked at the change in setaffinity,
> @@ -3706,7 +3707,6 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
> struct task_struct *p;
>
> The problem is that the scheduler doesn't see that the current task has
> preemption disabled. It only looks at the priorities of the current
> task, and if it can preempt it, it will. It sets the NEED_RESCHED to the
> current task and waits for the preemption to schedule it out.
Ah, got it. It
On 11 Oct 2013 08:14:57 -0400
"George Spelvin" wrote:
> > There's places in the kernel that does for_each_cpu() that I'm sure you
> > don't want to disable preemption for. Especially when you start having
> > 4096 CPU machines!
>
> Er... why not?
>
> Seriously. If I have 4096 processors, and preemption is disabled on
> *one* of them for a long time, can't an urgent
On Thu, Oct 10, 2013 at 11:49:15AM -0700, Linus Torvalds wrote:
> On Thu, Oct 10, 2013 at 11:34 AM, Peter Zijlstra wrote:
> >
> > But my point is that even though there aren't many of these today; with
> > the growing number of cpus in 'commodity' hardware you want to move away
> > from using preempt_disable() as hotplug lock.
> There's places in the kernel that does for_each_cpu() that I'm sure you
> don't want to disable preemption for. Especially when you start having
> 4096 CPU machines!
Er... why not?
Seriously. If I have 4096 processors, and preemption is disabled on
*one* of them for a long time, can't an urgent
* Linus Torvalds wrote:
> On Thu, Oct 10, 2013 at 12:04 PM, Steven Rostedt wrote:
> >
> > I'm wondering if we can have a for_each_cpu() that only disables
> > preemption in the loop.
>
> I think we'd generally want to have it be something the loop asks for.
>
> If the loop is just some kind of "gather statistics" thing, I don't think
On Thu, 10 Oct 2013 12:16:16 -0700
Linus Torvalds wrote:
> On Thu, Oct 10, 2013 at 12:04 PM, Steven Rostedt wrote:
> >
> > I'm wondering if we can have a for_each_cpu() that only disables
> > preemption in the loop.
>
> I think we'd generally want to have it be something the loop asks for.
Yea
On Thu, Oct 10, 2013 at 12:16:16PM -0700, Linus Torvalds wrote:
> On Thu, Oct 10, 2013 at 12:04 PM, Steven Rostedt wrote:
> >
> > I'm wondering if we can have a for_each_cpu() that only disables
> > preemption in the loop.
>
> I think we'd generally want to have it be something the loop asks for.
On Thu, Oct 10, 2013 at 12:04 PM, Steven Rostedt wrote:
>
> I'm wondering if we can have a for_each_cpu() that only disables
> preemption in the loop.
I think we'd generally want to have it be something the loop asks for.
If the loop is just some kind of "gather statistics" thing, I don't
think
On Thu, Oct 10, 2013 at 08:50:32PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 10, 2013 at 02:43:27PM -0400, Steven Rostedt wrote:
> > On Thu, 10 Oct 2013 11:10:35 -0700
> > Linus Torvalds wrote:
> > .. now we can free all the percpu data and kill the CPU ..
> > >
> > > without any locking anywhere - not stop-machine, not anything.
On 10/10/2013 10:24 PM, Oleg Nesterov wrote:
> On 10/10, Andrew Morton wrote:
>>
>> On Thu, 10 Oct 2013 17:26:12 +0200 Oleg Nesterov wrote:
>>
>>> On 10/10, Ingo Molnar wrote:
So ... why not make it _really_ cheap, i.e. the read lock costing nothing,
and tie CPU hotplug to freezing all tasks in the system?
On Thu, 10 Oct 2013 11:49:15 -0700
Linus Torvalds wrote:
> Oh, and I'm sure there are several users that currently depend on
> being able to sleep over get_online_cpu's. But I'm pretty sure it's
> "several", not "hundreds", and I think we could fix them up.
I'm wondering if we can have a for_each_cpu() that only disables
preemption in the loop.
On Thu, Oct 10, 2013 at 11:43 AM, Steven Rostedt wrote:
>
> There's places in the kernel that does for_each_cpu() that I'm sure you
> don't want to disable preemption for. Especially when you start having
> 4096 CPU machines!
We could _easily_ add preemption points in for_each_cpu() for the
MAX_S
On 10/10/2013 08:56 PM, Oleg Nesterov wrote:
> On 10/10, Ingo Molnar wrote:
>>
>> * Peter Zijlstra wrote:
>>
>>> But the thing is; our sense of NR_CPUS has shifted, where it used to be
>>> ok to do something like:
>>>
>>> for_each_cpu()
>>>
>>> With preemption disabled; it gets to be less and less sane to do so,
On Thu, Oct 10, 2013 at 02:43:27PM -0400, Steven Rostedt wrote:
> On Thu, 10 Oct 2013 11:10:35 -0700
> Linus Torvalds wrote:
> .. now we can free all the percpu data and kill the CPU ..
> >
> > without any locking anywhere - not stop-machine, not anything. If
> > somebody is doing a "for_each_cpu()" (under just a regular
> > rcu_read_lock())
On Thu, Oct 10, 2013 at 11:34 AM, Peter Zijlstra wrote:
>
> But my point is that even though there aren't many of these today; with
> the growing number of cpus in 'commodity' hardware you want to move away
> from using preempt_disable() as hotplug lock.
Umm.
Wasn't this pretty much the argument
On Thu, Oct 10, 2013 at 11:10:35AM -0700, Linus Torvalds wrote:
> You can't do that right now - since you have to get the cpu list. So
> it may not be with "preemption enabled", but it should always be under
> the locking provided by get_online_cpus().. That one allows sleeping,
> though.
>
> I pe
On Thu, 10 Oct 2013 11:10:35 -0700
Linus Torvalds wrote:
.. now we can free all the percpu data and kill the CPU ..
>
> without any locking anywhere - not stop-machine, not anything. If
> somebody is doing a "for_each_cpu()" (under just a regular
> rcu_read_lock()) and they see the bit set w
On Thu, Oct 10, 2013 at 10:13:53AM -0700, Paul E. McKenney wrote:
> That does add quite a bit of latency to the hotplug operations, which
> IIRC slows down things like boot, suspend, and resume.
Guess what suspend does before it unplugs all these cpus?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
On Thu, Oct 10, 2013 at 10:48:56AM -0700, Andrew Morton wrote:
> > > Very much agreed; now stop_machine() wouldn't actually work for hotplug
> > > because it will instantly preempt everybody, including someone who might
> > > be in the middle of using per-cpu state of the cpu we're about to
> > > remove.
On Thu, Oct 10, 2013 at 10:48 AM, Andrew Morton
wrote:
>
> Yes, I'd have thought that the cases where a CPU is fiddling with
> another CPU's percpu data with preemption enabled would be rather rare.
>
> I can't actually think of any off the top. Are there examples we can
> look at?
You can't do that right now - since you have to get the cpu list.
On Thu, 10 Oct 2013 13:13:05 -0400 Steven Rostedt wrote:
> >
> > > > Why prevent all CPUs from running when we want to remove
> > > > one?
> > >
> > > So get_online_cpus() goes away. Nothing is more scalable than nothing!
> >
> > Very much agreed; now stop_machine() wouldn't actually work for
On 10/10, Peter Zijlstra wrote:
>
> The freeze suggestion from Ingo would actually work because we freeze
> tasks at known good points
Not really known/good, we have more and more freezable_schedule's.
But probably this is fine, nobody should do this under get_online_cpus().
> (userspace and kthr
On Thu, Oct 10, 2013 at 06:52:29PM +0200, Ingo Molnar wrote:
>
> * Andrew Morton wrote:
>
> > On Thu, 10 Oct 2013 17:26:12 +0200 Oleg Nesterov wrote:
> >
> > > On 10/10, Ingo Molnar wrote:
> > > >
> > > > * Peter Zijlstra wrote:
> > > >
> > > > > But the thing is; our sense of NR_CPUS has shifted, where it used to be
* Paul E. McKenney wrote:
> On Thu, Oct 10, 2013 at 06:50:46PM +0200, Ingo Molnar wrote:
> >
> > * Peter Zijlstra wrote:
> >
> > > On Thu, Oct 10, 2013 at 04:57:38PM +0200, Ingo Molnar wrote:
> > >
> > > > So ... why not make it _really_ cheap, i.e. the read lock costing
> > > > nothing, and tie CPU hotplug to freezing all tasks in the system?
On Thu, Oct 10, 2013 at 06:50:46PM +0200, Ingo Molnar wrote:
>
> * Peter Zijlstra wrote:
>
> > On Thu, Oct 10, 2013 at 04:57:38PM +0200, Ingo Molnar wrote:
> >
> > > So ... why not make it _really_ cheap, i.e. the read lock costing
> > > nothing, and tie CPU hotplug to freezing all tasks in the system?
On Thu, 10 Oct 2013 18:53:37 +0200
Peter Zijlstra wrote:
> On Thu, Oct 10, 2013 at 09:43:55AM -0700, Andrew Morton wrote:
> > > But we would like to remove stomp machine from
> > > CPU hotplug.
> >
> > We do? That's news. It wasn't mentioned in the changelog and should
> > have been. Why?
>
On 10/10, Andrew Morton wrote:
>
> On Thu, 10 Oct 2013 17:26:12 +0200 Oleg Nesterov wrote:
>
> > On 10/10, Ingo Molnar wrote:
> > >
> > > So ... why not make it _really_ cheap, i.e. the read lock costing nothing,
> > > and tie CPU hotplug to freezing all tasks in the system?
> > >
> > > Actual CPU
On Thu, Oct 10, 2013 at 09:43:55AM -0700, Andrew Morton wrote:
> > But we would like to remove stomp machine from
> > CPU hotplug.
>
> We do? That's news. It wasn't mentioned in the changelog and should
> have been. Why?
It would be an unrelated change to this and unrelated to the reasons as
* Andrew Morton wrote:
> On Thu, 10 Oct 2013 17:26:12 +0200 Oleg Nesterov wrote:
>
> > On 10/10, Ingo Molnar wrote:
> > >
> > > * Peter Zijlstra wrote:
> > >
> > > > But the thing is; our sense of NR_CPUS has shifted, where it used to be
> > > > ok to do something like:
> > > >
> > > > for_each_cpu()
* Peter Zijlstra wrote:
> On Thu, Oct 10, 2013 at 04:57:38PM +0200, Ingo Molnar wrote:
>
> > So ... why not make it _really_ cheap, i.e. the read lock costing
> > nothing, and tie CPU hotplug to freezing all tasks in the system?
>
> Such that we freeze regular tasks in userspace and kernel tasks in their
> special freezer callback so
On Thu, 10 Oct 2013 12:36:31 -0400 Steven Rostedt wrote:
> On Thu, 10 Oct 2013 09:00:44 -0700
> Andrew Morton wrote:
>
> > It's been ages since I looked at this stuff :( Although it isn't used
> > much, memory hotplug manages to use stop_machine() on the add/remove
> > (ie, "writer") side and nothing at all on the "reader" side.
On Thu, 10 Oct 2013 09:00:44 -0700
Andrew Morton wrote:
> It's been ages since I looked at this stuff :( Although it isn't used
> much, memory hotplug manages to use stop_machine() on the add/remove
> (ie, "writer") side and nothing at all on the "reader" side. Is there
> anything which fundamen
On Thu, 10 Oct 2013 17:26:12 +0200 Oleg Nesterov wrote:
> On 10/10, Ingo Molnar wrote:
> >
> > * Peter Zijlstra wrote:
> >
> > > But the thing is; our sense of NR_CPUS has shifted, where it used to be
> > > ok to do something like:
> > >
> > > for_each_cpu()
> > >
> > > With preemption disabled; it gets to be less and less sane to do so,
On 10/10, Peter Zijlstra wrote:
>
> That said, Oleg wants to use the same scheme for percpu_rwsem,
Yes, and later then (I hope) get_online_cpus() will be just
current->cpuhp_ref++ || percpu_down_read().
(just in case, we only need ->cpuhp_ref to ensure that the readers
can't starve the writers a
On 10/10, Ingo Molnar wrote:
>
> * Peter Zijlstra wrote:
>
> > But the thing is; our sense of NR_CPUS has shifted, where it used to be
> > ok to do something like:
> >
> > for_each_cpu()
> >
> > With preemption disabled; it gets to be less and less sane to do so,
> > simply because 'common' hardware has 256+ CPUs these days. If we cannot
On Thu, Oct 10, 2013 at 04:57:38PM +0200, Ingo Molnar wrote:
> So ... why not make it _really_ cheap, i.e. the read lock costing nothing,
> and tie CPU hotplug to freezing all tasks in the system?
Such that we freeze regular tasks in userspace and kernel tasks in their
special freezer callback so
* Peter Zijlstra wrote:
> But the thing is; our sense of NR_CPUS has shifted, where it used to be
> ok to do something like:
>
> for_each_cpu()
>
> With preemption disabled; it gets to be less and less sane to do so,
> simply because 'common' hardware has 256+ CPUs these days. If we cannot
On Wed, Oct 09, 2013 at 10:50:06PM -0700, Andrew Morton wrote:
> On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra
> wrote:
>
> > The current cpu hotplug lock is a single global lock; therefore excluding
> > hotplug is a very expensive proposition even though it is rare occurrence
> > under normal operation.
> >
* Andrew Morton wrote:
> On Thu, 10 Oct 2013 09:27:57 +0200 Ingo Molnar wrote:
>
> > > > Should be fairly straightforward to test: the sys_sched_getaffinity()
> > > > and sys_sched_setaffinity() syscalls both make use of
> > > > get_online_cpus()/put_online_cpus(), so a testcase frobbing affinities
On Thu, 10 Oct 2013 09:27:57 +0200 Ingo Molnar wrote:
> > > Should be fairly straightforward to test: the sys_sched_getaffinity()
> > > and sys_sched_setaffinity() syscalls both make use of
> > > get_online_cpus()/put_online_cpus(), so a testcase frobbing affinities
> > > on N CPUs in parallel
* Andrew Morton wrote:
> On Thu, 10 Oct 2013 08:27:41 +0200 Ingo Molnar wrote:
>
> > * Andrew Morton wrote:
> >
> > > On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra
> > > wrote:
> > >
> > > > The current cpu hotplug lock is a single global lock; therefore
> > > > excluding hotplug is a very expensive proposition even though it is
> > > > rare occurrence under normal operation.
On Thu, 10 Oct 2013 08:27:41 +0200 Ingo Molnar wrote:
> * Andrew Morton wrote:
>
> > On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra
> > wrote:
> >
> > > The current cpu hotplug lock is a single global lock; therefore
> > > excluding hotplug is a very expensive proposition even though it is
> > > rare occurrence under normal operation.
* Andrew Morton wrote:
> On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra
> wrote:
>
> > The current cpu hotplug lock is a single global lock; therefore
> > excluding hotplug is a very expensive proposition even though it is
> > rare occurrence under normal operation.
> >
> > There is a desire for a more light weight implementation of
> > {get,put}_online_cpus()
On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra wrote:
> The current cpu hotplug lock is a single global lock; therefore excluding
> hotplug is a very expensive proposition even though it is rare occurrence
> under normal operation.
>
> There is a desire for a more light weight implementation of
> {get,put}_online_cpus()
On Tue, Oct 08, 2013 at 05:27:30PM +0200, Oleg Nesterov wrote:
> On 10/08, Peter Zijlstra wrote:
> >
> > - Added Reviewed-by for Oleg to patches 1,3 -- please holler if you
> > disagree!
>
> Thanks ;)
>
> And of course, feel free to add my sob to 4/6, although this doesn't
> matter.
Thanks, done.
On 10/08, Peter Zijlstra wrote:
>
> - Added Reviewed-by for Oleg to patches 1,3 -- please holler if you disagree!
Thanks ;)
And of course, feel free to add my sob to 4/6, although this doesn't
matter.
Oleg.
The current cpu hotplug lock is a single global lock; therefore excluding
hotplug is a very expensive proposition even though it is rare occurrence under
normal operation.
There is a desire for a more light weight implementation of
{get,put}_online_cpus() from both the NUMA scheduling as well as t