On Wed, 2012-09-05 at 13:31 +0400, Glauber Costa wrote:
>
> You wouldn't have to do more than one hierarchy walk for that. What
> Tejun seems to want, is the ability to not have a particular controller
> at some point in the tree. But if they exist, they are always together.
Right, but the acco
On Wed, 2012-09-05 at 12:35 +0800, Michael Wang wrote:
> > [ 10.968565] reboot: machine restart
> > [ 10.983510] ------------[ cut here ]------------
> > [ 10.984218] WARNING: at
> > /c/kernel-tests/src/stable/arch/x86/kernel/smp.c:123
> > native_smp_send_reschedule+0x46/0x50()
> > [ 10.9
On Mon, 2012-09-03 at 03:04 +0300, Irina Tirdea wrote:
> - BUG_ON(gettimeofday(&tv_start, NULL));
> + ret = gettimeofday(&tv_start, NULL);
> + BUG_ON(ret);
It's valid (although admittedly dubious) to have BUG_ON() with
side effects.
The 'right' fix would be something like:
---
t
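The suggestion above is cut off; as a minimal sketch, one side-effect-preserving shape (assuming the perf-tools userspace context, where BUG_ON() is a local macro rather than the kernel's) could be:

#include <assert.h>
#include <stdlib.h>

/*
 * Sketch only: keep evaluating the condition even when asserts are
 * compiled out (NDEBUG), so BUG_ON(gettimeofday(&tv_start, NULL))
 * still performs the call.
 */
#ifdef NDEBUG
#define BUG_ON(cond)	do { if (cond) abort(); } while (0)
#else
#define BUG_ON(cond)	assert(!(cond))
#endif

Either branch evaluates cond exactly once, so the side effect survives both build modes.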
On Fri, 2012-08-31 at 00:21 -0700, Fengguang Wu wrote:
> [3.267585] Testing tracer function: [4.282931] tsc: Refined TSC
> clocksource calibration: 2833.332 MHz
> PASSED
> [ 13.392541] Testing tracer irqsoff: PASSED
> [ 13.428537] Testing tracer branch: [ 20.093074] [ cut
On Mon, 2012-09-10 at 22:26 +0200, Frederic Weisbecker wrote:
> > > OK, so colour me unconvinced.. why are we doing this?
> > >
> > > Typically when we call schedule nr_running != 1 (we need current to be
> > > running and a possible target to switch to).
> > >
> > > So I'd prefer to simply have
On Wed, 2012-09-12 at 13:01 +0200, Robert Richter wrote:
> + if (notsup)
> + pr_warn("perf: unsupported attribute flags: %016llx\n",
> notsup);
This is a dmesg DoS..
I'm also not sure dmesg is the right way.. could we not somehow change
the attrs to provide better diagnostic
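As a minimal sketch, a rate-limited variant would at least kill the flood (the helper name is assumed, not from Robert's patch):

#include <linux/printk.h>
#include <linux/types.h>

/* Sketch: same message, but immune to being spammed from userspace. */
static void warn_unsupported_attr_flags(u64 notsup)
{
	if (notsup)
		pr_warn_ratelimited("perf: unsupported attribute flags: %016llx\n",
				    (unsigned long long)notsup);
}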
Subject: perf, intel: Expose SMI_COUNT as a fixed counter
From: Peter Zijlstra
Date: Wed Sep 12 13:10:53 CEST 2012
The Intel SMI_COUNT sadly isn't a proper PMU event but a free-running
MSR, expose it by creating another fake fixed PMC and another pseudo
event.
Signed-off-by: Peter Zij
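The patch body is cut off; as a minimal sketch, the underlying read amounts to the following (the MSR define is an assumption here, taken from Intel's documentation):

#include <linux/types.h>
#include <asm/msr.h>

#define MSR_SMI_COUNT	0x00000034	/* free-running SMI counter */

/* Sketch: the "fixed counter" is just an MSR read; it cannot be
 * programmed, started or stopped like a normal PMC. */
static u64 read_smi_count(void)
{
	u64 count;

	rdmsrl(MSR_SMI_COUNT, count);
	return count;
}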
On Wed, 2012-09-12 at 14:06 +0200, Frederic Weisbecker wrote:
>
> 1) This can happen if something calls set_need_resched() while no other
> task is on the runqueue.
People really shouldn't be doing that... I think I know why RCU does
this, but yuck. I also think RCU can avoid doing this, but i
On Wed, 2012-09-12 at 14:41 +0200, Peter Zijlstra wrote:
> We could of course mandate that all remote wakeups to special nohz cpus
> get queued. That would just leave us with RCU and it would simply not
> send resched IPIs to extended quiescent CPUs anyway, right?
>
> So at that p
On Wed, 2012-09-12 at 15:54 +0200, Frederic Weisbecker wrote:
> On Wed, Sep 12, 2012 at 02:52:40PM +0200, Peter Zijlstra wrote:
> > On Wed, 2012-09-12 at 14:41 +0200, Peter Zijlstra wrote:
> >
> > > We could of course mandate that all remote wakeups to special nohz cpu
On Wed, 2012-09-12 at 16:13 +0200, Stephane Eranian wrote:
> +static DEFINE_PER_CPU(struct list_head, rotation_list);
Why do you keep the rotation list? The only use seems to be:
> +void perf_cpu_hrtimer_cancel(int cpu)
> +{
> + struct list_head *head = &__get_cpu_var(rotation_list);
> +
I'm rather sure Thomas would want to know about this..
On Wed, 2012-09-12 at 16:13 +0200, Stephane Eranian wrote:
> hrtimer_init() assumes it is called for the current CPU
> as it accesses per-cpu variables (hrtimer_bases).
>
> However, there can be cases where a hrtimer is initialized
> from a
On Wed, 2012-09-12 at 07:26 -0700, Andi Kleen wrote:
> Peter Zijlstra writes:
>
> > On Wed, 2012-09-12 at 13:01 +0200, Robert Richter wrote:
> >> + if (notsup)
> >> + pr_warn("perf: unsupported attribute flags: %016llx\n",
&g
On Wed, 2012-09-12 at 16:30 +0200, Oleg Nesterov wrote:
>
> Well, I hoped that someone else will nack^Wreview this patch. You know
> that personally I hate this feature ;)
I'll try and look at it soon-ish.
On Wed, 2012-09-12 at 16:33 +0200, Stephane Eranian wrote:
>
> If I do:
> for_each_possible_cpu(cpu) {
>     cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
>     hr = &cpuctx->hrtimer;
>     hrtimer_init(hr);
> }
> I don't understand why I would have to refer to per-cpu data
> (hrtime
On Wed, 2012-09-12 at 16:43 +0200, Stephane Eranian wrote:
> The hrtimer_active is used to prevent activating the timer multiple times
> in a row.
see hrtimer_active(), this should do what you want I think.
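As a minimal sketch of that suggestion (the function name and rearming site are assumptions, not Stephane's patch):

#include <linux/hrtimer.h>

/* Sketch: hrtimer_active() already answers "queued or callback
 * running?", so no hand-rolled flag is needed to avoid re-arming. */
static void rotation_timer_arm(struct hrtimer *hr, ktime_t interval)
{
	if (hrtimer_active(hr))
		return;

	hrtimer_start(hr, interval, HRTIMER_MODE_REL_PINNED);
}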
On Wed, 2012-09-12 at 16:40 +0200, Stephane Eranian wrote:
> Hi,
>
> As I was debugging my hrtimer patch, I ran a few tests
> with hotplug CPU. In others words, I offline a CPU while
> there is an active monitoring session which causes multiplexing.
>
> When the CPU goes down, all is well. But wh
On Wed, 2012-09-12 at 16:46 +0200, Stephane Eranian wrote:
> I am fine with dropping this patch. I just found it odd there was a per-cpu
> data reference embedded deep into the call. I wanted things to be more
> explicit. I know it works without the proposed change.
Ah, the reason it's there is to
On Wed, 2012-09-12 at 16:48 +0200, Stephane Eranian wrote:
> On Wed, Sep 12, 2012 at 4:44 PM, Peter Zijlstra wrote:
> > On Wed, 2012-09-12 at 16:43 +0200, Stephane Eranian wrote:
> >> The hrtimer_active is used to prevent activating the timer multiple times
> >
ncy DS (BTS,PEBS), LBR and OFFCORE features
that make up intel_{get,put}_event_constraints.
Signed-off-by: Peter Zijlstra
---
arch/x86/kernel/cpu/perf_event_intel.c | 48 --
1 file changed, 29 insertions(+), 19 deletions(-)
diff --git a/arch/x86
On Wed, 2012-09-12 at 18:42 +0200, Stephane Eranian wrote:
> We use FREEZE_LBR_ON_PMI to sync LBR data with counter overflows.
> That means, LBR is already frozen by the time we get to the handler. But
> that means we need to re-enable LBR when we leave the handler. I don't
> think EOI is going to
On Wed, 2012-09-12 at 19:36 +0200, Oleg Nesterov wrote:
> On 09/12, Peter Zijlstra wrote:
> >
> > Oleg and Sebastian found that touching MSR_IA32_DEBUGCTLMSR from NMI
> > context is problematic since the only way to change the various
> > unrelated bits in t
On Wed, 2012-09-12 at 19:37 +0200, Peter Zijlstra wrote:
> Ah, so I do think EOI will re-enable LBR,
OK, it does not, but the:
> also the handler is wrapped in
> x86_pmu::{dis,en}able_all() which does end up calling
> intel_pmu_lbr_{dis,en}able_all().
Thing does the re-e
On Wed, 2012-09-12 at 20:00 +0200, Stephane Eranian wrote:
> On Wed, Sep 12, 2012 at 7:45 PM, Peter Zijlstra wrote:
> > On Wed, 2012-09-12 at 19:37 +0200, Peter Zijlstra wrote:
> >> Ah, so I do think EOI will re-enable LBR,
> >
> > OK, it does not, but the:
> &g
On Wed, 2012-09-12 at 20:50 +0200, Stephane Eranian wrote:
> > As for BTS, it looks like we don't throttle the thing at all, so we
> > shouldn't ever get to the asymmetric thing, right?
> No you do, in the same function:
> static void intel_pmu_disable_event(struct perf_event *event)
> {
>
On Thu, 2012-09-13 at 08:49 +0200, Mike Galbraith wrote:
> On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
> > On tickless system, one CPU runs load balance for all idle CPUs.
> > The cpu_load of this CPU is updated before starting the load balance
> > of each other idle CPUs. We should
On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
> On tickless system, one CPU runs load balance for all idle CPUs.
> The cpu_load of this CPU is updated before starting the load balance
> of each other idle CPUs. We should instead update the cpu_load of the
> balance_cpu.
>
> Signed-off
On Tue, 2012-09-11 at 11:33 -0700, Suresh Siddha wrote:
> > nohz_balance_enter_idle is a good name too, but I named it
> > set_nohz_tick_stopped, since there is a clear_nohz_tick_stopped() that
> > just does the opposite action of this function. According to this, is it
> > better to another functi
On Thu, 2012-09-13 at 10:37 +0200, Vincent Guittot wrote:
> > I think you need to present numbers showing benefit. Crawling all over
> > a mostly idle (4096p?) box is a decidedly bad thing to do.
Yeah, but we're already doing that anyway.. we know nohz idle balance
doesn't scale. Venki and Suresh
On Thu, 2012-09-13 at 10:45 +0200, Peter Zijlstra wrote:
> On Thu, 2012-09-13 at 10:37 +0200, Vincent Guittot wrote:
> > > I think you need to present numbers showing benefit. Crawling all over
> > > a mostly idle (4096p?) box is a decidedly bad thing to do.
>
> Yeah, b
On Thu, 2012-09-13 at 11:39 +0200, Maarten Lankhorst wrote:
> It is considered good form to lock the lock you claim to be nested in.
Uhm yeah.. cute. You actually found a site where this triggered?
> Signed-off-by: Maarten Lankhorst
> ---
> diff --git a/kernel/lockdep.c b/kernel/lockdep.c
> inde
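For reference, a minimal sketch of what the annotation promises (types and names invented, not from Maarten's patch): a lock taken with spin_lock_nest_lock() claims the outer lock is held, so it had better actually be held.

#include <linux/mutex.h>
#include <linux/spinlock.h>

struct child { spinlock_t lock; };

static void lock_both(struct mutex *outer, struct child *a, struct child *b)
{
	mutex_lock(outer);			/* hold the lock we claim to nest in */
	spin_lock_nest_lock(&a->lock, outer);
	spin_lock_nest_lock(&b->lock, outer);	/* same-class nesting is now fine */

	/* ... */

	spin_unlock(&b->lock);
	spin_unlock(&a->lock);
	mutex_unlock(outer);
}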
On Wed, 2012-08-22 at 10:40 +0800, Michael Wang wrote:
> From: Michael Wang
>
> Fengguang Wu has reported the bug:
>
> [0.043953] BUG: scheduling while atomic: swapper/0/1/0x1002
> [0.044017] no locks held by swapper/0/1.
> [0.044692] Pid: 1, comm: swapper/0 Not tainted 3.6.0-rc
On Thu, 2012-09-13 at 12:10 +0200, Maarten Lankhorst wrote:
> Hey,
>
> Op 13-09-12 11:59, Peter Zijlstra schreef:
> > On Thu, 2012-09-13 at 11:39 +0200, Maarten Lankhorst wrote:
> >> It is considered good form to lock the lock you claim to be nested in.
> > Uhm yea
On Wed, 2012-09-12 at 18:22 +0200, Peter Zijlstra wrote:
> Oleg and Sebastian found that touching MSR_IA32_DEBUGCTLMSR from NMI
> context is problematic since the only way to change the various
> unrelated bits in there is:
>
> debugctl = get_debugctlmsr()
> /* frob
On Thu, 2012-09-13 at 11:17 +0200, Vincent Guittot wrote:
> On 10 July 2012 15:42, Peter Zijlstra wrote:
> > On Tue, 2012-07-10 at 14:35 +0200, Vincent Guittot wrote:
> >>
> >> May be the last one which enable ARCH_POWER should also go into tip ?
> >>
> >
On Thu, 2012-09-13 at 13:49 +0200, Stephane Eranian wrote:
> Should be, though it is pretty ugly to stash all of this in the
> put/get constraints.
Agreed, I almost added two extra functions for it but when I went to
look at where to call them I ended up next to get/put constraints.
> I will run
On Wed, 2012-09-12 at 17:37 +0200, Stephane Eranian wrote:
> Note however that the rotation_list is still used in perf_event_task_tick()
> to iterate over the ctx which needs unthrottling. We would have to switch
> that loop over to a for-each-pmu() which would necessarily incur more
> iterations as
On Thu, 2012-09-13 at 14:20 +0200, Stephane Eranian wrote:
> On Thu, Sep 13, 2012 at 2:16 PM, Peter Zijlstra wrote:
> > On Wed, 2012-09-12 at 17:37 +0200, Stephane Eranian wrote:
> >
> >> Note however that the rotation_list is still used in perf_event_task_tick()
>
On Thu, 2012-09-13 at 14:27 +0200, Stephane Eranian wrote:
> No because we should not use the patch I posted last week. So rotation_start()
> would still enqueue SW pmus.
Hrmm. I just sent it to Ingo.. let me see if I can still recall that.
On Thu, 2012-09-13 at 13:58 -0700, Tejun Heo wrote:
> The cpu ones handle nesting correctly - parent's accounting includes
> children's, parent's configuration affects children's unless
> explicitly overridden, and children's limits nest inside parent's.
The implementation has some issues w
On Fri, 2012-09-14 at 17:12 +0800, Li Zefan wrote:
> > I think this is a pressing problem, yes, but not the only problem with
> > cgroup lock. Even if we restrict its usage to cgroup core, we still can
> > call cgroup functions, which will lock. And then we gain nothing.
> >
>
> Agreed. The bigge
On Fri, 2012-09-14 at 17:48 +0530, Srivatsa S. Bhat wrote:
> #! /bin/bash
CPUPATH="/sys/devices/system/cpu"
> NUMBER_OF_CPUS=`ls -d /sys/devices/system/cpu/cpu[0-9]* | wc -l`
apply the above
> cd /sys/devices/system/cpu
skip this, so running the script doesn't change PWD
> while [ 1 ]
while
On Wed, 2012-09-12 at 13:27 +0200, Peter Zijlstra wrote:
> Subject: perf, intel: Expose SMI_COUNT as a fixed counter
> From: Peter Zijlstra
> Date: Wed Sep 12 13:10:53 CEST 2012
>
> The Intel SMI_COUNT sadly isn't a proper PMU event but a free-running
> MSR, expose it b
On Fri, 2012-09-14 at 10:25 -0400, Vivek Goyal wrote:
> So while % model is more intutive to users, it is hard to implement.
I don't agree with that. The fixed quota thing is counter-intuitive and
hard to use. It begets questions like: why, if everything is idle
except my task, am I not gettin
On Fri, 2012-09-14 at 11:00 -0700, Arnaldo Carvalho de Melo wrote:
> > Understood and there have been suggestions on how to definitely state
> > what the kernel side did not like. I like Peter's last suggestion --
> > something along the lines of clearing attr on a failure except the
> > offending
On Fri, 2012-09-14 at 10:59 -0700, Tejun Heo wrote:
> Hello,
>
> On Fri, Sep 14, 2012 at 05:12:31PM +0800, Li Zefan wrote:
> > Agreed. The biggest issue in cpuset is if hotplug makes a cpuset's cpulist
> > empty the tasks in it will be moved to an ancestor cgroup, which requires
> > holding cgroup
On Fri, 2012-09-14 at 22:11 +0200, Ingo Molnar wrote:
> return -EPERF_CPU_PRECISE_EV_NOTSUPP;
I just don't like having to enumerate all possible fails, I'm too lazy.
Can't we be smarter about that? Could we do a {reason}x{bit-offset} like
thing?
Where we limit reason to a few simple things like
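As a minimal sketch of that {reason} x {bit-offset} scheme (every name below is invented for illustration; the actual list of reasons is cut off above):

#include <linux/types.h>

enum attr_reject_reason {
	ATTR_REASON_UNSUPPORTED	= 1,	/* kernel/hardware lacks the feature */
	ATTR_REASON_INVALID	= 2,	/* value out of range or nonsensical */
	ATTR_REASON_NO_PERM	= 3,	/* insufficient privilege */
};

/* Sketch: high half carries the reason, low half the offending bit. */
static inline u32 attr_reject_encode(enum attr_reject_reason reason,
				     u32 bit_offset)
{
	return ((u32)reason << 16) | (bit_offset & 0xffff);
}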
On Fri, 2012-09-14 at 23:27 +0200, Borislav Petkov wrote:
>
> I was able to reproduce it on another box here and did a bisection run.
> It pointed to the commit below.
>
> And yes, reverting that commit fixes the issue here.
Hmm, cute. What kind of machine did you test it on? Nikolay's machines
On Fri, 2012-09-14 at 14:44 -0700, Linus Torvalds wrote:
> On Fri, Sep 14, 2012 at 2:40 PM, Peter Zijlstra
> wrote:
> >
> > The problem the patch is trying to address is not having to scan an
> > entire package for idle cores on every wakeup now that packages are
> &g
On Fri, 2012-09-14 at 23:56 +0200, Peter Zijlstra wrote:
> On Fri, 2012-09-14 at 14:44 -0700, Linus Torvalds wrote:
> > On Fri, Sep 14, 2012 at 2:40 PM, Peter Zijlstra
> > wrote:
> > >
> > > The problem the patch is trying to address is not having to scan an
&g
On Wed, 2008-01-30 at 14:40 -0800, Andrew Morton wrote:
> On Wed, 30 Jan 2008 18:28:59 +0100
> Peter Zijlstra <[EMAIL PROTECTED]> wrote:
>
> > Implement MADV_WILLNEED for anonymous pages by walking the page tables and
> > starting asynchronous swap cache reads for
On Wed, 2008-01-30 at 23:54 +0100, Guillaume Chazarain wrote:
> On Jan 29, 2008 11:30 PM, Guillaume Chazarain <[EMAIL PROTECTED]> wrote:
> > ===
> > gnome-termina S 0027 0 2201 1
> >f6711fb0 00200082 cb330d62 0027 f664105c 0b1e
> > cb331
On Thu, 2008-01-31 at 01:12 -0800, Andrew Morton wrote:
> Implementation-wise: make_pages_present() _can_ be converted to do this.
> But it's a lot of patching, and the result will be a cleaner, faster and
> smaller core MM. Whereas your approach is easy, but adds more code and
> leaves the old
On Thu, 2008-01-31 at 01:47 -0800, Andrew Morton wrote:
> On Thu, 31 Jan 2008 10:35:18 +0100 Peter Zijlstra <[EMAIL PROTECTED]> wrote:
>
> >
> > On Thu, 2008-01-31 at 01:12 -0800, Andrew Morton wrote:
> >
> > > Implementation-wise: make_pages_present() _c
On Thu, 2008-01-31 at 02:05 -0800, Andrew Morton wrote:
> On Thu, 31 Jan 2008 10:53:26 +0100 Peter Zijlstra <[EMAIL PROTECTED]> wrote:
>
> >
> > On Thu, 2008-01-31 at 01:47 -0800, Andrew Morton wrote:
> > > On Thu, 31 Jan 2008 10:35:18 +0100 Peter Zijlstra
On Thu, 2008-01-31 at 10:46 +0100, Miklos Szeredi wrote:
> > On Tue, 29 Jan 2008 16:49:06 +0100
> > Miklos Szeredi <[EMAIL PROTECTED]> wrote:
> >
> > > Add "max_ratio" to /sys/class/bdi. This indicates the maximum
> > > percentage of the global dirty threshold allocated to this bdi.
> >
> > May
On Thu, 2008-01-31 at 01:54 -0800, Andrew Morton wrote:
> On Thu, 31 Jan 2008 10:39:02 +0100 Miklos Szeredi <[EMAIL PROTECTED]> wrote:
>
> > > On Tue, 29 Jan 2008 16:49:02 +0100
> > > Miklos Szeredi <[EMAIL PROTECTED]> wrote:
> > >
&
On Mon, 2008-01-28 at 21:13 +0100, Guillaume Chazarain wrote:
> Unfortunately it seems to not be completely fixed, with this script:
>
> #!/usr/bin/python
>
> import os
> import time
>
> SLEEP_TIME = 0.1
> SAMPLES = 5
> PRINT_DELAY = 0.5
>
> def print_wakeup_latency():
> times = []
> l
Let's CC the XFS maintainer..
On Wed, 2008-01-30 at 20:23 +, Sven Geggus wrote:
> Hi there,
>
> I get the following with 2.6.24:
>
> Ending clean XFS mount for filesystem: dm-0
> BUG: unable to handle kernel paging request at virtual address f2134000
> printing eip: c021a13a *pde = 010b5067
On Thu, 2008-01-31 at 12:29 +0100, Guillaume Chazarain wrote:
> On Jan 31, 2008 9:55 AM, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > Does this patch from thomas fix it as well?
>
> Unfortunately, not.
>
> For information, reverting just the first part of
On Thu, 2008-01-31 at 12:29 +0100, Guillaume Chazarain wrote:
> On Jan 31, 2008 9:55 AM, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > Does this patch from thomas fix it as well?
>
> Unfortunately, not.
>
> For information, reverting just the first part of
On Thu, 2008-01-31 at 13:49 +0100, Guillaume Chazarain wrote:
> On 1/31/08, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > Does something like this help?
>
> I made it compile by open coding undefined macros instead of
> refactoring the whole file.
> But it didn'
> user
> > logs out (ie. the user logs into KDE, works for some time, suspends the box
> > to
> > RAM and resmes one or more times and then logs out). Still, I also observe
> > the
> > symptoms on a box that's never suspended.
> >
> > I'm not
On Thu, 2008-01-31 at 13:53 +0100, Claude Frantz wrote:
> Hello !
>
> I'm faced with a problem where the OOM-killer is invoked but I cannot find
> the reason why. The machine is rather powerful, the load is very moderate,
> the disk swap space is nearly unused. The only strange observation which
>
this time build tested
---
Subject: hrtimer: fix hrtimer_init_sleeper() users
commit 37bb6cb4097e29ffee970065b74499cbf10603a3
Author: Peter Zijlstra <[EMAIL PROTECTED]>
Date: Fri Jan 25 21:08:32 2008 +0100
hrtimer: unlock hrtimer_wakeup
Broke hrtimer_init_sleeper() users. It for
On Mon, 2008-01-28 at 02:26 +0100, Rafael J. Wysocki wrote:
> On Sunday, 27 of January 2008, Ingo Molnar wrote:
> >
> > * Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
> >
> > > Hi,
> > >
> > > 2.6.24-git3 adds a 5 - 10 sec delay to the suspend and hibernation
> > > code paths (probably related
On Thu, 2008-01-31 at 15:41 +0100, Claude Frantz wrote:
> Peter Zijlstra wrote:
>
> > You seem to have run out of zone normal memory with all of it stuck in
> > kernel allocations. Would you have /proc/slabinfo available?
>
> Thanks Peter !
>
> No ! There is no
On Thu, 2008-01-31 at 23:39 +0530, Balbir Singh wrote:
> Srivatsa Vaddagiri wrote:
> > Hi,
> > As we were implementing multiple-hierarchy support for CPU
> > controller, we hit some oddities in its implementation, partly related
> > to current cgroups implementation. Peter and I have been deba
d on the CPU occupied by it. In this state it also breaks suspend
> > and
> > hibernation (it cannot be frozen).
> >
> > Since the problem is 100% reproducible on my test boxes, I carried out a
> > bisection which turned out the following commit:
> >
> > commit
On Thu, 2008-01-31 at 18:39 -0800, Paul Menage wrote:
> On Jan 30, 2008 6:40 PM, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> >
> > Here are some questions that arise in this picture:
> >
> > 1. What is the relationship of the task-group in A/tasks with the
> >task-group in A/a1/tasks? In o
On Fri, 2008-02-01 at 08:44 +0100, Peter Zijlstra wrote:
> On Fri, 2008-02-01 at 03:04 +0100, Rafael J. Wysocki wrote:
> > On Friday, 1 of February 2008, Rafael J. Wysocki wrote:
> > > Hi,
> > >
> > > This is related to the problem I reported earlier this wee
On Fri, 2008-02-01 at 16:43 +0800, Rijndael Cosque wrote:
> Hi all,
>
> I found the x2APIC spec via http://www.intel.com/products/processor/manuals/.
>
> Looks like at present there is no x2APIC support in Linux kernel 2.6.24?
>
> Is there any experimental patch available for Linux kernel? -- I
> go
On Thu, 2008-01-31 at 21:54 +0100, Rafael J. Wysocki wrote:
> On Thursday, 31 of January 2008, Peter Zijlstra wrote:
> > I seem to be able to reproduce this:
> >
> > [EMAIL PROTECTED] cpu1]# time echo 0 > online
> >
> > real    0m6.230s
> > user    0m0.000s
On Fri, 2008-02-01 at 12:50 +0100, Rafael J. Wysocki wrote:
> On Friday, 1 of February 2008, Peter Zijlstra wrote:
> > > Is arts run as root, or does it use RLIMIT_RTPRIO to allow users to
> > > execute realtime tasks?
>
> artswrapper is setuid root and RLIMIT_RT
On Fri, 2008-02-01 at 08:21 -0700, Dale Farnsworth wrote:
> Add each lock class to the all_lock_classes list when it is
> first registered.
>
> Previously, lock classes were added to all_lock_classes when
> the lock class was first used.
> Since one of the uses of the list is to find unused loc
On Sun, 2008-02-03 at 12:47 -0700, Dale Farnsworth wrote:
> On Sun, Feb 03, 2008 at 04:21:02PM +0100, Peter Zijlstra wrote:
> > On Fri, 2008-02-01 at 08:21 -0700, Dale Farnsworth wrote:
> > > Add each lock class to the all_lock_classes list when it is
>
On Mon, 2008-02-04 at 12:17 +0100, Lukas Hejtmanek wrote:
> Ingo,
>
> any progress here? I've tried to revert this patch:
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=67e9fb2a39a1d454218d50383094940982be138f
>
> as it was marked as suspicious patch in this ca
On Mon, 2008-02-04 at 15:36 +0100, Lukas Hejtmanek wrote:
> On Mon, Feb 04, 2008 at 12:36:36PM +0100, Peter Zijlstra wrote:
> > I can't reproduce this with a pure cpu load. I started 10
> > while :; do :; done &
> > instances and aside from slowing down, nothing
On Mon, 2008-02-04 at 05:04 -0800, Andrew Morton wrote:
> After disabling both CONFIG_DEBUG_LOCKING_API_SELFTESTS and netconsole
> (using current mainline) I get a login prompt, and also...
> [7.819146] WARNING: at kernel/lockdep.c:2033
> trace_hardirqs_on+0x9b/0x10d()
> That warning in lo
Make the rt group scheduler compile time configurable.
Enable it by default for cgroup scheduling.
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
include/linux/cgroup_subsys.h |2
init/Kconfig | 23 +--
kernel/sched.c
Various SMP balancing algorithms require that the bandwidth period
run in sync.
Possible improvements are moving the rt_bandwidth thing into root_domain
and keeping a span per rt_bandwidth which marks throttled cpus.
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
include/linux/s
Clean up some of the excessive ifdeffery introduced in the last patch.
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
kernel/sched.c | 150 ++---
1 file changed, 100 insertions(+), 50 deletions(-)
Index: linux-2.6/kernel/s
lockdep spotted this bogus irq locking. normalize_rt_tasks() can be called
from hardirq context through sysrq-n
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
kernel/sched.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
Index: linux-2.6/kernel/s
Change the rt_ratio interface to rt_runtime_us, to match rt_period_us.
This avoids picking a granularity for the ratio.
Extend the /sys/kernel/uids/<uid>/ interface to allow setting
the group's rt_runtime.
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
Documentation/ABI/testing/s
runtime from the other cpus once the local limit runs out.
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
kernel/sched.c| 38 ++-
kernel/sched_rt.c | 88 --
2 files changed, 121 insertions(+), 5 del
quite correct. Two possible ways forward are:
- second prio array for boosted tasks
- boost to a prio ceiling (this would also work for deadline scheduling)
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
kernel/sched.c|3 +++
kernel/sched_rt.c
Refuse to accept or create RT tasks in groups that can't run them.
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
---
kernel/sched.c | 15 +++
1 file changed, 15 insertions(+)
Index: linux-2.6/ker
On Tue, 2008-02-05 at 18:05 +, Andy Whitcroft wrote:
> > + if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > + rcu_read_lock();
> > + hlist_for_each_entry_safe_rcu(mn, n, t,
> > + &mm->mmu_notifier.head, hlist) {
> > +
On Mon, 2012-10-01 at 19:31 +0200, Jiri Olsa wrote:
> @@ -696,7 +696,7 @@ struct perf_branch_stack {
>
> struct perf_regs_user {
> __u64 abi;
> - struct pt_regs *regs;
> + struct pt_regs regs;
> };
That's somewhat unfortunate but unavoidable I guess, can't go mo
On Tue, 2012-09-25 at 21:12 +0800, Tang Chen wrote:
> Tang Chen (2):
> Ensure sched_domains_numa_levels safe in other functions.
> Update sched_domains_numa_masks when new cpus are onlined.
>
> kernel/sched/core.c | 69
> +++
> 1 file change
On Tue, 2012-10-02 at 13:42 +0200, Jiri Olsa wrote:
> +++ b/kernel/events/core.c
> @@ -394,7 +394,8 @@ void perf_cgroup_switch(struct task_struct *task, int
> mode)
> }
>
> if (mode & PERF_CGROUP_SWIN) {
> - WARN_ON_ON
On Tue, 2012-10-02 at 14:48 +0200, Stephane Eranian wrote:
> Not sure, I understand what active_pmu represents.
It's a 'random' pmu of those that share the cpuctx, exactly so you can
limit pmu iterations to those with unique cpuctx instances.
It's assigned when we create a cpuctx to the pmu creatin
On Tue, 2012-10-02 at 15:34 +0200, Stephane Eranian wrote:
> > If you've got a good suggestion I'd be glad to rename it.
>
> how about unique_pmu?
Done!
---
Subject: perf: Clarify perf_cpu_context::active_pmu by renaming it
From: Peter Zijlstra
Date: Tue Oct 02 15:38:
Subject: perf: Fix perf_cgroup_switch for sw-events
From: Peter Zijlstra
Date: Tue Oct 02 15:41:23 CEST 2012
Jiri reported that he could trigger the WARN_ON_ONCE() in
perf_cgroup_switch() using sw-events. This is because sw-events share
a cpuctx with multiple PMUs.
Use the ->unique_pmu pointer to limit the
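A minimal sketch of that ->unique_pmu test (the walk is simplified; surrounding locking and the cgroup-switch body are elided):

#include <linux/perf_event.h>
#include <linux/rculist.h>

/* Sketch: while walking all pmus, skip any whose cpuctx is shared with
 * a pmu we already visited, so each cpuctx is handled exactly once. */
static void visit_unique_cpuctxs(struct list_head *pmus)
{
	struct perf_cpu_context *cpuctx;
	struct pmu *pmu;

	list_for_each_entry_rcu(pmu, pmus, entry) {
		cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
		if (cpuctx->unique_pmu != pmu)
			continue;	/* shared cpuctx, already handled */

		/* ... switch cgroup events on this cpuctx ... */
	}
}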
On Wed, 2012-10-03 at 15:13 +0200, Jiri Olsa wrote:
> @@ -1190,8 +1191,8 @@ static inline void perf_sample_data_init(struct
> perf_sample_data *data,
> data->raw = NULL;
> data->br_stack = NULL;
> data->period = period;
> - data->regs_user.abi = PERF_SAMPLE_REGS_ABI_N
On Wed, 2012-10-03 at 11:14 -0400, Steven Rostedt wrote:
>
> Yep. I personally never use the get_maintainers script. I first check
> the MAINTAINERS file. If the subsystem I'm working on exists there, I
> only email those that are listed there, including any mailing lists that
> are mentioned (as
On Thu, 2012-10-04 at 01:05 +0200, Andrea Righi wrote:
> +++ b/kernel/sched/core.c
> @@ -727,15 +727,17 @@ static void dequeue_task(struct rq *rq, struct
> task_struct *p, int flags)
> void activate_task(struct rq *rq, struct task_struct *p, int flags)
> {
> if (task_contributes_to_load(
commit 7fdba1ca10462f42ad2246b918fe6368f5ce488e
Author: Peter Zijlstra
Date: Fri Jul 22 13:41:54 2011 +0200
perf, x86: Avoid kfree() in CPU_STARTING
On -rt kfree() can schedule, but CPU_STARTING is before the CPU is
fully up and running. These are contradictory, so avoid it. Instead
push the kfree() to CPU_ONLINE
On Thu, 2012-10-04 at 11:43 +0200, Andrea Righi wrote:
>
> Right, the update must be atomic to have a coherent nr_uninterruptible
> value. And AFAICS the only way to account a coherent nr_uninterruptible
> value per-cpu is to go with atomic ops... mmh... I'll think more on
> this.
You could st
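Peter's reply is cut off. One standard pattern for this kind of counter (a hedged sketch, not necessarily what he was about to suggest) is a percpu_counter, which batches per-cpu deltas instead of hitting a shared atomic on every update:

#include <linux/percpu_counter.h>

/* Sketch: assumes percpu_counter_init() ran at boot. */
static struct percpu_counter nr_uninterruptible_pc;

static void account_uninterruptible(int delta)
{
	percpu_counter_add(&nr_uninterruptible_pc, delta);	/* cheap, batched */
}

static s64 read_nr_uninterruptible(void)
{
	/* Slow path: folds all per-cpu deltas into an accurate total. */
	return percpu_counter_sum_positive(&nr_uninterruptible_pc);
}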
On Tue, 2012-09-25 at 21:12 +0800, Tang Chen wrote:
> +static int sched_domains_numa_masks_update(struct notifier_block
> *nfb,
> + unsigned long action,
> + void *hcpu)
> +{
> + int cpu = (int)hcpu;
kernel/sc