Hi Will,
On 2017-08-25 12:48, Vikram Mulukutla wrote:
Hi Will,
On 2017-08-15 11:40, Will Deacon wrote:
Hi Vikram,
On Thu, Aug 03, 2017 at 04:25:12PM -0700, Vikram Mulukutla wrote:
On 2017-07-31 06:13, Will Deacon wrote:
>On Fri, Jul 28, 2017 at 12:09:38PM -0700, Vikram Mulukutla wrote:
On 2017-08-25 12:48, Vikram Mulukutla wrote:
If I understand the code correctly, the upper 32 bits of an ARM64 virtual address will overflow when 1 is added to it, and so we'll keep WFE'ing on every subsequent cpu_relax invoked from the same PC, until we cross the hard-coded
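For reference, a minimal sketch of the kind of cpu_relax() bodge under discussion - spot a repeatedly-spinning call site by its PC and drop into WFE - assuming per-thread bookkeeping; RELAX_THRESHOLD, relax_pc and relax_count are illustrative names, and the real patch evidently packs its state differently (hence the 32-bit overflow noted above):

#include <stdint.h>

#define RELAX_THRESHOLD	10000	/* illustrative spin budget */

static __thread uintptr_t relax_pc;	/* last spinning call site seen */
static __thread uint32_t relax_count;	/* consecutive relaxes at that site */

static inline void cpu_relax_bodged(void)
{
	uintptr_t pc = (uintptr_t)__builtin_return_address(0);

	if (pc == relax_pc) {
		if (++relax_count >= RELAX_THRESHOLD) {
#if defined(__aarch64__)
			/* Sleep until an event (or the event stream, if
			 * enabled) fires, instead of hammering the
			 * interconnect from a fast core.
			 */
			__asm__ volatile("wfe" ::: "memory");
#endif
			relax_count = 0;
		}
	} else {
		relax_pc = pc;		/* new spin site: restart the count */
		relax_count = 0;
	}
	__asm__ volatile("" ::: "memory");	/* barrier(), as in cpu_relax() */
}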
Hi Qiao,
On 2017-08-01 00:37, qiaozhou wrote:
On 2017-07-31 19:20, qiaozhou wrote:
I also applied Vikram's patch and ran a test.
cpu2: A53 @ 832MHz, cpu7: A73 @ 1.75GHz
Without the cpu_relax bodging patch:
Hi Will,
On 2017-07-31 06:13, Will Deacon wrote:
Hi Vikram,
On Fri, Jul 28, 2017 at 12:09:38PM -0700, Vikram Mulukutla wrote:
On 2017-07-28 02:28, Will Deacon wrote:
>On Thu, Jul 27, 2017 at 06:10:34PM -0700, Vikram Mulukutla wrote:
>
This does seem to help. Here's some data a
On 2017-07-28 02:28, Peter Zijlstra wrote:
On Thu, Jul 27, 2017 at 06:10:34PM -0700, Vikram Mulukutla wrote:
I think we should have this discussion now - I brought this up earlier [1] and I promised a test case that I completely forgot about - but here it is (attached). Essentially a Big CPU
On 2017-07-28 02:28, Will Deacon wrote:
On Thu, Jul 27, 2017 at 06:10:34PM -0700, Vikram Mulukutla wrote:
I think we should have this discussion now - I brought this up earlier [1] and I promised a test case that I completely forgot about - but here it is (attached). Essentially a Big
From 51d6186b620a9e354a0d40af06aef1c1299ca223 Mon Sep 17 00:00:00 2001
From: Vikram Mulukutla
Date: Thu, 27 Jul 2017 12:14:48 -0700
Subject: [PATCH] measure spinlock fairness across differently capable CPUs
How to run this test:
1) compile and boot
2) echo 1 > /sys/module/test/parameters/run_test
3) s
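The module body is truncated above; as a rough, hedged sketch of how such a run_test parameter is usually wired up (the thread body, CPU numbers and all names below are my assumptions, not the actual patch):

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(test_lock);

/* Each spinner counts how often it wins the lock; comparing counts from
 * a big and a little CPU exposes any unfairness.
 */
static int spinner_fn(void *unused)
{
	unsigned long wins = 0;

	while (!kthread_should_stop()) {
		spin_lock(&test_lock);
		wins++;
		spin_unlock(&test_lock);
	}
	pr_info("cpu%d won the lock %lu times\n",
		raw_smp_processor_id(), wins);
	return 0;
}

static int run_test_set(const char *val, const struct kernel_param *kp)
{
	struct task_struct *little, *big;

	little = kthread_create(spinner_fn, NULL, "spinner_little");
	big = kthread_create(spinner_fn, NULL, "spinner_big");
	if (IS_ERR(little) || IS_ERR(big))
		return -ENOMEM;

	kthread_bind(little, 0);	/* assumed little-CPU number */
	kthread_bind(big, 7);		/* assumed big-CPU number */
	wake_up_process(little);
	wake_up_process(big);
	return 0;
}

static const struct kernel_param_ops run_test_ops = {
	.set = run_test_set,
};
module_param_cb(run_test, &run_test_ops, NULL, 0200);
MODULE_LICENSE("GPL");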
On 7/20/2017 3:56 PM, Florian Fainelli wrote:
On 07/20/2017 07:45 AM, Peter Zijlstra wrote:
Can your ARM part change OPP without scheduling? Because (for obvious reasons) the idle thread is not supposed to block.
I think it should be able to do that, but I am not sure that if I went throu
On 2017-07-07 23:14, Joel Fernandes wrote:
Hi Vikram,
On Thu, Jul 6, 2017 at 11:44 PM, Vikram Mulukutla wrote:
On 2017-07-04 10:34, Patrick Bellasi wrote:
Currently the utilization of the FAIR class is collected before locking the policy. Although that should not be a big issue for most
On 2017-07-07 00:47, Vikram Mulukutla wrote:
On 2017-07-04 13:20, Thomas Gleixner wrote:
Vikram reported the following backtrace:
BUG: scheduling while atomic: swapper/7/0/0x00000002
CPU: 7 PID: 0 Comm: swapper/7 Not tainted 4.9.32-perf+ #680
schedule
schedule_hrtimeout_range_clock
hotplug thread of the upcoming CPU to complete the bringup to the final target state.
Fixes: 8df3e07e7f21 ("cpu/hotplug: Let upcoming cpu bring itself fully up")
Reported-by: Vikram Mulukutla
Signed-off-by: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Sebastian Siewior
Cc: ru...@r
On 2017-07-04 10:34, Patrick Bellasi wrote:
Currently the utilization of the FAIR class is collected before locking
the policy. Although that should not be a big issue for most cases, we
also don't really know how much latency there can be between the
utilization reading and its usage.
Let's get
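A hedged before/after sketch of the reordering being proposed (sugov_get_util() and update_lock are schedutil names from around v4.12; the surrounding code is abbreviated and the exact call site is an assumption):

	unsigned long util, max;

	/* Before: utilization sampled outside the lock; it can be stale
	 * by the time it is consumed under the lock.
	 */
	sugov_get_util(&util, &max);
	raw_spin_lock(&sg_policy->update_lock);
	/* ... util/max consumed here ... */
	raw_spin_unlock(&sg_policy->update_lock);

	/* After: sample under the lock, minimizing read-to-use latency. */
	raw_spin_lock(&sg_policy->update_lock);
	sugov_get_util(&util, &max);
	/* ... */
	raw_spin_unlock(&sg_policy->update_lock);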
On 7/4/2017 12:49 PM, Thomas Gleixner wrote:
On Mon, 26 Jun 2017, Vikram Mulukutla wrote:
On 6/26/2017 3:18 PM, Vikram Mulukutla wrote:
kthread_park waits for the target kthread to park itself with
__kthread_parkme using a completion variable. __kthread_parkme - which is
invoked by the target
On 7/4/2017 9:07 AM, Peter Zijlstra wrote:
On Mon, Jun 26, 2017 at 03:18:03PM -0700, Vikram Mulukutla wrote:
kernel/kthread.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 26db528..7ad3354 100644
--- a/kernel/kthread.c
Ping. This is happening on x86 across suspend/resume too.
https://bugs.freedesktop.org/show_bug.cgi?id=100113
On 2017-06-26 16:03, Vikram Mulukutla wrote:
correcting Thomas Gleixner's email address. s/linuxtronix/linutronix
On 6/26/2017 3:18 PM, Vikram Mulukutla wrote:
kthread_park
On 6/26/2017 10:33 AM, Luis R. Rodriguez wrote:
On Sat, Jun 24, 2017 at 02:39:51PM +0200, Greg KH wrote:
On Sat, Jun 24, 2017 at 02:48:28AM +0200, Luis R. Rodriguez wrote:
On Fri, Jun 23, 2017 at 04:09:29PM -0700, Linus Torvalds wrote:
On Fri, Jun 23, 2017 at 3:43 PM, Luis R. Rodriguez wrote:
correcting Thomas Gleixner's email address. s/linuxtronix/linutronix
On 6/26/2017 3:18 PM, Vikram Mulukutla wrote:
kthread_park waits for the target kthread to park itself with __kthread_parkme using a completion variable. __kthread_parkme - which is invoked by the target kthread - set
iable and the schedule happen atomically inside __kthread_parkme. This focuses the fix to the hotplug requirement alone, and removes the unnecessary migration of cpuhp/X.
Signed-off-by: Vikram Mulukutla
---
kernel/kthread.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --g
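The diff itself is cut off, but the shape of the fix reads roughly like this sketch (simplified along the lines of what mainline kthread parking eventually settled on; not necessarily this exact patch):

static void __simplified_parkme(struct kthread *self)
{
	for (;;) {
		/* Publish TASK_PARKED under the task's pi_lock so a
		 * concurrent wakeup cannot clobber the state write.
		 */
		set_special_state(TASK_PARKED);
		if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
			break;

		/* With preemption off, the completion and the schedule
		 * happen with no preemption point in between - the
		 * "atomically" that the changelog above asks for.
		 */
		preempt_disable();
		complete(&self->parked);
		schedule_preempt_disabled();
		preempt_enable();
	}
	__set_current_state(TASK_RUNNING);
}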
Hi Greg, Luis,
On 2017-06-17 12:38, Greg KH wrote:
On Tue, Jun 13, 2017 at 09:40:11PM +0200, Luis R. Rodriguez wrote:
On Tue, Jun 13, 2017 at 11:05:48AM +0200, Greg KH wrote:
> On Mon, Jun 05, 2017 at 02:39:33PM -0700, Luis R. Rodriguez wrote:
> > As the firmware API evolves we keep extending
OK
So there are two pieces here.
One is that if we want *all* drivers to work with schedutil, we need to keep the kthread for the ones that will never be reworked (because nobody cares etc). But then perhaps the kthread implementation may be left alone (because nobody cares etc).
The se
Hi Sudeep,
Interesting. Just curious if this is an r0p0/p1 A53? If so, is the errata 819472 workaround enabled?
Sorry for bringing this up after the loo-ong delay, but I've been assured that the A53 involved is > r0p1. I've also confirmed this problem on multiple internal platforms, and I'm pretty sur
Hi Sudeep,
Thanks for taking a look!
On 2016-11-18 02:30, Sudeep Holla wrote:
Hi Vikram,
On 18/11/16 02:22, Vikram Mulukutla wrote:
Hello,
This isn't really a bug report, but just a description of a frequency/IPC dependent behavior that I'm curious if we should worry about. The
Hello,
This isn't really a bug report, but just a description of a frequency/IPC dependent behavior that I'm curious if we should worry about. The behavior is exposed by questionable design so I'm leaning towards don't-care.
Consider these threads running in parallel on two ARM64 CPUs running
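The thread loops are cut off above; reconstructed as a hedged sketch (the lock name and loop bodies are illustrative, not the original mail's exact code):

static DEFINE_SPINLOCK(lock);

/* Thread T1 on a fast (big, high-frequency) CPU: release and then
 * immediately re-acquire, with almost no work in between.
 */
static int t1_fn(void *unused)
{
	while (!kthread_should_stop()) {
		spin_lock(&lock);
		/* tiny critical section */
		spin_unlock(&lock);
	}
	return 0;
}

/* Thread T2 on a slow (little, low-frequency) CPU: its store-exclusive
 * to even take a ticket keeps losing to T1's re-acquisition, so it can
 * starve for a surprisingly long time.
 */
static int t2_fn(void *unused)
{
	while (!kthread_should_stop()) {
		spin_lock(&lock);
		/* tiny critical section */
		spin_unlock(&lock);
	}
	return 0;
}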
On 2016-10-28 01:49, Peter Zijlstra wrote:
On Fri, Oct 28, 2016 at 12:57:05AM -0700, Vikram Mulukutla wrote:
On 2016-10-28 00:49, Peter Zijlstra wrote:
>On Fri, Oct 28, 2016 at 12:10:39AM -0700, Vikram Mulukutla wrote:
>>This RFC patch has been tested on live X86 machines with the
On 2016-10-28 00:46, Peter Zijlstra wrote:
On Fri, Oct 28, 2016 at 12:10:41AM -0700, Vikram Mulukutla wrote:
+void walt_finish_migrate(struct task_struct *p, struct rq *dest_rq, bool locked)
+{
+	u64 wallclock;
+	unsigned long flags;
+
+	if (!p->on_rq &&
On 2016-10-28 00:43, Peter Zijlstra wrote:
On Fri, Oct 28, 2016 at 12:10:41AM -0700, Vikram Mulukutla wrote:
+u64 walt_ktime_clock(void)
+{
+	if (unlikely(walt_ktime_suspended))
+		return ktime_to_ns(ktime_last);
+	return ktime_get_ns();
+}
+static int walt_suspend
On 2016-10-28 00:49, Peter Zijlstra wrote:
On Fri, Oct 28, 2016 at 12:10:39AM -0700, Vikram Mulukutla wrote:
This RFC patch has been tested on live X86 machines with the following sanity and benchmark results (thanks to Juri Lelli, Dietmar Eggemann, Patrick Bellasi for initial code reviews
On 2016-10-28 00:29, Peter Zijlstra wrote:
On Fri, Oct 28, 2016 at 12:10:39AM -0700, Vikram Mulukutla wrote:
We propose Window-Assisted Load Tracking (WALT) as an alternative or additional load tracking scheme in lieu of or along with PELT, one that in our estimation better tracks task
make the implementation too fragile.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Vikram Mulukutla
---
include/linux/sched/sysctl.h | 1 +
init/Kconfig | 9 +
kernel/sched/Makefile | 1 +
kernel/sched/cputime.c | 10 +-
k
can be switched to PELT's util_avg at runtime by the following command:
echo 0 > /proc/sys/kernel/sched_use_walt_metrics
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Vikram Mulukutla
---
kernel/sched/core.c | 29 -
kernel/sched/deadline.c | 7 +++
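For context, a hedged sketch of the sysctl wiring such a runtime switch implies (the procname comes from the patch above; everything else is standard sysctl boilerplate and assumed names):

static int sysctl_sched_use_walt_metrics = 1;
static int zero;
static int one = 1;

static struct ctl_table sched_walt_table[] = {
	{
		.procname	= "sched_use_walt_metrics",
		.data		= &sysctl_sched_use_walt_metrics,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,	/* clamp writes to 0..1 */
		.extra2		= &one,
	},
	{ }
};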
invariance.
Signed-off-by: Srivatsa Vaddagiri
Signed-off-by: Vikram Mulukutla
---
include/linux/sched.h | 39 +++
kernel/sched/fair.c | 2 --
kernel/sched/sched.h | 8
3 files changed, 47 insertions(+), 2 deletions(-)
diff --git a/include/linux/sched.h
We propose Window-Assisted Load Tracking (WALT) as an alternative or additional
load tracking scheme in lieu of or along with PELT, one that in our estimation
better tracks task demand and CPU utilization especially for use cases on
mobile devices. WALT was conceived by Srivatsa Vaddagiri to provi
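As a hedged toy model of the windowed idea - the window size, history depth and max policy below are illustrative placeholders, not WALT's actual tunables, and splitting runtime across window boundaries is elided:

#include <linux/types.h>
#include <linux/kernel.h>	/* min_t()/max_t() */
#include <linux/time.h>		/* NSEC_PER_MSEC */

#define TOY_WINDOW_NS	(20 * NSEC_PER_MSEC)	/* illustrative 20ms window */
#define TOY_NR_HIST	5

struct toy_walt {
	u64 window_start;	/* start of the current window */
	u64 sum;		/* runtime accrued in the current window */
	u64 hist[TOY_NR_HIST];	/* recent completed windows */
};

/* Account delta_exec ns of runtime ending at 'now' and return a demand
 * estimate in ns per window: the max over recent windows reacts quickly
 * to heavy tasks, unlike PELT's geometric decay.
 */
static u64 toy_walt_update(struct toy_walt *t, u64 now, u64 delta_exec)
{
	u64 demand = 0;
	int i;

	t->sum += delta_exec;
	while (now >= t->window_start + TOY_WINDOW_NS) {
		for (i = TOY_NR_HIST - 1; i > 0; i--)
			t->hist[i] = t->hist[i - 1];
		t->hist[0] = min_t(u64, t->sum, TOY_WINDOW_NS);
		t->sum = 0;
		t->window_start += TOY_WINDOW_NS;
	}
	for (i = 0; i < TOY_NR_HIST; i++)
		demand = max_t(u64, demand, t->hist[i]);
	return demand;
}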
On 5/14/2016 8:39 AM, Thomas Gleixner wrote:
On Fri, 13 May 2016, Vikram Mulukutla wrote:
On 5/13/2016 7:58 AM, Peter Zijlstra wrote:
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index 5d8ffa3e6f8c..c1cde3577551 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
On 5/13/2016 7:58 AM, Peter Zijlstra wrote:
On Thu, May 12, 2016 at 11:39:47PM -0700, Vikram Mulukutla wrote:
Hi,
I came across a piece of engineering code that looked like:
preempt_disable();
/* --cut, lots of code-- */
preempt_enable_no_resched();
put_user()
preempt_disable();
(If you wish
Hi,
I came across a piece of engineering code that looked like:
preempt_disable();
/* --cut, lots of code-- */
preempt_enable_no_resched();
put_user()
preempt_disable();
(If you wish to seriously question the usage of the preempt API in this manner, I unfortunately have no comment since I didn
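For contrast, a hedged sketch of the conservative version of that pattern, and why the no_resched variant is the questionable part (val and uptr are placeholders):

	preempt_disable();
	/* --cut, lots of code-- */

	/* preempt_enable_no_resched() drops the preempt count without
	 * checking TIF_NEED_RESCHED, so a wakeup that arrived during the
	 * critical section is not acted on here; if put_user() doesn't
	 * fault (and so never schedules), the reschedule is delayed
	 * until some later preemption point.
	 */
	preempt_enable();		/* folds in the reschedule check */

	if (put_user(val, uptr))	/* may fault and sleep */
		return -EFAULT;

	preempt_disable();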
/kunmap_atomic.
This was caught by the highmem debug code present in kunmap_atomic.
Fix the loop to do the unmapping properly.
Reviewed-by: Stephen Boyd
Reported-by: Lime Yang
Signed-off-by: Vikram Mulukutla
---
kernel/trace/trace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
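The diff is truncated above; the shape of such a fix, as a hedged sketch (atomic kmaps nest like a stack, so they must be released in reverse order on the exact addresses kmap_atomic() returned):

	/* Mapping: forward order. */
	for (i = 0; i < nr_pages; i++)
		map_page[i] = kmap_atomic(pages[i]);

	/* ... write through the mappings ... */

	/* Unmapping: reverse order, on the exact addresses kmap_atomic()
	 * returned. Getting either wrong trips the highmem debug checks.
	 */
	for (i = nr_pages; i--; )
		kunmap_atomic(map_page[i]);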
e same. If there is a livelock caused by the debug code, this change will allow the lock to be acquired, depending on the implementation of the lower level arch specific spinlock code.
Signed-off-by: Vikram Mulukutla
---
lib/spinlock_debug.c | 32 ++--
1 files changed,
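Hedged sketch of the described change, modeled on the old lib/spinlock_debug.c lock loop; the arch_spin_lock() fallback at the end is the new part:

static void __spin_lock_debug(raw_spinlock_t *lock)
{
	u64 i;
	u64 loops = loops_per_jiffy * HZ;

	for (i = 0; i < loops; i++) {
		if (arch_spin_trylock(&lock->raw_lock))
			return;
		__delay(1);
	}
	/* lockup suspected: */
	spin_dump(lock, "lockup suspected");

	/* New: rather than spinning on trylock forever (which can itself
	 * livelock), fall through to the real lock; behavior now depends
	 * on the arch spinlock implementation, e.g. ticket fairness.
	 */
	arch_spin_lock(&lock->raw_lock);
}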