Sys: 319.46 418.57 363.31 37.62 -29.47%
numa05.sh User: 33727.77 34732.68 34127.41 447.11 -1.353%
The commit does cause some performance regression but is needed from
a fairness/correctness perspective.
Signed-off-by: Srikar Dronamraju
---
include
36654.53 35074.51 1187.71 3.368%
Ideally this change shouldn't have affected performance.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
25.54 36896.31 35637.84 1222.64 -2.12%
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 20 ++--
1 file changed, 2 insertions(+), 18 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 57d1ee8..ea32a66 100644
--- a/kernel/sched/fair.c
+++ b/k
4732.12 38016.80 36255.85 1070.51 -1.704%
While there is a performance hit, this fixes a correctness issue that very
much matters on bigger systems.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/ke
%
numa05.sh User: 33255.86 36890.82 34879.31 1641.98 12.11%
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 128 +++-
1 file changed, 57 insertions(+), 71 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched
* Peter Zijlstra [2018-06-04 14:18:00]:
> On Mon, Jun 04, 2018 at 03:30:13PM +0530, Srikar Dronamraju wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index ea32a66..94091e6 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
>
* Peter Zijlstra [2018-06-04 14:23:36]:
> OK, the above matches the description, but I'm puzzled by the remainder:
>
> >
> > - if (ng->active_nodes > 1 && numa_is_active_node(env.dst_nid,
> > ng))
> > - sched_setnuma(p, env.dst_nid);
> > + if (nid != p->numa
> > Testcase       Time:   Min      Max      Avg      StdDev   %Change
> > numa01.sh      Real:   478.45   565.90   515.11   30.87    16.29%
> > numa01.sh      Sys:    207.79   271.04   232.94   21.33    -15.8%
> > numa01.sh
* Peter Zijlstra [2018-06-04 15:39:53]:
> > >
> > > That seems to entirely loose the active_node thing, or are you saying
> > > best_cpu already includes that? (Changelog could use a little help there
> > > I suppose)
> >
> > I think checking for active_nodes before calling sched_setnuma was a
>
* Rik van Riel [2018-06-04 10:51:27]:
> On Mon, 2018-06-04 at 15:30 +0530, Srikar Dronamraju wrote:
>
>
> Just bike shedding, but it may be easier to read
> if the "we found our destination" check were written
> more explicitly:
>
>
> if (!cur) {
* Rik van Riel [2018-06-04 10:37:30]:
> On Mon, 2018-06-04 at 05:59 -0700, Srikar Dronamraju wrote:
> > * Peter Zijlstra [2018-06-04 14:23:36]:
> >
> > > > - if (ng->active_nodes > 1 &&
> > > > numa_is_active_node(env.dst_nid,
* Rik van Riel [2018-06-04 16:05:55]:
> On Mon, 2018-06-04 at 15:30 +0530, Srikar Dronamraju wrote:
>
> > @@ -1554,6 +1562,9 @@ static void task_numa_compare(struct
> > task_numa_env *env,
> > if (READ_ONCE(dst_rq->numa_migrate_on))
> >
* Peter Zijlstra [2018-06-04 21:28:21]:
> > if (time_after(jiffies, pgdat->numabalancing_migrate_next_window)) {
> > - spin_lock(&pgdat->numabalancing_migrate_lock);
> > - pgdat->numabalancing_migrate_nr_pages = 0;
> > - pgdat->numabalancing_migrate_next_window =
> >
> > I thought about this. Let's say we evaluated that the destination node
> > can allow movement. While we iterate through the list of cpus trying to
> > find the best cpu, we find an idle cpu towards the end of the list.
> > However, if another task has already raced with us to move a task
> > The commit does cause some performance regression but is needed from
> > a fairness/correctness perspective.
> >
>
> While it may cause some performance regressions, it may be due to either
> a) some workloads benefit from overloading a node if the tasks idle
> frequently or b) the regression
* Mel Gorman [2018-06-05 10:58:43]:
> On Mon, Jun 04, 2018 at 03:30:27PM +0530, Srikar Dronamraju wrote:
> > Currently task scan rate is reset when numa balancer migrates the task
> > to a different node. If numa balancer initiates a swap, reset is only
> > applicable to th
> >
> > All tasks will not be stuck at task/cpu A.
> >
> > "[PATCH 10/19] sched/numa: Stop multiple tasks from moving to the
> > cpu..." the first task to set cpu A as swap target will ensure
> > subsequent tasks won't be allowed to set cpu A as target for swap till
> > it finds a better task/
hed_debug()) {
Same as above.
> pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
> --
> 1.9.1
>
--
Thanks and Regards
Srikar Dronamraju
* Yue Hu [2021-02-03 18:10:19]:
> On Wed, 3 Feb 2021 15:22:56 +0530
> Srikar Dronamraju wrote:
>
> > * Yue Hu [2021-02-03 12:20:10]:
> >
> >
> > sched_debug() would only be present in CONFIG_SCHED_DEBUG. Right?
> > In which case there would
ailed.
> Aborted (core dumped)
> <<>>
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
r Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Co-developed-by: Gautham R Shenoy
Signed-off-by: Gautham R Shenoy
Co-developed-by: Parth Shah
Signed-off-by: Parth Shah
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 41 +++--
kernel/sched/featu
* Srikar Dronamraju [2020-07-27 11:02:20]:
> Changelog v3 ->v4:
> v3:
> https://lore.kernel.org/lkml/20200723085116.4731-1-sri...@linux.vnet.ibm.com/t/#u
>
Here is a summary of some of the testing done with coregroup v4 patchsets.
It includes ebizzy, schbench, perf bench
* Michael Ellerman [2020-07-31 17:49:55]:
> Srikar Dronamraju writes:
> > Add support for grouping cores based on the device-tree classification.
> > - The last domain in the associativity domains always refers to the
> > core.
> > - If primary reference domain ha
* Michael Ellerman [2020-07-31 17:45:37]:
> Srikar Dronamraju writes:
> > Currently "CACHE" domain happens to be the 2nd sched domain as per
> > powerpc_topology. This domain will collapse if cpumask of l2-cache is
> > same as SMT domain. However we could gene
* Michael Ellerman [2020-07-31 17:52:15]:
> Srikar Dronamraju writes:
> > If allocated earlier and the search fails, then cpumask need to be
> > freed. However cpu_l1_cache_map can be allocated after we search thread
> > group.
>
> It's not freed anywhere
* Michael Ellerman [2020-07-31 18:02:21]:
> Srikar Dronamraju writes:
> > Lookup the coregroup id from the associativity array.
>
Thanks Michael for all your comments and inputs.
> It's slightly strange that this is called in patch 9, but only properly
> imple
Hi Andrew, Michal, David
* Andrew Morton [2020-08-06 21:32:11]:
> On Fri, 3 Jul 2020 18:28:23 +0530 Srikar Dronamraju
> wrote:
>
> > > The memory hotplug changes that somehow because you can hotremove numa
> > > nodes and therefore make the nodemask sparse but that
eflect your LLC situation via this
> flag to make cpus_share_cache() work properly.
I detect whether the LLC is shared at the BIGCORE level; if it is, I
dynamically rename that domain to CACHE and enable
SD_SHARE_PKG_RESOURCES in it, roughly as sketched below.
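(A minimal, illustrative sketch of that fixup, assuming a powerpc_topology[]
table with a BIGCORE entry at some index bigcore_idx; this is not the exact
mainline hunk:)

	if (shared_caches) {
		/* LLC is shared at the BIGCORE level: present that level as
		 * CACHE and let it share package resources. */
		powerpc_topology[bigcore_idx].sd_flags = powerpc_shared_cache_flags;
		powerpc_topology[bigcore_idx].name = "CACHE";
	}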
>
> [1]: https://linuxplumbersconf.org/event/4/contributions/484/
Thanks for the pointer.
--
Thanks and Regards
Srikar Dronamraju
Zijlstra (Intel)
>
> An updated Changelog that recaps some of this discussion might also be
> nice.
Okay, will surely update the changelog accordingly.
--
Thanks and Regards
Srikar Dronamraju
of reboot
they would only have the older P8 topology. After reboot the kernel topology
would change, but userspace is made to believe that it is running on an
SMT8 core by keeping the sibling_cpumask at the SMT8 core level.
--
Thanks and Regards
Srikar Dronamraju
: LKML
Cc: Michael Ellerman
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Dietmar Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Cc: Vaidyanathan Srinivasan
Signed-off-by: Srikar Dronamraju
---
Changelog v1->v2:
Modified com
uling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Dietmar Eggemann
Cc: Mel Gorman
Cc: Vincent Guittot
Cc: Vaidyanathan Srinivasan
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Srikar Dronamraju
---
Changelog v1->v2:
Update the commit msg
SD_SHARE_CPUCAPACITY?
> /*
>* Buddy candidates are cache hot:
> */
> --
> 2.28.0.163.g6104cc2f0b6-goog
>
--
Thanks and Regards
Srikar Dronamraju
henoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v2 -> v3:
Rewrote changelog (Gautham)
Renamed to powerpc/smp: Move topology fixups
Ellerman
Cc: Nicholas Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Signed-off-by: Srikar Dronamraju
---
Changelog v4
ned-off-by: Srikar Dronamraju
---
Changelog v1 -> v2:
Move coregroup_enabled before getting associativity (Gautham)
arch/powerpc/mm/numa.c | 20
1 file changed, 20 insertions(+)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 0d57779e7942..8b3b3e
Piggin
Cc: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changel
Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Nick Piggin
Cc: Oliver OHalloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Anton Blanchard
Cc: Gautham R Shenoy
Cc: Vaidyan
Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v4 ->v5:
Updated commit msg to specify actual implementation of
cpu_to_coregroup_id is in a subsequent patch (Michael Ellerman)
Changelog v3 ->v4:
if coregroup_support doesn
vasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v4 ->v5:
Updated commit msg on why cpumask need not be freed.
(Michael Ellerman)
arch/powerpc/kernel/smp.c | 7 +++
1 file changed, 3 insertions(+), 4 d
h
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v4->v5:
Updated commit msg with current abstract natur
chael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/kernel/smp.c | 104 +++---
1 file ch
Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v2 -> v3:
Removed node caching part. Rewrote the Commit msg (Michael Ellerman)
Renamed to powerpc/
: Anton Blanchard
Cc: Oliver O'Halloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Jordan Niethe
Cc: Vaidyanathan Srinivasan
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Srikar Dronamraju
---
Changelog v1
* Michal Hocko [2020-08-18 09:37:12]:
> On Tue 18-08-20 09:32:52, David Hildenbrand wrote:
> > On 12.08.20 08:01, Srikar Dronamraju wrote:
> > > Hi Andrew, Michal, David
> > >
> > > * Andrew Morton [2020-08-06 21:32:11]:
> > >
> > >
int i_group_start = get_cpu_thread_group_start(i, tg);
>
> if (unlikely(i_group_start == -1)) {
> WARN_ON_ONCE(1);
> @@ -843,7 +881,7 @@ static int init_cpu_l1_cache_map(int cpu)
> }
>
> if (i_group_start == cpu_group_start)
> - cpumask_set_cpu(i, per_cpu(cpu_l1_cache_map, cpu));
> + cpumask_set_cpu(i, *mask);
> }
>
> out:
> @@ -924,7 +962,7 @@ static int init_big_cores(void)
> int cpu;
>
> for_each_possible_cpu(cpu) {
> - int err = init_cpu_l1_cache_map(cpu);
> + int err = init_cpu_cache_map(cpu, THREAD_GROUP_SHARE_L1);
>
> if (err)
> return err;
> --
> 1.9.4
>
--
Thanks and Regards
Srikar Dronamraju
for_each_cpu(i, *mask) {
> + if (!cpu_online(i))
> + continue;
> + set_cpus_related(i, cpu, cpu_l2_cache_mask);
> + }
> +
> + return true;
> + }
> +
Ah this can be simplified to:
	if (thread_group_shares_l2) {
		cpumask_set_cpu(cpu, cpu_l2_cache_mask(cpu));
		for_each_cpu(i, per_cpu(thread_group_l2_cache_map, cpu)) {
			if (cpu_online(i))
				set_cpus_related(i, cpu, cpu_l2_cache_mask);
		}
	}
No?
> l2_cache = cpu_to_l2cache(cpu);
> if (!l2_cache || !*mask) {
> /* Assume only core siblings share cache with this CPU */
--
Thanks and Regards
Srikar Dronamraju
the first place. For example: for a P9
core with CPUs 0-7, the cache->shared_cpu_map for L1 would have 0-7 but
would display 0,2,4,6.
The drawback of this is that even if cpus 0,2,4,6 are released, the L1 cache
will not be released. Is this as expected?
--
Thanks and Regards
Srikar Dronamraju
Gautham R Shenoy
Signed-off-by: Gautham R Shenoy
Co-developed-by: Parth Shah
Signed-off-by: Parth Shah
Signed-off-by: Srikar Dronamraju
---
Changelog v1->v2:
v1:
http://lore.kernel.org/lkml/20210226164029.122432-1-sri...@linux.vnet.ibm.com/t/#u
- Make WA_WAKER default (Suggested by Rik)
-
isphere, and finally across hemispheres), do you have any
suggestions on how we could handle the same in the core scheduler?
--
Thanks and Regards
Srikar Dronamraju
> + zalloc_cpumask_var_node(mask, GFP_KERNEL, cpu_to_node(cpu));
> > >
> >
> > This hunk (and the next hunk) should be moved to next patch.
> >
>
> The next patch is only about introducing THREAD_GROUP_SHARE_L2. Hence
> I put in any other code in this
just that there is
still something more left to be done.
--
Thanks and Regards
Srikar Dronamraju
n't we want to enforce that the siblings sharing L1 be a subset of
> the siblings sharing L2 ? Or do you recommend putting in a check for
> that somewhere ?
>
I didn't think about the case where the device-tree could show L2 to be a
subset of L1.
How about initializing thread_group_l2_cache_map itself with
cpu_l1_cache_map? It would be a simple one-time operation and would reduce
the overhead here on every CPU online; see the sketch below.
And it would help in your subsequent patch too. We don't want the cacheinfo
for L1 showing CPUs not present in L2.
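A rough illustration of that suggestion (untested sketch; it assumes both
maps remain per-cpu cpumask variables as in the series under review):

	/* Seed the L2 map with the L1 siblings once, so that the L1 group
	 * is always a subset of the L2 group. */
	cpumask_copy(per_cpu(thread_group_l2_cache_map, cpu),
		     per_cpu(cpu_l1_cache_map, cpu));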
--
Thanks and Regards
Srikar Dronamraju
> > powerpc64-linux-ld: mm/khugepaged.o:(.toc+0x0): undefined reference to
> > `node_reclaim_distance'
>
> Hm, OK.
> CONFIG_NUMA=y
> # CONFIG_SMP is not set
>
> Michael, Gautham, does anyone care about this config combination?
>
I can add #ifdef CONFIG_SMP where coregroup_enabled is being accessed,
but I do feel that CONFIG_NUMA with !CONFIG_SMP may not be a valid combination.
>
> Thanks.
--
Thanks and Regards
Srikar Dronamraju
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
Changelog:
v1->v2:
v1:
https://lore.kernel.org/linuxppc-dev/20201028123512.871051-1-sri...@linux.vnet.ibm.com/t/#u
- Moved a hunk to fix a 'no previous prototype' warning reported by:
l...@intel.com
https://lists.01.
: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/include/asm/kvm_guest.h | 4 ++--
arch/powerpc/include/asm/kvm_para.h | 2 +-
arch/powerpc/kernel/firmware.c | 2 +-
arch/powerpc/platforms/pseries/smp.c | 2 +-
4 files
las Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Srikar Dronamraju (4):
powerpc: Refactor is_kvm_guest declaration to new header
powerpc: Rename is_kvm_guest to check_kvm_guest
powerpc: Reintrod
: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/include/asm/kvm_guest.h | 10 ++
arch/powerpc/include/asm
: Michael Ellerman
Cc: Nicholas Piggin
Cc: Nathan Lynch
Cc: Gautham R Shenoy
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Juri Lelli
Cc: Waiman Long
Cc: Phil Auld
Acked-by: Waiman Long
Signed-off-by: Srikar Dronamraju
---
arch/powerpc/include/asm/paravirt.h | 18 ++
1 file
; Fixes: 2b1444983508 ("uprobes, mm, x86: Add the ability to install and remove
> uprobes breakpoints")
> Cc: sta...@vger.kernel.org
> Reported-by: Kees Cook
> Signed-off-by: Masami Hiramatsu
Looks good to me.
Reviewed-by: Srikar Dronamraju
> ---
> arch/x86/kernel/upro
we need to be conservative, especially if we want to make WA_WAKER on by
default. I would still like to hear from other people whether they think it is
okay to enable it by default. I wonder if enabling it by default could cause
some load imbalances, leading to more active load balancing down the line. I
haven't benchmarked with WA_WAKER enabled.
Thanks Rik for your inputs.
--
Thanks and Regards
Srikar Dronamraju
* Peter Zijlstra [2021-03-01 16:44:42]:
> On Sat, Feb 27, 2021 at 02:56:07PM -0500, Rik van Riel wrote:
> > On Fri, 2021-02-26 at 22:10 +0530, Srikar Dronamraju wrote:
>
> > > + if (sched_feat(WA_WAKER) && tnr_busy < tllc_size)
> > > + return
* Peter Zijlstra [2021-03-01 16:40:33]:
> On Fri, Feb 26, 2021 at 10:10:29PM +0530, Srikar Dronamraju wrote:
> > +static int prefer_idler_llc(int this_cpu, int prev_cpu, int sync)
> > +{
> > + struct sched_domain_shared *tsds, *psds;
> > + int pnr_busy, pllc_size
* Peter Zijlstra [2021-03-01 18:18:28]:
> On Mon, Mar 01, 2021 at 10:36:01PM +0530, Srikar Dronamraju wrote:
> > * Peter Zijlstra [2021-03-01 16:44:42]:
> >
> > > On Sat, Feb 27, 2021 at 02:56:07PM -0500, Rik van Riel wrote:
> > > > On Fri, 2021-02-26 at
* Dietmar Eggemann [2021-03-02 10:53:06]:
> On 26/02/2021 17:40, Srikar Dronamraju wrote:
>
> [...]
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8a8bd7b13634..d49bfcdc4a19 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/
* Peter Zijlstra [2021-03-02 10:10:32]:
> On Tue, Mar 02, 2021 at 01:09:46PM +0530, Srikar Dronamraju wrote:
> > > Oh, could be, I didn't grep :/ We could have core code keep track of the
> > > smt count I suppose.
> >
> > Could we use cpumask_
* Vincent Guittot [2021-03-08 14:52:39]:
> On Fri, 26 Feb 2021 at 17:41, Srikar Dronamraju
> wrote:
> >
Thanks Vincent for your review comments.
> > +static int prefer_idler_llc(int this_cpu, int prev_cpu, int sync)
> > +{
> > + struct sched_domain_shared
nstruction and Data flow.
>
> This patch renames the variable to "thread_group_l1_cache_map" to make
> it consistent with a subsequent patch which will introduce
> thread_group_l2_cache_map.
>
> This patch introduces no functional change.
>
Looks good to me.
R
property (L1 or
> L2) and update a suitable mask. This is a preparatory patch for the
> next patch where we will introduce discovery of thread-groups that
> share L2-cache.
>
> No functional change.
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
> Signed-off-by: Ga
00006 0001 0003
> 0005 0007
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
0004 0006 0001
> 0003 0005 0007 0002
> 0002 0004 0002
> 0004 0006 0001 0003
> 0005 0007
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
y is being shared by
> which groups of threads. This array can encode information about
> multiple properties being shared by different thread-groups within the
> core.
>
Looks good to me.
Reviewed-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
e updated or used for rq->avg. Should we look at
splitting sched_avg so that rq->avg doesn't have unwanted fields?
--
Thanks and Regards
Srikar Dronamraju
> - load_idx = 0;
> -
> do {
> struct sg_lb_stats *sgs = &tmp_sgs;
> int local_group;
The single-line change in the previous patch gets removed here, so why
not combine the two patches?
--
Thanks and Regards
Srikar Dronamraju
nction declarations out of arch
>
> Oleg Nesterov (3):
> uprobes: Kill module_init() and module_exit()
> uprobes: Introduce arch_uprobe->ixol
> uprobes: Export write_opcode() as uprobe_write_opcode()
>
Acked-by: Srikar Dronamraju
for this series.
--
Thanks
> hammer). We don't need to do this on ARM, and we don't do it. The
> result is that, unless PERF_EVENT is set separately, uprobes tends
> not to build. I was lucking-out in my testing due to other default
> config items turning on PERF_EVENT.
>
--
Thanks and Regards
Srikar
wake up
was the commit that's causing the threads to be stuck in futex.
I reverted b0c29f79ecea0b6fbcefc999e70f2843ae8306db on top of v3.14-rc6 and
confirmed that reverting the commit solved the problem.
--
Thanks and Regards
Srikar Dronamraju
However if I set the
constraint to core (which means running more instances of java), the
problem is not seen. My guess is that the fewer the java instances,
the easier it is to reproduce.
--
Thanks and Regards
Srikar Dronamraju
S 3fff825f6044 0 14682 14076 0x0080
Is there any other information that I can provide that would help?
--
Thanks and Regards
Srikar Dronamraju
lds/linux.git/commit/?id=b0c29f79ecea0b6fbcefc999
are the same.
Or am I missing something?
--
Thanks and Regards
Srikar Dronamraju
13/12/19/624
> https://lkml.org/lkml/2013/12/19/630
I reverted commits 99b60ce6 and b0c29f79, then applied the patches from
the above URL. The last one had a reject, but it was straightforward to
resolve. After this, specjbb completes.
So reverting and applying the v3 3/4 and 4/4 patches
().
>
> ppc: Looks like, it can emulate almost everything. Does it
>actually needs to record the fact that emulate_step()
>failed? Hopefully not. But if yes, it can add the ppc-
> specific flag into arch_uprobe.
>
> TODO: rename arch_uprob
ood_insns_32 should depend
> on CONFIG_X86_32/EMULATION
>
> - the usage of mm->context.ia32_compat looks wrong if the task
> is TIF_X32.
>
> Signed-off-by: Oleg Nesterov
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
> checks insn->x86_64.
>
> Also, remove the no longer needed "struct mm_struct *mm" argument and
> the unnecessary "return" at the end.
>
> Signed-off-by: Oleg Nesterov
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
() to use utask instead of autask, to
> make the code more symmetrical with arch_uprobe_post_xol().
>
> Signed-off-by: Oleg Nesterov
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
* Namhyung Kim [2013-11-27 15:19:49]:
> From: Namhyung Kim
>
> There are functions that can be shared to both of kprobes and uprobes.
> Separate common data structure to struct trace_probe and use it from
> the shared functions.
>
> Acked-by: Masami Hiramatsu
> Cc:
* Namhyung Kim [2013-11-27 15:19:47]:
> From: Namhyung Kim
>
> The uprobe syntax requires an offset after a file path not a symbol.
>
> Reviewed-by: Masami Hiramatsu
> Acked-by: Oleg Nesterov
> Cc: Srikar Dronamraju
> Cc: zhangwei(Jovi)
> Cc: Arnaldo Carval
* Namhyung Kim [2013-11-27 15:19:50]:
> From: Namhyung Kim
>
> Convert struct trace_uprobe to make use of the common trace_probe
> structure.
>
> Reviewed-by: Masami Hiramatsu
> Cc: Srikar Dronamraju
> Cc: Oleg Nesterov
> Cc: zhangwei(Jovi)
> Cc: Arnaldo Car
hing like "private_data_for_handlers" so that the tracing
> handlers could use it to communicate with call_fetch() methods.
One nit below + request for change in the above commit message.
Otherwise
Acked-by: Srikar Dronamraju
>
> Signed-off-by: Oleg Nesterov
> ---
>
ys
>nothing.
>
> 3. Kill the dummy definition of uprobe_get_swbp_addr(), nobody
>except handle_swbp() needs it.
>
> 4. Purely cosmetic, but move the decl of uprobe_get_swbp_addr()
>up, close to other __weak helpers.
>
> Signed-off-by: Oleg Nesterov
Acked-by
* Oleg Nesterov [2013-11-09 18:54:09]:
> powerpc has both arch_uprobe->insn and arch_uprobe->ainsn to
> make the generic code happy. This is no longer needed after
> the previous change, powerpc can just use "u32 insn".
>
> Signed-off-by: Oleg Nesterov
Acke
me effect.
>
> Signed-off-by: Oleg Nesterov
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
8b 52 10 48 8b 52 10 8b 4a 08 8b 52 04
49 01 cc 48 01 d3 8
3
RIP [] update_group_power+0xa3/0x130
RSP
CR2: 0010
---[ end trace cd8cb7fb261d7bea ]---
Kernel panic - not syncing: Fatal exception
This can be fixed by a simple check below.
--
Thanks and Regards
Srikar Dron
After commit 863bffc80898 ("sched/fair: Fix group power_orig
computation"), we might end up computing group power before the
sched_domain for a cpu has been updated.
If rq->sd is not yet updated, fall back to cpu_power.
Signed-off-by: Srikar Dronamraju
---
Changelog since v1: Fix divide by zero
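For reference, the kind of guard being described would sit in the group-power
loop, roughly like the sketch below (illustrative only, not the exact hunk;
power_of() returns the per-cpu power already set by update_cpu_power()):

	if (unlikely(!rq->sd)) {
		/* sched_domain not attached to this runqueue yet: fall back
		 * to the per-cpu power so group power never ends up zero. */
		power += power_of(cpu);
		continue;
	}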
>
> Hurm.. can you provide the actual topology of the machine that triggers
> this? My brain hurts trying to thing through the weird cases of this
> code.
>
Hope this helps. Please do let me know if you were looking for pdf output.
Machine (251GB)
NUMANode P#0 (63GB)
Socket P#0
Okay, moving to arch_uprobe_task is fine. I probably got confused by
"First of all it is not really needed,"
>
> > Your change still retains it.
>
>
> OK. How about dup_xol_work/dup_xol_vaddr ?
>
Yes fine with me.
--
Thanks and Regards
Srikar Dronamraju
* Oleg Nesterov [2013-11-12 20:20:38]:
> On 11/12, Srikar Dronamraju wrote:
> >
> > Okay, moving to arch_uprobe_task is fine. I probably got confused by
> > "First of all it is not really needed,"
>
> OK, this doesn't look good, I agree.
>
* Peter Zijlstra [2013-11-12 18:55:54]:
> On Tue, Nov 12, 2013 at 10:45:07PM +0530, Srikar Dronamraju wrote:
> > >
> > > Hurm.. can you provide the actual topology of the machine that triggers
> > > this? My brain hurts trying to thing through the
ote: we do not care if offset + size > i_size, the users of
> arch_uprobe->insn can't know how many bytes were actually copied
> anyway. But perhaps this needs more changes.
>
> Signed-off-by: Oleg Nesterov
Acked-by: Srikar Dronamraju
--
Thanks and Regards
roup_power().
> Please do elaborate on how you observed this.
>
Does this clarify?
--
Thanks and Regards
Srikar Dronamraju