[1] Previous versions:
v1:
https://lore.kernel.org/bpf/ca+i-1c1lvkjfqlbyk6siiqhxfy0jcr7ubcamj4jced0a9aw...@mail.gmail.com/T/#t
v2:
https://lore.kernel.org/bpf/20210118155735.532663-1-jackm...@google.com/T/#t
Brendan Jackman (2):
docs: bpf: Fixup atomics markup
docs: bpf: Clarify -mc
Alexei pointed out [1] that this wording is pretty confusing. Here's
an attempt to be more explicit and clear.
[1]
https://lore.kernel.org/bpf/CAADnVQJVvwoZsE1K+6qRxzF7+6CvZNzygnoBW9tZNWJELk5c=q...@mail.gmail.com/T/#m07264fc18fdc43af02fc1320968afefcc73d96f4
Signed-off-by: Brendan Jackman
This fixes up the markup to fix a warning, be more consistent with
the use of monospace, and use the correct .rst syntax for emphasis
(* instead of _).
Signed-off-by: Brendan Jackman
Reviewed-by: Lukas Bulwahn
---
Documentation/networking/filter.rst | 15 ---
1 file changed, 8 insertions
Add missing skeleton destroy call.
Reported-by: Yonghong Song
Fixes: 37086bfdc737 ("bpf: Propagate stack bounds to registers in atomics w/
BPF_FETCH")
Signed-off-by: Brendan Jackman
---
tools/testing/selftests/bpf/prog_tests/atomic_bounds.c | 2 ++
1 file changed, 2 insertions(+)
Add missing skeleton destroy call.
Reported-by: Yonghong Song
Fixes: 37086bfdc737 ("bpf: Propagate stack bounds to registers in atomics w/
BPF_FETCH")
Signed-off-by: Brendan Jackman
---
Differences from v1: this actually builds.
tools/testing/selftests/bpf/prog_tests/atomic_bo
p.
>
> Hence, make htmldocs warns on Documentation/networking/filter.rst:1053:
>
> WARNING: Inline emphasis start-string without end-string.
>
> Add some minimal markup to address this warning.
>
> Signed-off-by: Lukas Bulwahn
Acked-by: Brendan Jackman
>
> ---
&
Thanks!
On Wed, 27 Jan 2021 at 03:25, wrote:
>
> From: Menglong Dong
>
> This 'BPF_ADD' is duplicated, and I believe it should be 'BPF_AND'.
>
> Fixes: 981f94c3e921 ("bpf: Add bitwise atomic instructions")
> Signed-off-by: Menglong Dong
Acked
On Wed, 16 Dec 2020 at 08:08, Yonghong Song wrote:
>
>
>
> On 12/15/20 4:18 AM, Brendan Jackman wrote:
> > Document new atomic instructions.
> >
> > Signed-off-by: Brendan Jackman
>
> Ack with minor comments below.
>
> Acked-by: Yonghong Song
>
>
On Wed, 16 Dec 2020 at 08:19, Yonghong Song wrote:
>
>
>
> On 12/15/20 3:12 AM, Brendan Jackman wrote:
> > On Tue, Dec 08, 2020 at 10:15:35AM -0800, Yonghong Song wrote:
> >>
> >>
> >> On 12/8/20 8:59 AM, Brendan Jackman wrote:
> >>> On T
On Fri, Dec 11, 2020 at 11:44:41AM -0800, Andrii Nakryiko wrote:
> On Fri, Dec 11, 2020 at 10:58 AM Brendan Jackman wrote:
> >
> > This allows the user to do their own manual polling in more
> > complicated setups.
> >
> > Signed-off-by: Brendan Jackman
> >
This provides a convenient perf ringbuf -> libbpf ringbuf migration
path for users of external polling systems. It is analogous to
perf_buffer__epoll_fd.
Signed-off-by: Brendan Jackman
---
Difference from v1: Added entry to libbpf.map.
tools/lib/bpf/libbpf.h | 1 +
tools/lib/bpf/libbpf.
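A minimal usage sketch of the external-polling flow this enables, assuming the
new ring_buffer__epoll_fd() accessor; the map fd and callback names here are
illustrative and error handling is omitted:

#include <stddef.h>
#include <sys/epoll.h>
#include <bpf/libbpf.h>

static int handle_sample(void *ctx, void *data, size_t size)
{
        return 0;                       /* consume and discard the record */
}

/* Drive a libbpf ring buffer from an externally owned epoll loop. */
static void poll_externally(int ringbuf_map_fd)
{
        struct ring_buffer *rb;
        struct epoll_event ev = { .events = EPOLLIN }, got;
        int epfd = epoll_create1(0);

        rb = ring_buffer__new(ringbuf_map_fd, handle_sample, NULL, NULL);
        ev.data.ptr = rb;
        epoll_ctl(epfd, EPOLL_CTL_ADD, ring_buffer__epoll_fd(rb), &ev);

        while (epoll_wait(epfd, &got, 1, -1) > 0)
                ring_buffer__consume(got.data.ptr);

        ring_buffer__free(rb);
}

The point of the accessor is simply that the epoll fd is owned by the caller's
event loop rather than hidden behind ring_buffer__poll().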
Seems I never replied to this, thanks for the reviews!
On Mon, Dec 07, 2020 at 10:37:32PM -0800, John Fastabend wrote:
> Brendan Jackman wrote:
> > This adds two atomic opcodes, both of which include the BPF_FETCH
> > flag. XCHG without the BPF_FETCH flag would naturally encode
On Mon, 14 Dec 2020 at 21:46, Daniel Borkmann wrote:
>
> On 12/14/20 12:38 PM, Brendan Jackman wrote:
[...]
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 7c4126542e2b..7be850271be6 100644
> > --- a/tools/lib/bpf/libbpf.map
> >
On Tue, Dec 08, 2020 at 10:15:35AM -0800, Yonghong Song wrote:
>
>
> On 12/8/20 8:59 AM, Brendan Jackman wrote:
> > On Tue, Dec 08, 2020 at 08:38:04AM -0800, Yonghong Song wrote:
> > >
> > >
> > > On 12/8/20 4:41 AM, Brendan Jackman wrote:
> >
Fastabend
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_jit_comp.c | 43 +
1 file changed, 25 insertions(+), 18 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 796506dcfc42..30526776fa78 100644
--- a/arch/x86/net/bpf_
https://lore.kernel.org/bpf/5fcf0fbcc8aa8_9ab320853@john-XPS-13-9370.notmuch/
[4] Mail from Andrii about not supporting old Clang in selftests:
https://lore.kernel.org/bpf/CAEf4BzYBddPaEzRUs=jaWSo5kbf=lzdb7geauvj85gxlqzt...@mail.gmail.com/
Brendan Jackman (11):
bpf: x86: Factor ou
old-value is easier to JIT, so that's
what we use.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_jit_comp.c| 8
include/linux/filter.h | 2 ++
include/uapi/linux/bpf.h | 4 +++-
kernel/bpf/core.c | 20
kernel/bpf/di
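As a plain-C illustration of the return-old-value convention referred to
above (not the kernel code itself):

/* BPF_CMPXCHG-style semantics: return the value actually observed in
 * memory rather than a success boolean.  On failure the builtin rewrites
 * 'expected' with the observed value, and on success 'expected' already
 * equals it, so returning 'expected' gives the old value either way --
 * which maps directly onto what x86 CMPXCHG leaves in RAX (BPF R0). */
static long cmpxchg_return_old(long *ptr, long expected, long desired)
{
        __atomic_compare_exchange_n(ptr, &expected, desired, false,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return expected;
}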
This provides a convenient perf ringbuf -> libbpf ringbuf migration
path for users of external polling systems. It is analogous to
perf_buffer__epoll_fd.
Signed-off-by: Brendan Jackman
---
Difference from v1: Added entry to libbpf.map.
tools/lib/bpf/libbpf.h | 1 +
tools/lib/bpf/libbpf.
Document new atomic instructions.
Signed-off-by: Brendan Jackman
---
Documentation/networking/filter.rst | 26 ++
1 file changed, 26 insertions(+)
diff --git a/Documentation/networking/filter.rst
b/Documentation/networking/filter.rst
index 1583d59d806d..26d508a5e038
ectly support
the fetch_ version of these operations, so we need to generate a CMPXCHG
loop in the JIT. This requires the use of two temporary registers;
IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_j
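The shape of that loop, as plain C rather than JIT output (illustrative
only, using fetch-AND as the example):

/* Fetch-style bitwise op built from a compare-exchange loop.  On failure
 * the compare-exchange refreshes 'old' with the current memory contents,
 * so the loop retries with a fresh snapshot; 'old' is the pre-op value
 * once the exchange succeeds. */
static long fetch_and_via_cmpxchg(long *ptr, long mask)
{
        long old = *ptr;

        while (!__atomic_compare_exchange_n(ptr, &old, old & mask, false,
                                            __ATOMIC_SEQ_CST,
                                            __ATOMIC_SEQ_CST))
                ;
        return old;
}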
switch case means that we
need an extra conditional branch to differentiate them) in favour of
compact and (relatively!) simple C code.
Acked-by: Yonghong Song
Signed-off-by: Brendan Jackman
---
kernel/bpf/core.c | 80 +++
1 file changed, 39 insertions
ect's data section, which tells the userspace
object whether to skip the atomics test.
Acked-by: Yonghong Song
Signed-off-by: Brendan Jackman
---
tools/testing/selftests/bpf/Makefile | 2 +
.../selftests/bpf/prog_tests/atomics.c| 246 ++
tools/testing/sel
The BPF_FETCH field can be set in bpf_insn.imm, for BPF_ATOMIC
instructions, in order to have the previous value of the
atomically-modified memory location loaded into the src register
after an atomic op is carried out.
Suggested-by: Yonghong Song
Signed-off-by: Brendan Jackman
Acked-by: John
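For concreteness, one way such an instruction can be hand-encoded as a raw
struct bpf_insn, assuming the uapi definitions of BPF_ATOMIC and BPF_FETCH;
the register choice and stack offset are purely illustrative:

#include <linux/bpf.h>

/* r0 = atomic_fetch_add((u64 *)(r10 - 8), r0): the operation and the
 * FETCH flag live in the immediate field, and because BPF_FETCH is set
 * the source register receives the old value of the memory location. */
struct bpf_insn fetch_add_insn = {
        .code    = BPF_STX | BPF_DW | BPF_ATOMIC,
        .dst_reg = BPF_REG_10,  /* base of the memory operand (stack) */
        .src_reg = BPF_REG_0,   /* addend in, old value out */
        .off     = -8,
        .imm     = BPF_ADD | BPF_FETCH,
};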
I can't find a reason why this code is in resolve_pseudo_ldimm64;
since I'll be modifying it in a subsequent commit, tidy it up.
Signed-off-by: Brendan Jackman
Acked-by: Yonghong Song
Acked-by: John Fastabend
---
kernel/bpf/verifier.c | 13 ++---
1 file changed, 6 insert
possible (doesn't break existing valid BPF progs) because the
immediate field is currently reserved MBZ and BPF_ADD is zero.
All uses are removed from the tree but the BPF_XADD definition is
kept around to avoid breaking builds for people including kernel
headers.
Signed-off-by: Brendan Jackman
The JIT case for encoding atomic ops is about to get more
complicated. In order to make the review & resulting code easier,
let's factor out some shared helpers.
Signed-off-by: Brendan Jackman
Acked-by: John Fastabend
---
arch/x86/net/bpf_jit_co
A later commit will need to lookup a subset of these opcodes. To
avoid duplicating code, pull out a table.
The shift opcodes won't be needed by that later commit, but they're
already duplicated, so fold them into the table anyway.
Signed-off-by: Brendan Jackman
Acked-by: John
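A minimal, kernel-internal sketch of the table-driven lookup being described;
the array name is hypothetical and the values are the standard x86 opcode
bytes for the reg-to-r/m forms:

#include <linux/types.h>
#include <linux/bpf.h>

/* Map a BPF ALU operation to its x86 opcode byte once, instead of
 * repeating the mapping in several switch statements. */
static const u8 alu_opcode_bytes[16] = {
        [BPF_ADD >> 4] = 0x01,  /* add r/m, reg */
        [BPF_SUB >> 4] = 0x29,  /* sub r/m, reg */
        [BPF_AND >> 4] = 0x21,  /* and r/m, reg */
        [BPF_OR  >> 4] = 0x09,  /* or  r/m, reg */
        [BPF_XOR >> 4] = 0x31,  /* xor r/m, reg */
};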
The case for JITing atomics is about to get more complicated. Let's
factor out some common code to make the review and result more
readable.
NB the atomics code doesn't yet use the new helper - a subsequent
patch will add its use as a side-effect of other changes.
Signed-off-by: Brendan Jackman
chset:
https://lore.kernel.org/bpf/20201123173202.1335708-1-jackm...@google.com/
[2] Visualisation of eBPF opcode space:
https://gist.github.com/bjackman/00fdad2d5dfff601c1918bc29b16e778
Brendan Jackman (13):
bpf: x86: Factor out emission of ModR/M for *(reg + off)
bpf: x86: Factor out emiss
A later commit will need to lookup a subset of these opcodes. To
avoid duplicating code, pull out a table.
The shift opcodes won't be needed by that later commit, but they're
already duplicated, so fold them into the table anyway.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_
The JIT case for encoding atomic ops is about to get more
complicated. In order to make the review & resulting code easier,
let's factor out some shared helpers.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_jit_comp.c | 39 ++---
1 file ch
possible (doesn't break existing valid BPF progs) because the
immediate field is currently reserved MBZ and BPF_ADD is zero.
All uses are removed from the tree but the BPF_XADD definition is
kept around to avoid breaking builds for people including kernel
headers.
Signed-off-by: Brendan Jackman
This value can be set in bpf_insn.imm, for BPF_ATOMIC instructions,
in order to have the previous value of the atomically-modified memory
location loaded into the src register after an atomic op is carried
out.
Suggested-by: Yonghong Song
Signed-off-by: Brendan Jackman
---
arch/x86/net
I can't find a reason why this code is in resolve_pseudo_ldimm64;
since I'll be modifying it in a subsequent commit, tidy it up.
Signed-off-by: Brendan Jackman
---
kernel/bpf/verifier.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/kernel/bpf/ve
switch case means that we
need an extra conditional branch to differentiate them) in favour of
compact and (relatively!) simple C code.
Signed-off-by: Brendan Jackman
---
kernel/bpf/core.c | 79 +++
1 file changed, 38 insertions(+), 41 deletions
rn-old-value is easier to JIT.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_jit_comp.c| 8
include/linux/filter.h | 20
include/uapi/linux/bpf.h | 4 +++-
kernel/bpf/core.c | 20
kernel/bpf/disasm.c
There's currently only one usage of this, but the implementation of
atomic_sub will add another.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_jit_comp.c | 23 ++-
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x8
ectly support
the fetch_ version of these operations, so we need to generate a CMPXCHG
loop in the JIT. This requires the use of two temporary registers;
IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_j
Including only interpreter and x86 JIT support.
x86 doesn't provide an atomic exchange-and-subtract instruction that
could be used for BPF_SUB | BPF_FETCH, however we can just emit a NEG
followed by an XADD to get the same effect.
Signed-off-by: Brendan Jackman
---
arch/x86/net/bpf_jit_c
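The identity being relied on, spelled out as plain C (an illustration of the
technique, not the emitted machine code):

/* Atomic fetch-and-subtract expressed as fetch-and-add of the negation;
 * at the ISA level this is the NEG followed by LOCK XADD described above. */
static long fetch_sub_via_xadd(long *ptr, long val)
{
        return __atomic_fetch_add(ptr, -val, __ATOMIC_SEQ_CST);
}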
e - I tried implementing that and found that it
ballooned into an explosion of nightmares at the top of
tools/testing/selftests/bpf/Makefile without actually improving the
clarity of the CLANG_BPF_BUILD_RULE code at all. Hence the simple
$(shell) call...
Signed-off-by: Brendan Jackman
---
tools/te
Signed-off-by: Brendan Jackman
---
Documentation/networking/filter.rst | 27 +++
1 file changed, 27 insertions(+)
diff --git a/Documentation/networking/filter.rst
b/Documentation/networking/filter.rst
index 1583d59d806d..c86091b8cb0e 100644
--- a/Documentation
The arguments of sizeof are not evaluated, so arguments are safe to
re-use in that context. Excluding sizeof sub-expressions means
macros like ARRAY_SIZE can pass checkpatch.
Signed-off-by: Brendan Jackman
---
scripts/checkpatch.pl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
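To make the rationale concrete, a simplified form of the kind of macro this
exempts (the in-tree ARRAY_SIZE additionally carries a must-be-array check):

/* The argument appears twice, but only inside sizeof, where it is never
 * evaluated -- so the "re-use" checkpatch previously flagged cannot cause
 * double side effects here. */
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))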
The arguments of sizeof are not evaluated so arguments are safe to
re-use in that context. Excluding sizeof sub-expressions means
macros like ARRAY_SIZE can pass checkpatch.
Cc: Andy Whitcroft
Cc: Joe Perches
Signed-off-by: Brendan Jackman
---
v2 is the same patch, I just forgot to add CCs to
On Thu, 20 Aug 2020 at 23:46, Kees Cook wrote:
>
> On Thu, Aug 20, 2020 at 06:47:53PM +0200, Brendan Jackman wrote:
> > From: Paul Renauld
> >
> > LSMs have high overhead due to indirect function calls through
> > retpolines. This RPC proposes to replace these with
On Mon, Aug 24, 2020 at 04:33:44PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 24, 2020 at 04:09:09PM +0200, Brendan Jackman wrote:
>
> > > > Why this trick with a switch statement? The table of static call is
> > > > defined
> > > > at compile time.
On Fri, 21 Aug 2020 at 00:46, Casey Schaufler wrote:
>
> On 8/20/2020 9:47 AM, Brendan Jackman wrote:
[...]
> What does NOP really look like?
The NOP is the same as a regular function call but the CALL
instruction is replaced with a NOP instruction. The code that sets up
the call para
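As a rough, kernel-internal sketch of the static_call mechanism being
discussed (hook names are hypothetical; the real LSM wiring is more involved):

#include <linux/static_call.h>

static int default_hook(int x) { return 0; }
static int fancy_hook(int x)   { return x + 1; }

/* The call site in run_hook() is emitted as a direct, patchable CALL, so
 * no retpolined indirect call is paid; the DEFINE_STATIC_CALL_NULL /
 * static_call_cond() variants of this API additionally let an unused
 * slot be patched down to a plain NOP, which is the case described
 * above. */
DEFINE_STATIC_CALL(lsm_example_hook, default_hook);

static int run_hook(int x)
{
        return static_call(lsm_example_hook)(x);
}

static void install_hook(void)
{
        static_call_update(lsm_example_hook, fancy_hook);
}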
On Mon, 24 Aug 2020 at 18:43, Casey Schaufler wrote:
>
> On 8/24/2020 8:20 AM, Brendan Jackman wrote:
> > On Fri, 21 Aug 2020 at 00:46, Casey Schaufler
> > wrote:
> >> On 8/20/2020 9:47 AM, Brendan Jackman wrote:
> > [...]
> >> What does NOP real
Hi Rafael,
On Fri, Apr 14 2017 at 22:51, Rafael J. Wysocki wrote:
> On Tuesday, April 11, 2017 12:20:41 AM Rafael J. Wysocki wrote:
>> From: Rafael J. Wysocki
>>
>> Make the schedutil governor take the initial (default) value of the
>> rate_limit_us sysfs attribute from the (new) transition_dela
Hi Rafael,
On Mon, Apr 10 2017 at 00:10, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> Make the schedutil governor compute the initial (default) value of
> the rate_limit_us sysfs attribute by multiplying the transition
> latency by a multiplier depending on the policy and set by the
> s
CPUHP_AP_SCHED_MIGRATE_DYING doesn't exist, it looks like this was
supposed to refer to CPUHP_AP_SCHED_STARTING's teardown callback
i.e. sched_cpu_dying.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Sebastian Andrzej Siewior
Cc: Boris Ostrovsky
Cc: Dietmar Eggemann
Cc: Quentin
On Wed, Aug 09 2017 at 21:22, Atish Patra wrote:
> On 08/03/2017 10:05 AM, Brendan Jackman wrote:
>>
>> On Thu, Aug 03 2017 at 13:15, Josef Bacik wrote:
>>> On Thu, Aug 03, 2017 at 11:53:19AM +0100, Brendan Jackman wrote:
>>>>
>>>> Hi,
>>&
big.LITTLE systems since they have relatively few CPUs, which
suggests the trade-off makes sense here.
Signed-off-by: Brendan Jackman
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Josef Bacik
Cc: Joel Fernandes
Cc: Mike Galbraith
Cc: Matt Fleming
---
include/linux/sched/wake_q.h | 2 ++
k
On Mon, Aug 28 2017 at 08:56, Vincent Guittot wrote:
> On 25 August 2017 at 17:51, Brendan Jackman wrote:
>>
>> On Fri, Aug 25 2017 at 13:38, Vincent Guittot wrote:
>>> On 25 August 2017 at 12:16, Brendan Jackman wrote:
>>>> find_idlest_group currently
ive would be to just initialise @new_cpu to
@cpu instead of @prev_cpu (which is what PeterZ suggested in v1 review). In
that case, some extra code could be removed in & around
find_idlest_group_cpu.
Brendan Jackman (5):
sched/fair: Move select_task_rq_fair slow-path into its own functi
Since commit 83a0a96a5f26 ("sched/fair: Leverage the idle state info
when choosing the "idlest" cpu") find_idlest_group_cpu (formerly
find_idlest_cpu) no longer returns -1.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ing
case.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Reviewed-by: Vincent Guittot
---
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched
s and we return @prev_cpu from select_task_rq_fair.
This is fixed by initialising @new_cpu to @cpu instead of
@prev_cpu.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
kernel/sched/fair.
for this case, and a comment to
find_idlest_group. Now when find_idlest_group returns NULL, it always
means that the local group is allowed and idlest.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter
is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair.
Suggested-by: Peter Zijlstra
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
On Tue, Aug 22 2017 at 04:34, Joel Fernandes wrote:
> Hi Peter,
>
> On Mon, Aug 21, 2017 at 2:14 PM, Peter Zijlstra wrote:
>> On Mon, Aug 21, 2017 at 04:21:28PM +0100, Brendan Jackman wrote:
>>> The current use of returning NULL from find_idlest_group is broken in
>&g
On Tue, Aug 22 2017 at 07:48, Vincent Guittot wrote:
> On 21 August 2017 at 17:21, Brendan Jackman wrote:
>> The current use of returning NULL from find_idlest_group is broken in
> [snip]
>> ---
>> kernel/sched/fair.c | 34 +++---
>>
On Tue, Aug 22 2017 at 10:39, Brendan Jackman wrote:
> On Tue, Aug 22 2017 at 04:34, Joel Fernandes wrote:
>> Hi Peter,
>>
>> On Mon, Aug 21, 2017 at 2:14 PM, Peter Zijlstra wrote:
>>> On Mon, Aug 21, 2017 at 04:21:28PM +0100, Brendan Jackman wrote:
>>>&
On Tue, Aug 22 2017 at 11:03, Peter Zijlstra wrote:
> On Tue, Aug 22, 2017 at 11:39:26AM +0100, Brendan Jackman wrote:
>
>> However the code movement helps - I'll combine it with Vincent's
>> suggestions and post a v2.
>
> Please also split into multiple pa
hat PeterZ suggested in v1 review). In
that case, some extra code could be removed in & around
find_idlest_group_cpu.
Brendan Jackman (5):
sched/fair: Move select_task_rq_fair slow-path into its own function
sched/fair: Remove unnecessary comparison with -1
sched/fair: Fix find_id
is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair.
Suggested-by: Peter Zijlstra
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
case.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
for this case, and a comment to
find_idlest_group. Now when find_idlest_group returns NULL, it always
means that the local group is allowed and idlest.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter
).
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
kernel/sched/fair.c | 29 ++---
1 file changed, 10 insertions(+), 19 deletions(-)
diff --git a/kernel/sched/fair.
Since commit 83a0a96a5f26 ("sched/fair: Leverage the idle state info
when choosing the "idlest" cpu") find_idlest_group_cpu (formerly
find_idlest_cpu) no longer returns -1.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ing
On Fri, Aug 25 2017 at 13:38, Vincent Guittot wrote:
> On 25 August 2017 at 12:16, Brendan Jackman wrote:
>> find_idlest_group currently returns NULL when the local group is
>> idlest. The caller then continues the find_idlest_group search at a
>> lower level of the curre
On Mon, Sep 18 2017 at 22:15, Joel Fernandes wrote:
> Hi Brendan,
Hi Joel,
Thanks for taking a look :)
> On Fri, Aug 11, 2017 at 2:45 AM, Brendan Jackman
> wrote:
>> This patch adds a parameter to select_task_rq, sibling_count_hint
>> allowing the caller, where it has
This patchset optimises away an unused comparison, and fixes some corner cases
in
the find_idlest_group path of select_task_rq_fair.
Brendan Jackman (2):
sched/fair: Remove unnecessary comparison with -1
sched/fair: Fix use of NULL with find_idlest_group
kernel/sched/fair.c | 36
Since 83a0a96a5f26 (sched/fair: Leverage the idle state info when
choosing the "idlest" cpu) find_idlest_cpu no longer returns -1.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
-
This patch also re-words the check for whether the group in
consideration is local, under the assumption that the first group in
the sched domain is always the local one.
Signed-off-by: Brendan Jackman
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Cc: Dietmar Eggemann
Cc:
Hi Josef,
Thanks for taking a look.
On Mon, Aug 21 2017 at 17:26, Josef Bacik wrote:
> On Mon, Aug 21, 2017 at 04:21:28PM +0100, Brendan Jackman wrote:
[...]
>> -local_group = cpumask_test_cpu(this_cpu,
>> - sched_g
Hi Viresh,
On Mon, May 22 2017 at 05:10, Viresh Kumar wrote:
> The rate_limit_us for the schedutil governor is getting set to 500 ms by
> default for the ARM64 hikey board. And its way too much, even for the
> default value. Lets set the default transition_delay_ns to something
> more realistic (1
ring fork balancing, we won't need the task_util and
we'd just clobber the last_update_time, which is supposed to be 0.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
changes v1
e force_balance case means
there's an upper bound on the time before we can attempt to solve the
underutilization: after DIE's sd->balance_interval has passed, the
next nohz balance kick will help us out.
Signed-off-by: Brendan Jackman
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter
Since commit 83a0a96a5f26 ("sched/fair: Leverage the idle state info
when choosing the "idlest" cpu") find_idlest_group_cpu (formerly
find_idlest_cpu) no longer returns -1.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ing
case.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Reviewed-by: Vincent Guittot
Reviewed-by: Josef Bacik
---
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions
ive would be to just initialise @new_cpu to
@cpu instead of @prev_cpu (which is what PeterZ suggested in v1 review). In
that case, some extra code could be removed in & around
find_idlest_group_cpu.
Brendan Jackman (5):
sched/fair: Move select_task_rq_fair slow-path into its own functi
is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair.
Suggested-by: Peter Zijlstra
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
for this case, and a comment to
find_idlest_group. Now when find_idlest_group returns NULL, it always
means that the local group is allowed and idlest.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter
s and we return @prev_cpu from select_task_rq_fair.
This is fixed by initialising @new_cpu to @cpu instead of
@prev_cpu.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Reviewed-by: Josef
Hi PeterZ,
I just got this in my inbox and noticed I didn't adress it to anyone. I
meant to address it to you.
On Fri, Sep 29 2017 at 17:05, Brendan Jackman wrote:
> There has been a bit of discussion on this RFC, but before I do any
> more work I'd really like your input o
7 at 09:45, Brendan Jackman wrote:
> This patch adds a parameter to select_task_rq, sibling_count_hint
> allowing the caller, where it has this information, to inform the
> sched_class the number of tasks that are being woken up as part of
> the same event.
>
> The wake_q mech
On Wed, Sep 20 2017 at 05:06, Joel Fernandes wrote:
>> On Tue, Sep 19, 2017 at 3:05 AM, Brendan Jackman
>> wrote:
>>> On Mon, Sep 18 2017 at 22:15, Joel Fernandes wrote:
> [..]
>>>>> IIUC, if wake_affine() behaves correctly this trick wouldn't be
>
Hi Joel,
Sorry I didn't see your comments on the code before, I think it's
orthogonal to the other thread about the overall design, so I'll just
respond here.
On Tue, Sep 19 2017 at 05:15, Joel Fernandes wrote:
> Hi Brendan,
>
> On Fri, Aug 11, 2017 at 2:45 AM, Brendan Jac
Hi Peter,
Ping.
Log of previous discussion: https://patchwork.kernel.org/patch/9876769/
Cheers,
Brendan
On Tue, Aug 08 2017 at 09:55, Brendan Jackman wrote:
> We use task_util in find_idlest_group via capacity_spare_wake. This
> task_util is updated in wake_cap. However wake_cap is n
Hi Peter, Josef,
Do you have any thoughts on this one?
On Mon, Aug 07 2017 at 16:39, Brendan Jackman wrote:
> The "goto force_balance" here is intended to mitigate the fact that
> avg_load calculations can result in bad placement decisions when
> priority is asymmetrical
Hi,
On Fri, Jun 30 2017 at 17:55, Josef Bacik wrote:
> On Fri, Jun 30, 2017 at 07:02:20PM +0200, Mike Galbraith wrote:
>> On Fri, 2017-06-30 at 10:28 -0400, Josef Bacik wrote:
>> > On Thu, Jun 29, 2017 at 08:04:59PM -0700, Joel Fernandes wrote:
>> >
>> > > That makes sense that we multiply slave'
On Thu, Aug 03 2017 at 13:15, Josef Bacik wrote:
> On Thu, Aug 03, 2017 at 11:53:19AM +0100, Brendan Jackman wrote:
>>
>> Hi,
>>
>> On Fri, Jun 30 2017 at 17:55, Josef Bacik wrote:
>> > On Fri, Jun 30, 2017 at 07:02:20PM +0200, Mike Galbraith wrote:
>> &
e force_balance case means
there's an upper bound on the time before we can attempt to solve the
underutilization: after DIE's sd->balance_interval has passed, the
next nohz balance kick will help us out.
Signed-off-by: Brendan Jackman
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter
ring fork balancing, we won't need the task_util and
we'd just clobber the last_update_time, which is supposed to be 0.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
changes v1
Hi Josef,
I happened to be thinking about something like this while investigating
a totally different issue with ARM big.LITTLE. Comment below...
On Fri, Jul 14 2017 at 13:21, Josef Bacik wrote:
> From: Josef Bacik
>
> The wake affinity logic will move tasks between two cpu's that appear to be
>
ring fork balancing, we won't need the task_util and
we'd just clobber the last_update_time, which is supposed to be 0.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
kernel/sche
On Wed, Aug 02 2017 at 13:24, Peter Zijlstra wrote:
> On Wed, Aug 02, 2017 at 02:10:02PM +0100, Brendan Jackman wrote:
>> We use task_util in find_idlest_group via capacity_spare_wake. This
>> task_util is updated in wake_cap. However wake_cap is not the only
>> re
y having
CPU B convert the pending/ongoing stats kick to a proper balance
by clearing the NOHZ_STATS_KICK bit in nohz_kick_needed.
Brendan Jackman (1):
sched/fair: Update blocked load from newly idle balance
Vincent Guittot (1):
sched: force update of blocked load of idle cpus
kern
out taking the rq
lock.
Change-Id: If9d4e14d7b4da86e05474b5c125d91d9b47f9e93
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Signed-off-by: Brendan Jackman
---
kernel/sched/core.c | 1 +
kernel/sched/fa
use PELT half life]
[Moved update_blocked_averages call outside rebalance_domains
to simplify code]
Signed-off-by: Brendan Jackman
---
kernel/sched/fair.c | 86 ++--
kernel/sched/sched.h | 1 +
2 files changed, 77 insertions(+), 10 deletions