On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> Introduce a helper function for setting lapic parameters when
> activating/deactivating APICv.
>
> Signed-off-by: Suravee Suthikulpanit
> ---
> arch/x86/kvm/lapic.c | 23 ++-
> arch/x86/kvm/lapic.h | 1 +
> 2 files changed, 19
On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> When activating/deactivating AVIC at runtime, all vCPUs have to be
> operating in the same mode. So, introduce a new interface to request
> that all vCPUs activate/deactivate APICv.
If we need to switch APICV on and off on all vCPUs of a VM, should
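The change quoted above (switching APICv on or off on all vCPUs of a VM together) is the kind of thing KVM's per-VM request machinery is meant for. A minimal sketch, assuming a hypothetical request bit and per-VM flag; only kvm_make_all_cpus_request() is the existing KVM helper:

	/* Hypothetical request bit and per-VM flag; not part of the posted series. */
	#define KVM_REQ_APICV_TOGGLE	KVM_ARCH_REQ(29)

	static void kvm_request_apicv_update(struct kvm *kvm, bool activate)
	{
		kvm->arch.apicv_active = activate;	/* assumed per-VM state */

		/*
		 * Kick every vCPU; each one re-applies the APICv state in its
		 * request handler before re-entering the guest, so the whole
		 * VM changes mode together.
		 */
		kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_TOGGLE);
	}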
On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> Activating/deactivating AVIC requires setting/unsetting the memory region used
> for APIC_ACCESS_PAGE_PRIVATE_MEMSLOT. So, refactor avic_init_access_page()
> to avic_setup_access_page() and add srcu_read_lock/unlock, which are needed
> to allow this
Hi Suravee.
I wonder how this interacts with Hyper-V SynIC; see comments below.
On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> AMD AVIC does not support ExtINT. Therefore, AVIC must be temporarily
> deactivated, falling back to using legacy interrupt injection via
> vINTR and the interrupt window
On 14.02.19 at 22:46, Jan H. Schönherr wrote:
Some systems experience regular interruptions (60 Hz SMI?) that prevent
the quick PIT calibration from succeeding: individual interruptions can be
so long that the PIT MSB is observed to decrement by 2 or 3 instead of 1.
The existing code cannot
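A toy user-space illustration (not the kernel code) of the failure mode described above: once a single long interruption makes two consecutive PIT MSB observations differ by more than one, a calibration loop that insists on single-step decrements has to give up, even though the samples around the jump would still be usable.

	#include <stdio.h>

	int main(void)
	{
		/* Simulated PIT MSB samples: a long SMI between the 3rd and 4th
		 * read makes the counter drop by 3 instead of 1. */
		const unsigned char msb[] = { 255, 254, 253, 250, 249 };
		unsigned int i;

		for (i = 1; i < sizeof(msb); i++) {
			int delta = msb[i - 1] - msb[i];

			if (delta != 1) {
				printf("MSB jumped by %d at sample %u -> quick calibration gives up\n",
				       delta, i);
				return 1;
			}
		}
		printf("all samples decrement by 1 -> calibration can proceed\n");
		return 0;
	}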
On 12.02.19 at 12:57, Thomas Gleixner wrote:
On Tue, 29 Jan 2019, Thomas Gleixner wrote:
On Tue, 29 Jan 2019, Jan H. Schönherr wrote:
On 29.01.2019 at 11:23, Jan H. Schönherr wrote:
+calibrate:
+	/*
+	 * Extrapolate the error and fail fast if the error will
+	 * never be
very first reads.
Signed-off-by: Jan H. Schönherr
---
v2:
- Dropped the other hacky patch for the time being.
- Fixed the early exit check.
- Hopefully fixed all inaccurate math in v1.
- Extended comments.
arch/x86/kernel/tsc.c | 91 +++
1 file changed, 57
very first reads.
Signed-off-by: Jan H. Schönherr
---
arch/x86/kernel/tsc.c | 80 +--
1 file changed, 46 insertions(+), 34 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index e9f777bfed40..a005e0aa215e 100644
--- a/arch/x86/kernel
Fatal1ty X399 Professional Gaming, BIOS P3.30.
This unexplained behavior goes away as soon as the sibling CPU of the
boot CPU is brought back up. Hence, add a hack to restore the sibling
CPU before all others on unfreeze. This keeps the TSC stable.
Signed-off-by: Jan H. Schönherr
---
kernel/cpu.c
PU16 in my case) hasn't been resumed yet. (I did some experiments
with CPU hotplug before and after suspend, but apart from reproducing
the issue and verifying the "fix", I got nowhere.)
The patches are against v4.20.
Jan H. Schönherr (2):
x86/tsc: Allow quick PIT calibration de
On 23/11/2018 17.51, Frederic Weisbecker wrote:
> On Tue, Sep 18, 2018 at 03:22:13PM +0200, Jan H. Schönherr wrote:
>> On 09/17/2018 11:48 AM, Peter Zijlstra wrote:
>>> Right, so the whole bandwidth thing becomes a pain; the simplest
>>> solution is to detect the
On 27/10/2018 01.05, Subhra Mazumdar wrote:
>
>
>> D) What can I *not* do with this?
>> -
>>
>> Besides the missing load-balancing within coscheduled task-groups, this
>> implementation has the following properties, which might be considered
>> short-comings.
>>
>>
On 19/10/2018 02.26, Subhra Mazumdar wrote:
> Hi Jan,
Hi. Sorry for the delay.
> On 9/7/18 2:39 PM, Jan H. Schönherr wrote:
>> The collective context switch from one coscheduled set of tasks to another
>> -- while fast -- is not atomic. If a use-case needs the absolute gua
On 19/10/2018 17.45, Rik van Riel wrote:
> On Fri, 2018-10-19 at 17:33 +0200, Frederic Weisbecker wrote:
>> On Fri, Oct 19, 2018 at 11:16:49AM -0400, Rik van Riel wrote:
>>> On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
>>>>
>>>> Now, it
On 17/10/2018 04.09, Frederic Weisbecker wrote:
> On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
>> C) How does it work?
>>
[...]
>> For each task-group, the user can select at which level it should be
>> scheduled. If yo
On 09/26/2018 11:05 PM, Nishanth Aravamudan wrote:
> On 26.09.2018 [10:25:19 -0700], Nishanth Aravamudan wrote:
>>
>> I found another issue today, while attempting to test (with 61/60
>> applied) separate coscheduling cgroups for vcpus and emulator threads
>> [the default configuration with libvirt
On 09/17/2018 02:25 PM, Peter Zijlstra wrote:
> On Fri, Sep 14, 2018 at 06:25:44PM +0200, Jan H. Schönherr wrote:
>
>> Assuming, there is a cgroup-less solution that can prevent simultaneous
>> execution of tasks on a core, when they're not supposed to. How would you
>&
On 09/17/2018 03:37 PM, Peter Zijlstra wrote:
> On Fri, Sep 14, 2018 at 06:25:44PM +0200, Jan H. Schönherr wrote:
>> With gang scheduling as defined by Feitelson and Rudolph [6], you'd have to
>> explicitly schedule idle time. With coscheduling as defined by Ousterhout
>
On 09/19/2018 11:53 PM, Subhra Mazumdar wrote:
> Can we have a more generic interface, like specifying a set of task ids
> to be co-scheduled with a particular level rather than tying this with
> cgroups? KVMs may not always run with cgroups and there might be other
> use cases where we might want
On 09/18/2018 04:40 PM, Rik van Riel wrote:
> On Fri, 2018-09-14 at 18:25 +0200, Jan H. Schönherr wrote:
>> On 09/14/2018 01:12 PM, Peter Zijlstra wrote:
>>> On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
>>>>
>>>> B) Why would I want t
On 09/18/2018 04:35 PM, Rik van Riel wrote:
> On Tue, 2018-09-18 at 15:22 +0200, Jan H. Schönherr wrote:
[...]
> Task priorities in a flat runqueue are relatively straightforward, with
> vruntime scaling just like done for nice levels, but I have to admit
> that throttled grou
On 09/18/2018 03:38 PM, Peter Zijlstra wrote:
> On Tue, Sep 18, 2018 at 03:22:13PM +0200, Jan H. Schönherr wrote:
>> AFAIK, changing the affinity of a cpuset overwrites the individual
>> affinities of tasks
>> within them. Thus, it shouldn't be an issue.
>
> N
On 09/17/2018 11:48 AM, Peter Zijlstra wrote:
> On Sat, Sep 15, 2018 at 10:48:20AM +0200, Jan H. Schönherr wrote:
>> On 09/14/2018 06:25 PM, Jan H. Schönherr wrote:
>
>>> b) ability to move CFS RQs between CPUs: someone changed the affinity of
>>>a cpuset? No pro
On 09/18/2018 02:33 AM, Subhra Mazumdar wrote:
> On 09/07/2018 02:39 PM, Jan H. Schönherr wrote:
>> A) Quickstart guide for the impatient.
>> --
>>
>> Here is a quickstart guide to set up coscheduling at core-level for
>> select
On 09/14/2018 06:25 PM, Jan H. Schönherr wrote:
> On 09/14/2018 01:12 PM, Peter Zijlstra wrote:
>>
>> There are known scalability problems with the existing cgroup muck; you
>> just made things a ton worse. The existing cgroup overhead is
>> significant, you also
On 09/14/2018 01:12 PM, Peter Zijlstra wrote:
> On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
>> This patch series extends CFS with support for coscheduling. The
>> implementation is versatile enough to cover many different coscheduling
>> use-cases, w
Partly-reported-by: Nishanth Aravamudan
Signed-off-by: Jan H. Schönherr
---
kernel/sched/cosched.c | 2 ++
kernel/sched/fair.c| 35 ++-
2 files changed, 12 insertions(+), 25 deletions(-)
diff --git a/kernel/sched/cosched.c b/kernel/sched/cosched.c
index a1f0d3
On 09/13/2018 01:15 AM, Nishanth Aravamudan wrote:
> [...] if I just try to set machine's
> cpu.scheduled to 1, with no other changes (not even changing any child
> cgroup's cpu.scheduled yet), I get the following trace:
>
> [16052.164259] [ cut here ]
> [16052.168973] rq->
On 09/12/2018 09:34 PM, Jan H. Schönherr wrote:
> That said, I see a hang, too. It seems to happen, when there is a
> cpu.scheduled!=0 group that is not a direct child of the root task group.
> You seem to have "/sys/fs/cgroup/cpu/machine" as an intermediate group.
> (T
On 09/12/2018 02:24 AM, Nishanth Aravamudan wrote:
> [ I am not subscribed to LKML, please keep me CC'd on replies ]
>
> I tried a simple test with several VMs (in my initial test, I have 48
> idle 1-cpu 512-mb VMs and 2 idle 2-cpu, 2-gb VMs) using libvirt, none
> pinned to any CPUs. When I tried
unctionality. Highlights are:
23: Data structures used for coscheduling.
24-26: Creation of root-task-group runqueue hierarchy.
39-40: Runqueue hierarchies for normal task groups.
41-42: Locking strategies under coscheduling.
47-49: Adjust core CFS code.
52: Adjust core C
the other direction.
Adjust all users, simplifying many of them.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c | 7 ++-
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 36
kernel/sched/sched.h | 5 ++---
4 files changed, 21 insertions(+), 2
-off-by: Jan H. Schönherr
---
kernel/sched/core.c | 21 +++--
kernel/sched/sched.h | 6 ++
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fd1b0abd8474..c38a54f57e90 100644
--- a/kernel/sched/core.c
+++ b/kernel
Prepare for future changes and refactor sync_throttle() to work with
a different set of arguments.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index
Factor out the logic to retrieve the parent CFS runqueue of another
CFS runqueue into its own function and replace open-coded variants.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/kernel/sched
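A minimal sketch of what such a helper could look like; the body is an assumption based on the usual task-group layout (tg->se[cpu] is the SE that represents the group's CFS runqueue on that CPU), not necessarily the series' exact implementation:

	static inline struct cfs_rq *parent_cfs_rq(struct cfs_rq *cfs_rq)
	{
		struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];

		/* The root task group has no SE, and thus no parent runqueue. */
		return se ? cfs_rq_of(se) : NULL;
	}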
With scheduling domains sufficiently prepared, we can now initialize
the full hierarchy of runqueues and link it with the already existing
bottom level, which we set up earlier.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c| 1 +
kernel/sched/cosched.c | 76
Scheduled task groups will bring coscheduling to Linux.
The actual functionality will be added successively.
Signed-off-by: Jan H. Schönherr
---
init/Kconfig | 11 +++
kernel/sched/Makefile | 1 +
kernel/sched/cosched.c | 9 +
3 files changed, 21 insertions
The code path is not yet adjusted for coscheduling. Disable
it for now.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 30e5ff30f442..8504790944bf 100644
--- a/kernel/sched
callers to use hrq_of() instead of rq_of() to derive the cpu
argument.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fde1c4ba4bb4..a2945355f823 100644
--- a/kernel/sched
Provide variants of the task group CFS traversal constructs that also
reach the hierarchical runqueues. Adjust task group management functions
where necessary.
Most of the changes are in alloc_fair_sched_group(), where we now need to
be a bit more careful during initialization.
Signed-off-by: Jan H
(), which returns the leader's CPU runqueue.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 35 +++
1 file changed, 35 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8cba7b8fb6bd..24d01bf8f796 100644
--- a/kernel/sched/f
er-CPU runqueues.
The change in set_next_entity() just silences a warning. The code looks
bogus even without coscheduling, as the weight of an SE is independent
of the weight of the runqueue when task groups are involved. It's
just for statistics anyway.
Signed-off-by: Jan H. Schönherr
---
k
.
Include some lockdep goodness, so that we detect incorrect usage.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 70 +
1 file changed, 70 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f72a72c8c3b8
that only one of multiple CPUs has to walk up the
hierarchy.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 33 +
1 file changed, 33 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d64f4478fda..0dc4d289497c 100644
--- a
we perform the actual
SD-SE weight adjustment via update_sdse_load().
At some point in the future (the code isn't there yet), this will
allow software combining, where not all CPUs have to walk up the
full hierarchy on enqueue/dequeue.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fai
) putting the current
task back.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 28 +++-
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2aa3a60dfca5..2227e4840355 100644
--- a/kernel/sched/fair.c
arily bumping the aggregated value.
(A nicer solution would be to apply only the actual difference to the
aggregate instead of doing full removal and a subsequent addition.)
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 15 +++
1 file changed, 15 insertions(+)
diff --
Move struct rq_flags around to keep future commits crisp.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b8c8dfd0e88d..cd3a32ce8fc6 100644
ned-off-by: Jan H. Schönherr
---
kernel/sched/core.c| 2 ++
kernel/sched/cosched.c | 85 ++
kernel/sched/sched.h | 6
3 files changed, 93 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 48e37c3baed1..a235b6041
The function cfs_rq_util_change() notifies frequency governors of
utilization changes, so that they can be scheduler-driven. This is
coupled to per-CPU runqueue statistics, so don't do anything
when called for non-CPU runqueues.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c
With coscheduling, the number of required lock classes is twice the depth of
the scheduling domain hierarchy. For a 256-CPU system, there are eight
levels at most. Adjust the number of subclasses so that lockdep can
still be used on such systems.
Signed-off-by: Jan H. Schönherr
---
include/linux
, where the sdrq->is_root fields do not yield
a consistent picture across a task group.
Handle these cases.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 68 +
1 file changed, 68 insertions(+)
diff --git a/kernel/sched/fair.
-by: Jan H. Schönherr
---
kernel/sched/core.c | 5 ++---
kernel/sched/fair.c | 11 +--
kernel/sched/sched.h | 21 +
3 files changed, 28 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5350cab7ac4a..337bae6fa836 100644
--
ent group to
create a new group.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 64 +++-
kernel/sched/sched.h | 31 +
2 files changed, 59 insertions(+), 36 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sc
hand, we need to handle additional idle cases, as CPUs
are now idle *within* certain coscheduled sets and woken tasks may
not preempt the idle task blindly anymore.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 85 +++--
1 file change
transparently to system level.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/cosched.c | 32 +++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/cosched.c b/kernel/sched/cosched.c
index eb6a6a61521e..a1f0d3a7b02a 100644
--- a/kernel/sched
of another use case. This will change soon.
Also, move the structure definition below kernel/sched/. It is not used
outside and in the future it will carry some more internal types that
we don't want to expose.
Signed-off-by: Jan H. Schönherr
---
include/linux/sched/topology.h | 6 ---
-by: Jan H. Schönherr
---
kernel/sched/fair.c | 51 ++-
1 file changed, 42 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2227e4840355..6d64f4478fda 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
U0 1 2 3 0 1 2 3
Signed-off-by: Jan H. Schönherr
---
kernel/sched/cosched.c | 138 -
kernel/sched/fair.c| 14 -
kernel/sched/sched.h | 2 +
3 files changed, 151 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/cosc
nse
from a coscheduling perspective).
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 21 +++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0bba924b40ba..2aa3a60dfca5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sc
parts have already been
locked via rq_lock_owned(), as for example dequeueing might happen
during task selection if a runqueue is throttled.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/cosched.c | 53 ++
kernel/sched/sched.h | 14
Make parent_cfs_rq() coscheduling-aware.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8504790944bf..8cba7b8fb6bd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched
If a dynamic number of locks needs to be pinned in the same context,
it is impractical to have a cookie per lock. Make the cookie generator
accessible, so that such a group of locks can be (re-)pinned with
just one (shared) cookie.
Signed-off-by: Jan H. Schönherr
---
include/linux/lockdep.h
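A hedged sketch of how an exported cookie generator might be used; lockdep_cookie() and parent_rq() are hypothetical names standing in for whatever the patch actually provides, while struct pin_cookie and lockdep_repin_lock() are the existing lockdep pinning API:

	static void repin_locked_hierarchy(struct rq *bottom)
	{
		struct pin_cookie cookie = lockdep_cookie();	/* generated once */
		struct rq *rq;

		/* Re-pin every lock of the already-locked path with the same,
		 * shared cookie instead of tracking one cookie per lock. */
		for (rq = bottom; rq; rq = parent_rq(rq))
			lockdep_repin_lock(&rq->lock, cookie);
	}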
which is needed for resched_curr()
to work. Therefore, fall back to resched_cpu_locked() on higher levels.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 337bae6fa836..c4358396f588 1
SD-SEs.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 12 +---
kernel/sched/sched.h | 14 ++
2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 18b1d81951f1..9cbdd027d449 100644
--- a/kernel/sched/fair.c
hierarchical aspect and represent
SEs of a lower hierarchy level at a higher level within the parent task
group, causing SEs at the lower level to get coscheduled.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 151 +++
1 file changed, 151
,
without CONFIG_SCHED_DEBUG, SCHED_WARN_ON() evaluates to false
unconditionally.
Change SCHED_WARN_ON() to not discard the WARN_ON condition, even without
CONFIG_SCHED_DEBUG, so that it can be used within if() statements as
expected.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 2
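One way to implement what the description asks for (a sketch, not necessarily the patch's exact definition); the CONFIG_SCHED_DEBUG arm matches the kernel's definition at the time, while the other arm keeps the condition's value instead of discarding it:

	#ifdef CONFIG_SCHED_DEBUG
	#define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
	#else
	#define SCHED_WARN_ON(x)	unlikely(!!(x))	/* keep the value, drop the splat */
	#endif

With that, "if (SCHED_WARN_ON(!se)) return;" takes the same branch with and without CONFIG_SCHED_DEBUG; only the warning output differs.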
Replace open-coded cases of parent_entity() with actual parent_entity()
invocations.
This will make later checks within parent_entity() more useful.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched
We use and keep rq->clock updated on all hierarchical runqueues. In
fact, not using the hierarchical runqueue would be incorrect as there is
no guarantee that the leader's CPU runqueue clock is updated.
Switch all obvious cases from rq_of() to hrq_of().
Signed-off-by: Jan H. S
The functions sync_throttle() and unregister_fair_sched_group() are
called during the creation and destruction of cgroups. They are never
called for the root task-group. Remove checks that always yield the
same result when operating on non-root task groups.
Signed-off-by: Jan H. Schönherr
Move init_entity_runnable_average() into init_tg_cfs_entry(), where all
the other SE initialization is carried out.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 34 ++
1 file changed, 34 insertions(+)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d65c98c34c13..456b266b8a2c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1130,6
Decouple init_tg_cfs_entry() from other structures' implementation
details, so that it only updates/accesses task group related fields
of the CFS runqueue and its SE.
This prepares calling this function in slightly different contexts.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/c
find something to execute, then we force the
CPU into idle.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 150 +++-
1 file changed, 137 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9e8b811
locked per-CPU
runqueue upwards, locking/unlocking runqueues as they go along, stopping
when they would leave their area of responsibility.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/cosched.c | 94 ++
kernel/sched/sched.h | 11 ++
2
actual decisions on the (SD-)SE, under which there were no
tasks.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c | 11 +++
kernel/sched/fair.c | 43 +---
kernel/sched/idle.c | 7 ++-
kernel/sched/sched.h | 55
beyond that of the
root task group. The value for the root task group cannot be configured
via this interface. It has to be configured with a command line
argument, which will be added later.
The function sdrq_update_root() will be filled in a follow-up commit.
Signed-off-by: Jan H. Schönherr
Jan H. Schönherr
---
kernel/sched/fair.c | 107 +---
1 file changed, 102 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 483db54ee20a..bc219c9c3097 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fai
queue. This is used during load balancing. We keep these lists per
hierarchy level, which corresponds to the lock we hold and also
keeps the per-CPU logic compatible with what is there.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 12 ++--
1 file changed, 6 insertions(+), 6 dele
.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c| 2 ++
kernel/sched/cosched.c | 19 +++
kernel/sched/sched.h | 4
3 files changed, 25 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a9f5339d58cb..b3ff885a88d4 100644
--- a/kernel/sched
Modify update_blocked_averages() and update_cfs_rq_h_load() so that they
won't access the next higher hierarchy level, for which they don't hold a
lock.
This will have to be touched again when load balancing is made
functional.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/
Add entity variants of put_prev_task_fair() and set_curr_task_fair()
that will be later used by coscheduling.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 34 +-
kernel/sched/sched.h | 2 ++
2 files changed, 23 insertions(+), 13 deletions(-)
diff
Locks within the runqueue hierarchy are always taken from bottom to top
to avoid deadlocks. Let the lock validator know about this by declaring
different runqueue levels as distinct lock classes.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 29 ++---
1 file
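A hedged sketch of one way to express the bottom-to-top rule to lockdep, using nesting subclasses with one subclass per hierarchy level; struct hrq, hrq_parent() and hrq_level() are hypothetical stand-ins for the series' hierarchical runqueue types:

	static void lock_hierarchy_upwards(struct hrq *hrq)
	{
		/* Locks are only ever taken bottom-up; telling lockdep the level
		 * of each lock keeps it from flagging the nested acquisition. */
		for (; hrq; hrq = hrq_parent(hrq))
			raw_spin_lock_nested(&hrq->lock, hrq_level(hrq));
	}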
rithm operating on the NUMA distance matrix.
Also, as mentioned before, not everyone needs the finer granularity.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c| 1 +
kernel/sched/cosched.c | 259 +
kernel/sched/sched.h | 2 +
3
from disabling
NOHZ and allows us to gradually improve the situation later.
Signed-off-by: Jan H. Schönherr
---
kernel/time/tick-sched.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 5b33e2f5c0ed..5e9c2a7d4ea9 100644
There is a fair amount of overlap between enqueue_task_fair() and
unthrottle_cfs_rq(), as well as between dequeue_task_fair() and
throttle_cfs_rq(). This is the first step toward having both of
them use the same basic function.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 82
Make the task delta handled by enqueue_entity_fair() and dequeue_task_fair()
variable as required by unthrottle_cfs_rq() and throttle_cfs_rq().
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 18 ++
kernel/sched/sched.h | 6 --
2 files changed, 14 insertions
to what NUMA already does on top of the system topology).
Signed-off-by: Jan H. Schönherr
---
include/linux/sched/topology.h | 11 ---
kernel/sched/topology.c| 40 ++--
2 files changed, 30 insertions(+), 21 deletions(-)
diff --git a/include
Just like init_tg_cfs_entry() does something useful without a scheduling
entity, let it do something useful without a CFS runqueue.
This prepares for the addition of new types of SEs.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 28 +++-
1 file changed, 15
sched_domain_topology
is replaced with another one.
Signed-off-by: Jan H. Schönherr
---
include/linux/sched/topology.h | 1 +
kernel/sched/topology.c| 5 +
2 files changed, 6 insertions(+)
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index
Factor out the logic to place an SE into a CFS runqueue into its own
function.
This consolidates various sprinkled updates of se->cfs_rq, se->parent,
and se->depth at the cost of updating se->depth unnecessarily on
same-group movements between CPUs.
Signed-off-by: Jan H. Schönherr
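A minimal sketch of the consolidated helper described above; the function name is hypothetical, but the three fields are the ones the description mentions:

	static void place_entity_in_cfs_rq(struct sched_entity *se,
					   struct cfs_rq *cfs_rq,
					   struct sched_entity *parent)
	{
		se->cfs_rq = cfs_rq;
		se->parent = parent;
		/* Recomputed even on same-group moves between CPUs, as noted above. */
		se->depth  = parent ? parent->depth + 1 : 0;
	}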
s not specified, the memory is removed from the e820 map.
Signed-off-by: Jan H. Schönherr
---
v2: Small coding style and typography adjustments
Documentation/admin-guide/kernel-parameters.txt | 9 +
arch/x86/kernel/e820.c | 18 ++
2 files ch
On 02/02/2018 09:50 PM, Andy Shevchenko wrote:
> On Fri, Feb 2, 2018 at 1:13 AM, Jan H. Schönherr wrote:
>
>> + [KNL,ACPI] Convert memory within the specified region
>> + from to . If "-" is left
>> +
s not specified, the memory is removed from the e820 map.
Signed-off-by: Jan H. Schönherr
---
Documentation/admin-guide/kernel-parameters.txt | 9 +
arch/x86/kernel/e820.c | 18 ++
2 files changed, 27 insertions(+)
diff --git a/Documentatio
additional argument to
indicate where to stop, so that only newly added entries are removed
from the tree.
Fixes: 9476df7d80df ("mm: introduce find_dev_pagemap()")
Signed-off-by: Jan H. Schönherr
---
kernel/memremap.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
di
("mm: fix mixed zone detection in devm_memremap_pages")
Signed-off-by: Jan H. Schönherr
---
kernel/memremap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 403ab9c..4712ce6 100644
--- a/kernel/memremap.c
+++ b/kernel/
wrong, reverting just restores the previous behavior
where overly large values are ignored when encountered (without
any direct feedback).
Reported-by: Abdul Haleem
Signed-off-by: Jan H. Schönherr
---
virt/kvm/eventfd.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/virt/kvm/eventfd.c b
strictly
needed -- unless someone else can provide more insight on valid values for GSI.
Paolo, do you want me to prepare a revert for the commit below?
Regards
Jan
> Possible bad commit is :
>
> commit 36ae3c0a36b7456432fedce38ae2f7bd3e01a563
> Author: Jan H. Schönherr
> Date
If a zero for the number of lines manages to slip through, scroll()
may underflow some offset calculations, causing accesses outside the
video memory.
Make the check in __putstr() more pessimistic to prevent that.
Signed-off-by: Jan H. Schönherr
---
arch/x86/boot/compressed/misc.c | 3 +--
1
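A toy user-space illustration of the underflow described above (not the boot code itself): with an unsigned line count of zero, a "lines - 1" term in an offset calculation wraps around to a huge value, so any subsequent copy runs far outside the intended buffer.

	#include <stdio.h>
	#include <stddef.h>

	int main(void)
	{
		unsigned int lines = 0, cols = 80;
		/* 2 bytes per character cell, as in VGA text mode. */
		size_t bytes = (size_t)(lines - 1) * cols * 2;

		printf("copy size with lines == 0: %zu bytes\n", bytes);
		return 0;
	}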
The current slack space is not enough for LZ4, which has a worst-case
overhead of 0.4% for data that cannot be compressed further. With
an LZ4-compressed kernel that has an embedded initrd, the output is likely
to overwrite the input.
Increase the slack space to avoid that.
Signed-off-by: Jan H
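For reference, the 0.4% figure above is LZ4's standard worst-case bound: incompressible input grows by input/255 + 16 bytes. A small, self-contained calculation (the macro mirrors LZ4_COMPRESSBOUND from lz4.h, simplified by omitting the maximum-input-size check):

	#include <stdio.h>

	#define LZ4_WORST_CASE(isize)	((isize) + ((isize) / 255) + 16)

	int main(void)
	{
		unsigned long image = 32UL << 20;	/* e.g. a 32 MiB kernel + initrd */

		printf("worst-case LZ4 output: %lu bytes (+%lu bytes, ~0.4%%)\n",
		       LZ4_WORST_CASE(image), LZ4_WORST_CASE(image) - image);
		return 0;
	}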