3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Snild Dolkow
commit 3e536e222f2930534c252c1cc7ae799c725c5ff9 upstream.
There is a window for racing when printing directly to task->comm,
allowing other threads to see a non-terminated string.
Modify update_blocked_averages() and update_cfs_rq_h_load() so that they
won't access the next higher hierarchy level, for which they don't hold a
lock.
This will have to be touched again when load balancing is made
functional.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 4 +++-
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Steven Rostedt (VMware)
commit 016f8ffc48cb01d1e7701649c728c5d2e737d295 upstream.
While debugging another bug, I was looking at all the synchronize*()
functions being used in kernel/trace, and
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Bartosz Golaszewski
commit 563a53f3906a6b43692498e5b3ae891fac93a4af upstream.
On non-OF systems spi->controller_data may be NULL. This causes a NULL
pointer dereference on dm365-evm.
Signed-off
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Shan Hai
commit 3943b040f11ed0cc6d4585fd286a623ca8634547 upstream.
The writeback thread would exit with a lock held when the cache device
is detached via sysfs interface, fix it by releasing th
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Richard Weinberger
commit eef19816ada3abd56d9f20c88794cc2fea83ebb2 upstream.
Allocate the buffer after we return early.
Otherwise memory is being leaked.
Cc:
Fixes: 1e51764a3c2a ("UBIFS: add
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Dan Carpenter
commit 0914bb965e38a055e9245637aed117efbe976e91 upstream.
"dev->nr_children" is the number of children which were parsed
successfully in bl_parse_stripe(). It could be all of th
The functions check_preempt_tick() and entity_tick() are executed by
the leader of the group. As such, we already hold the lock for the
per CPU runqueue. Thus, we can use the quick path to resched_curr().
Also, hrtimers are only used/active on per-CPU runqueues. So, use that.
The function __accoun
At a later point (load balancing and throttling at non-CPU levels), we
will have to iterate through parts of the task group hierarchy, visiting
all SD-RQs at the same position within the SD-hierarchy.
Keep track of the task group hierarchy within each SD-RQ to make that
use case efficient.
Signed
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Jon Hunter
commit 6e1811900b6fe6f2b4665dba6bd6ed32c6b98575 upstream.
On all versions of Tegra30 Cardhu, the reset signal to the NXP PCA9546
I2C mux is connected to the Tegra GPIO BB0. Currentl
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Al Viro
commit 9ba3eb5103cf56f0daaf07de4507df76e7813ed7 upstream.
Signed-off-by: Al Viro
Signed-off-by: Greg Kroah-Hartman
---
arch/alpha/kernel/osf_sys.c | 23 +--
1
Add the sysfs interface to configure the scheduling domain hierarchy
level at which coscheduling should happen for a cgroup. By default,
task groups are created with a value of zero corresponding to regular
task groups without any coscheduling.
Note, that you cannot specify a value that goes beyon
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Tomas Bortoli
commit 10aa14527f458e9867cf3d2cc6b8cb0f6704448b upstream.
Added checks to prevent GPFs from being raised.
Link: http://lkml.kernel.org/r/20180727110558.5479-1-tomasbort...@gmail.com
Hi all.
> +Optional property:
> +- nxp,quartz_load_12.5pF: The capacitive load on the quartz is 12.5 pF,
> + which differs from the default value of 7 pF
> +
> +Example:
> +
> +pcf8523: pcf8523@68 {
> + compatible = "nxp,pcf85063";
> + reg = <0x68>;
> + nxp,quartz_load_12.5pF;
> +};
T
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Peter Zijlstra
commit a6f572084fbee8b30f91465f4a085d7a90901c57 upstream.
Will noted that only checking mm_users is incorrect; we should also
check mm_count in order to cover CPUs that have a l
SD-SEs require some attention during enqueuing and dequeuing. In some
aspects they behave similarly to TG-SEs; for example, we must not dequeue
a SD-SE if it still represents other load. But SD-SEs are also different
due to the concurrent load updates by multiple CPUs and that we need to
be careful w
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Richard Weinberger
commit eef19816ada3abd56d9f20c88794cc2fea83ebb2 upstream.
Allocate the buffer after we return early.
Otherwise memory is being leaked.
Cc:
Fixes: 1e51764a3c2a ("UBIFS: add
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Jann Horn
commit 5820f140edef111a9ea2ef414ab2428b8cb805b1 upstream.
The old code would hold the userns_state_mutex indefinitely if
memdup_user_nul stalled due to e.g. a userfault region. Preve
If a coscheduled set is partly idle, some CPUs *must* do nothing, even
if they have other tasks (in other coscheduled sets). This forced idle
mode must work similarly to normal task execution, e.g., not just any
task is allowed to replace the forced idle task.
Lay the ground work for this by introdu
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Jann Horn
commit 42a0cc3478584d4d63f68f2f5af021ddbea771fa upstream.
Holding uts_sem as a writer while accessing userspace memory allows a
namespace admin to stall all processes that attempt to
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Vignesh R
commit 38dabd91ff0bde33352ca3cc65ef515599b77a05 upstream.
pwm-tiehrpwm driver disables PWM output by putting it in low output
state via active AQCSFRC register in ehrpwm_pwm_disable(
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Mikulas Patocka
commit 8c5b044299951acd91e830a688dd920477ea1eda upstream.
I have a USB display adapter using the udlfb driver and I use it on an ARM
board that doesn't have any graphics card.
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Richard Weinberger
commit 08acbdd6fd736b90f8d725da5a0de4de2dd6de62 upstream.
This reverts commit 353748a359f1821ee934afc579cf04572406b420.
It bypassed the linux-mtd review process and fixes th
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Mikulas Patocka
commit bb24153a3f13dd0dbc1f8055ad97fe346d598f66 upstream.
The default delay 5 jiffies is too much when the kernel is compiled with
HZ=100 - it results in jumpy cursor in Xwindo
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Hari Bathini
commit 1bd6a1c4b80a28d975287630644e6b47d0f977a5 upstream.
Crash memory ranges is an array of memory ranges of the crashing kernel
to be exported as a dump via /proc/vmcore file. T
With hierarchical runqueues and locks at each level, it is often
necessary to get multiple locks. Introduce the first of two locking
strategies, which is suitable for typical leader activities.
To avoid deadlocks the general rule is that multiple locks have to be
taken from bottom to top. Leaders
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Christian Brauner
commit 82c9a927bc5df6e06b72d206d24a9d10cced4eb5 upstream.
When running in a container with a user namespace, if you call getxattr
with name = "system.posix_acl_access" and si
Introduce the selection and notification mechanism used to realize
coscheduling.
Every CPU starts selecting tasks from its current_sdrq, which points
into the currently active coscheduled set and which is only updated by
the leader. Whenever task selection crosses a hierarchy level, the
leaders of
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Richard Weinberger
commit 59965593205fa4044850d35ee3557cf0b7edcd14 upstream.
In ubifs_jnl_update() we sync parent and child inodes to the flash,
in case of xattrs, the parent inode (AKA host i
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: jiangyiwen
commit 23cba9cbde0bba05d772b335fe5f66aa82b9ad19 upstream.
Because the value of limit is VIRTQUEUE_NUM, if index is equal to
limit, it will cause sg array out of bounds, so correct t
Decouple init_tg_cfs_entry() from other structures' implementation
details, so that it only updates/accesses task group related fields
of the CFS runqueue and its SE.
This prepares calling this function in slightly different contexts.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c |
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Mahesh Salgaonkar
commit cd813e1cd7122f2c261dce5b54d1e0c97f80e1a5 upstream.
During Machine Check interrupt on pseries platform, register r3 points
to the RTAS extended event log passed by hypervisor.
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Tomas Bortoli
commit 430ac66eb4c5b5c4eb846b78ebf65747510b30f1 upstream.
The patch adds the flush in p9_mux_poll_stop() as it is the function used by
p9_conn_destroy(), in turn called by p9_fd_clo
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Shan Hai
commit 3943b040f11ed0cc6d4585fd286a623ca8634547 upstream.
The writeback thread would exit with a lock held when the cache device
is detached via sysfs interface, fix it by releasing t
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Lars-Peter Clausen
commit 9a5094ca29ea9b1da301b31fd377c0c0c4c23034 upstream.
A sysfs write callback function needs to either return the number of
consumed characters or an error.
The ad952x_s
4.18-stable review patch. If anyone has any objections, please let me know.
--
From: Mikulas Patocka
commit 564f1807379298dfdb12ed0d5b25fcb89c238527 upstream.
The udlfb driver reprograms the hardware every time the user switches the
console, which makes it quite unusable when worki
The functions sync_throttle() and unregister_fair_sched_group() are
called during the creation and destruction of cgroups. They are never
called for the root task-group. Remove checks that always yield the
same result when operating on non-root task groups.
Signed-off-by: Jan H. Schönherr
---
ke
This is the start of the stable review cycle for the 3.18.122 release.
There are 29 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Sun Sep 9 21:08:52 UTC 2018.
Anything recei
Move init_entity_runnable_average() into init_tg_cfs_entry(), where all
the other SE initialization is carried out.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c0dd5825
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 34 ++
1 file changed, 34 insertions(+)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d65c98c34c13..456b266b8a2c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1130,6 +11
Chao,
I was testing the previous patch and removed in the queue due to quota-related
hang during fault injection + shutdown test. Let me try this later.
Thanks,
On 09/07, Chao Yu wrote:
> For journalled quota mode, let checkpoint to flush dquot dirty data
> and quota file data to guarntee persis
If a dynamic number of locks needs to be pinned in the same context,
it is impractical to have a cookie per lock. Make the cookie generator
accessible, so that such a group of locks can be (re-)pinned with
just one (shared) cookie.
Signed-off-by: Jan H. Schönherr
---
include/linux/lockdep.h |
The function resched_curr() kicks the scheduler for a certain runqueue,
assuming that the runqueue is already locked.
If called for a hierarchical runqueue, the equivalent operation is to
kick the leader. Unfortunately, we don't know whether we also hold
the CPU runqueue lock at this point, which
Add a function is_sd_se() to easily distinguish SD-SEs from a TG-SEs.
Internally, we distinguish tasks, SD-SEs, and TG-SEs based on the my_q
field. For tasks it is empty, for TG-SEs it is a pointer, and for
SD-SEs it is a magic value.
Also modify propagate_entity_load_avg() to not page fault on S
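The three-way distinction via my_q can be modeled in a few lines. The sentinel value and helper names below are assumptions for illustration, not necessarily the patch's actual definitions:

```c
#include <stddef.h>
#include <stdbool.h>

struct cfs_rq;			/* opaque in this sketch */

struct sched_entity {
	struct cfs_rq *my_q;	/* NULL, a real pointer, or a sentinel */
};

/* Hypothetical magic value marking an SD-SE; never dereferenced. */
#define SD_SE_DUMMY_RQ ((struct cfs_rq *)(-1L))

static bool is_task_se(const struct sched_entity *se)
{
	return se->my_q == NULL;
}

static bool is_sd_se(const struct sched_entity *se)
{
	return se->my_q == SD_SE_DUMMY_RQ;
}

static bool is_tg_se(const struct sched_entity *se)
{
	/* A genuine child runqueue pointer: neither empty nor magic. */
	return se->my_q != NULL && se->my_q != SD_SE_DUMMY_RQ;
}
```

The key design point is that any code which blindly dereferences my_q (as propagate_entity_load_avg() would) must first rule out the sentinel, which is why that function needs modification.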
For coscheduling, we will set up hierarchical runqueues that correspond
to larger fractions of the system. They will be organized along the
scheduling domains.
Although it is overkill at the moment, we keep a full struct rq per
scheduling domain. The existing code is so used to pass struct rq
arou
SCHED_WARN_ON() is conditionally compiled depending on CONFIG_SCHED_DEBUG.
WARN_ON() and variants can be used in if() statements to take an action
in the unlikely case that the WARN_ON condition is true. This is supposed
to work independently of whether the warning is actually printed. However,
wit
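The requirement can be modeled in userspace with GCC statement expressions. All names here are illustrative, not the kernel's; the point is only that the condition must still be evaluated and returned when the debug printout is compiled out:

```c
#include <stdio.h>

#define MY_SCHED_DEBUG 0	/* flip to 1 to enable the printout */

#if MY_SCHED_DEBUG
#define MY_WARN_ON(cond) ({			\
	int __c = !!(cond);			\
	if (__c)				\
		fprintf(stderr, "warning\n");	\
	__c;					\
})
#else
/* Debug off: no printout, but the condition is still evaluated
 * and returned, so if (MY_WARN_ON(x)) keeps working. */
#define MY_WARN_ON(cond) ({ !!(cond); })
#endif

int validate(int broken)
{
	if (MY_WARN_ON(broken))
		return -1;	/* error path taken regardless of debug */
	return 0;
}
```

If the non-debug variant expanded to nothing (or to a constant 0), validate() would silently skip its error path in production builds, which is the failure mode this snippet guards against.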
Make parent_cfs_rq() coscheduling-aware.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8504790944bf..8cba7b8fb6bd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.
Replace open-coded cases of parent_entity() with actual parent_entity()
invocations.
This will make later checks within parent_entity() more useful.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.
We use and keep rq->clock updated on all hierarchical runqueues. In
fact, not using the hierarchical runqueue would be incorrect as there is
no guarantee that the leader's CPU runqueue clock is updated.
Switch all obvious cases from rq_of() to hrq_of().
Signed-off-by: Jan H. Schönherr
---
kerne
A regularly scheduled runqueue is enqueued via its TG-SE in its parent
task-group. When coscheduled it is enqueued via its hierarchical
parent's SD-SE. Switching between both means to replace one with the
other, and taking care to get rid of all references to the no longer
current SE, which is rec
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Lars-Peter Clausen
commit 5a4e33c1c53ae7d4425f7d94e60e4458a37b349e upstream.
Fix the displayed phase for the ad9523 driver. Currently the most
significant decimal place is dropped and all othe
With hierarchical runqueues and locks at each level, it is often
necessary to get locks at different levels. Introduce the second of two
locking strategies, which is suitable for progressing upwards through
the hierarchy with minimal impact on lock contention.
During enqueuing and dequeuing, a sche
Initially, coscheduling won't support throttling of CFS runqueues that
are not at CPU level. Print a warning to remind us of this fact and note
down everything that's currently known to be broken, if we wanted to
throttle higher level CFS runqueues (which would totally make sense
from a coschedulin
Buddies are not very well defined with coscheduling. Usually, they
bubble up the hierarchy on a single CPU to steer task picking either
away from a certain task (yield a task: skip buddy) or towards a certain
task (yield to a task, execute a woken task: next buddy; execute a
recently preempted task
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Mike Snitzer
commit fd2fa95416188a767a63979296fa3e169a9ef5ec upstream.
policy_hint_size starts as 0 during __write_initial_superblock(). It
isn't until the policy is loaded that policy_hint_s
There are a few places that make decisions based on the total number
of CFS tasks on a certain CPU. With coscheduling, the inspected value
rq->cfs.h_nr_running does not contain all tasks anymore, as some are
accounted on higher hierarchy levels instead. This would lead to
incorrect conclusions as t
Task group management has to iterate over all CFS runqueues within the
task group. Currently, this uses for_each_possible_cpu() loops and
accesses tg->cfs_rq[] directly. This does not adjust well to the
upcoming addition of coscheduling, where we will have additional CFS
runqueues.
Introduce more
Modify check_preempt_wakeup() to work correctly with coscheduled sets.
On the one hand, that means not blindly preempting, when the woken
task potentially belongs to a different set and we're not allowed to
switch sets. Instead we have to notify the correct leader to follow up.
On the other hand,
Add a new command line argument cosched_max_level=, which allows
enabling coscheduling at boot. The number corresponds to the scheduling
domain up to which coscheduling can later be enabled for cgroups.
For example, to enable coscheduling of cgroups at SMT level, one would
specify cosched_max_leve
Relax the restriction to setup a sched_domain_shared only for domains
with SD_SHARE_PKG_RESOURCES. Set it up for every domain.
This restriction was imposed since the struct was created via commit
24fc7edb92ee ("sched/core: Introduce 'struct sched_domain_shared'") for
the lack of another use case.
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Tomas Bortoli
commit 7913690dcc5e18e235769fd87c34143072f5dbea upstream.
The p9_client_version() does not initialize the version pointer. If the
call to p9pdu_readf() returns an error and versi
With coscheduling the number of required classes is twice the depth of
the scheduling domain hierarchy. For a 256 CPU system, there are eight
levels at most. Adjust the number of subclasses, so that lockdep can
still be used on such systems.
Signed-off-by: Jan H. Schönherr
---
include/linux/lock
We cannot switch a task group from regular scheduling to coscheduling
atomically, as it would require locking the whole system. Instead,
the switch is done runqueue by runqueue via cosched_set_scheduled().
This means that other CPUs may see an intermediate state when locking
a bunch of runqueues,
Move struct rq_flags around to keep future commits crisp.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b8c8dfd0e88d..cd3a32ce8fc6 100644
The scheduler is operational before we have the necessary information
about scheduling domains, which would allow us to set up the runqueue
hierarchy. Because of that, we have to postpone the "real"
initialization a bit. We cannot totally skip all initialization,
though, because all the adapted
The function cfs_rq_util_change() notifies frequency governors of
utilization changes, so that they can be scheduler driven. This is
coupled to per CPU runqueue statistics. So, don't do anything
when called for non-CPU runqueues.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 11 +
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Eric W. Biederman
commit 36476beac4f8ca9dc7722790b2e8ef0e8e51034e upstream.
It is important that all maps are less than PAGE_SIZE
or else setting the last byte of the buffer to '0'
could write
The aggregated SD-SE weight is updated lock-free to avoid contention
on the higher level. This also means, that we have to be careful
with intermediate values as another CPU could pick up the value and
perform actions based on it.
Within reweight_entity() there is such a place, where weight is rem
Enqueuing and dequeuing of tasks (or entities) are general activities
that span across leader boundaries. They start from the bottom of the
runqueue hierarchy and bubble upwards, until they hit their terminating
condition (for example, enqueuing stops when the parent entity is already
enqueued).
> -Original Message-
> From: Wang, Dongsheng [mailto:dongsheng.w...@hxt-semitech.com]
> Sent: Friday, September 07, 2018 5:41 AM
> To: Kirsher, Jeffrey T ;
> sergei.shtyl...@cogentembedded.com
> Cc: Keller, Jacob E ; da...@davemloft.net; intel-
> wired-...@lists.osuosl.org; net...@vger.k
The weight of an SD-SE is defined to be the average weight of all
runqueues that are represented by the SD-SE. Hence, its weight
should change whenever one of the child runqueues changes its
weight. However, as these are two different hierarchy levels,
they are protected by different locks. To redu
Modify some of the core scheduler paths, which function as entry points
into the CFS scheduling class and which are activities where the leader
operates on behalf of the group.
These are (a) handling the tick, (b) picking the next task from the
runqueue, (c) setting a task to be current, and (d) p
Even with coscheduling, we define the fields rq->nr_running and rq->load
of per-CPU runqueues to represent the total amount of tasks and the
total amount of load on that CPU, respectively, so that existing code
continues to work as expected.
Make sure to still account load changes on per-CPU runqu
Add a new loop construct for_each_owned_sched_entity(), which iterates
over all owned scheduling entities, stopping when it encounters a
leader change.
This allows relatively straight-forward adaptations of existing code,
where the leader only handles that part of the hierarchy it actually
owns.
I
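Such a leader-bounded upward walk can be sketched as follows. The `owner` field and the macro's exact shape are assumptions made for this sketch, not the patch's real definitions:

```c
struct sched_entity {
	struct sched_entity *parent;
	int owner;		/* CPU currently leading this level */
};

/*
 * Walk upward from se while this CPU still owns the level,
 * stopping at the first leader change.
 */
#define for_each_owned_sched_entity(se, cpu)		\
	for (; (se) && (se)->owner == (cpu); (se) = (se)->parent)

/* Example consumer: count how many levels this CPU owns. */
int owned_depth(struct sched_entity *se, int cpu)
{
	int depth = 0;

	for_each_owned_sched_entity(se, cpu)
		depth++;
	return depth;
}
```

Code adapted this way naturally processes only the part of the hierarchy it is responsible for, leaving the remainder to the leader of the next level up.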
The cpu argument supplied to all callers of ___update_load_sum() is used
in accumulate_sum() to scale load values according to the CPU capacity.
While we should think about that at some point, it is out-of-scope for now.
Also, it does not matter on homogeneous system topologies.
Update all callers
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Adrian Hunter
commit 99cbbe56eb8bede625f410ab62ba34673ffa7d21 upstream.
When the number of queues grows beyond 32, the array of queues is
resized but not all members were being copied. Fix by a
Provide variants of the task group CFS traversal constructs that also
reach the hierarchical runqueues. Adjust task group management functions
where necessary.
Most of the changes are in alloc_fair_sched_group(), where we now need to
be a bit more careful during initialization.
Signed-off-by: Jan H.
The rq_of() function is used everywhere. With the introduction of
hierarchical runqueues, we could modify rq_of() to return the
corresponding queue. In fact, no change would be necessary for that.
However, many code paths do not handle a hierarchical runqueue
adequately. Thus, we introduce variant
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Christian Brauner
commit 82c9a927bc5df6e06b72d206d24a9d10cced4eb5 upstream.
When running in a container with a user namespace, if you call getxattr
with name = "system.posix_acl_access" and siz
>> + * Only do the expensive exception table search when we might be at
>> + * risk of a deadlock:
>> + * 1. We failed to acquire mmap_sem, and
>> + * 2. The access was an explicit kernel-mode access
>> + *(X86_PF_USER=0).
>
> Might be worth reminding the reader that X86_
The code path is not yet adjusted for coscheduling. Disable
it for now.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 30e5ff30f442..8504790944bf 100644
--- a/kernel/sched/fai
From: Florian Fainelli
Date: Fri, 7 Sep 2018 11:09:02 -0700
> There is no way for user-space to know what a given DSA network device's
> tagging protocol is. Expose this information through a dsa/tagging
> attribute which reflects the tagging protocol currently in use.
>
> This is helpful for c
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Richard Weinberger
commit 59965593205fa4044850d35ee3557cf0b7edcd14 upstream.
In ubifs_jnl_update() we sync parent and child inodes to the flash,
in case of xattrs, the parent inode (AKA host in
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Mikulas Patocka
commit 8c5b044299951acd91e830a688dd920477ea1eda upstream.
I have a USB display adapter using the udlfb driver and I use it on an ARM
board that doesn't have any graphics card. W
On 09/07/2018 02:06 PM, Sean Christopherson wrote:
>> The page fault handler (__do_page_fault()) basically has two sections:
>> one for handling faults in the kernel porttion of the address space
>> and another for faults in the user porttion of the address space.
> %s/porttion/portion
Fixed, tha
With scheduling domains sufficiently prepared, we can now initialize
the full hierarchy of runqueues and link it with the already existing
bottom level, which we set up earlier.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c| 1 +
kernel/sched/cosched.c | 76 +++
Scheduled task groups will bring coscheduling to Linux.
The actual functionality will be added successively.
Signed-off-by: Jan H. Schönherr
---
init/Kconfig | 11 +++
kernel/sched/Makefile | 1 +
kernel/sched/cosched.c | 9 +
3 files changed, 21 insertions(+)
creat
Factor out the logic to retrieve the parent CFS runqueue of another
CFS runqueue into its own function and replace open-coded variants.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/fai
Running:
scripts/checkpatch.pl -f arch/arm/mach-s3c24xx/mach-mini2440.c
revealed several errors and warnings.
They were all removed, except one, which is an #if 0 around the declaration
of a gpio pin. This needs some more investigation and I prefer to leave it
here; this is not some dead code.
'
Prepare for future changes and refactor sync_throttle() to work with
a different set of arguments.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5cad364e3a8
Add resched_cpu_locked(), which still works as expected, when it is called
while we already hold a runqueue lock from a different CPU.
There is some optimization potential by merging the logic of resched_curr()
and resched_cpu_locked() to avoid IPIs when calls to both functions happen.
Signed-off
The mini2440 computer uses "active high" to signal that the "write protect"
of the inserted MMC is set. The current code uses the opposite, leading to
a wrong detection of write protection. The solution is simply to use
".wprotect_invert = 1" in the description of the MMC.
Signed-off-by: Cedric Ro
From: "Maciej S. Szmigiero"
Date: Fri, 7 Sep 2018 20:15:22 +0200
> Commit 3559d81e76bf ("r8169: simplify rtl_hw_start_8169") changed order of
> two register writes:
> 1) Caused RxConfig to be written before TX / RX is enabled,
> 2) Caused TxConfig to be written before TX / RX is enabled.
>
> At
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Matthew Auld
commit c11c7bfd213495784b22ef82a69b6489f8d0092f upstream.
Operating on a zero sized GEM userptr object will lead to explosions.
Fixes: 5cc9ed4b9a7a ("drm/i915: Introduce mapping o
Move around the storage location of the scheduling entity references
of task groups. Instead of linking them from the task_group struct,
link each SE from the CFS runqueue itself with a new field "my_se".
This resembles the "my_q" field that is already available, just in
the other direction.
Adju
This patch series extends CFS with support for coscheduling. The
implementation is versatile enough to cover many different coscheduling
use-cases, while at the same time being non-intrusive, so that behavior of
legacy workloads does not change.
Peter Zijlstra once called coscheduling a "scalabili
On Fri, Sep 7, 2018 at 2:34 PM Greg Kroah-Hartman
wrote:
>
> 4.9-stable review patch. If anyone has any objections, please let me know.
>
Do your scripts have a bad hair day ? The subject says 4.18.
Guenter
> --
>
> From: Chirantan Ekbote
>
> commit d28c756caee6e414d9ba367d0b9
On Fri, Sep 7, 2018 at 2:54 PM Guenter Roeck wrote:
>
> On Fri, Sep 7, 2018 at 2:34 PM Greg Kroah-Hartman
> wrote:
> >
> > 4.9-stable review patch. If anyone has any objections, please let me know.
> >
>
> Do your scripts have a bad hair day ? The subject says 4.18.
>
Hmm, I suspect it is the gm
4.18-stable review patch. If anyone has any objections, please let me know.
--
From: Ming Lei
commit b233f127042dba991229e3882c6217c80492f6ef upstream.
Runtime PM isn't ready for blk-mq yet, and commit 765e40b675a9 ("block:
disable runtime-pm for blk-mq") tried to disable it.
4.18-stable review patch. If anyone has any objections, please let me know.
--
From: Bart Van Assche
commit 24ecc3585348b616993a3c4d6dc2c6b8007e358c upstream.
Several block drivers call alloc_disk() followed by put_disk() if
something fails before device_add_disk() is called w