The group_imbalance path in calculate_imbalance() made sense when it was
added back in 2007 with commit 908a7c1b9b80 ("sched: fix improper load
balance across sched domain") because busiest->load_per_task factored into
the amount of imbalance that was calculated. That is not the case today.
The gr
If load_balance() fails to migrate any tasks because all tasks were
affined, load_balance() removes the source cpu from consideration and
attempts to redo the balance among the new subset of cpus.
There is a bug in this code path where the algorithm considers all active
cpus in the system (minus t
On Fri, 12 May 2017 06:56:03 +
"Chen, Xiaoguang" wrote:
> Hi Gerd,
>
> >-----Original Message-----
> >From: intel-gvt-dev [mailto:intel-gvt-dev-boun...@lists.freedesktop.org] On
> >Behalf Of Gerd Hoffmann
> >Sent: Thursday, May 11, 2017 9:28 PM
> >To: Chen, Xiaoguang
> >Cc: Tian, Kevin ; in
On 05/12, Eric W. Biederman wrote:
>
> Oleg Nesterov writes:
>
> > Looks good to me.
>
> Oleg can I have a review or acked by?
Sure, feel free to add
Acked-by: Oleg Nesterov
On Thu, May 11, 2017 at 11:58 PM, Ingo Molnar wrote:
>
> * Linus Torvalds wrote:
>
>> On Thu, May 11, 2017 at 4:17 PM, Thomas Garnier wrote:
>> >
>> > Ingo: Do you want the change as-is? Would you like it to be optional?
>> > What do you think?
>>
>> I'm not Ingo, but I don't like that patch. It
On Fri, May 12, 2017 at 11:15:20AM +0100, Mark Rutland wrote:
> Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
> must take the jump_label mutex.
>
> We call cpus_set_cap() in the secondary bringup path, from the idle
> thread where interrupts are disabled. Taking a mutex
On Thu, May 11, 2017 at 2:54 PM, Brian Norris wrote:
> Despite the claims in the associated comment block, it seems that
> clearing the command register is not enough to guarantee that no
> MSI interrupts get triggered during Function Level Reset. Through code
> instrumentation, I'm able to clearl
* Tony Lindgren [170512 08:39]:
> * Linus Walleij [170512 02:28]:
> > On Thu, May 11, 2017 at 4:20 PM, Andre Przywara
> > wrote:
> > > Linus, can you shed some light on whether this array creation serves some purpose?
> >
> > Tony [author of this function] can you look at this?
> >
> > The code in pi
Hi Lorenzo
On Fri, May 12, 2017 at 04:50:40PM +0100, Lorenzo Pieralisi wrote:
> Hi Vadim,
>
> On Fri, May 12, 2017 at 05:44:05AM -0700, Vadim Lomovtsev wrote:
> > Hi Lorenzo,
> >
> > Are there any news related to these patches ?
>
> Not really, I have not received any feedback but I was expecti
* Bin Liu [170512 08:24]:
> On Fri, May 12, 2017 at 07:58:49AM -0700, Tony Lindgren wrote:
> > OK. No better ideas except I think we should probably have a separate
> > timer for keeping VBUS on after state changes eventually.
>
> Currently with the patch below, VBUS is constantly on for host-onl
Move the entity migrate handling from enqueue_entity_load_avg() to
update_load_avg(). This has two benefits:
- {en,de}queue_entity_load_avg() will become purely about managing
runnable_load
- we can avoid a double update_tg_load_avg() and reduce pressure on
the global tg->shares cacheline
Vincent wondered why his self-migrating task had a roughly 50% dip in
load_avg when landing on the new CPU. This is because we unconditionally
take the asynchronous detach_entity route, which can lead to the
attach on the new CPU still seeing the old CPU's contribution to
tg->load_avg, effectively h
The load balancer uses runnable_load_avg as load indicator. For
!cgroup this is:
runnable_load_avg = \Sum se->avg.load_avg ; where se->on_rq
That is, a direct sum of all runnable tasks on that runqueue. As
opposed to load_avg, which is a sum of all tasks on the runqueue,
which includes a blocke
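As a rough illustration (not from the patch itself; the types and names
below are simplified userspace stand-ins, not the kernel's), the two sums
differ like this:

#include <stddef.h>

/* Hypothetical, simplified stand-in for the kernel's sched_entity. */
struct sched_entity {
	int on_rq;              /* non-zero if currently runnable */
	unsigned long load_avg; /* this entity's load average */
};

/* runnable_load_avg: sum only entities that are on the runqueue. */
unsigned long sum_runnable_load(const struct sched_entity *se, size_t nr)
{
	unsigned long sum = 0;
	for (size_t i = 0; i < nr; i++)
		if (se[i].on_rq)
			sum += se[i].load_avg;
	return sum;
}

/* load_avg: sum every entity, runnable or blocked. */
unsigned long sum_load(const struct sched_entity *se, size_t nr)
{
	unsigned long sum = 0;
	for (size_t i = 0; i < nr; i++)
		sum += se[i].load_avg;
	return sum;
}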
When an entity migrates in (or out) of a runqueue, we need to add (or
remove) its contribution from the entire PELT hierarchy, because even
non-runnable entities are included in the load average sums.
In order to do this we have some propagation logic that updates the
PELT tree, however the way it
The PELT _sum values are a saw-tooth function, dropping on the decay
edge and then growing back up again during the window.
When these window-edges are not aligned between cfs_rq and se, we can
have the situation where, for example, on dequeue, the se decays
first.
Its _sum values will be small(e
Since on wakeup migration we don't hold the rq->lock for the old CPU
we cannot update its state. Instead we add the removed 'load' to an
atomic variable and have the next update on that CPU collect and
process it.
Currently we have 2 atomic variables; which already have the issue
that they can be
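A sketch of that remove-and-collect pattern in plain C11 (names are mine,
not the kernel's; the kernel keeps such counters per runqueue):

#include <stdatomic.h>

/* Load removed by wakeup migration, parked until the next update on
 * the owning CPU collects it. */
static _Atomic unsigned long removed_load;

/* Called for the old CPU, without holding that CPU's rq->lock. */
void park_removed_load(unsigned long load)
{
	atomic_fetch_add_explicit(&removed_load, load, memory_order_relaxed);
}

/* Next update on the owning CPU: collect and apply, clamping at zero. */
void collect_removed_load(unsigned long *cfs_load)
{
	unsigned long r = atomic_exchange_explicit(&removed_load, 0,
						   memory_order_relaxed);
	*cfs_load -= (r < *cfs_load) ? r : *cfs_load;
}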
Most call sites of update_load_avg() already have cfs_rq_of(se)
available, pass it down instead of recomputing it.
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/fair.c | 31 +++
1 file changed, 15 insertions(+), 16 deletions(-)
--- a/kernel/sched/fair.c
++
For consistency's sake, we should have only a single reading of
tg->shares.
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/fair.c | 31 ---
1 file changed, 12 insertions(+), 19 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2633,9 +263
Vincent reported that when running in a cgroup, his root
cfs_rq->avg.load_avg dropped to 0 on task idle.
This is because reweight_entity() will now immediately propagate the
weight change of the group entity to its cfs_rq, and as it happens,
our approximation (5) for calc_cfs_shares() results in 0
On Fri, 12 May 2017, Pavel Machek wrote:
> On Sun 2017-05-07 20:49:03, Henrique de Moraes Holschuh wrote:
> > On Sun, 07 May 2017, Pavel Machek wrote:
> > > On Thu 2017-01-19 12:21:32, Adam Goode wrote:
> > > > This allows the control of the red status LED, which is the dot of the
> > > > "i"
> >
On Fri, May 12, 2017 at 11:01:37AM -0600, Jeffrey Hugo wrote:
> Signed-off-by: Austin Christ
> Signed-off-by: Dietmar Eggemann
> Signed-off-by: Jeffrey Hugo
So per that chain, Austin wrote the patch, who handed it to Dietmar, who
handed it to you. Except I don't see a From: Austin on.
What giv
The problem with the overestimate is that it will subtract too big a
value from the load_sum, thereby pushing it down further than it ought
to go. Since runnable_load_avg is not subject to a similar 'force',
this results in the occasional 'runnable_load > load' situation.
Signed-off-by: Peter Zijl
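The usual guard against this sort of underflow is a clamped subtraction;
a simplified sketch of the sub_positive()-style helper fair.c uses for
this (the real one also carries READ_ONCE/WRITE_ONCE; GNU C typeof is
assumed):

/* Subtract, but never let the value wrap below zero. */
#define sub_positive(ptr, val)		\
do {					\
	typeof(*(ptr)) __v = (val);	\
	if (*(ptr) > __v)		\
		*(ptr) -= __v;		\
	else				\
		*(ptr) = 0;		\
} while (0)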
one mult worse, but more obvious code
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/fair.c | 57 +++-
1 file changed, 30 insertions(+), 27 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2786,7 +2786,7 @@ static inl
When a (group) entity changes its weight, we should instantly change
its load_avg and propagate that change into the sums it is part of,
because we use these values to predict future behaviour and are not
interested in their historical value.
Without this change, the change in load would need to pro
From: "Steven Rostedt (VMware)"
jump_label_lock() is taken under get_online_cpus(). Make sure that kprobes
follows suit.
Signed-off-by: Steven Rostedt (VMware)
---
kernel/kprobes.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
ind
From: "Steven Rostedt (VMware)"
Allow get_online_cpus() to be recursive. If a lock is taken while under
"get_online_cpus()", it can call get_online_cpus() as well, as long as
that lock is never held outside of get_online_cpus() while still calling
get_online_cpus() itself.
GOC() -> Lock(X) -> GOC()
is OK, as
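A userspace analogue of that rule (all names hypothetical): a per-thread
depth counter lets the outermost call take the lock while inner calls
just nest, so GOC() -> Lock(X) -> GOC() cannot deadlock.

#include <pthread.h>

static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static __thread int goc_depth;	/* per-thread nesting depth */

void goc(void)
{
	if (goc_depth++ == 0)			/* only the outermost call */
		pthread_rwlock_rdlock(&hotplug_lock);
}

void goc_put(void)
{
	if (--goc_depth == 0)			/* only the outermost call */
		pthread_rwlock_unlock(&hotplug_lock);
}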
From: "Steven Rostedt (VMware)"
As stack tracing now requires "rcu watching", force RCU to be watching when
recording a stack trace.
Signed-off-by: Steven Rostedt (VMware)
---
kernel/trace/trace.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/kernel/trace/trace.c b/kernel/trace/trac
NOTE: This was quickly written. The change logs and patches probably need
some loving. This is for discussion. These may become legitimate patches,
but for now, I'm seeing if this is an acceptable solution.
Also note, I checked out the last branch that I had Linus pull, and then
merged with tip's
On 05/12/2017 12:57 PM, David Miller wrote:
From: Pasha Tatashin
Date: Thu, 11 May 2017 16:59:33 -0400
We should either keep memset() only for deferred struct pages, as I
have in my patches.
Another option is to add a new function struct_page_clear() which
would default to memset() and
Hi gengdongjiu,
On 05/05/17 13:31, gengdongjiu wrote:
> when the guest OS takes an SEA, my current solution is shown below:
>
> (1) The host EL3 firmware first handles the SEA error and generates the CPER
> record.
> (2) EL3 firmware separately copies the esr_el3, elr_el3, SPSR_el3,
> far_el3 to the esr_
Hi gengdongjiu,
On 10/05/17 09:44, gengdongjiu wrote:
> On 2017/5/9 1:28, James Morse wrote:
(hwpoison for KVM is a corner case as Qemu's memory effectively has two
users, Qemu and KVM. This isn't the example of how user-space gets
signalled.)
>>
>> KVM creates guests as if they we
From: "Steven Rostedt (VMware)"
There are places that take tracepoints_mutex while holding get_online_cpus(),
and since tracepoints call jump_label code, which also takes
get_online_cpus(), make sure that the tracepoints_mutex is always taken
under get_online_cpus().
Signed-off-by: Steven Rostedt
From: "Steven Rostedt (VMware)"
The event_mutex is a high level lock. It should never be taken under
get_online_cpus() being held. Perf is the only user that does so. Move the
taking of event_mutex outside of get_online_cpus() and this should solve the
locking order.
Signed-off-by: Steven Rosted
On 5/12/17 8:24 AM, David Miller wrote:
> From: Jan Moskyto Matejka
> Date: Fri, 12 May 2017 13:15:10 +0200
>
>> -int rt6_dump_route(struct rt6_info *rt, void *p_arg);
>> +int rt6_dump_route(struct rt6_info *rt, void *p_arg, int truncate);
>
> Please use "bool" and "true"/"false" for boolean val
Remove the load from the load_sum for sched_entities, basically
turning load_sum into runnable_sum. This prepares for better
reweighting of group entities.
Since we now have different rules for computing load_avg, split
___update_load_avg() into two parts, ___update_load_sum() and
___update_load_
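A toy version of that split (constants and signatures assumed here, not
quoted from the patch): one function only advances the sum, the other
scales a sum into an average.

/* Assumed PELT divider: the maximum value the geometric series reaches. */
#define LOAD_AVG_MAX 47742UL

struct sched_avg {
	unsigned long load_sum;
	unsigned long load_avg;
};

/* Step 1: accumulate into the sum; no weight applied here. */
void my_update_load_sum(struct sched_avg *sa, unsigned long contrib)
{
	sa->load_sum += contrib;
}

/* Step 2: turn the sum into an average using the caller-chosen load. */
void my_update_load_avg(struct sched_avg *sa, unsigned long load)
{
	sa->load_avg = load * sa->load_sum / LOAD_AVG_MAX;
}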
Explain the magic equation in calc_cfs_shares() a bit better.
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/fair.c | 61
1 file changed, 61 insertions(+)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2633,6 +2633,67 @@ ac
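For reference, the equation being documented is reconstructed here as an
assumption (the hunk above is truncated): each group runqueue receives a
share of tg->shares proportional to its share of the group's total load,
roughly shares_i = tg->shares * load_i / \Sum_j load_j.

/* Sketch only; the kernel version clamps and handles races. */
unsigned long calc_shares(unsigned long tg_shares,
			  unsigned long rq_load,
			  unsigned long tg_load)
{
	if (!tg_load)
		return tg_shares;	/* no load anywhere: full weight */
	return tg_shares * rq_load / tg_load;
}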
On Fri, May 12, 2017 at 8:14 AM, Miroslav Lichvar wrote:
> On Tue, Jul 15, 2014 at 09:02:38PM -0700, John Stultz wrote:
>> On 07/08/2014 04:08 AM, Miroslav Lichvar wrote:
>> > I spent some time trying to figure out a workaround for the nanosecond
>> > rounding, but I didn't find anything that woul
On 5/12/2017 11:23 AM, Peter Zijlstra wrote:
On Fri, May 12, 2017 at 11:01:37AM -0600, Jeffrey Hugo wrote:
Signed-off-by: Austin Christ
Signed-off-by: Dietmar Eggemann
Signed-off-by: Jeffrey Hugo
So per that chain, Austin wrote the patch, who handed it to Dietmar, who
handed it to you. Exce
Hi all,
So after staring at all that PELT stuff and working my way through it again:
https://lkml.kernel.org/r/20170505154117.6zldxuki2fgyo...@hirez.programming.kicks-ass.net
I started doing some patches to fix some of the identified broken.
So here are a few too many patches that do:
- f
On Fri, May 12, 2017 at 06:07:22PM +0100, Will Deacon wrote:
> On Fri, May 12, 2017 at 11:15:20AM +0100, Mark Rutland wrote:
> > Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
> > must take the jump_label mutex.
> >
> > We call cpus_set_cap() in the secondary bringup path
From: David Ahern
Date: Fri, 12 May 2017 10:26:08 -0700
> On 5/12/17 8:24 AM, David Miller wrote:
>> From: Jan Moskyto Matejka
>> Date: Fri, 12 May 2017 13:15:10 +0200
>>
>>> -int rt6_dump_route(struct rt6_info *rt, void *p_arg);
>>> +int rt6_dump_route(struct rt6_info *rt, void *p_arg, int tru
Hello,
Some data type definitions are provided by Linux source files.
How would you like to see them represented in the documentation formats
which are generated by the Sphinx software?
Regards,
Markus
From: Pasha Tatashin
Date: Fri, 12 May 2017 13:24:52 -0400
> Right now it is larger, but what I suggested is to add a new optimized
> routine just for this case, which would do STBI for 64-bytes but
> without membar (do membar at the end of memmap_init_zone() and
> deferred_init_memmap()
>
> #de
Guenter Roeck writes:
> Hi Eric,
>
> On Fri, May 12, 2017 at 08:26:27AM -0500, Eric W. Biederman wrote:
>> Vovo Yang writes:
>>
>> > On Fri, May 12, 2017 at 7:19 AM, Eric W. Biederman
>> > wrote:
>> >> Guenter Roeck writes:
>> >>
>> >>> What I know so far is
>> >>> - We see this condition on
On Fri, May 12, 2017 at 10:21:35AM -0700, Tony Lindgren wrote:
> * Bin Liu [170512 08:24]:
> > On Fri, May 12, 2017 at 07:58:49AM -0700, Tony Lindgren wrote:
> > > OK. No better ideas except I think we should probably have a separate
> > > timer for keeping VBUS on after state changes eventually.
On 05/12, Chao Yu wrote:
> On 2017/5/11 10:35, Jaegeuk Kim wrote:
> > On 05/11, Chao Yu wrote:
> >> On 2017/5/11 7:50, Jaegeuk Kim wrote:
> >>> On 05/09, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2017/5/9 5:23, Jaegeuk Kim wrote:
> > Hi Chao,
> >
> > I can't see a strong reason
* Bin Liu [170512 10:43]:
> On Fri, May 12, 2017 at 10:21:35AM -0700, Tony Lindgren wrote:
> > * Bin Liu [170512 08:24]:
> > > On Fri, May 12, 2017 at 07:58:49AM -0700, Tony Lindgren wrote:
> > > > OK. No better ideas except I think we should probably have a separate
> > > > timer for keeping VBU
On Fri, May 12, 2017 at 05:11:54PM +0200, Hans de Goede wrote:
> PEAQ is a new European OEM; I've bought one of their 2-in-1 x86
> devices, which is actually quite a nice device. Under Windows it has
> Dolby software for "better" sound and you can select different equalizer
> presets using a specia
On May 12 2017, Rob Landley wrote:
> Last I checked I couldn't just "git push" the fullhist tree to
> git.kernel.org because git graft didn't propagate right.
Perhaps you could recreate them with git replace --graft. That creates
replace objects that can be pushed and fetched. (They are stored
On 05/12, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2017/5/12 2:36, Jaegeuk Kim wrote:
> > Hi Chao,
> >
> > On 05/09, Chao Yu wrote:
> >> From: Chao Yu
> >>
> >> Serialize data/node IOs by using fifo list instead of mutex lock,
> >> it will help to enhance concurrency of f2fs, meanwhile keeping LFS
>
Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
must take the jump_label mutex.
We call cpus_set_cap() in the secondary bringup path, from the idle
thread where interrupts are disabled. Taking a mutex in this path "is a
NONO" regardless of whether it's contended, and somet
On Fri, 2017-05-12 at 17:56 +0800, joeyli wrote:
> On Mon, May 08, 2017 at 12:25:23PM -0700, Sai Praneeth Prakhya wrote:
> > From: Sai Praneeth
> >
> > Booting kexec kernel with "efi=old_map" in kernel command line hits
> > kernel panic as shown below.
> >
> > [0.001000] BUG: unable to handl
Date: Thu, 11 May 2017 18:21:01 -0500
The code can potentially sleep for an indefinite amount of time in
zap_pid_ns_processes, triggering the hung task timeout and increasing
the system load average. This is undesirable. Sleep with a task state of
TASK_INTERRUPTIBLE instead of TASK_UNINTERRUPTIBLE to
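The pattern being switched to looks roughly like this kernel-style sketch
(all_children_reaped() is a hypothetical stand-in; the real code also has
signals to worry about):

#include <linux/sched.h>

static bool all_children_reaped(void);	/* hypothetical condition */

static void wait_for_children(void)
{
	for (;;) {
		/* Interruptible sleep is invisible to the hung-task
		 * detector and the load average. */
		set_current_state(TASK_INTERRUPTIBLE);
		if (all_children_reaped())
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
}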
This patch introduces the Interrupt Aware Scheduler (IAS). Tests so far
show an overall improvement in cases where the workload has some
interrupt activity.
The patch avoids CPUs which might be considered interrupt-heavy when
trying to schedule threads (on the push side) in the system. Interrupt
This is a simple OpenMP program which does a barrier sync at the end of
each parallel for loop section.
Signed-off-by: Rohit Jain
---
tools/testing/selftests/openmp_barrier/Makefile | 6 +
tools/testing/selftests/openmp_barrier/barrier.c | 29
2 files changed, 35 i
The patch avoids CPUs which might be considered interrupt-heavy when
trying to schedule threads (on the push side) in the system. Interrupt
Awareness has only been added into the fair scheduling class.
It does so by using the following algorithm:
--
On Fri, May 12, 2017 at 01:15:44PM -0400, Steven Rostedt wrote:
> NOTE: This was quickly written. The change logs and patches probably need
> some loving. This is for discussion. These may become legitimate patches,
> but for now, I'm seeing if this is an acceptable solution.
>
> Also note, I chec
On sama5d2, power to the core may be cut while entering suspend mode. It is
necessary to save and restore the TCB registers.
Signed-off-by: Alexandre Belloni
---
Changes in v2:
- use writel instead of __raw_writel
- Document sequence
- use ARRAY_SIZE(tcb_cache) instead of 3
drivers/clocksour
CONFIG_GENERIC_TIME_VSYSCALL_OLD was introduced five years ago
to allow a transition from the old vsyscall implementations to
the new method (which simplified internal accounting and made
timekeeping more precise).
However, PPC and IA64 have yet to make the transition, despite
in some cases me sen
On Fri, May 12, 2017 at 01:15:45PM -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)"
>
> As stack tracing now requires "rcu watching", force RCU to be watching when
> recording a stack trace.
>
> Signed-off-by: Steven Rostedt (VMware)
Assuming that you never get to __trace_stack()
Using octal permissions instead of symbolic ones is preferred.
Signed-off-by: Aleksey Kurbatov
---
drivers/staging/rtl8712/os_intfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/rtl8712/os_intfs.c
b/drivers/staging/rtl8712/os_intfs.c
index 8836b31b4ef8..e
On Fri, 12 May 2017 11:25:35 -0700
"Paul E. McKenney" wrote:
> On Fri, May 12, 2017 at 01:15:45PM -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (VMware)"
> >
> > As stack tracing now requires "rcu watching", force RCU to be watching when
> > recording a stack trace.
> >
> > Signed-off
On Fri, May 12, 2017 at 01:15:46PM -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)"
>
> Allow get_online_cpus() to be recursive. If a lock is taken while under
> "get_online_cpus()", it can call get_online_cpus() as well, just as long as
> it is never held without being under get_on
> -----Original Message-----
> From: owner-linux-security-mod...@vger.kernel.org [mailto:owner-linux-
> security-mod...@vger.kernel.org] On Behalf Of Casey Schaufler
> Sent: Thursday, May 11, 2017 1:46 PM
> To: Stephen Smalley ; Sebastien Buisson
> ; linux-security-mod...@vger.kernel.org; linux-
On Fri, May 12, 2017 at 01:15:47PM -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)"
>
> jump_label_lock() is taken under get_online_cpus(). Make sure that kprobes
> follows suit.
>
> Signed-off-by: Steven Rostedt (VMware)
The remaining three (3/5 through 5/5) look straightforward
On Fri, 12 May 2017 11:35:59 -0700
"Paul E. McKenney" wrote:
> On Fri, May 12, 2017 at 01:15:46PM -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (VMware)"
> >
> > Allow get_online_cpus() to be recursive. If a lock is taken while under
> > "get_online_cpus()", it can call get_online_cpus
On Fri, Apr 28, 2017 at 05:10:40PM -0700, Paul E. McKenney wrote:
> On Fri, Apr 28, 2017 at 05:51:15PM -0400, Nicolas Pitre wrote:
> > On Fri, 28 Apr 2017, Paul E. McKenney wrote:
> >
> > > Hello, Nicolas!
> > >
> > > Saw the TTY write up LWN and figured I should send this your way.
> > > It shou
On Fri, 12 May 2017 11:39:11 -0700
"Paul E. McKenney" wrote:
> On Fri, May 12, 2017 at 01:15:47PM -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (VMware)"
> >
> > jump_label_lock() is taken under get_online_cpus(). Make sure that kprobes
> > follows suit.
> >
> > Signed-off-by: Steven
On Fri, May 12, 2017 at 11:41:55AM -0700, Paul E. McKenney wrote:
> On Fri, Apr 28, 2017 at 05:10:40PM -0700, Paul E. McKenney wrote:
> > On Fri, Apr 28, 2017 at 05:51:15PM -0400, Nicolas Pitre wrote:
> > > On Fri, 28 Apr 2017, Paul E. McKenney wrote:
> > >
> > > > Hello, Nicolas!
> > > >
> > > >
> -----Original Message-----
> From: David Miller [mailto:da...@davemloft.net]
> Sent: Friday, May 12, 2017 12:20 PM
> To: Haiyang Zhang ; Haiyang Zhang
>
> Cc: net...@vger.kernel.org; KY Srinivasan ;
> o...@aepfle.de; vkuzn...@redhat.com; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH net-n
On Fri, May 12, 2017 at 02:36:19PM -0400, Steven Rostedt wrote:
> On Fri, 12 May 2017 11:25:35 -0700
> "Paul E. McKenney" wrote:
>
> > On Fri, May 12, 2017 at 01:15:45PM -0400, Steven Rostedt wrote:
> > > From: "Steven Rostedt (VMware)"
> > >
> > > As stack tracing now requires "rcu watching",
From: Markus Elfring
Date: Fri, 12 May 2017 20:46:54 +0200
Three update suggestions were taken into account
from static source code analysis.
Markus Elfring (3):
Delete an error message for a failed memory allocation in etb_probe()
Fix a typo in a comment line
Improve a size determination
From: Markus Elfring
Date: Fri, 12 May 2017 20:23:43 +0200
Omit an extra message for a memory allocation failure in this function.
This issue was detected by using the Coccinelle software.
Link:
http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf
Sign
From: Markus Elfring
Date: Fri, 12 May 2017 20:30:42 +0200
Delete a character in this description for a condition check.
Signed-off-by: Markus Elfring
---
drivers/hwtracing/coresight/coresight-etb10.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hwtracing/coresig
On Fri, May 12, 2017 at 02:40:27PM -0400, Steven Rostedt wrote:
> On Fri, 12 May 2017 11:35:59 -0700
> "Paul E. McKenney" wrote:
>
> > On Fri, May 12, 2017 at 01:15:46PM -0400, Steven Rostedt wrote:
> > > From: "Steven Rostedt (VMware)"
> > >
> > > Allow get_online_cpus() to be recursive. If a
From: Markus Elfring
Date: Fri, 12 May 2017 20:36:03 +0200
Replace the specification of a data structure by a pointer dereference
as the parameter for the operator "sizeof" to make the corresponding size
determination a bit safer according to the Linux coding style convention.
Signed-off-by: Mar
On Thu, May 11, 2017 at 11:00 PM, Dave Airlie wrote:
>
> It also has an amdgpu fixes pull, with lots of ongoing work on Vega10
> which is new in this kernel and is preliminary support so may have a
> fair bit of movement.
Note: I will *not* be taking these kinds of pull requests after rc1.
If Ve
With this patch, we don't try to umount all mounts of a tree together.
Instead, we umount them one by one. In this case, we see a significant
improvement in performance for the worst case.
v2: create a sorted list of mounts, so that a child is umounted before its
parent.
The reason of this opti
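A sketch of that v2 ordering (structures and the umount helper are
hypothetical): sort by depth so every child precedes its parent, then
umount in that order.

#include <stdlib.h>

struct mnt {
	int depth;		/* distance from the tree root */
};

void umount_one(struct mnt *m);	/* assumed helper */

/* Deeper mounts (children) first, so nothing is busy when we reach it. */
static int deeper_first(const void *a, const void *b)
{
	return ((const struct mnt *)b)->depth -
	       ((const struct mnt *)a)->depth;
}

void umount_tree(struct mnt *mounts, size_t n)
{
	qsort(mounts, n, sizeof(*mounts), deeper_first);
	for (size_t i = 0; i < n; i++)
		umount_one(&mounts[i]);
}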
On Fri, May 12, 2017 at 04:48:48PM +0300, Peter Ujfalusi wrote:
> We have one register for each EP to set the maximum packet size for both
> TX and RX.
> If, for example, RX programming happens before the previous TX
> transfer finishes, we would reset the TX packet size.
>
> To fix this issu
On Fri, 12 May 2017, Paul E. McKenney wrote:
> On Fri, Apr 28, 2017 at 05:10:40PM -0700, Paul E. McKenney wrote:
> > On Fri, Apr 28, 2017 at 05:51:15PM -0400, Nicolas Pitre wrote:
> > > On Fri, 28 Apr 2017, Paul E. McKenney wrote:
> > >
> > > > Hello, Nicolas!
> > > >
> > > > Saw the TTY write u
On Fri, May 12, 2017 at 8:01 PM, Alexandre Courbot wrote:
> From: Alexandre Courbot
>
> I have not been able to dedicate time to the GPIO subsystem since quite
> some time, and I don't see the situation improving in the near future.
> Update the maintainers list to reflect this unfortunate fact.
On Thu, May 11, 2017 at 10:54 PM, Martin Schwidefsky
wrote:
> On Thu, 11 May 2017 22:34:31 -0700
> Kees Cook wrote:
>
>> On Thu, May 11, 2017 at 10:28 PM, Martin Schwidefsky
>> wrote:
>> > On Thu, 11 May 2017 16:44:07 -0700
>> > Linus Torvalds wrote:
>> >
>> >> On Thu, May 11, 2017 at 4:17 PM,
On Fri, May 12, 2017 at 12:01 PM, Kees Cook wrote:
>
> Yeah, the risk for "corrupted addr_limit" is mainly a concern for
> archs with addr_limit on the kernel stack. If I'm reading things
> correctly, that means, from the archs I've been paying closer
> attention to, it's an issue for arm, mips, a
On Fri, May 12, 2017 at 02:59:48PM -0400, Nicolas Pitre wrote:
> On Fri, 12 May 2017, Paul E. McKenney wrote:
>
> > On Fri, Apr 28, 2017 at 05:10:40PM -0700, Paul E. McKenney wrote:
> > > On Fri, Apr 28, 2017 at 05:51:15PM -0400, Nicolas Pitre wrote:
> > > > On Fri, 28 Apr 2017, Paul E. McKenney w
On Fri, May 12, 2017 at 12:01:59PM -0700, Kees Cook wrote:
> Yeah, the risk for "corrupted addr_limit" is mainly a concern for
> archs with addr_limit on the kernel stack. If I'm reading things
> correctly, that means, from the archs I've been paying closer
> attention to, it's an issue for arm, mi
SPI NOR branches are now hosted on MTD repos, spi-nor/next is on l2-mtd
and spi-nor/fixes will be on linux-mtd.
Signed-off-by: Cyrille Pitchen
---
Hi all,
this is the same patch as for the NAND subsystem migration.
Hence the same question: how do we specify the branches?
I don't see any entry for th
ssi_fips.c:
fixing checkpatch.pl errors:
ERROR: code indent should use tabs where possible
+int rc = 0;$
ERROR: code indent should use tabs where possible
+int rc = 0;$
Signed-off-by: Connor Kelleher
---
drivers/staging/ccree/ssi_fips.c | 4 ++--
1 file changed, 2 insertions(+
ssi_fips.c:
fixing checkpatch.pl errors:
ERROR: trailing whitespace
+ * $
ERROR: trailing whitespace
+ * $
ERROR: trailing whitespace
+ * $
ERROR: trailing whitespace
+This function returns the REE FIPS state. $
ERROR: trailing whitespace
+It should be called by kernel module. $
ERROR: trai
From: Haiyang Zhang
The cleanup function is updated to cover duplicate config info in
files included via the "source" keyword in Ubuntu network config.
Signed-off-by: Haiyang Zhang
---
tools/hv/bondvf.sh | 21 ++---
1 files changed, 18 insertions(+), 3 deletions(-)
diff --git a
On Fri, May 12, 2017 at 12:08 PM, Linus Torvalds
wrote:
> On Fri, May 12, 2017 at 12:01 PM, Kees Cook wrote:
>> Yeah, the risk for "corrupted addr_limit" is mainly a concern for
>> archs with addr_limit on the kernel stack. If I'm reading things
>> correctly, that means, from the archs I've been
Hey Ingo,
Here it is, with all the headers required.
The following changes since commit a351e9b9fc24e982ec2f0e76379a49826036da12:
Linux 4.11 (2017-04-30 19:47:48 -0700)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/sashal/linux.git
liblockdep-fixes
On Fri, May 12, 2017 at 12:55:22PM -0500, Eric W. Biederman wrote:
> Date: Thu, 11 May 2017 18:21:01 -0500
>
> The code can potentially sleep for an indefinite amount of time in
> zap_pid_ns_processes triggering the hung task timeout, and increasing
> the system average. This is undesirable. Sle
On Thu, May 11, 2017 at 02:33:14PM +0100, Alan Cox wrote:
> On Thu, 11 May 2017 09:29:14 +0100
> Okash Khawaja wrote:
>
> > Hi Alan,
> >
> > On Wed, May 10, 2017 at 08:41:51PM +0100, Alan Cox wrote:
> > > > + if (!(tmp_termios.c_cflag & CRTSCTS)) {
> > > > + tmp_termios.c_cfl
On Fri, May 12, 2017 at 12:00 PM, Hans Verkuil wrote:
> On 05/12/17 11:49, Arnd Bergmann wrote:
>> I can probably come up with a workaround, but haven't completely thought
>> through all the combinations yet. Also, I assume the same fix will be needed
>> for exynos, though that has not come up in
Hi Eric,
On Fri, May 12, 2017 at 12:33:01PM -0500, Eric W. Biederman wrote:
> Guenter Roeck writes:
>
> > Hi Eric,
> >
> > On Fri, May 12, 2017 at 08:26:27AM -0500, Eric W. Biederman wrote:
> >> Vovo Yang writes:
> >>
> >> > On Fri, May 12, 2017 at 7:19 AM, Eric W. Biederman
> >> > wrote:
> >
This patch fixes the issue where TTY-migrated synths would take a while to shut
up after hitting the numpad enter key. When calling synth_flush, even though the
XOFF character is sent at high priority, data buffered in the TTY layer is still
sent to the synth. This patch flushes that buffered data when synt
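The flush half of that fix plausibly reduces to one extra TTY-layer call;
a sketch (the hook name here is hypothetical, but tty_driver_flush_buffer()
is the standard helper for discarding unsent output):

#include <linux/tty.h>

/* On shut-up, drop whatever the TTY layer already queued for the
 * synth instead of letting it drain after the XOFF. */
static void synth_shut_up_flush(struct tty_struct *tty)
{
	tty_driver_flush_buffer(tty);
}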
On Thu, May 11, 2017 at 03:02:35PM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.18.53 release.
> There are 39 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know
On Fri, May 12, 2017 at 11:04:26AM -0700, Rohit Jain wrote:
> The patch avoids CPUs which might be considered interrupt-heavy when
> trying to schedule threads (on the push side) in the system. Interrupt
> Awareness has only been added into the fair scheduling class.
>
> It does so by, using the f
On Thu, May 11, 2017 at 04:12:23PM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.4.68 release.
> There are 60 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
On Fri, May 12, 2017 at 01:15:44PM -0400, Steven Rostedt wrote:
> 2) Allow for get_online_cpus() to nest
So Thomas and I have been avoiding doing this.
In general we avoid nested locking in the kernel. Nested locking makes
an absolute mockery of locking rules and what all gets protected.
Yes,