...@lists.xenproject.org
Cc: Konrad Rzeszutek Wilk
Cc: David Vrabel
Cc: Bjorn Helgaas
Cc: Graeme Gregory
Cc: Lv Zheng
Link: http://lkml.kernel.org/r/1421720467-7709-4-git-send-email-jiang@linux.intel.com
Signed-off-by: Thomas Gleixner
Signed-off-by: Sasha Levin
---
arch/x86/include/asm/acpi.h | 1 +
arch
On 02/10/2015 04:30 AM, Raghavendra K T wrote:
>>
>> So I think Raghavendra's last version (which hopefully fixes the
>> lockup problem that Sasha reported) together with changing that
>
> V2 did pass the stress test, but getting confirmation from Sasha would help.
I've been running it for the last two days
(32 cpu +ht sandy bridge 8GB 16vcpu guest)
>> benchmark   overcommit   %improve
>> kernbench   1x           -0.13
>> kernbench   2x            0.02
>> dbench      1x           -1.77
>> dbench      2x           -0.63
>>
>> [Jeremy: hinted missing
Hey Ian,
Sorry - I forgot to reply. It's in my stable queue and will be shipped in
the next release.
Thanks,
Sasha
On 09/11/2015 11:10 AM, Ian Campbell wrote:
> ping?
>
> On Wed, 2015-09-02 at 10:18 +0100, Ian Campbell wrote:
>> [resending to correct stable address, sorry folks]
>>
>> On Wed,
_ro.
Signed-off-by: Andy Lutomirski
Cc: Andrew Cooper
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: David Vrabel
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Sasha Levin
Cc: S
.com
Cc: casca...@linux.vnet.ibm.com
Cc: david.vra...@citrix.com
Cc: sanje...@broadcom.com
Cc: siva.kal...@broadcom.com
Cc: vyasev...@gmail.com
Cc: xen-de...@lists.xensource.com
Link: http://lkml.kernel.org/r/20150417190448.ga9...@l.oracle.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
ar
On 07/17/2015 10:09 AM, Boris Ostrovsky wrote:
> On 07/17/2015 09:59 AM, Ian Campbell wrote:
>> On Tue, 2015-07-07 at 15:54 -0400, Boris Ostrovsky wrote:
>>> Commit 63753fac67e1 ("x86: Store a per-cpu shadow copy of CR4") in
>>> 3.18.y branch introduced a regression on PVH Xen guests.
>>>
>>> Pleas
On 02/06/2015 09:49 AM, Raghavendra K T wrote:
> static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
> {
> if (TICKET_SLOWPATH_FLAG &&
> -		static_key_false(&paravirt_ticketlocks_enabled)) {
> -		arch_spinlock_t prev;
> +		static_key_false(&paravirt_tick
r unlock(),
> and we can move slowpath clearing to fastpath lock.
>
> However it brings additional case to be handled, viz., slowpath still
> could be set when somebody does arch_trylock. Handle that too by ignoring
> slowpath flag during lock availability check.
>
> Reported-by:
On 02/06/2015 02:42 PM, Davidlohr Bueso wrote:
> On Fri, 2015-02-06 at 08:25 -0800, Linus Torvalds wrote:
>> On Fri, Feb 6, 2015 at 6:49 AM, Raghavendra K T
>> wrote:
>>> Paravirt spinlock clears slowpath flag after doing unlock.
>> [ fix edited out ]
>>
>> So I'm not going to be applying this for