On 10/25/2016 11:03 AM, Christian Borntraeger wrote:
> Peter,
>
> here is v2 with some improved patch descriptions and some fixes. The
> previous version has survived one day in linux-next and I only changed
> small parts. So unless there is some other issue, feel free to pick it
> up.
On 11/15/2016 01:30 PM, Russell King - ARM Linux wrote:
> On Tue, Oct 25, 2016 at 11:03:11AM +0200, Christian Borntraeger wrote:
>> For spinning loops, people often use barrier() or cpu_relax().
>> For most architectures cpu_relax and barrier are the same, but on
>> some architectures cpu_relax can do more; on s390, for example, it
>> yields the CPU to the hypervisor.
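For illustration, the two spinning styles look like this (a minimal sketch with a hypothetical "done" flag, not code from the patch):

	while (!READ_ONCE(done))
		cpu_relax();	/* arch hint: may relax SMT siblings or, on old s390, yield */

	while (!READ_ONCE(done))
		barrier();	/* compiler barrier only: just forces the reload */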
On 11/15/2016 02:37 PM, Russell King - ARM Linux wrote:
> On Tue, Nov 15, 2016 at 02:19:53PM +0100, Christian Borntraeger wrote:
>> On 11/15/2016 01:30 PM, Russell King - ARM Linux wrote:
>>> On Tue, Oct 25, 2016 at 11:03:11AM +0200, Christian Borntraeger wrote:
>>>>
On 11/16/2016 11:23 AM, Peter Zijlstra wrote:
> On Wed, Nov 16, 2016 at 12:19:09PM +0800, Pan Xinhui wrote:
>> Hi, Peter.
>> I think we can avoid a function call in a simpler way. How about the below?
>>
>> static inline bool vcpu_is_preempted(int cpu)
>> {
>> /* only set in pv case */
>>
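A sketch of the shape being suggested (the pv_lock_ops hook name is an assumption here, not confirmed by the quoted mail):

	static inline bool vcpu_is_preempted(int cpu)
	{
		/* only set in the pv case */
		if (pv_lock_ops.vcpu_is_preempted)
			return pv_lock_ops.vcpu_is_preempted(cpu);
		return false;
	}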
No need to duplicate the same define everywhere. Since
the only user is stop-machine and the only provider is
s390, we can use a default implementation of cpu_relax_yield
in sched.h.
Suggested-by: Russell King
Signed-off-by: Christian Borntraeger
---
arch/alpha/include/asm/processor.h | 1
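A minimal sketch of such a default (the exact guard is an assumption; the real patch may differ):

	/* linux/sched.h: fall back to cpu_relax() unless the architecture
	 * (currently only s390) provides its own cpu_relax_yield. */
	#ifndef cpu_relax_yield
	#define cpu_relax_yield() cpu_relax()
	#endif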
On 10/21/2016 04:57 PM, David Miller wrote:
> From: Christian Borntraeger
> Date: Fri, 21 Oct 2016 13:58:53 +0200
>
>> For spinning loops, people often use barrier() or cpu_relax().
>> For most architectures cpu_relax and barrier are the same, but on
>> some architectures cpu_relax can do more; on s390, for example, it
>> yields the CPU to the hypervisor.
As there are no users left, we can remove cpu_relax_lowlatency.
Signed-off-by: Christian Borntraeger
---
arch/alpha/include/asm/processor.h | 1 -
arch/arc/include/asm/processor.h | 2 --
arch/arm/include/asm/processor.h | 1 -
arch/arm64/include/asm/processor.h | 1
PS: In the long run I would also try to provide something like
cpu_relax_yield_to for s390, taking a cpu number (or just add that to
cpu_relax_yield), since a directed yield is always better than an
undirected one as long as we know the waiter.
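The directed variant would only need the target CPU; a hypothetical signature:

	/* sketch only: give up the timeslice in favor of the named waiter */
	void cpu_relax_yield_to(int cpu);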
----
stop_machine seemed to be the only important place that relied on
yielding during cpu_relax. This was fixed by using cpu_relax_yield.
Therefore, we can now redefine cpu_relax on s390 to be a barrier
instead, making s390 identical to all other architectures.
Signed-off-by: Christian Borntraeger
---
arch
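In effect the change boils down to this (a sketch, not the literal diff):

	/* arch/s390/include/asm/processor.h: the hypervisor yield moves into
	 * cpu_relax_yield(); cpu_relax() becomes a plain compiler barrier. */
	#define cpu_relax() barrier()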
With the s390 special case of a yielding cpu_relax implementation gone,
we can now remove all users of cpu_relax_lowlatency and replace them
with cpu_relax.
Signed-off-by: Christian Borntraeger
---
drivers/gpu/drm/i915/i915_gem_request.c | 2 +-
drivers/vhost/net.c | 4
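The conversion is a mechanical rename at each call site, e.g. (the poll condition below is hypothetical):

	while (!vhost_has_work(dev))	/* hypothetical busy-poll condition */
		cpu_relax();		/* was: cpu_relax_lowlatency() */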
This place is probably the most important one, as it makes all but one
guest CPU wait for a single guest CPU. As we now have cpu_relax_yield,
we can use it in multi_cpu_stop. For now let's only add it here; we can
add it to other places later when necessary. A sketch of the resulting
loop follows the diffstat below.
Signed-off-by: Christian Borntraeger
---
k
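Roughly, the waiting loop in multi_cpu_stop() becomes (an abridged sketch; msdata and the MULTI_STOP_* states exist in kernel/stop_machine.c, the surrounding logic is elided):

	do {
		/* on s390 this yields the timeslice; elsewhere it is cpu_relax() */
		cpu_relax_yield();
		if (msdata->state != curstate) {
			curstate = msdata->state;
			/* ... run the action for the new state ... */
		}
	} while (curstate != MULTI_STOP_EXIT);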
As cpu_relax_lowlatency is used in more and more places, let's revert
the logic and provide a cpu_relax_yield that can be called in places
where yielding is more important than latency. By default this is the
same as cpu_relax on all architectures.
Signed-off-by: Christian Borntraeger
---
arch/alpha/include/asm/processor.
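By default each architecture would then simply alias the new helper (a sketch; per-arch header paths vary):

	/* arch/<arch>/include/asm/processor.h */
	#define cpu_relax_yield() cpu_relax()
	/* s390 alone keeps its hypervisor yield behind cpu_relax_yield() */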
- Minor cleanups.
>
> Andy Lutomirski (6):
> vring: Introduce vring_use_dma_api()
> virtio_ring: Support DMA APIs
> virtio: Add improved queue allocation API
> virtio_mmio: Use the DMA API if enabled
> virtio_pci: Use the DMA API if enabled
> vring: Use the DMA A
On 01/05/2016 10:30 AM, Michael S. Tsirkin wrote:
>
> arch/s390/kernel/vdso.c:smp_mb();
>
> Looking at
> Author: Christian Borntraeger
> Date: Fri Sep 11 16:23:06 2015 +0200
>
> s390/vdso: use correct memory barrier
>
>
On 25.02.2015 at 11:08, Ingo Molnar wrote:
>
> * Greg KH wrote:
>
It's:
d6abfdb20223 x86/spinlocks/paravirt: Fix memory corruption on unlock
>>>
>>> Yes, this is the original patch. Please note I have taken out the
>>> READ_ONCE changes from the original patch to avoid build warnings.
Signed-off-by: Christian Borntraeger
---
arch/x86/xen/p2m.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index edbc7a6..cb71016 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -554,7 +554,7 @@ static bool alloc_p2m
sparse complains about the initializing cast in __ACCESS_ONCE, so
use __force on that cast.
Fixes: a91ed664749c ("kernel: tighten rules for ACCESS ONCE")
Signed-off-by: Christian Borntraeger
---
include/linux/compiler.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 5e186bf..
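The offending initializer and its fix look roughly like this (abridged from the macro in include/linux/compiler.h):

	/* sparse warns when a plain 0 is cast to a type carrying an
	 * address-space annotation, so force the cast: */
	__maybe_unused typeof(x) __var = (__force typeof(x)) 0;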
Folks,
fyi, this is my current patch queue for the next merge window. It
does contain a patch that will disallow ACCESS_ONCE on non-scalar
types.
The tree is part of linux-next and can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/borntraeger/linux.git linux-next
Christian
ACCESS_ONCE does not work reliably on non-scalar types. For
example, gcc 4.6 and 4.7 might remove the volatile tag for such
accesses during the SRA (scalar replacement of aggregates) step
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).
Fix up gup_pmd_range.
Signed-off-by: Christian Borntraeger
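The fix itself is a one-line change per access; representatively (variable names as in typical gup code):

	pmd_t pmd = READ_ONCE(*pmdp);	/* was: ACCESS_ONCE(*pmdp), a non-scalar pmd_t */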
.o] Error 1
Replace ACCESS_ONCE with READ_ONCE to fix the problem.
Fixes: a91ed664749c ("kernel: tighten rules for ACCESS ONCE")
Cc: Paul E. McKenney
Cc: Christian Borntraeger
Signed-off-by: Guenter Roeck
Reviewed-by: Paul E. McKenney
Signed-off-by: Christian Borntraeger
---
a
Now that all non-scalar users of ACCESS_ONCE have been converted
to READ_ONCE or ASSIGN_ONCE, let's tighten ACCESS_ONCE to only
work on scalar types.
This variant was proposed by Alexei Starovoitov.
Signed-off-by: Christian Borntraeger
Reviewed-by: Paul E. McKenney
---
include/linux/compiler.h
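A sketch of how the enforcement can work: initializing a variable of the accessed type with a plain 0 only compiles for scalar and pointer types, so non-scalar uses now fail at build time:

	#define __ACCESS_ONCE(x) ({ \
		__maybe_unused typeof(x) __var = 0; \
		(volatile typeof(x) *)&(x); })
	#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))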
commit 78bff1c8684f ("x86/ticketlock: Fix spin_unlock_wait() livelock")
introduced another ACCESS_ONCE case in x86 spinlock.h.
Change that as well.
Signed-off-by: Christian Borntraeger
Cc: Oleg Nesterov
---
arch/x86/include/asm/spinlock.h | 2 +-
1 file changed, 1 insertion(+),
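The affected read becomes (an abridged sketch of arch_spin_unlock_wait(); the loop body is elided):

	static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
	{
		__ticket_t head = READ_ONCE(lock->tickets.head);	/* was: ACCESS_ONCE */

		/* ... spin with cpu_relax() until the ticket moves on ... */
	}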
Replace ACCESS_ONCE with READ_ONCE.
Signed-off-by: Christian Borntraeger
---
arch/powerpc/mm/hugetlbpage.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 5ff4e07..620d0ec 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
Replace ACCESS_ONCE with READ_ONCE.
Signed-off-by: Christian Borntraeger
---
arch/powerpc/kvm/book3s_hv_rm_xics.c |  8 ++++----
arch/powerpc/kvm/book3s_xics.c       | 16 ++++++++--------
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c
On 15.01.2015 at 11:43, David Vrabel wrote:
> On 15/01/15 08:58, Christian Borntraeger wrote:
>> ACCESS_ONCE does not work reliably on non-scalar types. For
>> example gcc 4.6 and 4.7 might remove the volatile tag for such
>> accesses during the SRA (scalar replacement of aggregates) step.
On 15.01.2015 at 20:38, Oleg Nesterov wrote:
> On 01/15, Christian Borntraeger wrote:
>>
>> --- a/arch/x86/include/asm/spinlock.h
>> +++ b/arch/x86/include/asm/spinlock.h
>> @@ -186,7 +186,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t
>&g
On 15.01.2015 at 21:01, Oleg Nesterov wrote:
> On 01/15, Christian Borntraeger wrote:
>>
>> On 15.01.2015 at 20:38, Oleg Nesterov wrote:
>>> On 01/15, Christian Borntraeger wrote:
>>>>
>>>> --- a/arch/x86/include/asm/spinlock.h
>>>>
On 16.01.2015 at 00:09, Michael Ellerman wrote:
> On Thu, 2015-01-15 at 09:58 +0100, Christian Borntraeger wrote:
>> ACCESS_ONCE does not work reliably on non-scalar types. For
>> example gcc 4.6 and 4.7 might remove the volatile tag for such
>> accesses during the SRA (scalar replacement of aggregates) step.
On 28.07.2015 at 03:08, Andy Lutomirski wrote:
> On Mon, Sep 1, 2014 at 10:39 AM, Andy Lutomirski wrote:
>> This fixes virtio on Xen guests as well as on any other platform
>> that uses virtio_pci on which physical addresses don't match bus
>> addresses.
>>
>> This can be tested with:
>>
>> v