target352 356 +4
e843419@0b02_d7e7_408 8 - -8
e843419@01bb_21d2_868 8 - -8
finish_task_switch.isra 592 548 -44
Signed-off-by: Nysal Jan K.A.
Thanks!
Reviewed-by:
)
return;
sync_core_before_usermode();
}
#else
static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm) { }
#endif
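For context, the complete helper this fragment quotes reads roughly as follows; this is a reconstruction from that era's include/linux/sched/mm.h, so double-check against the tree:

/* Reconstruction of the full helper under discussion. */
#ifdef CONFIG_MEMBARRIER
static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
{
	if (current->mm != mm)
		return;
	if (likely(!(atomic_read(&mm->membarrier_state) &
		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
		return;
	sync_core_before_usermode();
}
#else
static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm) { }
#endif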
Not sure what folks prefer.
In either case I think it's probably worth a short comment explaining
why it's worth the
https://github.com/intel-lab-lkp/linux/commits/Mathieu-Desnoyers/compiler-h-Introduce-ptr_eq-to-preserve-address-dependency/20241008-215353
base: https://git.kernel.org/cgit/linux/kernel/git/powerpc/linux.git next
patch link:
https://lore.kernel.org/all/20241008135034.1982519-5-mathieu.desnoy...
https://github.com/intel-lab-lkp/linux/commits/Mathieu-Desnoyers/compiler-h-Introduce-ptr_eq-to-preserve-address-dependency/20241005-023027
base: https://git.kernel.org/cgit/linux/kernel/git/powerpc/linux.git next
patch link:
https://lore.kernel.org/all/20241004182734.1761555-5-mathieu.desnoy...
On 2024-03-25 16:34, Nathan Lynch wrote:
Mathieu Desnoyers writes:
In the powerpc architecture support within the liburcu project [1]
we have a cache line size defined as 256 bytes with the following
comment:
/* Include size of POWER5+ L3 cache lines: 256 bytes */
#define CAA_CACHE_LINE_SIZE 256
On 2024-03-26 03:19, Michael Ellerman wrote:
Mathieu Desnoyers writes:
Hi,
Hi Mathieu,
In the powerpc architecture support within the liburcu project [1]
we have a cache line size defined as 256 bytes with the following
comment:
/* Include size of POWER5+ L3 cache lines: 256 bytes */
This is why we came up with this value, but I don't have the detailed specs
of that machine.
Any feedback on this matter would be appreciated.
Thanks!
Mathieu
[1] https://liburcu.org
[2] https://github.com/urcu/userspace-rcu/pull/22
[3] https://www.7-cpu.com/
--
Mathieu Desnoyers
EfficiOS
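For illustration, the usual reason such a constant matters is false-sharing avoidance; a minimal sketch (only CAA_CACHE_LINE_SIZE comes from liburcu, the struct is hypothetical):

/* liburcu's powerpc value, per the comment quoted above. */
#define CAA_CACHE_LINE_SIZE	256

/* Give each per-thread counter its own cache line; undersizing this to
 * 128 could let two counters share one 256-byte line on POWER. */
struct counter {
	unsigned long value;
} __attribute__((aligned(CAA_CACHE_LINE_SIZE)));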
I suspect it would be good to merge that fix into tip/master through
sched/core.
Thanks,
Mathieu
Thanks
-Sachin
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On 2022-12-06 21:09, Michael Ellerman wrote:
Mathieu Desnoyers writes:
On 2022-12-05 17:50, Michael Ellerman wrote:
Michael Jeanson writes:
On 2022-12-05 15:11, Michael Jeanson wrote:
Michael Jeanson writes:
In v5.7 the powerpc syscall entry/exit logic was rewritten in C, on
kallsyms.c:symbol_valid() to also include function descriptor
symbols. This would mean accepting symbols pointing into the .opd ELF
section.
IMHO the second option would be better because it does not increase the
kernel image size as much as KALLSYMS_ALL.
Thoughts ?
Thanks,
Mathieu
--
Mathieu De
RCU read locks shared with nmi handlers.
Thoughts ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
.
> --- a/include/linux/sync_core.h
> +++ /dev/null
> @@ -1,21 +0,0 @@
> -/* SPDX-License-Identifier: GPL-2.0 */
> -#ifndef _LINUX_SYNC_CORE_H
> -#define _LINUX_SYNC_CORE_H
> -
> -#ifdef CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
> -#include <asm/sync_core.h>
> -#else
> -/*
> - * This is a dummy sync_core_before_usermode() implementation that can be used
> - * on all architectures which return to user-space through core serializing
> - * instructions.
> - * If your architecture returns to user-space through non-core-serializing
> - * instructions, you need to write your own functions.
> - */
> -static inline void sync_core_before_usermode(void)
> -{
> -}
> -#endif
> -
> -#endif /* _LINUX_SYNC_CORE_H */
> -
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
even make it irrelevant.
Thanks,
Mathieu
>
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Nicholas Piggin
> Cc: Mathieu Desnoyers
> Cc: Peter Zijlstra
> Signed-off-by: Andy Lutomirski
> -
("x86/kvm: Use generic xfer
> to guest work function"), where TIF_NOTIFY_RESUME would be cleared by KVM
> without updating rseq, leading to a stale CPU ID and other badness.
>
> Signed-off-by: Sean Christopherson
Thanks!
Acked-by: Mathieu Desnoyers
> ---
> tools/testing
- On Aug 27, 2021, at 7:23 PM, Sean Christopherson sea...@google.com wrote:
> On Fri, Aug 27, 2021, Mathieu Desnoyers wrote:
[...]
>> Does it reproduce if we randomize the delay to have it picked randomly from
>> 0us
>> to 100us (with 1us step) ? It would remove a
- On Aug 26, 2021, at 7:54 PM, Sean Christopherson sea...@google.com wrote:
> On Thu, Aug 26, 2021, Mathieu Desnoyers wrote:
>> - On Aug 25, 2021, at 8:51 PM, Sean Christopherson sea...@google.com
>> wrote:
>> >> >> + r = sched_s
- On Aug 25, 2021, at 8:51 PM, Sean Christopherson sea...@google.com wrote:
> On Mon, Aug 23, 2021, Mathieu Desnoyers wrote:
>> [ re-send to Darren Hart ]
>>
>> - On Aug 23, 2021, at 11:18 AM, Mathieu Desnoyers
>> mathieu.desnoy...@efficios.com wrote:
>>
[ re-send to Darren Hart ]
- On Aug 23, 2021, at 11:18 AM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
> - On Aug 20, 2021, at 6:50 PM, Sean Christopherson sea...@google.com
> wrote:
>
>> Add a test to verify an rseq's CPU ID is updated correctly if the
();
> + cpu = sched_getcpu();
> + rseq_cpu = READ_ONCE(__rseq.cpu_id);
> + smp_rmb();
> + } while (snapshot != atomic_read(&seq_cnt));
> +
> + TEST_ASSERT(rseq_cpu == cpu,
> + "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
> + }
> +
> + pthread_join(migration_thread, NULL);
> +
> + kvm_vm_free(vm);
> +
> + sys_rseq(RSEQ_FLAG_UNREGISTER);
> +
> + return 0;
> +}
> --
> 2.33.0.rc2.250.ged5fa647cd-goog
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
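The loop quoted above is a sequence-counter reader; the same pattern in a generic, self-contained form looks like this (all names hypothetical, following the usual C11 seqlock-reader recipe):

#include <stdatomic.h>

/* 'seq' is bumped to odd before an update and back to even after;
 * readers retry until they observe a stable, even count. */
extern atomic_uint seq;
extern atomic_int shared_cpu;
extern atomic_int shared_rseq_cpu;

static void read_pair(int *cpu, int *rseq_cpu)
{
	unsigned int snap;

	do {
		do {	/* writer in progress while the count is odd */
			snap = atomic_load_explicit(&seq, memory_order_acquire);
		} while (snap & 1);
		*cpu = atomic_load_explicit(&shared_cpu, memory_order_relaxed);
		*rseq_cpu = atomic_load_explicit(&shared_rseq_cpu, memory_order_relaxed);
		/* Order the data reads before re-checking seq (like smp_rmb()). */
		atomic_thread_fence(memory_order_acquire);
	} while (snap != atomic_load_explicit(&seq, memory_order_relaxed));
}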
Clearing TIF_NOTIFY_RESUME without informing rseq can lead to segfaults
> and other badness in userspace VMMs that use rseq in combination with KVM,
> e.g. due to the CPU ID being stale after task migration.
Acked-by: Mathieu Desnoyers
>
> Fixes: 72c3c0fe54a3 ("x86/kvm: Use generic xfer to guest work function")
- On Aug 19, 2021, at 7:48 PM, Sean Christopherson sea...@google.com wrote:
> On Thu, Aug 19, 2021, Mathieu Desnoyers wrote:
>> - On Aug 17, 2021, at 8:12 PM, Sean Christopherson sea...@google.com
>> wrote:
>> > @@ -250,7 +250,7 @@ static int rseq_ip_fi
- On Aug 19, 2021, at 7:33 PM, Sean Christopherson sea...@google.com wrote:
> On Thu, Aug 19, 2021, Mathieu Desnoyers wrote:
>> - On Aug 17, 2021, at 8:12 PM, Sean Christopherson sea...@google.com
>> wrote:
>>
>> > Add a test to verify an rseq's CPU
rseq_abi.cpu_id reads vs sched_setaffinity calls within the migration thread.
Thoughts ?
Thanks,
Mathieu
> + TEST_ASSERT(rseq_cpu == cpu || cpu != sched_getcpu(),
> + "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
> + }
> +
> + pthread_join(migration_thread, NULL);
> +
> + kvm_vm_free(vm);
> +
> + sys_rseq(RSEQ_FLAG_UNREGISTER);
> +
> + return 0;
> +}
> --
> 2.33.0.rc1.237.g0d66db33f3-goog
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
ip fixup code. Indeed, it
is not relevant to do any fixup here, because it is nested in a ioctl system
call.
Effectively, this would preserve the SIGSEGV behavior when this ioctl is
erroneously called by user-space from a rseq critical section.
Thanks for looking into this !
Mathieu
> return clear_rseq_cs(t);
> ret = rseq_need_restart(t, rseq_cs.flags);
> if (ret <= 0)
> --
> 2.33.0.rc1.237.g0d66db33f3-goog
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
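For readers following along, the CONFIG_DEBUG_RSEQ behavior referred to here is conceptually the following (a simplified sketch, not the exact kernel/rseq.c code; in_rseq_cs() is a made-up helper name):

/* Called on syscall paths when CONFIG_DEBUG_RSEQ is set: issuing a
 * system call from inside an rseq critical section is a userspace bug,
 * so kill the offender rather than silently continuing. */
void rseq_syscall(struct pt_regs *regs)
{
	if (in_rseq_cs(current, instruction_pointer(regs)))
		force_sig(SIGSEGV);
}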
There are paths which consume the
NOTIFY_RESUME without calling the rseq callback, which introduces issues.
Agreed.
Acked-by: Mathieu Desnoyers
>
> Signed-off-by: Sean Christopherson
> ---
> arch/arm/kernel/signal.c | 1 -
> arch/arm64/kernel/signal.c | 1 -
> arch/csky/kernel/signa
- On Jun 18, 2021, at 3:58 PM, Andy Lutomirski l...@kernel.org wrote:
> On Fri, Jun 18, 2021, at 9:31 AM, Mathieu Desnoyers wrote:
>> - On Jun 17, 2021, at 8:12 PM, Andy Lutomirski l...@kernel.org wrote:
>>
>> > On 6/17/21 7:47 AM, Mathieu Desnoyers wrote:
>>
le__ ("sync" : : : "memory")
So the original motivation here was to skip a "sync" instruction whenever
switching between threads which are part of the same process. But based on
recent discussions, I suspect my implementation may be inaccurately doing
so
- On Jun 17, 2021, at 8:12 PM, Andy Lutomirski l...@kernel.org wrote:
> On 6/17/21 7:47 AM, Mathieu Desnoyers wrote:
>
>> Please change back this #ifndef / #else / #endif within function for
>>
>> if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
tent with the icache flush and the CPU's cache type.
> +#
> +# On powerpc, a program can use MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE
> +# similarly to arm64. It would be nice if the powerpc maintainers could
> +# add a more clear explanation.
We should document the requirements on ARMv7 as well.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
e {
...
}
I don't think mixing up preprocessor and code logic makes it more readable.
Thanks,
Mathieu
> } else if (flags == MEMBARRIER_FLAG_RSEQ) {
> if (!IS_ENABLED(CONFIG_RSEQ))
> return -EINVAL;
> --
> 2.31.1
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
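The two styles under discussion, side by side (CONFIG_FOO is a made-up example):

/* Style A: preprocessor-selected variants, plain code in each branch. */
#ifdef CONFIG_FOO
static void do_thing(void) { /* real work */ }
#else
static void do_thing(void) { }
#endif

/* Style B: the config check folded into the code flow. */
static void do_thing_mixed(void)
{
	if (!IS_ENABLED(CONFIG_FOO))
		return;
	/* real work */
}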
- On Dec 28, 2020, at 4:06 PM, Andy Lutomirski l...@kernel.org wrote:
> On Mon, Dec 28, 2020 at 12:32 PM Mathieu Desnoyers
> wrote:
>>
>> - On Dec 28, 2020, at 2:44 PM, Andy Lutomirski l...@kernel.org wrote:
>>
>> > On Mon, Dec 28, 2020 at 11:09
4/a/Memory-Ordering/Barriers/ISB-in-more-detail
[2]
https://montcs.bloomu.edu/Information/ARMv8/ARMv8-A_Architecture_Reference_Manual_(Issue_A.a).pdf
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
ck".
>
> So, the core executing this call is not allowed to block, but the
> other part indicates that the other CPUs _have_ executed a serialising
> instruction before this call returns... one wonders how that happens
> without blocking. Maybe the CPU spins waiting for completion instead?
Membarrier expedited sync-core issues IPIs to all CPUs running sibling
threads. AFAIR the IPI mechanism uses the "csd lock" which is basically
busy waiting. So it does not "block", it busy-waits.
For completeness of the explanation, other (non-running) threads acting
on the same mm will eventually issue the context synchronizing instruction
before returning to user-space whenever they are scheduled back.
Thanks,
Mathieu
>
> --
> RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
> FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
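A rough sketch of that flow (greatly simplified relative to kernel/sched/membarrier.c; the cpumask name is illustrative):

#include <linux/smp.h>

static void ipi_sync_core(void *info)
{
	/* Runs in interrupt context on each target CPU; the eventual
	 * return to user-space must be context-synchronizing. */
	sync_core_before_usermode();
}

static void membarrier_sync_core_expedited(const struct cpumask *sibling_cpus)
{
	/* wait=true: busy-wait on the csd lock until every target CPU
	 * has run the callback -- the caller never blocks. */
	smp_call_function_many(sibling_cpus, ipi_sync_core, NULL, true);
}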
pose to user-space, e.g. flush_icache_user_range on arm32.
So between code modification and allowing other threads to jump to that code,
it should be expected that architectures without coherent i/d cache will need
to flush caches to ensure coherency *and* to issue membarrier to make sure
core serializing instructions will be issued by every thread acting on the
same mm either immediately by means of the IPI, or before they return to
user-space if they do not happen to be currently running when the membarrier
system call is invoked.
Hoping this clarifies things. I suspect we will need to clarify documentation
about what membarrier *does not* guarantee, given that you mistakenly expected
membarrier to take care of cache flushing.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
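Put together, the user-space publication sequence this explanation describes would look roughly like the following sketch; it assumes MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE was issued at startup and omits error handling:

#include <linux/membarrier.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int membarrier(int cmd, unsigned int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

/* Publish freshly written code so any thread may jump to it. 'entry'
 * is the pointer other threads load (acquire) and call. */
static void publish_code(void *dst, const void *src, size_t len,
			 void (**entry)(void))
{
	memcpy(dst, src, len);				/* 1. write the instructions */
	__builtin___clear_cache((char *)dst, (char *)dst + len); /* 2. i/d coherency */
	membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0); /* 3. core-sync all threads */
	__atomic_store_n(entry, (void (*)(void))dst, __ATOMIC_RELEASE); /* 4. publish */
}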
?
Based on the notes I have, use of `eret` on aarch64 guarantees a context
synchronizing instruction when returning to user-space.
Thanks,
Mathieu
>
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Nich
// the mm check for?
>> +membarrier_mm_sync_core_before_usermode(next);
>
> On the other hand the reason for this mm check that you mention contradicts
> my previous understanding as the git log says:
>
> commit 2840cf02fae627860156737e83326df354ee4ec
Here is the meat. The current code is using the (possibly incomplete)
lazy TLB state known by the scheduler to sync core, and it appears it may be
a bit more heavy that what is strictly needed.
Your change instead rely on the internal knowledge of lazy TLB within x86
switch_mm_irqs_off to achie
better put in x86 lazy tlb code.
Ideally yes this complexity should sit within the x86 architecture code
if only that architecture requires it.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On Jul 21, 2020, at 11:19 AM, Peter Zijlstra pet...@infradead.org wrote:
> On Tue, Jul 21, 2020 at 11:15:13AM -0400, Mathieu Desnoyers wrote:
>> - On Jul 21, 2020, at 11:06 AM, Peter Zijlstra pet...@infradead.org
>> wrote:
>>
>> > On Tue, Jul 21, 2020
rely on this, and we just provide an additional guarantee for future kthread
implementations.
> Also, I just realized, I still have a fix for use_mm() now
> kthread_use_mm() that seems to have been lost.
I suspect we need to at least document the memory barriers in kthread_use_mm and
kthread_unuse_mm to state that they are required by membarrier if we want to
ipi kthreads as well.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
have a compelling use-case to implement a different behavior which covers
kthreads, this could be added consistently across membarrier commands with a
flag (or by adding new commands).
Does this approach make sense ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
Please provide an example case with memory accesses via kthread_use_mm where
ordering matters to support your concern.
> so I really think
> it's a fragile interface with no real way for the user to know how
> kernel threads may use its mm for any particular reason, so membarrier
> should synchronize all possible kernel users as well.
I strongly doubt so, but perhaps something should be clarified in the
documentation
if you have that feeling.
Thanks,
Mathieu
>
> Thanks,
> Nick
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On Jul 17, 2020, at 1:44 PM, Alan Stern st...@rowland.harvard.edu wrote:
> On Fri, Jul 17, 2020 at 12:22:49PM -0400, Mathieu Desnoyers wrote:
>> - On Jul 17, 2020, at 12:11 PM, Alan Stern st...@rowland.harvard.edu
>> wrote:
>>
>> >> > I agree w
* Memory barrier on the caller thread _after_ we finished
* waiting for the last IPI. [...]
However, it does not explain why it needs to be paired with a barrier in the
scheduler, clearly for the case where the IPI is skipped. I wonder whether this
part
of the comment is factually correct:
* [...] Matches memory barriers around rq->curr modification in
scheduler.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On Jul 17, 2020, at 10:51 AM, Alan Stern st...@rowland.harvard.edu wrote:
> On Fri, Jul 17, 2020 at 09:39:25AM -0400, Mathieu Desnoyers wrote:
>> - On Jul 16, 2020, at 5:24 PM, Alan Stern st...@rowland.harvard.edu
>> wrote:
>>
>> > On Thu, Jul 16, 202
performed through
system calls from the context of user-space threads, which are called from the
right mm.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On Jul 16, 2020, at 5:24 PM, Alan Stern st...@rowland.harvard.edu wrote:
> On Thu, Jul 16, 2020 at 02:58:41PM -0400, Mathieu Desnoyers wrote:
>> - On Jul 16, 2020, at 12:03 PM, Mathieu Desnoyers
>> mathieu.desnoy...@efficios.com wrote:
>>
>> >
- On Jul 16, 2020, at 12:03 PM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
> - On Jul 16, 2020, at 11:46 AM, Mathieu Desnoyers
> mathieu.desnoy...@efficios.com wrote:
>
>> - On Jul 16, 2020, at 12:42 AM, Nicholas Piggin npig...@gmail.com wrote:
>>
- On Jul 16, 2020, at 11:46 AM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
> - On Jul 16, 2020, at 12:42 AM, Nicholas Piggin npig...@gmail.com wrote:
>> I should be more complete here, especially since I was complaining
>> about unclear barrier comment :)
all.
In the case of io_uring, submitting a request or returning from waiting
on request completion appear to provide this causality relationship.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
_uring write request
or this other scenario:
* Frequent read / Infrequent write, communicating read completion through
variable X
load from X (waiting for X==1) -> membarrier -> submit io_uring write request
with matching
wait for io_uring read request completion -> asm volatile (::: "memory") ->
store X=1
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
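A minimal sketch of that asymmetric pairing, leaving out the io_uring requests and assuming MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED was issued at startup:

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

static volatile int X;

/* Frequent side: only a compiler barrier before signaling completion. */
static void frequent_side_signal_done(void)
{
	asm volatile("" ::: "memory");
	X = 1;
}

/* Infrequent side: membarrier supplies the heavyweight ordering
 * on behalf of both threads. */
static void infrequent_side_wait(void)
{
	while (X != 1)
		;
	syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
	/* ...now safe to submit the write request... */
}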
ed more than just context sync after IPI. We need context
sync
in return path of any trap/interrupt/system call which returns to user-space,
else
we'd need to add the proper core serializing barriers in the scheduler, as we
had
to do for lazy tlb on x86.
Or am I missing something ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
Return from interrupt to kernel code does not need to be context serializing
as long as kernel serializes before returning to user-space.
However, return from interrupt to user-space needs to be context serializing.
Thanks,
Mathieu
>
> https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-July/214171.html
> - * {PRIVATE,GLOBAL}_EXPEDITED, implicitly
> - * provided by mmdrop(),
> - * - a sync_core for SYNC_CORE.
> + * switch_mm(). Membarrier requires a full barrier after storing to
> + * rq->curr, before returning to userspace, for
> + * {PRIVATE,GLOBAL}_EXPEDITED. This is implicitly provided by mmdrop().
>*/
> - if (mm) {
> - membarrier_mm_sync_core_before_usermode(mm);
> + if (mm)
> mmdrop(mm);
> - }
> +
> if (unlikely(prev_state == TASK_DEAD)) {
> if (prev->sched_class->task_dead)
> prev->sched_class->task_dead(prev);
> @@ -6292,6 +6289,7 @@ void idle_task_exit(void)
> BUG_ON(current != this_rq()->idle);
>
> if (mm != &init_mm) {
> + /* enter_lazy_tlb is not done because we're about to go down */
> switch_mm(mm, &init_mm, current);
> finish_arch_post_lock_switch();
> }
> --
> 2.23.0
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On Jul 9, 2020, at 4:46 PM, Segher Boessenkool seg...@kernel.crashing.org
wrote:
> On Thu, Jul 09, 2020 at 01:56:19PM -0400, Mathieu Desnoyers wrote:
>> > Just to make sure I understand your recommendation. So rather than
>> > hard coding r17 as the temporary registers
- On Jul 9, 2020, at 1:42 PM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
> - On Jul 9, 2020, at 1:37 PM, Segher Boessenkool
> seg...@kernel.crashing.org
> wrote:
>
>> On Thu, Jul 09, 2020 at 09:43:47AM -0400, Mathieu Desnoyers wrote:
>>> >
- On Jul 9, 2020, at 1:37 PM, Segher Boessenkool seg...@kernel.crashing.org
wrote:
> On Thu, Jul 09, 2020 at 09:43:47AM -0400, Mathieu Desnoyers wrote:
>> > What protects r17 *after* this asm statement?
>>
>> As discussed in the other leg of the thread (with the cod
- On Jul 8, 2020, at 8:18 PM, Segher Boessenkool seg...@kernel.crashing.org
wrote:
> On Wed, Jul 08, 2020 at 08:01:23PM -0400, Mathieu Desnoyers wrote:
>> > > #define RSEQ_ASM_OP_CMPEQ(var, expect, label)
>> > > \
>> > > LOAD_
- On Jul 8, 2020, at 8:10 PM, Segher Boessenkool seg...@kernel.crashing.org
wrote:
> Hi!
>
> On Wed, Jul 08, 2020 at 10:00:01AM -0400, Mathieu Desnoyers wrote:
[...]
>
>> -#define STORE_WORD "std "
>> -#define LOAD_WORD "ld "
r17 as long as your code (after inlining etc.!) stays
> small, but there is Murphy's law.
r17 is in the clobber list, so it should be ok.
>
> Anyway... something in rseq_str is wrong, missing %X. This may
> have to do with the abuse of inline asm here, making a fix harder :-(
I just committed a fix which enhances the macros.
Thanks for your help!
Mathieu
>
>
> Segher
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
IT for SMP support")
Signed-off-by: Mathieu Desnoyers
Cc: Christophe Leroy
Cc: Kumar Gala
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org
Cc: # v2.6.28+
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 2 +-
arch/powerpc/include/asm/n
"=m" (*((unsigned char *)ptep+4))
: "r" (pte) : "memory");
where I would have expected:
"stw%U1%X1 %L2,%1"
Is it a bug or am I missing something ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
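For reference, the conventional form with an "m" constraint and the %U/%X modifiers looks like this (a minimal sketch): GCC may pick a plain, update-form, or indexed-form address, and the modifiers emit the matching "u"/"x" mnemonic suffix.

static inline void store_u32(unsigned int *p, unsigned int v)
{
	/* %U0 adds the "u" (update) suffix, %X0 the "x" (indexed)
	 * suffix, whenever GCC selects those addressing modes. */
	__asm__ __volatile__("stw%U0%X0 %1,%0" : "=m" (*p) : "r" (v));
}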
- On Jul 8, 2020, at 10:21 AM, Christophe Leroy christophe.le...@csgroup.eu
wrote:
> Le 08/07/2020 à 16:00, Mathieu Desnoyers a écrit :
>> - On Jul 8, 2020, at 8:33 AM, Mathieu Desnoyers
>> mathieu.desnoy...@efficios.com wrote:
>>
>>> - On Jul 7, 2020,
barrier + context synchronisation by the time it has done" is not strictly
correct: the context synchronizing instruction does not strictly need to
happen on each core before membarrier returns. A similar line of thoughts
can be followed for memory barriers.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On Jul 8, 2020, at 8:33 AM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
> - On Jul 7, 2020, at 8:59 PM, Segher Boessenkool
> seg...@kernel.crashing.org
> wrote:
[...]
>>
>> So perhaps you have code like
>>
>> int *p;
>> int x
- On Jul 7, 2020, at 8:59 PM, Segher Boessenkool seg...@kernel.crashing.org
wrote:
> Hi!
>
> On Tue, Jul 07, 2020 at 03:17:10PM -0400, Mathieu Desnoyers wrote:
>> I'm trying to build librseq at:
>>
>> https://git.kernel.org/pub/scm/libs/librseq/librseq.
//gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html#Machine-Constraints
it seems that "Q" means "A memory operand addressed by just a base register."
I suspect that lwz and stw don't expect some kind of immediate offset which
can be kept with "m", and
> Interrupt replay is an interesting case. I thought it was okay (because
> the IPI would cause a hard interrupt which does do the rfi) but that
> should at least be written.
Yes.
> The context synchronisation happens before
> the Linux IPI function is called, but for the purpose of membarr
Use "twui" as the guard instruction for the restartable sequence abort
handler.
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Boqun Feng
CC: Peter Zijlstra
CC: "Paul E. McKenney"
CC: Alan Modra
CC: linuxppc-
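For context, the signature the rseq selftests ended up with on powerpc pairs with exactly such a trap; a sketch of the idea (value as found in tools/testing/selftests/rseq/rseq-ppc.h, worth verifying against the tree):

/*
 * Signature word placed just before the abort handler. 0x0fe5000b
 * disassembles as "twui r5,11" (an unconditional trap), so a stray
 * branch onto the signature traps instead of executing data as code.
 */
#define RSEQ_SIG	0x0fe5000b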
Use "twui" as the guard instruction for the restartable sequence abort
handler.
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Boqun Feng
CC: Peter Zijlstra
CC: "Paul E. McKenney"
CC: Alan Modra
CC: linuxppc-
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Boqun Feng
CC: Peter Zijlstra
CC: "Paul E. McKenney"
CC: linuxppc-dev@lists.ozlabs.org
---
arch/powerpc/include/asm/systbl.h | 1 +
arch/powerpc/include/uapi/asm/unistd.
- On Jun 5, 2018, at 1:18 AM, Michael Ellerman m...@ellerman.id.au wrote:
> Mathieu Desnoyers writes:
>
>> From: Boqun Feng
>>
>> Wire up the rseq system call on powerpc.
>>
>> This provides an ABI improving the speed of a user-space getcpu
>> o
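The speed-up in question comes from turning getcpu into a plain load from a registered per-thread area; a minimal user-space sketch (the signature value is just an example, and registration fails with EBUSY where the C library has already registered rseq):

#include <linux/rseq.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SIG 0x53053053	/* example signature; must match abort handlers */

static __thread struct rseq rs __attribute__((aligned(32)));

int main(void)
{
	if (syscall(__NR_rseq, &rs, sizeof(rs), 0, SIG))
		return 1;
	/* getcpu is now a plain load from the registered area. */
	printf("cpu = %u\n", __atomic_load_n(&rs.cpu_id, __ATOMIC_RELAXED));
	return 0;
}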
- On Jun 5, 2018, at 1:21 AM, Michael Ellerman m...@ellerman.id.au wrote:
> Mathieu Desnoyers writes:
>> From: Boqun Feng
>>
>> Syscalls are not allowed inside restartable sequences, so add a call to
>> rseq_syscall() at the very beginning of s
-reservation/store-conditional atomics.
Signed-off-by: Boqun Feng
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Peter Zijlstra
CC: "Paul E. McKenney"
CC: linuxppc-dev@lists.ozlabs.org
---
arch/powerpc/include/asm/systbl.h |
Tested on 64-bit powerpc kernel by Mathieu Desnoyers. Still needs to
be tested on 32-bit powerpc kernel. ]
Signed-off-by: Boqun Feng
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Peter Zijlstra
CC: "Paul E. McKenney"
CC: li
From: Boqun Feng
Call the rseq_handle_notify_resume() function on return to userspace if
TIF_NOTIFY_RESUME thread flag is set.
Perform fixup on the pre-signal when a signal is delivered on top of a
restartable sequence critical section.
Signed-off-by: Boqun Feng
Signed-off-by: Mathieu
- On May 24, 2018, at 3:03 AM, Michael Ellerman m...@ellerman.id.au wrote:
> Mathieu Desnoyers writes:
>> - On May 23, 2018, at 4:14 PM, Mathieu Desnoyers
>> mathieu.desnoy...@efficios.com wrote:
> ...
>>>
>>> Hi Boqun,
>>>
>>> I t
- On May 23, 2018, at 4:14 PM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
> - On May 20, 2018, at 10:08 AM, Boqun Feng boqun.f...@gmail.com wrote:
>
>> On Fri, May 18, 2018 at 02:17:17PM -0400, Mathieu Desnoyers wrote:
>>> - On May 17, 2018, a
- On May 20, 2018, at 10:08 AM, Boqun Feng boqun.f...@gmail.com wrote:
> On Fri, May 18, 2018 at 02:17:17PM -0400, Mathieu Desnoyers wrote:
>> - On May 17, 2018, at 7:50 PM, Boqun Feng boqun.f...@gmail.com wrote:
>> [...]
>> >> > I think you're right.
TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)
>> >
By the way, I think this is not the right spot to call rseq_syscall, because
interrupts are disabled. I think we should move this hunk right after
system_call_exit.
Would you like to implement and test an updated patch adding those calls for
ppc 32 and 64 ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
- On May 16, 2018, at 9:19 PM, Boqun Feng boqun.f...@gmail.com wrote:
> On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
>> - On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org
>> wrote:
>>
>> > On Mon, Apr 30, 2018 at 06:44
- On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org wrote:
> On Mon, Apr 30, 2018 at 06:44:26PM -0400, Mathieu Desnoyers wrote:
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index c32a181a7cbb..ed21a777e8c6 100644
>> --- a/arch/powerpc/
-reservation/store-conditional atomics.
TODO: wire up rseq_syscall() on return from system call. It is used with
CONFIG_DEBUG_RSEQ=y to ensure system calls are not issued within rseq critical
section
Signed-off-by: Boqun Feng
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Boqun Feng
CC: Peter Zijlstra
CC: "Paul E. McKenney"
CC: linuxppc-dev@lists.ozlabs.org
---
arch/powerpc/include/asm/systbl.h | 1 +
arch/powerpc/include/asm/unistd.
Signed-off-by: Mathieu Desnoyers
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Peter Zijlstra
CC: "Paul E. McKenney"
CC: linuxppc-dev@lists.ozlabs.org
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/kernel/signal.c | 3 +++
2 files changed, 4
- On Feb 5, 2018, at 3:22 PM, Ingo Molnar mi...@kernel.org wrote:
> * Mathieu Desnoyers wrote:
>
>>
>> +config ARCH_HAS_MEMBARRIER_HOOKS
>> +bool
>
> Yeah, so I have renamed this to ARCH_HAS_MEMBARRIER_CALLBACKS, and propagated
> it
> through th
Signed-off-by: Mathieu Desnoyers
Acked-by: Peter Zijlstra (Intel)
CC: Paul E. McKenney
CC: Boqun Feng
CC: Andrew Hunter
CC: Maged Michael
CC: Avi Kivity
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Dave Watson
CC: Alan Stern
CC: Will Deacon
CC: Andy Lutom
Signed-off-by: Mathieu Desnoyers
CC: Peter Zijlstra
CC: Paul E. McKenney
CC: Boqun Feng
CC: Andrew Hunter
CC: Maged Michael
CC: Avi Kivity
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Dave Watson
CC: Alan Stern
CC: Will Deacon
CC: Andy Lutomirski
CC:
TE_EXPEDITED.
Signed-off-by: Mathieu Desnoyers
CC: Peter Zijlstra
CC: Paul E. McKenney
CC: Boqun Feng
CC: Andrew Hunter
CC: Maged Michael
CC: Avi Kivity
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: Dave Watson
CC: Alan Stern
CC: Will Deacon
CC: Andy Lutomirski
CC