On 9/7/2018 10:38 AM, Alan Stern wrote:
> On Fri, 7 Sep 2018, Daniel Lustig wrote:
>
>> On 9/7/2018 9:09 AM, Will Deacon wrote:
>>> On Fri, Sep 07, 2018 at 12:00:19PM -0400, Alan Stern wrote:
>>>> On Thu, 6 Sep 2018, Andrea Parri wrote:
>>>>
>>
On 9/7/2018 9:09 AM, Will Deacon wrote:
> On Fri, Sep 07, 2018 at 12:00:19PM -0400, Alan Stern wrote:
>> On Thu, 6 Sep 2018, Andrea Parri wrote:
>>
Have you noticed any part of the generic code that relies on ordinary
acquire-release (rather than atomic RMW acquire-release) in order to
On 7/12/2018 2:45 AM, Will Deacon wrote:
> On Thu, Jul 12, 2018 at 11:34:32AM +0200, Peter Zijlstra wrote:
>> On Thu, Jul 12, 2018 at 09:40:40AM +0200, Peter Zijlstra wrote:
>>> And I think if we raise atomic*_acquire() to require TSO (but ideally
>>> raise it to RCsc) we're there.
>>
>> To clarify
On 7/12/2018 11:10 AM, Linus Torvalds wrote:
> On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra wrote:
>>
>> The locking pattern is fairly simple and shows where RCpc comes apart
>> from expectation real nice.
>
> So who does RCpc right now for the unlock-lock sequence? Somebody
> mentioned powerp
On 7/11/2018 10:00 AM, Peter Zijlstra wrote:
> On Wed, Jul 11, 2018 at 04:57:51PM +0100, Will Deacon wrote:
>
>> It might be simple to model, but I worry this weakens our locking
>> implementations to a point where they will not be understood by the average
>> kernel developer. As I've said before
On 7/9/2018 1:01 PM, Alan Stern wrote:
> More than one kernel developer has expressed the opinion that the LKMM
> should enforce ordering of writes by locking. In other words, given
> the following code:
>
> WRITE_ONCE(x, 1);
> spin_unlock(&s);
> spin_lock(&s);
> WRITE_ONC
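The quoted example is cut off; presumably it continues with a second WRITE_ONCE after the spin_lock(). A sketch of the complete pattern under discussion, in LKMM litmus style (the test name, reader process, and exists clause are my reconstruction, not necessarily Alan's original):

```
C write-write-ordering-by-lock

{}
/* s initially owned by P0 */

P0(int *x, int *y, spinlock_t *s)
{
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)
```

"Enforcing ordering of writes by locking" means the exists clause above must be forbidden even though the unlock and lock happen on the same CPU with no observer holding the lock in between.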
On 7/9/2018 9:52 AM, Will Deacon wrote:
> On Fri, Jul 06, 2018 at 02:10:55PM -0700, Paul E. McKenney wrote:
>> On Fri, Jul 06, 2018 at 04:37:21PM -0400, Alan Stern wrote:
>>> On Thu, 5 Jul 2018, Andrea Parri wrote:
>>>
> At any rate, it looks like instead of strengthening the relation, I
>
On 7/5/2018 9:56 AM, Paul E. McKenney wrote:
> On Thu, Jul 05, 2018 at 05:22:26PM +0100, Will Deacon wrote:
>> On Thu, Jul 05, 2018 at 08:44:39AM -0700, Daniel Lustig wrote:
>>> On 7/5/2018 8:31 AM, Paul E. McKenney wrote:
>>>> On Thu, Jul 05, 2018 at 10:21:36AM -04
On 7/5/2018 8:31 AM, Paul E. McKenney wrote:
> On Thu, Jul 05, 2018 at 10:21:36AM -0400, Alan Stern wrote:
>> At any rate, it looks like instead of strengthening the relation, I
>> should write a patch that removes it entirely. I also will add new,
>> stronger relations for use with locking, essen
On 7/5/2018 8:16 AM, Daniel Lustig wrote:
> On 7/5/2018 7:44 AM, Will Deacon wrote:
>> Andrea,
>>
>> On Thu, Jul 05, 2018 at 04:00:29PM +0200, Andrea Parri wrote:
>>> On Wed, Jul 04, 2018 at 01:11:04PM +0100, Will Deacon wrote:
>>>> On Tue, Jul 03,
On 7/5/2018 7:44 AM, Will Deacon wrote:
> Andrea,
>
> On Thu, Jul 05, 2018 at 04:00:29PM +0200, Andrea Parri wrote:
>> On Wed, Jul 04, 2018 at 01:11:04PM +0100, Will Deacon wrote:
>>> On Tue, Jul 03, 2018 at 01:28:17PM -0400, Alan Stern wrote:
There's also read-write ordering, in the form of
'll take a human out of the loop.
>
> CC: Daniel Lustig
> Signed-off-by: Palmer Dabbelt
Looks like there's an accidental backquote before my name?
Once that gets fixed:
Acked-by: Daniel Lustig
> ---
> MAINTAINERS | 1 +
> 1 file changed, 1 insertion(+)
On 3/9/2018 2:57 PM, Palmer Dabbelt wrote:
> On Fri, 09 Mar 2018 13:30:08 PST (-0800), parri.and...@gmail.com wrote:
>> On Fri, Mar 09, 2018 at 10:54:27AM -0800, Palmer Dabbelt wrote:
>>> On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:
>>
>> [...]
>>
>>> >This proposal relie
On 3/1/2018 1:54 PM, Palmer Dabbelt wrote:
> On Thu, 01 Mar 2018 07:11:41 PST (-0800), parri.and...@gmail.com wrote:
>> Hi Daniel,
>>
>> On Thu, Feb 22, 2018 at 11:47:57AM -0800, Daniel Lustig wrote:
>>> On 2/22/2018 10:27 AM, Peter Zijlstra wrote:
>>> >
On 2/27/2018 10:21 AM, Palmer Dabbelt wrote:
> On Mon, 26 Feb 2018 18:24:11 PST (-0800), parri.and...@gmail.com wrote:
>> Introduce __smp_{store_release,load_acquire}, and rely on the generic
>> definitions for smp_{store_release,load_acquire}. This avoids the use
>> of full ("rw,rw") fences on SMP
On 2/22/2018 10:27 AM, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 10:13:17AM -0800, Paul E. McKenney wrote:
>> So we have something that is not all that rare in the Linux kernel
>> community, namely two conflicting more-or-less concurrent changes.
>> This clearly needs to be resolved, either b
On 2/22/2018 6:12 AM, Andrea Parri wrote:
> On Thu, Feb 22, 2018 at 02:40:04PM +0100, Peter Zijlstra wrote:
>> On Thu, Feb 22, 2018 at 01:19:50PM +0100, Andrea Parri wrote:
>>
>>> C unlock-lock-read-ordering
>>>
>>> {}
>>> /* s initially owned by P1 */
>>>
>>> P0(int *x, int *y)
>>> {
>>> WRITE
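The test quoted above is truncated mid-way through P0. A plausible completion of Andrea's unlock-lock-read-ordering test, reconstructed from the visible header and parameter lists (details may differ from the original posting):

```
C unlock-lock-read-ordering

{}
/* s initially owned by P1 */

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_wmb();
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y, spinlock_t *s)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	spin_unlock(s);
	spin_lock(s);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
```

Here the question is whether the unlock-then-lock on P1 is strong enough to order the two reads, i.e. whether the exists clause is forbidden.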
On 2/21/2018 9:27 PM, Boqun Feng wrote:
> On Wed, Feb 21, 2018 at 08:13:57PM -0800, Paul E. McKenney wrote:
>> On Thu, Feb 22, 2018 at 11:23:49AM +0800, Boqun Feng wrote:
>>> On Tue, Feb 20, 2018 at 03:25:10PM -0800, Paul E. McKenney wrote:
From: Alan Stern
This commit adds a litmus
On 12/1/2017 7:32 AM, Alan Stern wrote:
> On Fri, 1 Dec 2017, Boqun Feng wrote:
>>> But even on a non-other-multicopy-atomic system, there has to be some
>>> synchronization between the memory controller and P1's CPU. Otherwise,
>>> how could the system guarantee that P1's smp_load_acquire would
On 11/29/2017 12:42 PM, Paul E. McKenney wrote:
> On Wed, Nov 29, 2017 at 02:53:06PM -0500, Alan Stern wrote:
>> On Wed, 29 Nov 2017, Peter Zijlstra wrote:
>>
>>> On Wed, Nov 29, 2017 at 11:04:53AM -0800, Daniel Lustig wrote:
>>>
>>>> While we'
On 11/27/2017 1:16 PM, Alan Stern wrote:
> This is essentially a repeat of an email I sent out before the
> Thanksgiving holiday, the assumption being that lack of any responses
> was caused by the holiday break. (And this time the message is CC'ed
> to LKML, so there will be a public record of it
On 11/27/2017 1:16 PM, Alan Stern wrote:
> C rel-acq-write-ordering-3
>
> {}
>
> P0(int *x, int *s, int *y)
> {
> WRITE_ONCE(*x, 1);
> smp_store_release(s, 1);
> r1 = smp_load_acquire(s);
> WRITE_ONCE(*y, 1);
> }
>
> P1(int *x, int *y)
> {
> r2 = READ_ONCE(*y);
>
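The rel-acq-write-ordering-3 test is cut off part-way through P1; the rest of P1 and the exists clause below are my guess at the obvious completion of the message-passing check:

```
C rel-acq-write-ordering-3

{}

P0(int *x, int *s, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_store_release(s, 1);
	r1 = smp_load_acquire(s);
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
	r2 = READ_ONCE(*y);
	smp_rmb();
	r3 = READ_ONCE(*x);
}

exists (1:r2=1 /\ 1:r3=0)
```

The interesting feature is that the release and the acquire on P0 target the same variable s, so the question is whether that release-to-acquire chain on a single CPU orders the write to x before the write to y.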
> From: Will Deacon [mailto:will.dea...@arm.com]
> Hi Daniel,
>
> On Thu, Nov 16, 2017 at 06:40:46AM +0000, Daniel Lustig wrote:
> > > > In that case, maybe we should just start out having a fence on
> > > > both sides for
> > >
> > > Actually
> > In that case, maybe we should just start out having a fence on both
> > sides for
>
> Actually, given your architecture is RCsc rather than RCpc, so I think maybe
> you could follow the way that ARM uses(i.e. relaxed load + release store + a
> full barrier). You can see the commit log of 8e86f
> -Original Message-
> From: Boqun Feng [mailto:boqun.f...@gmail.com]
> Sent: Wednesday, November 15, 2017 5:19 PM
> To: Daniel Lustig
> Cc: Palmer Dabbelt ; will.dea...@arm.com; Arnd
> Bergmann ; Olof Johansson ; linux-
> ker...@vger.kernel.org; patc...@
> >> >P0:
> >> >WRITE_ONCE(x, 1);
> >> >atomic_add_return(1, &y);
> >> >WRITE_ONCE(z, 1);
> >> >
> >> >P1:
> >> >READ_ONCE(z) // reads 1
> >> >smp_rmb();
> >> >READ_ONCE(x) // must not re
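That snippet is the classic "value-returning RMW acts as a full barrier" pattern. Written out as a complete LKMM-style litmus test (a reconstruction; the thread's exact test may differ):

```
C write-rmw-write-ordering

{}

P0(int *x, atomic_t *y, int *z)
{
	int r0;

	WRITE_ONCE(*x, 1);
	r0 = atomic_add_return(1, y);
	WRITE_ONCE(*z, 1);
}

P1(int *x, int *z)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*z);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)
```

Because atomic_add_return() is documented to imply a full barrier, the two WRITE_ONCE()s on P0 must be ordered, and the exists clause is forbidden.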
> > https://github.com/riscv/riscv-isa-manual/releases/download/riscv-user-2.2/riscv-spec-v2.2.pdf
>
> That's the most up to date spec.
Yes, that's the most up to date public spec. Internally, the RISC-V memory
model task group has been working on fixing the memory model spec for the
past c