On Tue, Jan 17, 2017 at 02:24:08PM +0800, Boqun Feng wrote:
> On Tue, Jan 17, 2017 at 11:33:41AM +0900, Byungchul Park wrote:
> > On Mon, Jan 16, 2017 at 04:13:19PM +0100, Peter Zijlstra wrote:
> > > On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> > > > +       /*
> > > > +        * We assign class_idx here redundantly even though the
> > > > +        * following memcpy will cover it, in order to ensure an
> > > > +        * RCU reader can access class_idx atomically without a lock.
> > > > +        *
> > > > +        * Here we assume that setting a word-sized variable is atomic.
> > > 
> > > which one, where?
> > 
> > I meant xlock_class(xlock) in check_add_plock().
> > 
> > I was not sure about the following two points:
> > 
> > 1. Are the following a and b ordered with respect to each other?
> >    a. memcpy -> list_add_tail_rcu
> >    b. list_for_each_entry_rcu -> load class_idx (xlock_class)
> >    I assumed that it's not ordered.
> > 2. Does memcpy guarantee an atomic store for each word?
> >    I assumed that it doesn't.
> > 
> > But I think I was wrong... The first might be ordered. I will remove
> > the following redundant statement. It'd be ordered, right?
> > 
> 
> Yes, a and b are ordered, IOW, they could be paired, meaning that when
> we get an item in a list_for_each_entry_rcu() loop, all memory
> operations before the corresponding list_add_tail_rcu() should be
> observed by us.

Thank you for confirming it.
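
For the record, a minimal sketch of the pairing as I understand it
(struct/field names taken from the patch context; the comments describe
the documented semantics of the list RCU primitives):

	/*
	 * Writer: initialize the entry fully, then publish it.
	 * list_add_tail_rcu() publishes via rcu_assign_pointer(),
	 * which has release semantics, so the memcpy() below is
	 * guaranteed to be visible before the entry appears on the
	 * list.
	 */
	memcpy(&xlock->hlock, hlock, sizeof(struct held_lock));
	list_add_tail_rcu(&xlock->xlock_entry, &xlocks_head);

	/*
	 * Reader: list_for_each_entry_rcu() fetches each node with
	 * rcu_dereference(), whose address-dependency ordering pairs
	 * with the release above. A reader that sees the entry thus
	 * also sees the fully copied held_lock, including class_idx,
	 * so the extra class_idx store is indeed redundant.
	 */
	rcu_read_lock();
	list_for_each_entry_rcu(xlock, &xlocks_head, xlock_entry) {
		/* xlock->hlock.class_idx is safe to read here */
	}
	rcu_read_unlock();

This also matches my question 2: memcpy() by itself gives no per-word
atomicity guarantee, which is why gen_id is stored separately with
WRITE_ONCE() in the patch.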

> 
> Regards,
> Boqun
> 
> > > 
> > > > +        */
> > > > +       xlock->hlock.class_idx = hlock->class_idx;
> > > > +       gen_id = (unsigned int)atomic_inc_return(&cross_gen_id);
> > > > +       WRITE_ONCE(xlock->gen_id, gen_id);
> > > > +       memcpy(&xlock->hlock, hlock, sizeof(struct held_lock));
> > > > +       INIT_LIST_HEAD(&xlock->xlock_entry);
> > > > +       list_add_tail_rcu(&xlock->xlock_entry, &xlocks_head);
> > > 

