<snip>
> 
> 
> >>>> +__rte_experimental
> >>>> +static inline void
> >>>> +rte_seqlock_write_lock(rte_seqlock_t *seqlock)
> >>>> +{
> >>>> +  uint32_t sn;
> >>>> +
> >>>> +  /* to synchronize with other writers */
> >>>> +  rte_spinlock_lock(&seqlock->lock);
> >>>> +
> >>>> +  sn = seqlock->sn + 1;
> >>> The load of seqlock->sn could use __atomic_load_n to be consistent.
> >>>
> >>
> >> But why? I know it doesn't have any cost (these loads are going to be
> >> atomic anyway), but why use a construct with stronger guarantees
> >> than you have to?
> > Using __atomic_xxx ensures that the operation is always atomic. I believe
> > (I am not sure) that, when not using __atomic_xxx, the compiler is allowed
> > to use non-atomic operations.
> > The other reason is we are not qualifying 'sn' as volatile. Use of
> > __atomic_xxx inherently indicates to the compiler not to cache 'sn' in a
> > register. I do not know the compiler behavior if some operations on 'sn'
> > use __atomic_xxx and some do not.
> We don’t need an atomic read here as the seqlock->lock protects (serialises)
> writer-side accesses to seqlock->sn. There is no other thread which could
> update seqlock->sn while this thread owns the lock. The seqlock owner could
> read seqlock->sn byte for byte without any problems.
> Only writes to seqlock->sn need to be atomic, as there might be readers who
> read seqlock->sn, and in such multi-access scenarios all accesses need to be
> atomic in order to avoid data races.
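If I read this correctly, the split being described is: readers have no lock,
so every reader access to seqlock->sn must go through __atomic_load_n, while
the writer's own load under the lock can stay plain and only its store to
seqlock->sn needs __atomic_store_n. A minimal reader-side sketch (struct
layout, function names and memory orders below are my own illustration, not
taken from the patch):

#include <stdbool.h>
#include <stdint.h>

struct sketch_seqlock {
	uint32_t sn;
};

static inline uint32_t
sketch_read_begin(const struct sketch_seqlock *sl)
{
	/* Readers hold no lock, so this access must be atomic to avoid a
	 * data race with the writer's store. Acquire keeps the data loads
	 * that follow from being reordered before this load. */
	return __atomic_load_n(&sl->sn, __ATOMIC_ACQUIRE);
}

static inline bool
sketch_read_retry(const struct sketch_seqlock *sl, uint32_t begin_sn)
{
	uint32_t end_sn;

	/* Keep the caller's data loads ordered before the second load of
	 * the sequence number. */
	__atomic_thread_fence(__ATOMIC_ACQUIRE);

	end_sn = __atomic_load_n(&sl->sn, __ATOMIC_RELAXED);

	/* Retry if a write was in progress (odd sn) or completed between
	 * the two loads. */
	return (begin_sn & 1) || (begin_sn != end_sn);
}
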
How does the compiler interpret a mix of __atomic_xxx and non-atomic access to 
a memory location? What are the guarantees the compiler provides in such cases?
Do you see any harm in making this an atomic operation?
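Concretely, the change in question would only affect the load, e.g. something
along these lines (a sketch; whether __ATOMIC_RELAXED is the right ordering
here is an assumption on my part, given that the spinlock already serializes
the writers):

	rte_spinlock_lock(&seqlock->lock);

	/* Relaxed load: the spinlock already serializes writers, so this
	 * load cannot race with another writer's update of sn. */
	sn = __atomic_load_n(&seqlock->sn, __ATOMIC_RELAXED) + 1;

For an aligned uint32_t a relaxed __atomic_load_n should generate the same
code as the plain load on typical targets, so this is about the guarantees
rather than the cost.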

> 
> If seqlock->sn was a C11 _Atomic type, all accesses would automatically be
> atomic.
Exactly, we do not have that currently.
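For comparison only, a hypothetical C11 _Atomic variant (which is not what we
have in DPDK today) could look roughly like the below; the fences/ordering for
the protected data are left out, this only illustrates the access atomicity:

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical C11 sketch, for illustration only. */
struct c11_seqlock_sketch {
	_Atomic uint32_t sn;
};

static inline void
c11_write_begin_sketch(struct c11_seqlock_sketch *sl)
{
	/* A plain read of an _Atomic object is implicitly an atomic
	 * (seq_cst) load, so no non-atomic access can slip in. */
	uint32_t sn = sl->sn + 1;

	/* Weaker orderings still have to be requested explicitly.
	 * (Ordering of the protected data is omitted here.) */
	atomic_store_explicit(&sl->sn, sn, memory_order_relaxed);
}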

> 
> - Ola
> 
