On 2022-03-25 22:10, Stephen Hemminger wrote:
> On Fri, 25 Mar 2022 21:24:28 +0100
> Mattias Rönnblom <mattias.ronnb...@ericsson.com> wrote:
>
>> diff --git a/lib/eal/include/rte_seqlock.h b/lib/eal/include/rte_seqlock.h
>> new file mode 100644
>> index 0000000000..b975ca848a
>> --- /dev/null
>> +++ b/lib/eal/include/rte_seqlock.h
>> @@ -0,0 +1,84 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2022 Ericsson AB
>> + */
>> +
>> +#ifndef _RTE_SEQLOCK_H_
>> +#define _RTE_SEQLOCK_H_
>> +
>> +#include <stdbool.h>
>> +#include <stdint.h>
>> +
>> +#include <rte_atomic.h>
>> +#include <rte_branch_prediction.h>
>> +#include <rte_spinlock.h>
>> +
>> +struct rte_seqlock {
>> +    uint64_t sn;
>> +    rte_spinlock_t lock;
>> +};
>> +
>> +typedef struct rte_seqlock rte_seqlock_t;
>> +
>
> Add a reference to Wikipedia and/or Linux since not every DPDK
> user may be familiar with this.

OK, will do.

>> +
>> +    sn = seqlock->sn + 1;
>> +
>> +    __atomic_store_n(&seqlock->sn, sn, __ATOMIC_RELAXED);
>> +
>> +    /* __ATOMIC_RELEASE to prevent stores after (in program order)
>> +     * from happening before the sn store.
>> +     */
>> +    rte_atomic_thread_fence(__ATOMIC_RELEASE);
> Could this just be __atomic_fetch_add() with __ATOMIC_RELEASE?

If I understood C11 correctly, an __atomic_fetch_add() with 
__ATOMIC_RELEASE only prevents stores that precede it (in program 
order) from being moved ahead of it. Thus, stores that follow it may 
be reordered across the __atomic_fetch_add(), and be seen by a reader 
before the sn change.

Also, __atomic_fetch_add() would generate an atomic add machine 
instruction, which, at least in my experience (on x86_64), is slower 
than a mov+add+mov sequence, which is what the above code generates 
(and the atomic add also prevents certain compiler optimizations). 
That's with TSO. What would happen on weakly ordered machines, I 
don't know in detail.