On Tue, Sep 11, 2018 at 04:43:04PM -0400, Emilio G. Cota wrote:
> On Tue, Sep 11, 2018 at 05:43:38 -0700, Richard Henderson wrote:
> > On 09/10/2018 04:27 PM, Emilio G. Cota wrote:
> > > +#define GEN_READ(name, type)                        \
> > > +    type name(const type *ptr)                      \
> > > +    {                                               \
> > > +        QemuSpin *lock = addr_to_lock(ptr);         \
> > > +        type ret;                                   \
> > > +                                                    \
> > > +        qemu_spin_lock(lock);                       \
> > > +        ret = *ptr;                                 \
> > > +        qemu_spin_unlock(lock);                     \
> > > +        return ret;                                 \
> > > +    }
> > > +
> > > +GEN_READ(atomic_read_i64, int64_t)
> > > +GEN_READ(atomic_read_u64, uint64_t)
> >
> > Is there really a good reason to have two external
> > functions instead of having one of them inline and
> > performing a cast?
>
> Not really. I can do a follow-up patch if you want me to.
>
> > Is this any better than using libatomic?
>
> I didn't think of using libatomic. I just checked the source
> code and it's quite similar:
> - It uses 64 locks instead of 16 ($page_size / $cache_line,
>   but these are hard-coded for POSIX as 4096 / 64, respectively).
> - We compute the cache-line size and corresponding padding
>   at run time, which is a little better than the above.
> - The locks are implemented as pthread_mutex instead of
>   spinlocks. I think spinlocks make more sense here because
>   we do not expect huge contention (systems without
>   CONFIG_ATOMIC64 have few cores); for libatomic it makes
>   sense to use mutexes because it might be used on many-core
>   machines.
>
> So yes, we could have used libatomic. If you feel strongly
> about it, I can give it a shot.
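
For what it's worth, here is a minimal, self-contained sketch of what the
single-extern-function variant could look like. The NR_LOCKS value, the hash
in addr_to_lock and the spinlock implementation below are simplified
stand-ins (the patch uses QEMU's real QemuSpin and pads each lock to a
cache line computed at run time), so treat this as an illustration rather
than the actual code:

#include <stdint.h>
#include <stdatomic.h>

#define NR_LOCKS 16                      /* the patch uses 16 locks */

typedef struct {
    atomic_uint locked;                  /* 0 = unlocked, 1 = locked */
} QemuSpin;

static QemuSpin locks[NR_LOCKS];         /* zero-initialized: all unlocked */

/* hash the address into the lock array; the exact hash is illustrative */
static QemuSpin *addr_to_lock(const void *ptr)
{
    return &locks[((uintptr_t)ptr >> 4) % NR_LOCKS];
}

static void qemu_spin_lock(QemuSpin *lock)
{
    /* busy-wait until the previous holder releases the lock */
    while (atomic_exchange_explicit(&lock->locked, 1, memory_order_acquire)) {
    }
}

static void qemu_spin_unlock(QemuSpin *lock)
{
    atomic_store_explicit(&lock->locked, 0, memory_order_release);
}

/* one out-of-line reader that takes the per-address lock... */
uint64_t atomic_read_u64(const uint64_t *ptr)
{
    QemuSpin *lock = addr_to_lock(ptr);
    uint64_t ret;

    qemu_spin_lock(lock);
    ret = *ptr;
    qemu_spin_unlock(lock);
    return ret;
}

/* ...and an inline wrapper that just casts, instead of a second extern */
static inline int64_t atomic_read_i64(const int64_t *ptr)
{
    return (int64_t)atomic_read_u64((const uint64_t *)ptr);
}

The inline-plus-cast variant keeps a single out-of-line symbol per access
size while callers still get both the signed and the unsigned type.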
Would allowing the user to choose between libatomic and Emilio's
implementation at configure time be a bad idea? One could pass, for example,
--with-libatomic to the configure script to use libatomic instead of
Emilio's implementation, which could remain the default - or vice versa.

Murilo
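
P.S.: just to illustrate how such a configure switch could be plumbed
through on the C side. The CONFIG_USE_LIBATOMIC symbol below is
hypothetical, named only for this example; __atomic_load_n is the GCC/Clang
builtin that gets lowered to a libatomic call on targets without native
64-bit atomics:

/* hypothetical symbol set by ./configure --with-libatomic */
#ifdef CONFIG_USE_LIBATOMIC
/* let the compiler emit __atomic_* calls; requires linking with -latomic */
static inline uint64_t atomic_read_u64(const uint64_t *ptr)
{
    return __atomic_load_n(ptr, __ATOMIC_SEQ_CST);
}
#else
/* the spinlock-based implementation from the patch remains the default */
uint64_t atomic_read_u64(const uint64_t *ptr);
#endif

The only build-system consequence would be adding -latomic to the link line
when the switch is enabled.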