This patchset applies on top of tcg-next (8b1fe3f4 "cpu-exec: Clean up 'interrupt_request' reloading", tagged "pull-tcg-20160512").
For reference, here is v4:
  https://lists.gnu.org/archive/html/qemu-devel/2016-04/msg04670.html

Changes from v4:

- atomics.h:
  + Add atomic_read_acquire and atomic_set_release.
  + Rename atomic_test_and_set to atomic_test_and_set_acquire.
  [ Richard: I removed your reviewed-by ]

- qemu_spin @ thread.h:
  + Add bool qemu_spin_locked() to check whether the lock is taken.
  + Use the newly-added acquire/release atomic ops. This is clearer and
    improves performance; for instance, we no longer emit an unnecessary
    smp_mb(), thanks to using atomic_set_release() instead of
    atomic_mb_set(). Also, note that __sync_lock_test_and_set has acquire
    semantics, so it makes sense to have an atomic_test_and_set_acquire
    that calls it directly, instead of going through atomic_xchg, which
    emits a full barrier (that we don't need) before
    __sync_lock_test_and_set. Sketches of these primitives and of the
    resulting spinlock appear at the end of this message.
  [ Richard: I removed your reviewed-by ]

- tests:
  + Add a parallel benchmark (qht-bench). Some performance numbers are
    in the commit message, comparing QHT against CLHT and ck_hs.
  + Invoke qht-bench from `make check` via test-qht-par. It uses
    system(3); I couldn't find a way for qht-bench to detect that it is
    being run from gtester, so I added a trivial wrapper program to
    invoke it (sketched at the end of this message).

- trivial: util/Makefile.objs: add qdist.o and qht.o, each on a
  separate line.

- trivial: add copyright headers to the test programs.

- trivial: update the phys_pc, pc, flags commit message with Richard's
  comment that hashing cs_base probably isn't worth it.

- qht:
  + Document that duplicate pointer values cannot be inserted.
  + qht_insert: return true/false upon success/failure, just like
    qht_remove. This can help find bugs.
  + qht_remove: only write to the seqlock if the removal actually
    happens; otherwise the write is unnecessary, since nothing is
    written to the bucket.
  + trivial: s/n_items/n_entries/ for consistency.
  + qht_grow: replace it with qht_resize. This is mostly useful for
    testing.
  + resize: do not track qht_map->n_entries; track instead the number
    of non-head buckets added. This improves scalability, since we only
    increment this counter (with the relatively expensive atomic_inc)
    when a new non-head bucket is allocated, instead of every time an
    entry is added or removed.
    * Return bool from qht_resize and qht_reset_size; they return false
      if the resize was not needed (i.e. the previous size was already
      the requested size).
  + qht_lookup: do not check for !NULL entries; check directly for a
    hash match. This gives a ~2% performance increase in the
    microbenchmarks, whose buckets are evenly sized and well
    distributed; that is approximately the case in QEMU too, thanks to
    xxhash and resizing.
  + Remove the MRU bucket promotion policy. With automatic resizing it
    is no longer needed, and removing it saves code.
  + qht_lookup: add a fast path without the do {} while (seqlock) retry
    loop. This gives a 4% performance improvement on a read-only
    benchmark. The shape of this fast path is sketched at the end of
    this message.
  + struct qht_bucket: document the struct.
  + Rename qht_lock() to qht_map_lock_buckets().
  + Add map__atomic_mb and bucket_next__atomic_mb helpers that include
    the necessary atomic_read() and rmb().
  [ All the above qht changes are simple enough that I kept Richard's
    reviewed-by. ]
  + Support concurrent writes to separate buckets. This is in an
    additional patch to ease review; feel free to squash it into the
    QHT patch.
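To give reviewers the gist without opening the patch: here is a minimal
sketch of what the new atomics boil down to, assuming the GCC/Clang
__atomic and __sync builtins. The in-tree definitions in atomics.h have
to cover more configurations, so treat this as illustrative only:

    /* Illustrative sketch; the in-tree macros handle more cases. */
    #define atomic_read_acquire(ptr) \
        __atomic_load_n(ptr, __ATOMIC_ACQUIRE)

    #define atomic_set_release(ptr, val) \
        __atomic_store_n(ptr, val, __ATOMIC_RELEASE)

    /*
     * __sync_lock_test_and_set is documented as an acquire barrier,
     * which is all that lock acquisition needs; going through
     * atomic_xchg would add a full barrier we don't need.
     */
    #define atomic_test_and_set_acquire(ptr) \
        __sync_lock_test_and_set(ptr, true)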
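The spinlock then falls out of these almost directly. Again, this is a
sketch rather than the literal patch; a production lock would also want
to spin on plain reads (and relax the CPU) between test-and-set
attempts:

    typedef struct QemuSpin {
        bool value;
    } QemuSpin;

    static inline void qemu_spin_lock(QemuSpin *spin)
    {
        /* the test-and-set already implies acquire semantics */
        while (atomic_test_and_set_acquire(&spin->value)) {
            /* busy wait; a real lock would relax the CPU here */
        }
    }

    static inline bool qemu_spin_locked(QemuSpin *spin)
    {
        return atomic_read_acquire(&spin->value);
    }

    static inline void qemu_spin_unlock(QemuSpin *spin)
    {
        /* plain release store; no trailing smp_mb() as with atomic_mb_set() */
        atomic_set_release(&spin->value, false);
    }

Note how unlock is a single release store: that is exactly the smp_mb()
saving over atomic_mb_set() mentioned above.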
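The test-qht-par wrapper is about as small as it sounds; the flags
below are made up for illustration, and the real invocation is in the
patch:

    /* Hypothetical sketch of the test-qht-par wrapper. */
    #include <stdlib.h>

    int main(void)
    {
        /* exact path and flags elided; see the patch */
        int rc = system("tests/qht-bench -n 2");
        return rc == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }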
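Finally, the shape of the qht_lookup fast path mentioned above. Bucket
selection is elided, and the helper names (qht_do_lookup,
qht_lookup__slowpath) are illustrative, not necessarily what the patch
uses:

    /* Sketch: one optimistic pass over a bucket, assuming the usual
     * QEMU seqlock read API (seqlock_read_begin/seqlock_read_retry). */
    static void *qht_lookup_in_bucket(struct qht_bucket *b,
                                      qht_lookup_func_t func,
                                      const void *userp, uint32_t hash)
    {
        unsigned int version = seqlock_read_begin(&b->sequence);
        void *ret = qht_do_lookup(b, func, userp, hash);

        /* fast path: no writer touched this bucket while we read it */
        if (likely(!seqlock_read_retry(&b->sequence, version))) {
            return ret;
        }
        /* slow path: fall back to the do {} while (seqlock) retry loop */
        return qht_lookup__slowpath(b, func, userp, hash);
    }

The point is that the common, uncontended case returns after a single
pass over the bucket, instead of always paying for the retry loop.

Thanks,

		Emilio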