This is the fifth iteration of the patch series, which applies to the upstream branch of QEMU (v2.4.0).
Changes versus previous versions are at the bottom of this cover letter.

The code is also available at the following repository:
https://git.virtualopensystems.com/dev/qemu-mt.git
branch: slowpath-for-atomic-v5-no-mttcg
(branch slowpath-for-atomic-v5-mttcg for the version based on mttcg)

This patch series provides an infrastructure for atomic instruction implementation in QEMU, thus offering a 'legacy' solution for translating guest atomic instructions. Moreover, it can be considered a first step toward a multi-threaded TCG.

The underlying idea is to provide new TCG helpers (a sort of softmmu helper) that guarantee atomicity for some memory accesses, or in general a way to define memory transactions. More specifically, the new softmmu helpers behave as LoadLink and StoreConditional instructions and are called from TCG code by means of target-specific helpers. This work includes the implementation for all the ARM atomic instructions; see target-arm/op_helper.c.

The implementation heavily relies on the software TLB together with a new bitmap added to the ram_list structure, which flags, on a per-vCPU basis, all the memory pages that are in the middle of a LoadLink (LL)/StoreConditional (SC) operation. Since all these pages can be accessed directly through the fast path and alter a vCPU's linked value, the new bitmap has been coupled with a new TLB flag for the TLB virtual address, which forces the slow-path execution for all accesses to a page containing a linked address.

The new slow path is implemented such that:
- the LL behaves as a normal load slow path, except for clearing the dirty flag in the bitmap. While generating a TLB entry, the cputlb.c code checks whether at least one vCPU has its bit cleared in the exclusive bitmap; in that case the TLB entry will have the EXCL flag set, thus forcing the slow path. In order to ensure that all the vCPUs follow the slow path for that page, we flush the TLB cache of all the other vCPUs. The LL also records the linked address and the size of the access in a vCPU-private variable. After the corresponding SC, this address is set back to a reset value.
- the SC can fail, returning 1, or succeed, returning 0. It always has to come after an LL and has to access the same address 'linked' by the previous LL, otherwise it fails. If, in the time window delimited by a legitimate pair of LL/SC operations, another write access to the linked address happens, the SC will fail.

Simplified C sketches of this mechanism can be found right before the changelog below.

In theory, the provided implementation of TCG LoadLink/StoreConditional can be used to properly handle atomic instructions on any architecture. The code has been tested with bare-metal test cases and by booting Linux.

* Performance considerations

The new slow path adds some overhead to the translation of the ARM atomic instructions, since their emulation no longer happens only in the guest (by means of pure TCG-generated code) but requires the execution of two helper functions. Despite this, the additional time required to boot an ARM Linux kernel on an i7 clocked at 2.5GHz is negligible. On an LL/SC-bound test scenario instead - like https://git.virtualopensystems.com/dev/tcg_baremetal_tests.git - this solution requires 30% (1 million iterations) and 70% (10 million iterations) of additional time for the test to complete.
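* Sketches of the mechanism

To make the interplay between the exclusive bitmap and the new TLB flag concrete, here is a minimal, self-contained C model. All names, sizes and flag values below are invented for illustration and do not match the actual definitions in the series (see exec.c, cputlb.c and include/exec/ram_addr.h for the real ones):

    /* Toy model: one byte per guest page, one bit per vCPU.
     * 1 = dirty (no pending link), 0 = that vCPU holds a LoadLink. */
    #include <stdint.h>
    #include <string.h>

    #define N_PAGES  1024
    #define TLB_EXCL (1u << 3)       /* hypothetical extra TLB flag bit */

    static uint8_t excl_bitmap[N_PAGES];

    static void excl_bitmap_init(void)
    {
        memset(excl_bitmap, 0xff, sizeof(excl_bitmap)); /* all dirty */
    }

    /* Called while a TLB entry is being filled: if at least one vCPU
     * has cleared its bit for this page, tag the entry with TLB_EXCL
     * so that every access to the page is diverted to the slow path. */
    static uintptr_t tlb_addr_for_page(uintptr_t tlb_addr, unsigned page)
    {
        if (excl_bitmap[page] != 0xff) {
            tlb_addr |= TLB_EXCL;
        }
        return tlb_addr;
    }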
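The two helpers can be modelled in the same spirit. The real versions are generated from softmmu_llsc_template.h and operate on the softmmu TLB, so everything here (the VCPU type, the flat guest_mem array, the stubbed bookkeeping) is again purely illustrative:

    #include <stdint.h>

    #define N_VCPUS    8
    #define EXCL_RESET ((uint64_t)-1)       /* "no pending link" marker */

    typedef struct VCPU {
        uint64_t excl_addr;            /* address linked by the last LL */
        uint64_t excl_size;            /* size of the linked access */
    } VCPU;

    static VCPU     vcpus[N_VCPUS];
    static uint32_t guest_mem[1024];   /* toy guest RAM, word addressed */

    /* Stubs for the bookkeeping described above. */
    static void clear_dirty_bit(uint64_t addr) { (void)addr; }
    static void set_dirty_bit(uint64_t addr)   { (void)addr; }
    static void flush_other_tlbs(VCPU *cpu)    { (void)cpu; }

    /* LL: a normal slow-path load plus the bookkeeping that arms the
     * link for this vCPU. */
    static uint32_t helper_ldlink(VCPU *cpu, uint64_t addr)
    {
        clear_dirty_bit(addr);       /* the page becomes "exclusive" */
        flush_other_tlbs(cpu);       /* others re-fill and see TLB_EXCL */
        cpu->excl_addr = addr;       /* record the linked address... */
        cpu->excl_size = sizeof(uint32_t); /* ...and the access size */
        return guest_mem[addr];
    }

    /* Any write going through the slow path breaks the link of every
     * vCPU that had this address armed. */
    static void break_links(uint64_t addr)
    {
        int i;
        for (i = 0; i < N_VCPUS; i++) {
            if (vcpus[i].excl_addr == addr) {
                vcpus[i].excl_addr = EXCL_RESET;
            }
        }
    }

    /* SC: succeeds (0) only if this vCPU's link, set by a previous LL
     * on the same address, is still intact; otherwise it fails (1)
     * and performs no store. */
    static uint32_t helper_stcond(VCPU *cpu, uint64_t addr, uint32_t val)
    {
        if (cpu->excl_addr != addr) {
            return 1;                /* no LL, wrong address, link broken */
        }
        break_links(addr);           /* the store invalidates everyone else */
        guest_mem[addr] = val;
        cpu->excl_addr = EXCL_RESET; /* the link is consumed */
        set_dirty_bit(addr);
        return 0;
    }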
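Building on the sketch above (same invented names), a guest atomic operation then maps onto the familiar retry loop, which is essentially what the ARM exclusive load/store pairs handled by this series express:

    /* Reuses helper_ldlink()/helper_stcond() from the sketch above. */
    static uint32_t atomic_add(VCPU *cpu, uint64_t addr, uint32_t inc)
    {
        uint32_t old;
        do {
            old = helper_ldlink(cpu, addr);            /* arm the link */
        } while (helper_stcond(cpu, addr, old + inc)); /* retry on failure */
        return old;                                    /* value before the add */
    }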
Changes from v4:
- Reworked the exclusive bitmap to be of fixed size (8 bits per address)
- The slow path is now TCG-backend independent; no need to touch tcg/* anymore, as suggested by Aurelien Jarno

Changes from v3:
- Based on upstream QEMU
- Addressed comments from Alex Bennée
- The slow path can be enabled by the user with ./configure --enable-tcg-ldst-excl, but only if the backend supports it
- All the ARM ldex/stex instructions now make use of the slow path
- Added aarch64 TCG backend support
- Part of the code has been rewritten

Changes from v2:
- The bitmap accessors are now atomic
- A rendezvous between vCPUs and simple callback support before executing a TB have been added to handle TLB flushes
- softmmu_template and softmmu_llsc_template have been adapted to work with real multi-threading

Changes from v1:
- The RAM bitmap is no longer reversed: 1 = dirty, 0 = exclusive
- The way the offset used to access the bitmap is calculated has been improved and fixed
- Setting a page as dirty now requires a vCPU to target the protected address itself, not just any address in the page
- Addressed comments from Richard Henderson to improve the logic in softmmu_template.h and to simplify the method generation through softmmu_llsc_template.h
- Added an initial implementation of qemu_{ldlink,stcond}_i32 for tcg/i386

This work has been sponsored by Huawei Technologies Duesseldorf GmbH.

Alvise Rigo (6):
  exec.c: Add new exclusive bitmap to ram_list
  softmmu: Add new TLB_EXCL flag
  softmmu: Add helpers for a new slowpath
  target-arm: Create new runtime helpers for excl accesses
  configure: Use slow-path for atomic only when the softmmu is enabled
  target-arm: translate: Use ld/st excl for atomic insns

 configure               |  11 +++++
 cputlb.c                |  43 ++++++++++++++++-
 exec.c                  |   8 +++-
 include/exec/cpu-all.h  |   8 ++++
 include/exec/cpu-defs.h |  12 +++++
 include/exec/memory.h   |   3 +-
 include/exec/ram_addr.h |  75 +++++++++++++++++++++++++++++
 softmmu_llsc_template.h | 124 ++++++++++++++++++++++++++++++++++++++++++++++++
 softmmu_template.h      | 124 +++++++++++++++++++++++++++++++++++++++---------
 target-arm/helper.h     |  10 ++++
 target-arm/op_helper.c  |  94 ++++++++++++++++++++++++++++++++++++
 target-arm/translate.c  | 121 ++++++++++++++++++++++++++++++++++++++++++++--
 tcg/tcg.h               |  30 ++++++++++++
 13 files changed, 633 insertions(+), 30 deletions(-)
 create mode 100644 softmmu_llsc_template.h

-- 
2.5.3