Hi Paolo, Richard and all,

What are the exact semantics of tcg_enabled() supposed to be, and are they ill-defined in multi-arch?
Currently, tcg_enabled() is defined as:

bool tcg_enabled(void)
{
    return tcg_ctx.code_gen_buffer != NULL;
}

In the multi-arch work, tcg_ctx becomes multiple, one per arch. So let's assume we virtualise tcg_enabled() as a CPU hook. That handles a good number of cases, namely those where there is a current CPU against which tcg_enabled() is being queried. All uses in target-foo are trivially handled, as they will link against their local tcg_enabled() implementation. This per-CPU approach has the added advantage of preparing support for mixed KVM/TCG multi-arch systems. (A rough sketch of what I mean by a CPU hook is appended at the end of this mail.)

But tcg_enabled() is also used in the memory API for dirty code tracking, where there is no sense of a current tcg_ctx:

include/exec/ram_addr.h:    if (tcg_enabled()) {
include/exec/ram_addr.h:    uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
memory.c:    mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;
memory.c:    mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;
memory.c:    mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;
memory.c:    mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;

So what is the correct logic for populating dirty_log_mask and friends when there are 0, 1, or more TCG engines?

Regards,
Peter
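
As promised, a rough sketch of the per-CPU hook idea. This is only illustrative: the cpu_tcg_enabled() wrapper and the tcg_enabled field on CPUClass are placeholder names I made up to make the shape concrete, not existing API.

#include "qom/cpu.h"

/* Hypothetical wrapper; the tcg_enabled hook on CPUClass does not exist
 * today and is only sketched here to illustrate the proposal. */
static inline bool cpu_tcg_enabled(CPUState *cpu)
{
    CPUClass *cc = CPU_GET_CLASS(cpu);

    /* Each target's implementation would check its own context, e.g.
     * return tcg_ctx.code_gen_buffer != NULL; against its local tcg_ctx. */
    return cc->tcg_enabled && cc->tcg_enabled(cpu);
}

The memory API callers listed above have no CPUState to hand, which is exactly where this scheme runs out of road.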