We can simplify our DC_ZVA implementation if we recognize that the largest
block size (BS) we actually use in system mode is 64 bytes. Let us just
assert that it fits within TARGET_PAGE_SIZE.
For DC_GVA and STZGM, we want to be able to write whole bytes of tag
memory, so assert that BS is >= 2 * TAG_GRANULE, or 32.

Reviewed-by: Peter Maydell <peter.mayd...@linaro.org>
Signed-off-by: Richard Henderson <richard.hender...@linaro.org>
---
 target/arm/cpu.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 10677c0c23..f09efc4370 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1758,6 +1758,30 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
 #endif
 
+    if (tcg_enabled()) {
+        int dcz_blocklen = 4 << cpu->dcz_blocksize;
+
+        /*
+         * We only support DCZ blocklen that fits on one page.
+         *
+         * Architecturally this is always true.  However TARGET_PAGE_SIZE
+         * is variable and, for compatibility with -machine virt-2.7,
+         * is only 1KiB, as an artifact of legacy ARMv5 subpage support.
+         * But even then, while the largest architectural DCZ blocklen
+         * is 2KiB, no cpu actually uses such a large blocklen.
+         */
+        assert(dcz_blocklen <= TARGET_PAGE_SIZE);
+
+        /*
+         * We only support DCZ blocksize >= 2*TAG_GRANULE, which is to say
+         * both nibbles of each byte storing tag data may be written at once.
+         * Since TAG_GRANULE is 16, this means that blocklen must be >= 32.
+         */
+        if (cpu_isar_feature(aa64_mte, cpu)) {
+            assert(dcz_blocklen >= 2 * TAG_GRANULE);
+        }
+    }
+
     qemu_init_vcpu(cs);
     cpu_reset(cs);
 
-- 
2.25.1
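
As a sanity check of the arithmetic, here is a minimal standalone sketch
(not part of the patch): it hard-codes dcz_blocksize = 4, the 64-byte block
used by the aarch64 cpu models, and the 1KiB worst-case TARGET_PAGE_SIZE
mentioned in the comment, and shows that both new assertions hold. The
concrete values are assumptions taken from the discussion above, not a
substitute for the realizefn checks.

    /* Standalone illustration only -- not QEMU code.  TAG_GRANULE (16 bytes)
     * and the blocklen formula (4 << dcz_blocksize) mirror the definitions
     * referenced by the patch; the concrete values below are assumptions. */
    #include <assert.h>
    #include <stdio.h>

    #define TAG_GRANULE      16     /* bytes of data covered by one MTE tag */
    #define TARGET_PAGE_SIZE 1024   /* worst case: 1KiB, -machine virt-2.7 */

    int main(void)
    {
        unsigned dcz_blocksize = 4;            /* log2 of block size in words */
        int dcz_blocklen = 4 << dcz_blocksize; /* = 64 bytes */

        /* The DC ZVA block never spans a page, even with 1KiB pages. */
        assert(dcz_blocklen <= TARGET_PAGE_SIZE);

        /* One tag byte holds two 4-bit tags, covering 2 * TAG_GRANULE = 32
         * bytes of data, so DC GVA / STZGM can write whole tag bytes. */
        assert(dcz_blocklen >= 2 * TAG_GRANULE);

        printf("dcz_blocklen = %d bytes\n", dcz_blocklen);
        return 0;
    }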