On Thu, 20 Oct 2022 at 13:21, Peter Maydell <peter.mayd...@linaro.org> wrote:
>
> From: Richard Henderson <richard.hender...@linaro.org>
>
> Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
> In is_guarded_page, use probe_access_full instead of just guessing
> that the tlb entry is still present. Also handles the FIXME about
> executing from device memory.
Hi, Richard -- Coverity spotted a problem (CID 1507929) with this
addition of 'guarded' to the ARMCacheAttrs struct, and then looking at
the code I noticed another one...

> diff --git a/target/arm/internals.h b/target/arm/internals.h
> index 9566364dcae..c3c3920ded2 100644
> --- a/target/arm/internals.h
> +++ b/target/arm/internals.h
> @@ -1095,6 +1095,7 @@ typedef struct ARMCacheAttrs {
>      unsigned int attrs:8;
>      unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
>      bool is_s2_format:1;
> +    bool guarded:1; /* guarded bit of the v8-64 PTE */
>  } ARMCacheAttrs;

The one Coverity spots is that combine_cacheattrs() was never updated
to do anything with 'guarded', so the struct value it returns now has
an uninitialized value for that field. Since the GP bit only exists at
stage 1, presumably this should be

    ret.guarded = s1.guarded;

? (If I'm not misreading the code, this is an actual bug, because we
will use the field in the case where both stage 1 and stage 2 are
enabled.)

The issue I noticed is that in ptw.c, when we set the 'guarded' field,
we do it like this:

    if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
        result->f.guarded = extract64(attrs, 50, 1); /* GP */
    }

The GP bit only exists in stage 1 descriptors, but we don't check that
here, so we will set the 'guarded' bit in the result if the guest
(incorrectly) sets the RES0 bit 50 in a stage 2 descriptor. I think we
should move this check into the else clause of the immediately
following "if (regime_is_stage2(mmu_idx)) ...". (It looks like this
one pre-dates this patch's switch to using f.guarded.)

thanks
-- PMM