On Sat, Sep 10, 2022 at 1:24 AM <leon@is.currently.online> wrote:
>
> From: Leon Schuermann <leon@is.currently.online>
>
> This commit fixes PMP address access checks with non-page-aligned PMP
> regions on harts with MPU enabled. Without this change, the presence
> of an MPU in the virtual CPU model would influence the PMP address
> check behavior when an access size was unknown (`size == 0`),
> regardless of whether virtual memory has actually been enabled by the
> guest.
>
> The RISC-V Privileged Spec Version 20211203[1] states in Section
> 4.3.1, Addressing and Memory Protection: "[...] [w]hen Sv32 virtual
> memory mode is selected in the MODE field of the satp register,
> supervisor virtual addresses are translated into supervisor physical
> addresses via a two-level page table. The 20-bit VPN is translated
> into a 22-bit physical page number (PPN), while the 12-bit page offset
> is untranslated. The resulting supervisor-level physical addresses are
> then checked using any physical memory protection structures (Sections
> 3.7), before being directly converted to machine-level physical
> addresses. [...]" and "[...] [w]hen the value of satp.MODE is Bare,
> the 32-bit virtual address is translated (unmodified) into a 32-bit
> physical address [...]". Other modes such as Sv39, Sv48 and Sv57 are
> said to behave similarly in this regard.
>
> From this specification it can be inferred that any access made when
> virtual memory is disabled, which is the case when satp.MODE is set to
> "Bare" (0), should behave identically with respect to PMP checks as if
> no MPU were present in the system at all. The current implementation,
> however, degrades any PMP address checks of unknown access size (which
> seems to be the case for instruction fetches at least) to page
> granularity, based solely on the fact that the hart has MPU support
> enabled.
> This causes systems that rely on 4-byte-aligned PMP regions to incur
> access faults which do not occur with the MPU disabled, independent of
> any runtime guest configuration.
>
> While there may be other unhandled edge cases in which
> page-granularity access checks are not appropriate, this commit
> appears to be a strict improvement over the current implementation's
> behavior. It has been tested with Tock OS, but not yet with other
> systems (e.g., Linux).
>
> [1]: https://github.com/riscv/riscv-isa-manual/releases/download/Priv-v1.12/riscv-privileged-20211203.pdf
>
> Signed-off-by: Leon Schuermann <leon@is.currently.online>
> Reviewed-by: Alistair Francis <alistair.fran...@wdc.com>
> ---
>
> This patch is a resubmission to include all maintainers of the
> modified files and the main QEMU mailing list, as determined through
> the `get_maintainer.pl` script.
>
> Also, one particular example of an additional edge case not handled
> by this patch might be a hart operating in M-mode. Given that virtual
> memory through {Sv32,Sv39,Sv48,Sv57} is only supported for S-mode and
> U-mode respectively, enabling virtual memory in the satp CSR should
> not have any effect on the behavior of memory accesses w.r.t. PMP
> checks for harts operating in M-mode.
>
> I'm going to defer adding this additional check, as I'd first
> appreciate some feedback as to whether my reasoning here is correct.
>
> Thanks!
>
> -Leon
>
> ---
>  target/riscv/pmp.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index ea2b67d947..48f64a4aef 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -300,6 +300,7 @@ bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
>      int i = 0;
>      int ret = -1;
>      int pmp_size = 0;
> +    uint64_t satp_mode;
>      target_ulong s = 0;
>      target_ulong e = 0;
>
> @@ -310,10 +311,17 @@ bool pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
>      }
>
>      if (size == 0) {
> -        if (riscv_feature(env, RISCV_FEATURE_MMU)) {
> +        if (riscv_cpu_mxl(env) == MXL_RV32) {
> +            satp_mode = SATP32_MODE;
> +        } else {
> +            satp_mode = SATP64_MODE;
> +        }
> +
> +        if (riscv_feature(env, RISCV_FEATURE_MMU)
> +            && get_field(env->satp, satp_mode)) {
>              /*
> -             * If size is unknown (0), assume that all bytes
> -             * from addr to the end of the page will be accessed.
> +             * If size is unknown (0) and virtual memory is enabled, assume that
> +             * all bytes from addr to the end of the page will be accessed.
>               */
>              pmp_size = -(addr | TARGET_PAGE_MASK);

I'm not sure if we need this at all.
This function is only called from get_physical_address_pmp(), which
then calculates the maximum size using pmp_is_range_in_tlb(). I
suspect that we could just use sizeof(target_ulong) as the fallback
every time size == 0. Then pmp_is_range_in_tlb() will set the
tlb_size to the maximum possible size of the PMP region.

As a plus, we would remove some macros as well. So what about
(untested)?

    if (size == 0) {
        if (riscv_cpu_mxl(env) == MXL_RV32) {
            pmp_size = 4;
        } else {
            pmp_size = 8;
        }
    } else {
        pmp_size = size;
    }

Alistair

>          } else {
>
> base-commit: 61fd710b8da8aedcea9b4f197283dc38638e4b60
> --
> 2.36.0
>
>