On 6/18/25 5:15 PM, Jan Beulich wrote:
> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>> Instruct the remote harts to execute one or more HFENCE.GVMA instructions,
>> covering the range of guest physical addresses between start_addr and
>> start_addr + size for all the guests.
> Here and in the code comment: Why "for all the guests"? Under what conditions
> would you require such a broad (guest) TLB flush?

Originally, it came from Andrew's reply:
```
TLB flushing needs to happen for each pCPU which potentially has cached
a mapping.

In other arches, this is tracked by d->dirty_cpumask which is the bitmap
of pCPUs where this domain is scheduled.

CPUs need to flush their TLBs before removing themselves from
d->dirty_cpumask, which is typically done during context switch, but it
means that to flush the P2M, you only need to IPI a subset of CPUs.
```
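
To illustrate the pattern Andrew describes, here is a minimal sketch (the
helpers flush_domain_tlbs() and flush_guest_tlb_local() are hypothetical;
on_selected_cpus() and d->dirty_cpumask are the existing Xen facilities):
```c
/* Hypothetical sketch: flush only the pCPUs that may hold stale entries. */
static void flush_guest_tlb_local(void *unused)
{
    /* Per-CPU guest-TLB flush, e.g. a local HFENCE.GVMA; placeholder here. */
}

static void flush_domain_tlbs(struct domain *d)
{
    /*
     * Only pCPUs still present in d->dirty_cpumask can have cached this
     * domain's mappings, so IPI just that subset and wait for completion.
     */
    on_selected_cpus(d->dirty_cpumask, flush_guest_tlb_local, NULL, 1);
}
```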

But specifically, this function was introduced to handle the case where VMID
support is absent: without VMIDs we can't distinguish which TLB entries belong
to which domain, so we have no choice but to flush the entire guest TLB to
avoid incorrect translations.

However, this patch may no longer be necessary, as VMID support has been
introduced and sbi_remote_hfence_gvma_vmid() will be used instead.
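
For reference, the VMID-scoped wrapper would likely look along these lines
(a sketch only, mirroring the non-VMID wrapper in the hunk quoted below; the
exact constant name and prototype may differ in the final patch):
```c
/*
 * Sketch: per the SBI spec, sbi_remote_hfence_gvma_vmid() additionally
 * takes a VMID, which travels as the next argument of the RFENCE ecall.
 */
int sbi_remote_hfence_gvma_vmid(const cpumask_t *cpu_mask, vaddr_t start,
                                size_t size, unsigned long vmid)
{
    return sbi_rfence(SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID,
                      cpu_mask, start, size, vmid, 0);
}
```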


>> --- a/xen/arch/riscv/sbi.c
>> +++ b/xen/arch/riscv/sbi.c
>> @@ -258,6 +258,15 @@ int sbi_remote_sfence_vma(const cpumask_t *cpu_mask, vaddr_t start,
>>                          cpu_mask, start, size, 0, 0);
>>  }
>> +int sbi_remote_hfence_gvma(const cpumask_t *cpu_mask, vaddr_t start,
>> +                           size_t size)
>> +{
>> +    ASSERT(sbi_rfence);
>
> As previously indicated, I question the usefulness of such assertions. If the
> pointer is still NULL, ...
>
>> +    return sbi_rfence(SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
>> +                      cpu_mask, start, size, 0, 0);
>
> ... you'll crash here anyway (much like you will in a release build).

I will drop ASSERT() for rfence functions.
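
With the assertion gone, the wrapper reduces to just the ecall (same body as
in the hunk above, minus the ASSERT()):
```c
int sbi_remote_hfence_gvma(const cpumask_t *cpu_mask, vaddr_t start,
                           size_t size)
{
    /*
     * sbi_rfence is selected during SBI probing; a NULL pointer would
     * crash here in release builds anyway, as Jan notes.
     */
    return sbi_rfence(SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
                      cpu_mask, start, size, 0, 0);
}
```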

Thanks.

~ Oleksii
