On 6/26/25 1:43 PM, Juergen Gross wrote:
On 26.06.25 13:34, Oleksii Kurochko wrote:
On 6/26/25 12:41 PM, Jan Beulich wrote:
On 26.06.2025 12:05, Oleksii Kurochko wrote:
On 6/24/25 4:01 PM, Jan Beulich wrote:
On 24.06.2025 15:47, Oleksii Kurochko wrote:
On 6/24/25 12:44 PM, Jan Beulich wrote:
On 24.06.2025 11:46, Oleksii Kurochko wrote:
On 6/18/25 5:46 PM, Jan Beulich wrote:
On 10.06.2025 15:05, Oleksii Kurochko wrote:
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,115 @@
+#include <xen/bitops.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/spinlock.h>
+#include <xen/xvmalloc.h>
+
+#include <asm/p2m.h>
+#include <asm/sbi.h>
+
+static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * hgatp's VMID field is 7 or 14 bits. RV64 may support 14-bit VMID.
+ * Using a bitmap here limits us to 127 (2^7 - 1) or 16383 (2^14 - 1)
+ * concurrent domains.
Which is pretty limiting, especially in the RV32 case. Hence why we don't
assign a permanent ID to VMs on x86, but rather manage IDs per-CPU (note:
not per-vCPU).
Good point.
I don't believe anyone will use RV32. For RV64, the available ID space
seems sufficiently large. However, if it turns out that the value isn't
large enough even for RV64, I can rework it to manage IDs per physical CPU.
Wouldn't that approach result in more TLB entries being flushed compared to
per-vCPU allocation, potentially leading to slightly worse performance?
Depends on the condition for when to flush. Of course performance is
unavoidably going to suffer if you have only very few VMIDs to use.
Nevertheless, as indicated before, the model used on x86 may be a candidate
to use here, too. See hvm_asid_handle_vmenter() for the core (and
vendor-independent) part of it.
IIUC, so basically it is just round-robin: when VMIDs run out, do a full
guest TLB flush and start re-using VMIDs from the start.
It makes sense to me, I'll implement something similar. (I'm not really
sure that we need data->core_asid_generation; I will probably understand it
better when I start to implement it.)
Well. The fewer VMID bits you have, the more quickly you will need a new
generation. And to keep track of the generation you're at, you also need to
track the present number somewhere.
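
For illustration, a minimal sketch of such a per-pCPU, generation-based
scheme, loosely modelled on the idea behind x86's hvm_asid_handle_vmenter().
All names (vmid_pcpu_data, vcpu_vmid, vmid_handle_vmenter) and the VMID_MAX
value are made up for the example and are not taken from the Xen tree:

/* Hypothetical sketch: per-pCPU, generation-based VMID allocation. */
#include <stdbool.h>
#include <stdint.h>

#define VMID_MAX      16   /* e.g. VMIDLEN == 4 gives 16 values */
#define VMID_INVALID   0   /* reserve 0 for "no VMID assigned"  */

struct vmid_pcpu_data {
    uint64_t generation;   /* bumped each time the VMID space wraps     */
    uint32_t next_vmid;    /* next value to hand out; initialise to 1   */
};

struct vcpu_vmid {
    uint64_t generation;   /* generation this VMID was allocated in     */
    uint32_t vmid;         /* value to program into hgatp.VMID          */
};

/*
 * Called on the path back into the guest.  Returns true if the caller
 * must flush this pCPU's guest TLB entries (the VMID space wrapped).
 */
static bool vmid_handle_vmenter(struct vmid_pcpu_data *pcpu,
                                struct vcpu_vmid *v)
{
    bool need_flush = false;

    /* VMID still valid for the current generation: nothing to do. */
    if ( v->vmid != VMID_INVALID && v->generation == pcpu->generation )
        return false;

    /* Out of VMIDs on this pCPU: start a new generation and flush. */
    if ( pcpu->next_vmid >= VMID_MAX )
    {
        pcpu->generation++;
        pcpu->next_vmid = 1;   /* skip VMID_INVALID */
        need_flush = true;
    }

    v->generation = pcpu->generation;
    v->vmid = pcpu->next_vmid++;

    return need_flush;
}

A vCPU whose (generation, vmid) pair has gone stale simply gets a fresh
VMID on its next entry into the guest, so no domain ever holds a VMID
permanently.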
What about allocating a VMID per-domain then?
That's what you're doing right now, isn't it? And that gets problematic
when you have only very few bits in hgatp.VMID, as mentioned below.
Right, I just phrased my question poorly, sorry about that.
What I meant to ask is: does the approach described above actually depend
on whether VMIDs are allocated per-domain or per-pCPU? It seems that the
main advantage of allocating VMIDs per-pCPU is potentially reducing the
number of TLB flushes, since a platform is more likely to have more than
VMID_MAX domains than more than VMID_MAX physical CPUs; am I right?
Seeing that there can be systems with hundreds or even thousands of CPUs,
I don't think I can agree here. Plus per-pCPU allocation would similarly
get you in trouble when you have only very few VMID bits.
But not as fast as in the case of per-domain allocation, right?
I mean that if we have only 4 bits, then with per-domain allocation we will
need to do a TLB flush plus VMID re-assignment once we have more than 16
domains. But with per-pCPU allocation we could run 16 domains on one pCPU,
and at the same time, on multiprocessor systems we have more pCPUs, which
allows us to run more domains and avoid TLB flushes.
On the other hand, it needs to be considered that it's unlikely a domain
will have only one vCPU, and it is likely that the number of vCPUs will be
bigger than the number of domains, so a round-robin approach (as on x86)
without permanent ID allocation for each domain will work better than
per-pCPU allocation.
Here you (appear to) say one thing, ...
In other words, I'm not 100% sure I get the point of why x86 chose
per-pCPU allocation instead of per-domain allocation, where all vCPUs of a
domain share the same VMID.
... and then here the opposite. Overall I'm in severe trouble understanding
this reply of yours as a whole, so I fear I can't really respond to it (or
even just parts thereof).
IIUC, x86 allocates VMIDs per physical CPU (pCPU) "dynamically": these are
just sequential numbers, and once VMIDs run out on a given pCPU, there's no
guarantee that a vCPU will receive the same VMID again.
On the other hand, RISC-V currently allocates a single VMID per domain, and
that VMID is considered "permanent" until the domain is destroyed. This
means we are limited to at most VMID_MAX domains. To avoid this limitation,
I plan to implement a round-robin reuse approach: when no free VMIDs
remain, we start a new generation and begin reusing old VMIDs.
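
Purely as a sketch of that plan, a global, per-domain variant with
generations could look roughly like this (vmid_get(), domain_vmid and
flush_all_guest_tlbs() are made-up names; in practice a lock such as the
patch's vmid_alloc_lock would serialise the allocation):

/* Hypothetical sketch: global, per-domain VMID allocation with generations. */
#include <stdbool.h>
#include <stdint.h>

#define VMID_MAX      16   /* depends on the platform's VMIDLEN */
#define VMID_INVALID   0

static uint64_t vmid_generation = 1;   /* current global generation   */
static uint32_t next_vmid = 1;         /* next free VMID, 0 reserved  */

struct domain_vmid {
    uint64_t generation;
    uint32_t vmid;
};

/* Stand-in for flushing guest TLBs on all pCPUs (e.g. via an SBI rfence). */
static void flush_all_guest_tlbs(void) { /* ... */ }

/*
 * Returns the VMID to program into hgatp for this domain, assigning a
 * fresh one if the domain's VMID belongs to an older generation.
 */
static uint32_t vmid_get(struct domain_vmid *d)
{
    if ( d->vmid != VMID_INVALID && d->generation == vmid_generation )
        return d->vmid;

    if ( next_vmid >= VMID_MAX )
    {
        /*
         * VMID space exhausted: start a new generation.  Every domain's
         * VMID becomes stale and is re-assigned lazily on its next use;
         * stale translations must be flushed everywhere first.
         */
        vmid_generation++;
        next_vmid = 1;
        flush_all_guest_tlbs();
    }

    d->generation = vmid_generation;
    d->vmid = next_vmid++;

    return d->vmid;
}

Compared to a per-pCPU scheme this consumes only one VMID per domain, but
wrapping the VMID space requires flushing guest TLB entries on all pCPUs
rather than just the local one.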
The only remaining design question is whether we want RISC-V to follow a
global VMID allocation policy (i.e., one VMID per domain, shared across all
of its vCPUs), or adopt a policy similar to x86 with per-CPU VMID
allocation (each vCPU gets its own VMID, local to the CPU it's running on).
Each policy has its own trade-offs. But in the case where the number of
available VMIDs is small (i.e., low VMIDLEN), a global allocation policy
may be more suitable, as it requires fewer VMIDs overall.
So my main question was: what are the advantages of per-pCPU VMID
allocation in scenarios with limited VMID space, and why did x86 choose
that design?
From what I can tell, the benefits of per-pCPU VMID allocation include:
- Minimized inter-CPU TLB flushes: since VMIDs are local, TLB entries don't
  need to be invalidated on other CPUs when a VMID is reused.
- Better scalability: this approach works better on systems with a large
  number of CPUs.
- Frequent VM switches don't require global TLB flushes, reducing the
  overhead of context switching.
However, the downside is that this model consumes more VMIDs. For example,
if a single domain runs on 4 vCPUs across 4 CPUs, it will consume 4 VMIDs
instead of just one.
Consider you have 4 bits for VMIDs, resulting in 16 VMID values.
If you have a system with 32 physical CPUs and 32 domains with 1 vCPU each
on that system, your scheme would NOT allow keeping each physical CPU busy
by running a domain on it, as only 16 domains could be active at the same
time.
It makes sense to me.
Thanks.
~ Oleksii