On 17.11.25 18:43, Jan Beulich wrote:
On 14.11.2025 15:01, Grygorii Strashko wrote:
From: Grygorii Strashko <[email protected]>
Xen uses the following pattern for the raw_x_guest() functions:
define raw_copy_to_guest(dst, src, len) \
(is_hvm_vcpu(current) ? \
copy_to_user_hvm((dst), (src), (len)) : \
copy_to_guest_pv(dst, src, len))
This pattern behaves as follows, depending on CONFIG_PV/CONFIG_HVM:
- PV=y and HVM=y
  The proper guest access function is selected depending on the domain type.
- PV=y and HVM=n
  Only PV domains are possible. is_hvm_domain/vcpu() constifies to "false",
  so the compiler optimizes the code and drops the HVM-specific part.
- PV=n and HVM=y
  Only HVM domains are possible, but is_hvm_domain/vcpu() is not constified,
  so the compiler cannot optimize out the PV-specific code.
- PV=n and HVM=n
  No guests should be possible. The code still follows the PV path.
Rework the raw_x_guest() code to use static inline functions which account
for the possible PV/HVM configurations above, with the main intention of
optimizing the code for the (PV=n and HVM=y) case.
For the (PV=n and HVM=n) case, return the "len" value to indicate failure
(no guests should be possible in this case, which means no access to guest
memory should ever happen).
Finally, move arch/x86/usercopy.c to arch/x86/pv/usercopy.c so that it is
built only with PV=y.
The measured (bloat-o-meter) improvement for the (PV=n and HVM=y) case is:
add/remove: 3/8 grow/shrink: 3/89 up/down: 1018/-12087 (-11069)
Total: Before=1937280, After=1926211, chg -0.57%
[[email protected]: suggested using static inline functions instead of
macro combinations]
Suggested-by: Teddy Astie <[email protected]>
Signed-off-by: Grygorii Strashko <[email protected]>
Reviewed-by: Jason Andryuk <[email protected]>
I would guess that this R-b would have needed dropping, ...
---
changes in v4:
- move usercopy.c into arch/x86/pv/
- rework to always dynamically check for HVM vcpu(domain) by using is_hvm_vcpu()
as requested by Jan Beulich
... with at least the latter of these two changes.
--- a/xen/arch/x86/pv/Makefile
+++ b/xen/arch/x86/pv/Makefile
@@ -14,6 +14,10 @@ obj-y += ro-page-fault.o
obj-$(CONFIG_PV_SHIM) += shim.o
obj-$(CONFIG_TRACEBUFFER) += trace.o
obj-y += traps.o
+obj-$(CONFIG_PV) += usercopy.o
Just obj-y with the movement.
However, is the movement (and was the adding of $(CONFIG_PV) in the earlier
version) actually correct? The file also produces copy_{from,to}_unsafe_ll(),
which aren't PV-specific. This may be only a latent issue right now, as we
have only a single use site of copy_from_unsafe(), but those functions need
to remain available. (We may want to arrange for them to be removed when
linking, as long as they're not referenced. But that's a separate topic.)
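For reference, the link-time removal mentioned here can in principle be done
with toolchain section garbage collection. A generic sketch (not what the
Xen build currently does; demo.c and its functions are made up for the
illustration):

```shell
# Generic illustration only: let the linker drop unreferenced functions.
cat > demo.c <<'EOF'
int used(void)   { return 1; }
int unused(void) { return 2; }
int main(void)   { return used(); }
EOF
cc -O1 -ffunction-sections -Wl,--gc-sections demo.c -o demo
# Count symbol-table entries for unused(); 0 means it was discarded.
nm demo | grep -c ' unused$' || true
```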
It is confusing that none of the build config combinations
(HVM=y PV=n, HVM=n PV=n) failed to build :(
copy_to_unsafe_ll()
- called from copy_to_unsafe()
- copy_to_unsafe() has no callers (unreachable; MISRA rule 2.1?)
copy_from_unsafe_ll()
- called from copy_from_unsafe()
- copy_from_unsafe() is called from a single place, do_invalid_op(), with
  copy_from_unsafe(,, n = sizeof(bug_insn)).
  Due to the __builtin_constant_p(n) check, the compiler optimizes the
  copy_from_unsafe() call to
    get_unsafe_size(*(uint16_t *)to, from, 2, UA_DROP, ret, 2);
  and as a result copy_from_unsafe_ll() is unreachable as well (?).
If those functions are not to be removed, then usercopy.c can't be moved
into "x86/pv", right?
And making copy_{from,to}_unsafe_ll() available for !PV would mean
rewriting usercopy.c in some way, right?
--
Best regards,
-grygorii