On 24.07.19 12:07, Jan Beulich wrote:
> On 23.07.2019 20:25, Juergen Gross wrote:
>> Today there are two scenarios which are pinning vcpus temporarily to
>> a single physical cpu:
>> - wait_event() handling
>> - vcpu_pin_override() handling
>> Each of those cases is handled independently today using its own
>> temporary cpumask to save the old affinity settings.
>> The two cases can be combined, as the first case will only pin a vcpu to
>> the physical cpu it is already running on, while vcpu_pin_override() is
>> allowed to fail.
>> So merge the two temporary pinning scenarios by using only one cpumask
>> and a per-vcpu bitmask specifying which of the scenarios is
>> currently active (they are allowed to nest).
> Hmm, "nest" to me means LIFO-like behavior, but the logic is more relaxed
> afaict.
Okay, will rephrase.
>> @@ -1267,7 +1284,8 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>          if ( copy_from_guest(&sched_pin_override, arg, 1) )
>>              break;
>> -        ret = vcpu_pin_override(current, sched_pin_override.pcpu);
>> +        cpu = sched_pin_override.pcpu < 0 ? NR_CPUS : sched_pin_override.pcpu;
> I don't think you mean the caller to achieve the same effect by both
> passing in a negative value or NR_CPUS - it should remain to be just
> negative values which clear the override.
Okay.
Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel