On 14.01.20 15:27, Jan Beulich wrote:
On 08.01.2020 16:23, Juergen Gross wrote:
@@ -234,16 +233,6 @@ void domctl_lock_release(void)
spin_unlock(&current->domain->hypercall_deadlock_mutex);
}
-static inline
-int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
-{
- return vcpuaff->flags == 0 ||
- ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
- guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
- ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) &&
- guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
-}
I'd like to suggest keeping this and ...
@@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
case XEN_DOMCTL_setvcpuaffinity:
case XEN_DOMCTL_getvcpuaffinity:
- {
- struct vcpu *v;
- const struct sched_unit *unit;
- struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity;
-
- ret = -EINVAL;
- if ( vcpuaff->vcpu >= d->max_vcpus )
- break;
-
- ret = -ESRCH;
- if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
- break;
-
- unit = v->sched_unit;
- ret = -EINVAL;
- if ( vcpuaffinity_params_invalid(vcpuaff) )
- break;
... everything up to here (except the [too early] unit assignment),
as not being scheduler specific at all. The remainder would then better
be split into two distinct functions, eliminating the need to pass
op->cmd (and presumably passing "v" instead of "d"). If, otoh, the
decision (supported by others) is to move everything, then I think
it would be appropriate to make at least some adjustments: The code
above should be converted to use domain_vcpu(), and e.g. ...
Either would be fine with me.
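Just to make sure I understood the suggestion correctly, the variant
keeping the checks in domctl.c could then look roughly like this
(untested sketch; the sched_{set,get}vcpuaffinity() names are made up):

    case XEN_DOMCTL_setvcpuaffinity:
    case XEN_DOMCTL_getvcpuaffinity:
    {
        /* domain_vcpu() covers both the range check and the NULL check. */
        struct vcpu *v = domain_vcpu(d, op->u.vcpuaffinity.vcpu);

        ret = -ESRCH;
        if ( v == NULL )
            break;

        ret = -EINVAL;
        if ( vcpuaffinity_params_invalid(&op->u.vcpuaffinity) )
            break;

        /* Two scheduler entry points, taking "v" and no op->cmd. */
        ret = op->cmd == XEN_DOMCTL_setvcpuaffinity
              ? sched_setvcpuaffinity(v, &op->u.vcpuaffinity)
              : sched_getvcpuaffinity(v, &op->u.vcpuaffinity);
        break;
    }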
- if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
- {
- cpumask_var_t new_affinity, old_affinity;
- cpumask_t *online = cpupool_domain_master_cpumask(v->domain);
... this should use "d".
Yes.
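I.e. the line above will then simply become:

    cpumask_t *online = cpupool_domain_master_cpumask(d);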
@@ -875,6 +876,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
return ret;
}
+int cpupool_get_id(const struct domain *d)
I find plain int odd for something like an ID, but I can see why
this is.
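The plain int allows reporting "domain not in any pool"; a minimal
sketch of the body, assuming the usual CPUPOOLID_NONE (-1) sentinel:

    int cpupool_get_id(const struct domain *d)
    {
        /* Sketch: negative sentinel when the domain has no cpupool. */
        return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
    }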
+cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
const twice?
See patch 9.
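For reference, the fully const-qualified variant being asked about
would presumably be:

    const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);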
Juergen