On 05/13/2016 11:09 AM, Jan Beulich wrote:
On 13.05.16 at 16:50, <ta...@tklengyel.com> wrote:
[...]
@@ -1468,6 +1505,69 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         }
         break;
+        case XENMEM_sharing_op_bulk_share:
+        {
+            unsigned long max_sgfn, max_cgfn;
+            struct domain *cd;
+
+            rc = -EINVAL;
+            if ( !mem_sharing_enabled(d) )
+                goto out;
+
+            rc = rcu_lock_live_remote_domain_by_id(mso.u.bulk.client_domain,
+                                                   &cd);
+            if ( rc )
+                goto out;
+
+            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
Either you pass XENMEM_sharing_op_share here, or you need to
update xen/xsm/flask/policy/access_vectors (even if it's only a
comment which needs updating).
Right, it should actually be sharing_op_share here.
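I.e. the call would presumably just become something like (sketch only, not
the final patch):

    /* Check against the existing "share" sub-op so the comment in
     * xen/xsm/flask/policy/access_vectors stays accurate. */
    rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, XENMEM_sharing_op_share);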
That said - are this and the similar pre-existing XSM checks actually
correct? I.e. is one of the two domains here really controlling the
other? I would have expected that a tool stack domain initiates the
sharing between two domains it controls...
Not sure what the original rationale behind it was either.
Daniel - any opinion on this one?
This hook checks two permissions; the primary check is that current (which
is not either argument) can perform HVM__MEM_SHARING on (cd). When XSM is
disabled, this is checked as a device model permission (XSM_DM_PRIV). I don't
think this is what you were asking about, because this is actually a control
operation.
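To make that concrete, the XSM-disabled default behaves roughly like this
(paraphrased from xen/include/xsm/dummy.h; the exact macro spellings may
differ):

static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d,
                                         struct domain *cd, int op)
{
    /* With XSM disabled, only the device-model privilege check applies:
     * current->domain must be privileged over the client domain (cd). */
    XSM_ASSERT_ACTION(XSM_DM_PRIV);
    return xsm_default_action(action, current->domain, cd);
}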
The other permission check invoked by this hook, only when XSM is enabled,
is a check for HVM__SHARE_MEM between (d) and (cd). This allows a security
policy to be written that forbids memory sharing between different users but
allows it between VMs belonging to a single user (as an example).
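For reference, the FLASK side of the hook does roughly the following
(paraphrased from xen/xsm/flask/hooks.c; helper names are from memory):

static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
{
    /* Control check: can current perform HVM__MEM_SHARING on cd? */
    int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
    if ( rc )
        return rc;
    /* Peer check between the two guests being set up to share. */
    return domain_has_perm(d, cd, SECCLASS_HVM, HVM__SHARE_MEM);
}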