On 01/06/2015 08:41 AM, Andrew Cooper wrote:
On 06/01/15 02:18, Boris Ostrovsky wrote:
Instead of copying data for each field in xen_sysctl_topologyinfo separately,
put core/socket/node into a single structure and do a single copy for each
processor.
There is also no need to copy the whole op back to the caller at the end;
max_cpu_index is sufficient.
Rename xen_sysctl_topologyinfo and XEN_SYSCTL_topologyinfo to reflect the fact
that these are used for CPU topology. A subsequent patch will add support for
a PCI topology sysctl.
Signed-off-by: Boris Ostrovsky <boris.ostrov...@oracle.com>
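
For reference, a minimal sketch of the consolidated per-CPU record this patch
introduces. The field names (core/socket/node) and INVALID_TOPOLOGY_ID come
from the diff below; the struct/typedef names and the guest-handle layout are
my assumptions about how it could look in the public sysctl header:

    /* Sketch only: struct names and handle layout are assumptions. */
    struct xen_sysctl_cputopo {
        uint32_t core;
        uint32_t socket;
        uint32_t node;    /* each set to INVALID_TOPOLOGY_ID when unknown */
    };
    typedef struct xen_sysctl_cputopo xen_sysctl_cputopo_t;

    struct xen_sysctl_cputopoinfo {
        uint32_t max_cpu_index;                             /* IN/OUT */
        XEN_GUEST_HANDLE_64(xen_sysctl_cputopo_t) cputopo;  /* OUT array */
    };

With such a layout the hypervisor fills one xen_sysctl_cputopo per CPU and
issues a single copy_to_guest_offset() per processor instead of one per field.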
If we are going to change the hypercall, then can we see about making it
a stable interface (i.e. not a sysctl/domctl)? There are non-toolstack
components which might want/need access to this information. (i.e. I am
still looking for a reasonable way to get this information from Xen in
hwloc)
Can't those components dlopen libxl? That's what I was assuming we'd do
with hwloc.
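
Roughly how a consumer could do that (a sketch only: it assumes libxl's
libxl_get_cpu_topology()/libxl_cputopology interface, an unversioned
"libxl.so" soname, and passes a NULL logger to libxl_ctx_alloc() for brevity):

    /* Resolve libxl at run time so the consumer does not grow a hard
     * link-time dependency on it; <libxl.h> is still needed for the types. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <libxl.h>

    typedef int (*ctx_alloc_fn)(libxl_ctx **, int, unsigned, xentoollog_logger *);
    typedef libxl_cputopology *(*get_topo_fn)(libxl_ctx *, int *);
    typedef void (*list_free_fn)(libxl_cputopology *, int);
    typedef int (*ctx_free_fn)(libxl_ctx *);

    int main(void)
    {
        void *h = dlopen("libxl.so", RTLD_NOW);          /* soname is a guess */
        if ( !h )
            return 1;

        ctx_alloc_fn ctx_alloc = (ctx_alloc_fn)dlsym(h, "libxl_ctx_alloc");
        get_topo_fn  get_topo  = (get_topo_fn)dlsym(h, "libxl_get_cpu_topology");
        list_free_fn list_free = (list_free_fn)dlsym(h, "libxl_cputopology_list_free");
        ctx_free_fn  ctx_free  = (ctx_free_fn)dlsym(h, "libxl_ctx_free");

        libxl_ctx *ctx = NULL;
        int nr_cpus = 0;

        if ( !ctx_alloc || !get_topo || !list_free || !ctx_free ||
             ctx_alloc(&ctx, LIBXL_VERSION, 0, NULL) )   /* real code wants a logger */
            return 1;

        libxl_cputopology *topo = get_topo(ctx, &nr_cpus);
        for ( int i = 0; topo && i < nr_cpus; i++ )
            printf("cpu%d: core %u socket %u node %u\n",
                   i, topo[i].core, topo[i].socket, topo[i].node);

        if ( topo )
            list_free(topo, nr_cpus);
        ctx_free(ctx);
        dlclose(h);
        return 0;
    }

dlopen keeps libxl optional at build time, though it still ties the consumer
to whatever ABI the installed libxl happens to provide, which is part of why a
stable interface is attractive.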
<snip>
+            if ( cpu_online(i) )
             {
-                uint32_t socket = cpu_online(i) ? cpu_to_socket(i) : ~0u;
-                if ( copy_to_guest_offset(ti->cpu_to_socket, i, &socket, 1) )
-                    break;
+                cputopo.core = cpu_to_core(i);
+                cputopo.socket = cpu_to_socket(i);
+                cputopo.node = cpu_to_node(i);
             }
-            if ( !guest_handle_is_null(ti->cpu_to_node) )
+            else
+                cputopo.core = cputopo.socket =
+                    cputopo.node = INVALID_TOPOLOGY_ID;
+
In particular, can we fix this broken behaviour? The cpu being online
or not has no bearing on whether Xen has topology information, and a
side effect is that when the cpu governor powers down a cpu, it
disappears from the reported topology.
Yes, I should fix that as well.
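
Something along these lines, perhaps (sketch only: whether cpu_present() is
the right predicate, or whether the data should be reported unconditionally,
is still open):

            if ( cpu_present(i) )
            {
                cputopo.core = cpu_to_core(i);
                cputopo.socket = cpu_to_socket(i);
                cputopo.node = cpu_to_node(i);
            }
            else
                cputopo.core = cputopo.socket =
                    cputopo.node = INVALID_TOPOLOGY_ID;

That way a cpu parked by the governor would keep its core/socket/node entry
(assuming the cpu_to_*() data stays valid for offline-but-present cpus)
instead of vanishing from the reported topology.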
-boris