On Tue, 2015-07-28 at 17:11 +0200, Juergen Gross wrote:
> On 07/28/2015 06:29 AM, Juergen Gross wrote:
> > On 07/27/2015 04:09 PM, Dario Faggioli wrote:
> >> On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
> >>> On 07/24/2015 05:58 PM, Dario Faggioli wrote:
> >>
> So, just to check if
On Wed, 2015-07-29 at 08:04 +0200, Juergen Gross wrote:
> On 07/28/2015 06:17 PM, Dario Faggioli wrote:
> >> On 07/28/2015 06:29 AM, Juergen Gross wrote:
> >
> >>> I'll make some performance tests on a big machine (4 sockets, 60 cores,
> >>> 120 threads) regarding topology information:
> >>>
> > I'
On 07/28/2015 06:17 PM, Dario Faggioli wrote:
On Tue, 2015-07-28 at 17:11 +0200, Juergen Gross wrote:
On 07/28/2015 06:29 AM, Juergen Gross wrote:
I'll make some performance tests on a big machine (4 sockets, 60 cores,
120 threads) regarding topology information:
- bare metal
- "random" topo
On Tue, 2015-07-28 at 18:17 +0200, Dario Faggioli wrote:
> So, my test box looks like this:
> cpu_topology           :
> cpu:    core    socket    node
>   0:       0         1       0
>   1:       0         1       0
>   2:       1         1       0
>   3:       1         1       0
>   4:
On Tue, 2015-07-28 at 17:11 +0200, Juergen Gross wrote:
> On 07/28/2015 06:29 AM, Juergen Gross wrote:
> > I'll make some performance tests on a big machine (4 sockets, 60 cores,
> > 120 threads) regarding topology information:
> >
> > - bare metal
> > - "random" topology (like today)
> > - "simpl
On Tue, 2015-07-28 at 11:05 +0100, Wei Liu wrote:
> On Fri, Jul 24, 2015 at 06:05:59PM +0200, Dario Faggioli wrote:
> > BTW, I've also been grepping, and I'm not seeing XENMEM_get_vnumainfo
> > being called anywhere either... Well, no wonder, we're seeing vNUMA
> > setup issues! If I did check for
On 07/28/2015 06:29 AM, Juergen Gross wrote:
On 07/27/2015 04:09 PM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
So, just to check if my understanding is correct: you'd like to add an
abstraction layer, in Linux
On Fri, Jul 24, 2015 at 06:05:59PM +0200, Dario Faggioli wrote:
> On Fri, 2015-07-24 at 17:14 +0200, Juergen Gross wrote:
> > On 07/24/2015 04:44 PM, Dario Faggioli wrote:
>
> > > Ok. And I already have a question (as I lost track of things a bit).
> > > What you just said about ACPI tables is cer
On 28/07/15 04:52, Juergen Gross wrote:
> On 07/28/2015 01:19 AM, Andrew Cooper wrote:
>> On 27/07/2015 18:42, Dario Faggioli wrote:
>>> On Mon, 2015-07-27 at 17:33 +0100, Andrew Cooper wrote:
On 27/07/15 17:31, David Vrabel wrote:
>
>> Yeah, indeed.
>> That's the downside of Juerg
On 07/27/2015 04:09 PM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
So, just to check if my understanding is correct: you'd like to add an
abstraction layer, in Linux, like in generic (or, perhaps, scheduling)
co
On 07/28/2015 01:19 AM, Andrew Cooper wrote:
On 27/07/2015 18:42, Dario Faggioli wrote:
On Mon, 2015-07-27 at 17:33 +0100, Andrew Cooper wrote:
On 27/07/15 17:31, David Vrabel wrote:
Yeah, indeed.
That's the downside of Juergen's "Linux scheduler
approach". But the issue is there, even witho
On 27/07/2015 18:42, Dario Faggioli wrote:
> On Mon, 2015-07-27 at 17:33 +0100, Andrew Cooper wrote:
>> On 27/07/15 17:31, David Vrabel wrote:
>>> Yeah, indeed. That's
the downside of Juergen's "Linux scheduler approach". But the issue
is there, even without taking vNUMA into acco
..snip..
> So, it looks to me that:
> 1) any application using CPUID for either licensing or
> placement/performance optimization will get (potentially) random
> results;
Right, that is a bug that Andrew outlined in this leveling document I
believe. We just pluck the cpuid results on wha
On Mon, 2015-07-27 at 17:33 +0100, Andrew Cooper wrote:
> On 27/07/15 17:31, David Vrabel wrote:
> >
> >> Yeah, indeed. That's the downside of Juergen's "Linux scheduler
> >> approach". But the issue is there, even without taking vNUMA into
> >> account, and I think something like that would really
On 27/07/15 17:31, David Vrabel wrote:
>
>>> 2. For HVM guests, use the existing hardware interfaces to present NUMA
>>> topology. i.e., CPUID, ACPI tables etc. This will work for both kernel
>>> and userspace and both will see the same topology.
>>>
>>> This also has the advantage that any hyper
On 27/07/15 17:02, Dario Faggioli wrote:
> On Mon, 2015-07-27 at 16:13 +0100, David Vrabel wrote:
>> On 16/07/15 11:32, Dario Faggioli wrote:
>>>
>>> Anyway, is there anything we can do to fix or workaround things?
>>
>> This thread has gotten a bit long...
>>
> Yep, indeed... :-(
>
>> For Linux I
On Mon, 2015-07-27 at 16:13 +0100, David Vrabel wrote:
> On 16/07/15 11:32, Dario Faggioli wrote:
> >
> > Anyway, is there anything we can do to fix or workaround things?
>
> This thread has gotten a bit long...
>
Yep, indeed... :-(
> For Linux I would like to see:
>
> 1. No support for NUMA i
On 16/07/15 11:32, Dario Faggioli wrote:
>
> Anyway, is there anything we can do to fix or workaround things?
This thread has gotten a bit long...
For Linux I would like to see:
1. No support for NUMA in PV guests -- if you want new MM features in a
guest use HVM.
2. For HVM guests, use the ex
On 07/27/2015 04:51 PM, Boris Ostrovsky wrote:
On 07/27/2015 10:43 AM, Juergen Gross wrote:
On 07/27/2015 04:34 PM, Boris Ostrovsky wrote:
On 07/27/2015 10:09 AM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
So, j
On Mon, 2015-07-27 at 10:34 -0400, Boris Ostrovsky wrote:
> On 07/27/2015 10:09 AM, Dario Faggioli wrote:
> > Of course, it's not that my opinion on where this should be in Linux counts
> > that much! :-D Nevertheless, I wanted to make it clear that, while
> > skeptic at the beginning, I now think th
On 07/27/2015 10:43 AM, Juergen Gross wrote:
On 07/27/2015 04:34 PM, Boris Ostrovsky wrote:
On 07/27/2015 10:09 AM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
So, just to check if my understanding is correct: you
On 07/27/2015 04:34 PM, Boris Ostrovsky wrote:
On 07/27/2015 10:09 AM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
So, just to check if my understanding is correct: you'd like to add an
abstraction layer, in Linux
On 07/27/2015 04:34 PM, Boris Ostrovsky wrote:
On 07/27/2015 10:09 AM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
So, just to check if my understanding is correct: you'd like to add an
abstraction layer, in Linux
On 07/27/2015 10:09 AM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
So, just to check if my understanding is correct: you'd like to add an
abstraction layer, in Linux, like in generic (or, perhaps, scheduling)
code
On Fri, 2015-07-24 at 18:10 +0200, Juergen Gross wrote:
> On 07/24/2015 05:58 PM, Dario Faggioli wrote:
> > So, just to check if my understanding is correct: you'd like to add an
> > abstraction layer, in Linux, like in generic (or, perhaps, scheduling)
> > code, to hide the direct interaction wi
On Mon, 2015-07-27 at 12:11 +0100, George Dunlap wrote:
> 1. Userspace applications are in the habit of reading CPUID to determine
> the topology of the system they're running on
>
I'd add this item here:
1b. Linux kernel uses CPUID to configure some bits of its scheduler. The
result of that
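For reference, the CPUID field this sub-thread keeps coming back to is leaf 1, EBX bits 23:16 (addressable logical processors per package), together with the HTT flag in EDX bit 28. A minimal sketch of how a guest application might read it, assuming GCC/clang on x86 with <cpuid.h> (illustrative only, not code from the thread):

#include <stdio.h>
#include <cpuid.h>   /* GCC/clang wrapper around the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1: EBX[23:16] = maximum number of addressable logical
     * processors in this package; EDX[28] = HTT flag.  This is the
     * kind of value userspace bases "topology" decisions on.       */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    printf("logical CPUs per package: %u (HTT %s)\n",
           (ebx >> 16) & 0xff, (edx & (1u << 28)) ? "set" : "clear");
    return 0;
}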
On 07/27/2015 03:23 PM, Dario Faggioli wrote:
On Mon, 2015-07-27 at 14:01 +0200, Juergen Gross wrote:
On 07/27/2015 01:11 PM, George Dunlap wrote:
Or alternately, if the user wants to give up on the "consolidation"
aspect of virtualization, they can pin vcpus to pcpus and then pass in
the act
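For concreteness, the 1:1 pinning referred to here can be expressed in an xl guest config with the list form of cpus=, where vcpu N is pinned to the Nth entry. A hedged sketch (pcpu numbers illustrative):

# Pin vcpu0->pcpu0, vcpu1->pcpu1, ... so the topology shown to the
# guest can actually match where its vcpus run.
vcpus = 4
cpus  = ["0", "1", "2", "3"]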
On Fri, 2015-07-24 at 13:11 -0400, Boris Ostrovsky wrote:
> On 07/24/2015 12:48 PM, Juergen Gross wrote:
> > On 07/24/2015 06:40 PM, Boris Ostrovsky wrote:
> >> On 07/24/2015 12:10 PM, Juergen Gross wrote:
> >>>
> >>> If we can fiddle with the masks on boot, we could do it in a running
> >>> system
On Mon, 2015-07-27 at 14:01 +0200, Juergen Gross wrote:
> On 07/27/2015 01:11 PM, George Dunlap wrote:
> > Or alternately, if the user wants to give up on the "consolidation"
> > aspect of virtualization, they can pin vcpus to pcpus and then pass in
> > the actual host topology (hyperthreads and a
On Mon, 2015-07-27 at 11:49 +0100, Andrew Cooper wrote:
> On 27/07/15 11:41, George Dunlap wrote:
> > Can you expand a little on this? I'm having trouble figuring out
> > exactly what user-space applications are reading and how they're using
> > it -- and, how they work currently in virtual envir
At 14:01 +0200 on 27 Jul (1438005701), Juergen Gross wrote:
> There would be another solution, of course:
>
> Support hyperthreads in the Xen scheduler via gang scheduling. While
> this is not a simple solution, it is a fair one. Hyperthreads on one
> core can influence each other rather much. Wit
On 07/27/2015 01:11 PM, George Dunlap wrote:
On 07/27/2015 11:54 AM, Juergen Gross wrote:
On 07/27/2015 12:43 PM, George Dunlap wrote:
On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross wrote:
On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:39 PM, Juergen Gross wrote:
I don'
On 07/27/2015 12:54 PM, Andrew Cooper wrote:
On 27/07/15 11:43, George Dunlap wrote:
On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross wrote:
On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:39 PM, Juergen Gross wrote:
I don't say mangling cpuids can't solve the scheduling prob
On 07/27/2015 11:54 AM, Juergen Gross wrote:
> On 07/27/2015 12:43 PM, George Dunlap wrote:
>> On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross wrote:
>>> On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:39 PM, Juergen Gross wrote:
>
>
>
> I don't say manglin
On 27/07/15 11:43, George Dunlap wrote:
> On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross wrote:
>> On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
>>> On 07/24/2015 12:39 PM, Juergen Gross wrote:
I don't say mangling cpuids can't solve the scheduling problem. It
surely can. But
On 07/27/2015 12:43 PM, George Dunlap wrote:
On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross wrote:
On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:39 PM, Juergen Gross wrote:
I don't say mangling cpuids can't solve the scheduling problem. It
surely can. But it can't solve
On 27/07/15 11:41, George Dunlap wrote:
> On Fri, Jul 24, 2015 at 5:09 PM, Konrad Rzeszutek Wilk wrote:
>> On Fri, Jul 24, 2015 at 05:58:29PM +0200, Dario Faggioli wrote:
>>> On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
On 07/24/2015 05:14 PM, Juergen Gross wrote:
> On 07/24/
On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross wrote:
> On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
>>
>> On 07/24/2015 12:39 PM, Juergen Gross wrote:
>>>
>>>
>>>
>>> I don't say mangling cpuids can't solve the scheduling problem. It
>>> surely can. But it can't solve the scheduling problem wit
On Fri, Jul 24, 2015 at 5:09 PM, Konrad Rzeszutek Wilk wrote:
> On Fri, Jul 24, 2015 at 05:58:29PM +0200, Dario Faggioli wrote:
>> On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
>> > On 07/24/2015 05:14 PM, Juergen Gross wrote:
>> > > On 07/24/2015 04:44 PM, Dario Faggioli wrote:
>>
>> >
On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:39 PM, Juergen Gross wrote:
I don't say mangling cpuids can't solve the scheduling problem. It
surely can. But it can't solve the scheduling problem without hiding
information like number of sockets or cores which might be require
On 07/24/2015 06:40 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:10 PM, Juergen Gross wrote:
If we can fiddle with the masks on boot, we could do it in a running
system, too. Another advantage with not relying on cpuid. :-)
I am trying to catch up with this thread so I may have missed it, but
On Fri, Jul 24, 2015 at 04:44:36PM +0200, Dario Faggioli wrote:
> On Fri, 2015-07-24 at 12:28 +0200, Juergen Gross wrote:
> > On 07/23/2015 04:07 PM, Dario Faggioli wrote:
>
> > > FWIW, I was thinking that the kernel were a better place, as Juergen is
> > > saying, while now I'm more convinced tha
On 07/24/2015 12:48 PM, Juergen Gross wrote:
On 07/24/2015 06:40 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:10 PM, Juergen Gross wrote:
If we can fiddle with the masks on boot, we could do it in a running
system, too. Another advantage with not relying on cpuid. :-)
I am trying to catch up
On 07/24/2015 06:40 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:10 PM, Juergen Gross wrote:
If we can fiddle with the masks on boot, we could do it in a running
system, too. Another advantage with not relying on cpuid. :-)
I am trying to catch up with this thread so I may have missed it, but
On 07/24/2015 12:39 PM, Juergen Gross wrote:
I don't say mangling cpuids can't solve the scheduling problem. It
surely can. But it can't solve the scheduling problem without hiding
information like number of sockets or cores which might be required
for license purposes. If we don't care, fine.
On 07/24/2015 12:10 PM, Juergen Gross wrote:
If we can fiddle with the masks on boot, we could do it in a running
system, too. Another advantage with not relying on cpuid. :-)
I am trying to catch up with this thread so I may have missed it, but I
still don't understand why we don't want to
On 07/24/2015 06:29 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Jul 24, 2015 at 06:18:56PM +0200, Juergen Gross wrote:
On 07/24/2015 06:09 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Jul 24, 2015 at 05:58:29PM +0200, Dario Faggioli wrote:
On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
On 0
On Fri, Jul 24, 2015 at 06:18:56PM +0200, Juergen Gross wrote:
> On 07/24/2015 06:09 PM, Konrad Rzeszutek Wilk wrote:
> >On Fri, Jul 24, 2015 at 05:58:29PM +0200, Dario Faggioli wrote:
> >>On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
> >>>On 07/24/2015 05:14 PM, Juergen Gross wrote:
> >>
On 07/24/2015 06:09 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Jul 24, 2015 at 05:58:29PM +0200, Dario Faggioli wrote:
On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
On 07/24/2015 05:14 PM, Juergen Gross wrote:
On 07/24/2015 04:44 PM, Dario Faggioli wrote:
In fact, I think that it is
On Fri, 2015-07-24 at 12:09 -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Jul 24, 2015 at 05:58:29PM +0200, Dario Faggioli wrote:
> > So, just to check if my understanding is correct: you'd like to add an
> > abstraction layer, in Linux, like in generic (or, perhaps, scheduling)
> > code, to hide
On 07/24/2015 05:58 PM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
On 07/24/2015 05:14 PM, Juergen Gross wrote:
On 07/24/2015 04:44 PM, Dario Faggioli wrote:
In fact, I think that it is the topology, i.e., what comes from MSRs,
that needs to adapt, and foll
On Fri, Jul 24, 2015 at 05:58:29PM +0200, Dario Faggioli wrote:
> On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
> > On 07/24/2015 05:14 PM, Juergen Gross wrote:
> > > On 07/24/2015 04:44 PM, Dario Faggioli wrote:
>
> > >> In fact, I think that it is the topology, i.e., what comes from MS
On Fri, 2015-07-24 at 17:14 +0200, Juergen Gross wrote:
> On 07/24/2015 04:44 PM, Dario Faggioli wrote:
> > Ok. And I already have a question (as I lost track of things a bit).
> > What you just said about ACPI tables is certainly true for baremetal and
> > HVM guests, but for PV? At the time I wa
On 07/23/2015 03:25 AM, Jan Beulich wrote:
On 22.07.15 at 20:10, wrote:
I don't think this is currently doable with what we have for CPUID
support in xl syntax. I am pretty sure we need to at least be able to
specify all leaf 4's indexes. And we can't.
BTW, irrespective of this particular prob
On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
> On 07/24/2015 05:14 PM, Juergen Gross wrote:
> > On 07/24/2015 04:44 PM, Dario Faggioli wrote:
> >> In fact, I think that it is the topology, i.e., what comes from MSRs,
> >> that needs to adapt, and follow vNUMA, as much as possible. Do we
On 07/24/2015 05:14 PM, Juergen Gross wrote:
On 07/24/2015 04:44 PM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 12:28 +0200, Juergen Gross wrote:
On 07/23/2015 04:07 PM, Dario Faggioli wrote:
FWIW, I was thinking that the kernel were a better place, as Juergen is
saying, while now I'm more
On 07/24/2015 04:44 PM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 12:28 +0200, Juergen Gross wrote:
On 07/23/2015 04:07 PM, Dario Faggioli wrote:
FWIW, I was thinking that the kernel were a better place, as Juergen is
saying, while now I'm more convinced that tools would be more
appropriate
On Fri, 2015-07-24 at 12:28 +0200, Juergen Gross wrote:
> On 07/23/2015 04:07 PM, Dario Faggioli wrote:
> > FWIW, I was thinking that the kernel were a better place, as Juergen is
> > saying, while now I'm more convinced that tools would be more
> > appropriate, as Boris is saying.
>
> I've colle
On 07/23/2015 04:07 PM, Dario Faggioli wrote:
On Thu, 2015-07-23 at 06:43 +0200, Juergen Gross wrote:
On 07/22/2015 04:44 PM, Boris Ostrovsky wrote:
On 07/22/2015 10:09 AM, Juergen Gross wrote:
I think we have 2 possible solutions:
1. Try to handle this all in the hypervisor via CPUID mangl
On 07/23/2015 04:07 PM, Dario Faggioli wrote:
On Thu, 2015-07-23 at 06:43 +0200, Juergen Gross wrote:
On 07/22/2015 04:44 PM, Boris Ostrovsky wrote:
On 07/22/2015 10:09 AM, Juergen Gross wrote:
I think we have 2 possible solutions:
1. Try to handle this all in the hypervisor via CPUID mangl
On Thu, 2015-07-23 at 06:43 +0200, Juergen Gross wrote:
> On 07/22/2015 04:44 PM, Boris Ostrovsky wrote:
> > On 07/22/2015 10:09 AM, Juergen Gross wrote:
> I think we have 2 possible solutions:
>
> 1. Try to handle this all in the hypervisor via CPUID mangling.
>
> 2. Add
On Wed, 2015-07-22 at 14:10 -0400, Boris Ostrovsky wrote:
> On 07/22/2015 11:49 AM, Dario Faggioli wrote:
> > In fact, of course there are other issues (like the ones you're
> > mentioning, caused by this), but it's only with vNUMA that I see 2 out
> > of 4 vcpus completely lost! :-/
>
> My guess
On 23/07/15 05:43, Juergen Gross wrote:
> On 07/22/2015 04:44 PM, Boris Ostrovsky wrote:
>> On 07/22/2015 10:09 AM, Juergen Gross wrote:
>>> On 07/22/2015 03:58 PM, Boris Ostrovsky wrote:
On 07/22/2015 09:50 AM, Juergen Gross wrote:
> On 07/22/2015 03:36 PM, Dario Faggioli wrote:
>> On
>>> On 23.07.15 at 06:43, wrote:
> Hmm, I didn't think of user processes. Are you aware of cases where they
> are to be considered?
Why wouldn't a sophisticated user mode program attempt to adjust
certain memory objects' sizes based on cache size?
Jan
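A concrete example of the kind of program Jan has in mind is one that sizes its buffers from the CPUID-reported cache hierarchy, e.g. by walking leaf 4 (deterministic cache parameters, the same leaf whose sub-leaves come up elsewhere in the thread). A rough sketch, assuming GCC/clang on x86 with <cpuid.h> (illustrative only):

#include <stdio.h>
#include <cpuid.h>

/* Walk CPUID leaf 4 sub-leaves (deterministic cache parameters) and
 * report the largest cache found -- the sort of value a user-mode
 * program might use to size its working set.                        */
int main(void)
{
    unsigned int eax, ebx, ecx, edx, idx;
    unsigned long best = 0;

    for (idx = 0; idx < 32; idx++) {
        __cpuid_count(4, idx, eax, ebx, ecx, edx);
        if ((eax & 0x1f) == 0)          /* cache type 0: no more caches */
            break;
        unsigned long ways  = ((ebx >> 22) & 0x3ff) + 1;
        unsigned long parts = ((ebx >> 12) & 0x3ff) + 1;
        unsigned long line  = (ebx & 0xfff) + 1;
        unsigned long sets  = (unsigned long)ecx + 1;
        unsigned long size  = ways * parts * line * sets;
        if (size > best)
            best = size;
    }
    printf("largest cache reported by CPUID leaf 4: %lu bytes\n", best);
    return 0;
}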
>>> On 22.07.15 at 20:10, wrote:
> I don't think this is currently doable with what we have for CPUID
> support in xl syntax. I am pretty sure we need to at least be able to
> specify all leaf 4's indexes. And we can't.
>
> BTW, irrespective of this particular problem, adding support for indexe
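For readers who have not looked at the syntax being criticised: xl's cpuid= option takes either a libxl-style list of named feature overrides or xend-style per-leaf register bit-strings, and neither form carries a sub-leaf index, which is exactly what leaf 4 would need. A hedged sketch of the existing forms only (values and key names illustrative):

# libxl-style: named feature bits, e.g. hide hyperthreading.
cpuid = "host,htt=0"

# xend-style: a 32-character bit-string per register of a leaf; the
# string below would force leaf 1 EBX[23:16] to 1.  Note there is no
# field for a sub-leaf index, hence the problem with leaf 4.
# cpuid = [ "0x1:ebx=xxxxxxxx00000001xxxxxxxxxxxxxxxx" ]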
On 07/22/2015 04:44 PM, Boris Ostrovsky wrote:
On 07/22/2015 10:09 AM, Juergen Gross wrote:
On 07/22/2015 03:58 PM, Boris Ostrovsky wrote:
On 07/22/2015 09:50 AM, Juergen Gross wrote:
On 07/22/2015 03:36 PM, Dario Faggioli wrote:
On Tue, 2015-07-21 at 16:00 -0400, Boris Ostrovsky wrote:
On 0
On 07/22/2015 11:49 AM, Dario Faggioli wrote:
On Wed, 2015-07-22 at 11:32 -0400, Boris Ostrovsky wrote:
On 07/22/2015 10:50 AM, Dario Faggioli wrote:
Yep. Exactly. As Boris says, this is a generic scheduling issue, although
it's true that it's only (as far as I can tell) with vNUMA that it bites
u
On Wed, 2015-07-22 at 11:32 -0400, Boris Ostrovsky wrote:
> On 07/22/2015 10:50 AM, Dario Faggioli wrote:
> > Yep. Exactly. As Boris says, this is a generic scheduling issue, although
> > it's true that it's only (as far as I can tell) with vNUMA that it bites
> > us so hard...
>
> I am not sure tha
On 07/22/2015 10:50 AM, Dario Faggioli wrote:
On Wed, 2015-07-22 at 16:09 +0200, Juergen Gross wrote:
On 07/22/2015 03:58 PM, Boris Ostrovsky wrote:
What if I configure a guest to follow HW topology? I.e. I pin VCPUs to
appropriate cores/threads? With elfnote I am stuck with disabled topology.
On Wed, 2015-07-22 at 16:09 +0200, Juergen Gross wrote:
> On 07/22/2015 03:58 PM, Boris Ostrovsky wrote:
> > What if I configure a guest to follow HW topology? I.e. I pin VCPUs to
> > appropriate cores/threads? With elfnote I am stuck with disabled topology.
>
> Add an option to do exactly that:
On 07/22/2015 10:09 AM, Juergen Gross wrote:
On 07/22/2015 03:58 PM, Boris Ostrovsky wrote:
On 07/22/2015 09:50 AM, Juergen Gross wrote:
On 07/22/2015 03:36 PM, Dario Faggioli wrote:
On Tue, 2015-07-21 at 16:00 -0400, Boris Ostrovsky wrote:
On 07/20/2015 10:43 AM, Boris Ostrovsky wrote:
On 0
On 07/22/2015 03:58 PM, Boris Ostrovsky wrote:
On 07/22/2015 09:50 AM, Juergen Gross wrote:
On 07/22/2015 03:36 PM, Dario Faggioli wrote:
On Tue, 2015-07-21 at 16:00 -0400, Boris Ostrovsky wrote:
On 07/20/2015 10:43 AM, Boris Ostrovsky wrote:
On 07/20/2015 10:09 AM, Dario Faggioli wrote:
I
On 07/22/2015 09:50 AM, Juergen Gross wrote:
On 07/22/2015 03:36 PM, Dario Faggioli wrote:
On Tue, 2015-07-21 at 16:00 -0400, Boris Ostrovsky wrote:
On 07/20/2015 10:43 AM, Boris Ostrovsky wrote:
On 07/20/2015 10:09 AM, Dario Faggioli wrote:
I'll need to see how LLC IDs are calculated, prob
On 07/22/2015 03:36 PM, Dario Faggioli wrote:
On Tue, 2015-07-21 at 16:00 -0400, Boris Ostrovsky wrote:
On 07/20/2015 10:43 AM, Boris Ostrovsky wrote:
On 07/20/2015 10:09 AM, Dario Faggioli wrote:
I'll need to see how LLC IDs are calculated, probably also from some
CPUID bits.
No, can't d
On Tue, 2015-07-21 at 16:00 -0400, Boris Ostrovsky wrote:
> On 07/20/2015 10:43 AM, Boris Ostrovsky wrote:
> > On 07/20/2015 10:09 AM, Dario Faggioli wrote:
> > I'll need to see how LLC IDs are calculated, probably also from some
> > CPUID bits.
>
>
> No, can't do this: LLC is calculated from C
On 07/20/2015 10:43 AM, Boris Ostrovsky wrote:
On 07/20/2015 10:09 AM, Dario Faggioli wrote:
On Fri, 2015-07-17 at 14:17 -0400, Boris Ostrovsky wrote:
On 07/17/2015 03:27 AM, Dario Faggioli wrote:
In the meanwhile, what should we do? Document this? How? "don't use
vNUMA with PV guest in SMT en
On 07/20/2015 10:09 AM, Dario Faggioli wrote:
On Fri, 2015-07-17 at 14:17 -0400, Boris Ostrovsky wrote:
On 07/17/2015 03:27 AM, Dario Faggioli wrote:
In the meanwhile, what should we do? Document this? How? "don't use
vNUMA with PV guest in SMT enabled systems" seems a bit harsh... Is
there a w
On Fri, 2015-07-17 at 14:17 -0400, Boris Ostrovsky wrote:
> On 07/17/2015 03:27 AM, Dario Faggioli wrote:
> > In the meanwhile, what should we do? Document this? How? "don't use
> > vNUMA with PV guest in SMT enabled systems" seems a bit harsh... Is
> > there a workaround we can put in place/sugge
On 07/17/2015 03:27 AM, Dario Faggioli wrote:
On Fri, 2015-07-17 at 07:09 +0100, Jan Beulich wrote:
On 16.07.15 at 18:59, wrote:
And in general (both for PV and HVM) --- is there any reason to expose
CPU topology at all? I can see it being useful if VCPUs are pinned but
if they are not then it
On 16/07/15 17:59, Boris Ostrovsky wrote:
> On 07/16/2015 12:39 PM, Andrew Cooper wrote:
>> On 16/07/15 17:29, Jan Beulich wrote:
>> On 16.07.15 at 17:50, wrote:
Can't we set leaf 1's EBX[32:16] to 1?
>
> (I obviously fat-fingered this --- I meant EBX[23:16])
>
>>> I don't think we should
On Fri, Jul 17, 2015 at 09:27:55AM +0200, Dario Faggioli wrote:
> On Fri, 2015-07-17 at 07:09 +0100, Jan Beulich wrote:
> > >>> On 16.07.15 at 18:59, wrote:
> > > And in general (both for PV and HVM) --- is there any reason to expose
> > > CPU topology at all? I can see it being useful if VCPUs a
>>> On 17.07.15 at 09:27, wrote:
> In the meanwhile, what should we do? Document this? How? "don't use
> vNUMA with PV guest in SMT enabled systems" seems a bit harsh... Is
> there a workaround we can put in place/suggest?
Use SLE / openSUSE kernels ;-) ?
Jan
On Fri, 2015-07-17 at 07:09 +0100, Jan Beulich wrote:
> >>> On 16.07.15 at 18:59, wrote:
> > And in general (both for PV and HVM) --- is there any reason to expose
> > CPU topology at all? I can see it being useful if VCPUs are pinned but
> > if they are not then it can make performance worse.
>
>>> On 16.07.15 at 18:59, wrote:
> And in general (both for PV and HVM) --- is there any reason to expose
> CPU topology at all? I can see it being useful if VCPUs are pinned but
> if they are not then it can make performance worse.
Indeed - that's what our kernels have been doing for years, an
On 07/16/2015 12:39 PM, Andrew Cooper wrote:
On 16/07/15 17:29, Jan Beulich wrote:
On 16.07.15 at 17:50, wrote:
Can't we set leaf 1's EBX[32:16] to 1?
(I obviously fat-fingered this --- I meant EBX[23:16])
I don't think we should partially overwrite the relevant parts of
CPUID output - eit
On 16/07/15 17:29, Jan Beulich wrote:
On 16.07.15 at 17:50, wrote:
>> Can't we set leaf 1's EBX[32:16] to 1?
> I don't think we should partially overwrite the relevant parts of
> CPUID output - either all or nothing (so that things at least
> remain consistent).
Also, there are no masking/ov
>>> On 16.07.15 at 17:50, wrote:
> Can't we set leaf 1's EBX[32:16] to 1?
I don't think we should partially overwrite the relevant parts of
CPUID output - either all or nothing (so that things at least
remain consistent).
Jan
On 07/16/2015 11:45 AM, Andrew Cooper wrote:
On 16/07/15 16:25, Wei Liu wrote:
On Thu, Jul 16, 2015 at 11:56:50AM +0100, Andrew Cooper wrote:
On 16/07/15 11:47, Jan Beulich wrote:
On 16.07.15 at 12:32, wrote:
root@test:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 si
On 16/07/15 16:25, Wei Liu wrote:
> On Thu, Jul 16, 2015 at 11:56:50AM +0100, Andrew Cooper wrote:
>> On 16/07/15 11:47, Jan Beulich wrote:
>> On 16.07.15 at 12:32, wrote:
root@test:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 475 MB
nod
On Thu, Jul 16, 2015 at 12:32:42PM +0200, Dario Faggioli wrote:
> Hey,
>
> This started on IRC, but it's actually appropriate to have the
> conversation here.
>
> I just discovered an issue with vNUMA, when PV guests are used. In fact,
> creating a 4 vCPUs PV guest, and making up things so that a
On Thu, Jul 16, 2015 at 11:56:50AM +0100, Andrew Cooper wrote:
> On 16/07/15 11:47, Jan Beulich wrote:
> On 16.07.15 at 12:32, wrote:
> >> root@test:~# numactl --hardware
> >> available: 2 nodes (0-1)
> >> node 0 cpus: 0 1
> >> node 0 size: 475 MB
> >> node 0 free: 382 MB
> >> node 1 cpus: 2
On 16/07/15 11:47, Jan Beulich wrote:
On 16.07.15 at 12:32, wrote:
>> root@test:~# numactl --hardware
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1
>> node 0 size: 475 MB
>> node 0 free: 382 MB
>> node 1 cpus: 2 3
>> node 1 size: 495 MB
>> node 1 free: 475 MB
>> node distances:
>> node 0
>>> On 16.07.15 at 12:32, wrote:
> root@test:~# numactl --hardware
> available: 2 nodes (0-1)
> node 0 cpus: 0 1
> node 0 size: 475 MB
> node 0 free: 382 MB
> node 1 cpus: 2 3
> node 1 size: 495 MB
> node 1 free: 475 MB
> node distances:
> node   0   1
>   0:  10  10
>   1:  20  10
>
> root@tes
Hey,
This started on IRC, but it's actually appropriate to have the
conversation here.
I just discovered an issue with vNUMA, when PV guests are used. In fact,
creating a 4 vCPUs PV guest, and making up things so that all the 4
vCPUs should be busy, I see this:
root@Zhaman:~# xl vcpu-list test
N
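For context, the kind of configuration being described (a 4-vCPU PV guest split across two virtual NUMA nodes) looks roughly like the xl config fragment below; node sizes, distances and pnode assignments are illustrative rather than the actual values used on Zhaman:

# Illustrative fragment only: 4 vCPUs, two vnodes of 512 MB each,
# mapped to physical NUMA nodes 0 and 1.
vcpus  = 4
memory = 1024
vnuma  = [ [ "pnode=0", "size=512", "vcpus=0-1", "vdistances=10,20" ],
           [ "pnode=1", "size=512", "vcpus=2-3", "vdistances=20,10" ] ]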