On Thu, 2019-04-11 at 09:16 +0200, Juergen Gross wrote:
> On 11/04/2019 02:34, Dario Faggioli wrote:
> > Also, it's Phoronix again. I don't especially love it, but I'm still
> > working on convincing our own internal automated benchmarking tool
> > (which I like a lot more :-) ) to be a good fr...
On 11/04/2019 02:34, Dario Faggioli wrote:
> On Fri, 2019-03-29 at 19:16 +0100, Dario Faggioli wrote:
>> On Fri, 2019-03-29 at 16:08 +0100, Juergen Gross wrote:
>>> I have done some very basic performance testing: on a 4 cpu system
>>> (2 cores with 2 threads each) I did a "make -j 4" for building the
>>> Xen hypervisor. ...
On Fri, 2019-03-29 at 19:16 +0100, Dario Faggioli wrote:
> On Fri, 2019-03-29 at 16:08 +0100, Juergen Gross wrote:
> > I have done some very basic performance testing: on a 4 cpu system
> > (2 cores with 2 threads each) I did a "make -j 4" for building the
> > Xen hypervisor. With ... This test has ...
On 01/04/2019 09:10, Dario Faggioli wrote:
> On Mon, 2019-04-01 at 08:49 +0200, Juergen Gross wrote:
>> On 01/04/2019 08:41, Jan Beulich wrote:
>>> One further general question came to mind: How about also having
>>> "sched-granularity=thread" (or "...=none") to retain current
>>> behavior, at least to have an easy way to compare effects if ...
>>> On 01.04.19 at 08:49, wrote:
> On 01/04/2019 08:41, Jan Beulich wrote:
> On 29.03.19 at 16:08, wrote:
>>> Via boot parameter sched_granularity=core (or sched_granularity=socket)
>>> it is possible to change the scheduling granularity from thread (the
>>> default) to either whole cores or
On Mon, 2019-04-01 at 08:49 +0200, Juergen Gross wrote:
> On 01/04/2019 08:41, Jan Beulich wrote:
> > One further general question came to mind: How about also having
> > "sched-granularity=thread" (or "...=none") to retain current
> > behavior, at least to have an easy way to compare effects if ...
>
On 01/04/2019 08:41, Jan Beulich wrote:
On 29.03.19 at 16:08, wrote:
>> Via boot parameter sched_granularity=core (or sched_granularity=socket)
>> it is possible to change the scheduling granularity from thread (the
>> default) to either whole cores or even sockets.
>
> One further general question came to mind: How about also having
> "sched-granularity=thread" (or "...=none") to retain current
> behavior, at least to have an easy way to compare effects if ...
>>> On 29.03.19 at 16:08, wrote:
> Via boot parameter sched_granularity=core (or sched_granularity=socket)
> it is possible to change the scheduling granularity from thread (the
> default) to either whole cores or even sockets.
One further general question came to mind: How about also having
"sched-granularity=thread" (or "...=none") to retain current
behavior, at least to have an easy way to compare effects if ...
On 29/03/2019 19:16, Dario Faggioli wrote:
> Even if I've only skimmed through it... cool series! :-D
>
> On Fri, 2019-03-29 at 16:08 +0100, Juergen Gross wrote:
>>
>> I have done some very basic performance testing: on a 4 cpu system
>> (2 cores with 2 threads each) I did a "make -j 4" for building the
>> Xen hypervisor. ...
Makes sense. The reason I ask is that we currently have to disable HT
due to L1TF until a scheduler change is made to address the issue. The
#1 question everyone asks is what that will do to performance, so any
info on that topic, and on how a patch like this addresses the L1TF
issue, is most helpful.
Even if I've only skimmed through it... cool series! :-D
On Fri, 2019-03-29 at 16:08 +0100, Juergen Gross wrote:
>
> I have done some very basic performance testing: on a 4 cpu system
> (2 cores with 2 threads each) I did a "make -j 4" for building the
> Xen hypervisor. With ... This test has been ...
On 29/03/2019 17:39, Rian Quinn wrote:
> Out of curiosity, has there been any research done on whether or not
> it makes more sense to just disable CPU threading with respect to
> overall performance? In some of the testing that we did with OpenXT,
> we noticed in some of our tests a performance increase when
> hyperthreading was disabled. ...
Out of curiosity, has there been any research done on whether or not
it makes more sense to just disable CPU threading with respect to
overall performance? In some of the testing that we did with OpenXT,
we noticed in some of our tests a performance increase when
hyperthreading was disabled. I would ...
On Fri, 2019-03-29 at 18:00 +0100, Juergen Gross wrote:
> On 29/03/2019 17:56, Dario Faggioli wrote:
> > As said by Juergen, the two approaches (and hence the structure of
> > the series) are quite different. This series is more generic, acts
> > on the common scheduler code and logic. It's ...
On 29/03/2019 17:56, Dario Faggioli wrote:
> On Fri, 2019-03-29 at 16:46 +0100, Juergen Gross wrote:
>> On 29/03/2019 16:39, Jan Beulich wrote:
>> On 29.03.19 at 16:08, wrote:
>>> This is achieved by switching the scheduler to no longer see
>>> vcpus as the primary object to schedule, but "schedule items". ...
On Fri, 2019-03-29 at 16:46 +0100, Juergen Gross wrote:
> On 29/03/2019 16:39, Jan Beulich wrote:
> > > > > On 29.03.19 at 16:08, wrote:
> > > This is achieved by switching the scheduler to no longer see
> > > vcpus as the primary object to schedule, but "schedule items".
> > > Each schedule ...
On 29/03/2019 16:39, Jan Beulich wrote:
On 29.03.19 at 16:08, wrote:
>> Via boot parameter sched_granularity=core (or sched_granularity=socket)
>> it is possible to change the scheduling granularity from thread (the
>> default) to either whole cores or even sockets.
>>
>> All logical cpus (threads) of the core or socket are always
>> scheduled ...
>>> On 29.03.19 at 16:08, wrote:
> Via boot parameter sched_granularity=core (or sched_granularity=socket)
> it is possible to change the scheduling granularity from thread (the
> default) to either whole cores or even sockets.
>
> All logical cpus (threads) of the core or socket are always
> scheduled ...
On 29/03/2019 16:08, Juergen Gross wrote:
> This series is very RFC
>
> Add support for core- and socket-scheduling in the Xen hypervisor.
>
> Via boot parameter sched_granularity=core (or sched_granularity=socket)
> it is possible to change the scheduling granularity from thread (the
> default) to either whole cores or even sockets. ...