On 10/05/2019 12:29, Dario Faggioli wrote:
> On Fri, 2019-05-10 at 11:00 +0200, Juergen Gross wrote:
>> On 10/05/2019 10:53, Jan Beulich wrote:
>>>>> On 08.05.19 at 16:36, wrote:
>>>> With sched-gran=core or sched-gran=socket offlining a single cpu
>>>> results in moving the complete core or socket to cpupool_free_cpus and
>>>> [...]
On 10/05/2019 10:53, Jan Beulich wrote:
>>> On 08.05.19 at 16:36, wrote:
>> On 06/05/2019 12:01, Jan Beulich wrote:
>>>>> On 06.05.19 at 11:23, wrote:
>>>> And that was mentioned in the cover letter: cpu hotplug is not yet
>>>> handled (hence the RFC status of the series).
>>>>
>>>> When cpu hotplug is being added it might be appropriate to switch the
>>>> scheme as you suggested. [...]
On 06/05/2019 15:14, Jan Beulich wrote:
>>> On 06.05.19 at 14:23, wrote:
>> On 06/05/2019 13:58, Jan Beulich wrote:
>>>>> On 06.05.19 at 12:20, wrote:
>>>> On 06/05/2019 12:01, Jan Beulich wrote:
>>>>>>> On 06.05.19 at 11:23, wrote:
>>>>>> On 06/05/2019 10:57, Jan Beulich wrote:
>>>>>>> [...]. Yet then I'm a little puzzled by its use here in the first
>>>>>>> place. Generally I think for_each_cpu() uses in __init functions
>>>>>>> are [...]
On 06/05/2019 10:57, Jan Beulich wrote:
>>> On 06.05.19 at 08:56, wrote:
>> void scheduler_percpu_init(unsigned int cpu)
>> {
>>     struct scheduler *sched = per_cpu(scheduler, cpu);
>>     struct sched_resource *sd = per_cpu(sched_res, cpu);
>> +
>> [...]
On 06/05/2019 10:57, Jan Beulich wrote:
>>> On 06.05.19 at 08:56, wrote:
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -1701,6 +1701,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>      printk(XENLOG_INFO "Parked %u CPUs\n", num_parked);
>>      smp_cpus_done();
>>
>> +    scheduler_smp_init();
>> +
Add a scheduling granularity enum ("thread", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"thread"; this can be modified via the new boot parameter (x86 only)
"sched_granularity".

According to the selected granularity sched_granularity is set after
all c[...]