Hi Preeti
On 23 October 2013 11:50, Preeti U Murthy wrote:
> Hi Peter
>
> On 10/23/2013 03:41 AM, Peter Zijlstra wrote:
>> This nohz stuff really needs to be re-thought and made more scalable --
>> its a royal pain :/
>
> Why not do something like the below instead? It does the following.
>
> Th
Hi Srivatsa,
I can try to run some of our stress tests on your patches. Have you
got a git tree that I can pull?
Regards,
Vincent
On 8 February 2013 19:09, Srivatsa S. Bhat wrote:
> On 02/08/2013 10:14 PM, Srivatsa S. Bhat wrote:
>> On 02/08/2013 09:11 PM, Russell King - ARM Linux wrote:
>>> O
On 15 February 2013 20:40, Srivatsa S. Bhat wrote:
> Hi Vincent,
>
> On 02/15/2013 06:58 PM, Vincent Guittot wrote:
>> Hi Srivatsa,
>>
>> I have run some tests with your branch (thanks Paul for the git tree)
>> and you will find the results below.
>>
>
>
On 18 February 2013 11:51, Srivatsa S. Bhat wrote:
> On 02/18/2013 04:04 PM, Srivatsa S. Bhat wrote:
>> On 02/18/2013 03:54 PM, Vincent Guittot wrote:
>>> On 15 February 2013 20:40, Srivatsa S. Bhat wrote:
>>>> Hi Vincent,
>>>>
>>&
On 18 February 2013 16:30, Steven Rostedt wrote:
> On Mon, 2013-02-18 at 11:58 +0100, Vincent Guittot wrote:
>
>> My tests have been done without cpuidle because I have some issues
>> with the function tracer and cpuidle
>>
>> But the cpu hotplug and cpuidle work wel
On 18 February 2013 20:53, Steven Rostedt wrote:
> On Mon, 2013-02-18 at 17:50 +0100, Vincent Guittot wrote:
>
>> Yes, for sure.
>> The problem is rather linked to cpuidle and the function tracer.
>>
>> cpu hotplug and function tracer work when cpuidle is disabled.
>>
Hi Srivatsa,
I have run some tests with genload on my ARM platform, but even with
the mainline the cpu_down is quite short and stable (around 4 ms)
with 5 or 2 online cores. The duration is similar with your patches.
Maybe I have not used the right option for genload? I have used
genload -m 10 w
res for the affected architectures.
>
> Cc: Dietmar Eggemann
> Cc: Peter Zijlstra
> Cc: Ingo Molnar
> Cc: Benjamin Herrenschmidt
> Cc: Vincent Guittot
Acked-by: Vincent Guittot
> Signed-off-by: Guenter Roeck
> ---
> v2: Fix problem in all affected architectures with a
On Monday, 5 June 2023 at 10:07:16 (+0200), Tobias Huschle wrote:
> On 2023-05-16 15:36, Vincent Guittot wrote:
> > On Mon, 15 May 2023 at 13:46, Tobias Huschle wrote:
> > >
> > > The current load balancer implementation implies that scheduler
> > >
On Wed, 12 Jul 2023 at 16:11, Peter Zijlstra wrote:
>
> Hi
>
> Thomas just tripped over the x86 topology setup creating a 'DIE' domain
> for the package mask :-)
Maybe a link to the change that triggered this patch could be useful.
>
> Since these names are SCHED_DEBUG only, rename them.
> I don'
On Mon, 15 May 2023 at 13:46, Tobias Huschle wrote:
>
> The current load balancer implementation implies that scheduler groups,
> within the same domain, all host the same number of CPUs. This is
> reflected in the condition, that a scheduler group, which is load
> balancing and classified as havi
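The assumption described in the snippet above, that all groups in a domain host the same number of CPUs, can be illustrated with a toy model. This is hedged, illustrative C, not kernel code; `toy_group`, `toy_idle_cpus`, and `naive_should_pull` are hypothetical names invented for this sketch. It shows how comparing raw idle-CPU counts misleads once group sizes differ:

```c
#include <assert.h>

/* Toy model (not kernel code): a scheduler group reduced to a CPU count
 * and a number of running tasks. */
struct toy_group {
    int ncpus;
    int running;
};

/* Idle CPUs in a group. */
static int toy_idle_cpus(const struct toy_group *g)
{
    int idle = g->ncpus - g->running;
    return idle > 0 ? idle : 0;
}

/* A naive balancing test that implicitly assumes equally sized groups:
 * pull only when the busiest group has fewer idle CPUs than the local one.
 * With 4 CPUs at 50% load locally and 8 CPUs at 62.5% load remotely, the
 * proportionally busier group still reports more idle CPUs, so nothing
 * is pulled. */
static int naive_should_pull(const struct toy_group *local,
                             const struct toy_group *busiest)
{
    return toy_idle_cpus(busiest) < toy_idle_cpus(local);
}
```

Here `naive_should_pull` on a local group of {4 CPUs, 2 running} against a busiest group of {8 CPUs, 5 running} returns 0 even though the 8-CPU group is proportionally busier, which is the kind of size asymmetry the patch description is about.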
PI to notify an idle CPU in TIF_POLLING
> mode of pending IPI
> x86/thread_info: Introduce TIF_NOTIFY_IPI flag
>
> K Prateek Nayak (10):
> arm/thread_info: Introduce TIF_NOTIFY_IPI flag
> alpha/thread_info: Introduce TIF_NOTIFY_IPI flag
> openrisc/thread_info: Introd
On Wed, 6 Mar 2024 at 11:18, K Prateek Nayak wrote:
>
> Hello Vincent,
>
> Thank you for taking a look at the series.
>
> On 3/6/2024 3:29 PM, Vincent Guittot wrote:
> > Hi Prateek,
> >
> > Adding Julia who could be interested in this patchset. Your patchset
On Tue, 19 Mar 2024 at 10:08, Tobias Huschle wrote:
>
> On 2024-03-18 15:45, Luis Machado wrote:
> > On 3/14/24 13:45, Tobias Huschle wrote:
> >> On Fri, Mar 08, 2024 at 03:11:38PM +, Luis Machado wrote:
> >>> On 2/28/24 16:10, Tobias Huschle wrote:
>
> Questions:
> 1. The kwork
On Wed, 20 Mar 2024 at 08:04, Tobias Huschle wrote:
>
> On Tue, Mar 19, 2024 at 02:41:14PM +0100, Vincent Guittot wrote:
> > On Tue, 19 Mar 2024 at 10:08, Tobias Huschle wrote:
> > >
...
> > >
> > > Haven't seen that one yet. Unfortunately, it d
On Thu, 21 Mar 2024 at 13:18, Tobias Huschle wrote:
>
> On Wed, Mar 20, 2024 at 02:51:00PM +0100, Vincent Guittot wrote:
> > On Wed, 20 Mar 2024 at 08:04, Tobias Huschle wrote:
> > > There was no guarantee of course. place_entity was reducing the vruntime
> > >
On Mon, 12 Apr 2021 at 11:37, Mel Gorman wrote:
>
> On Mon, Apr 12, 2021 at 11:54:36AM +0530, Srikar Dronamraju wrote:
> > * Gautham R. Shenoy [2021-04-02 11:07:54]:
> >
> > >
> > > To remedy this, this patch proposes that the LLC be moved to the MC
> > > level which is a group of cores in one ha
On Mon, 12 Apr 2021 at 17:24, Mel Gorman wrote:
>
> On Mon, Apr 12, 2021 at 02:21:47PM +0200, Vincent Guittot wrote:
> > > > Peter, Valentin, Vincent, Mel, etal
> > > >
> > > > On architectures where we have multiple levels of cache access latencies
>
On Tue, 10 Aug 2021 at 16:41, Ricardo Neri wrote:
>
> When deciding to pull tasks in ASYM_PACKING, it is necessary to check
> the idle state not only of the destination CPU, dst_cpu, but also of
> its SMT siblings.
>
> If dst_cpu is idle but its SMT siblings are busy, performance suffers
> if
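The sibling check described in the commit message above can be sketched as a toy model. This is hedged, illustrative C, not the kernel's implementation; `toy_core` and `whole_core_idle` are hypothetical names. Under ASYM_PACKING, dst_cpu being idle is only meaningful when the whole core is idle:

```c
#include <assert.h>

#define MAX_SIBLINGS 4

/* Toy model (not kernel code): a physical core and the number of tasks
 * running on each of its SMT siblings. Sibling 0 plays dst_cpu. */
struct toy_core {
    int nr_siblings;
    int running[MAX_SIBLINGS];
};

/* dst_cpu being idle is not enough: if any SMT sibling is busy, a task
 * pulled onto dst_cpu would compete with it for core resources. */
static int whole_core_idle(const struct toy_core *c)
{
    for (int i = 0; i < c->nr_siblings; i++)
        if (c->running[i] > 0)
            return 0;
    return 1;
}
```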
On Fri, 27 Aug 2021 at 16:50, Peter Zijlstra wrote:
>
> On Fri, Aug 27, 2021 at 12:13:42PM +0200, Vincent Guittot wrote:
> > > +/**
> > > + * asym_smt_can_pull_tasks - Check whether the load balancing CPU can
> > > pull tasks
> > > + * @dst_cp
On Fri, 27 Aug 2021 at 21:45, Ricardo Neri wrote:
>
> On Fri, Aug 27, 2021 at 12:13:42PM +0200, Vincent Guittot wrote:
> > On Tue, 10 Aug 2021 at 16:41, Ricardo Neri wrote:
> > > @@ -9540,6 +9629,12 @@ static struct rq *find_busiest_queu
On Sat, 11 Sept 2021 at 03:19, Ricardo Neri wrote:
>
> When deciding to pull tasks in ASYM_PACKING, it is necessary to check
> the idle state not only of the destination CPU, dst_cpu, but also of
> its SMT siblings.
>
> If dst_cpu is idle but its SMT siblings are busy, performance suffers
> if
On Fri, 17 Sept 2021 at 03:01, Ricardo Neri wrote:
>
> On Wed, Sep 15, 2021 at 05:43:44PM +0200, Vincent Guittot wrote:
> > On Sat, 11 Sept 2021 at 03:19, Ricardo Neri wrote:
> > >
> > > When deciding to pull tasks in ASYM_PACKING, it is necessary not on
On Fri, 17 Sept 2021 at 03:01, Ricardo Neri wrote:
>
> On Wed, Sep 15, 2021 at 05:43:44PM +0200, Vincent Guittot wrote:
> > On Sat, 11 Sept 2021 at 03:19, Ricardo Neri wrote:
> > >
> > > When deciding to pull tasks in ASYM_PACKING, it is necessary not on
ndes (Google)
> Reviewed-by: Len Brown
> Originally-by: Peter Zijlstra (Intel)
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Ricardo Neri
Reviewed-by: Vincent Guittot
> ---
> Changes since v4:
> * None
>
> Changes since v3:
> * Clear the flags of
(Google)
> Reviewed-by: Len Brown
> Signed-off-by: Ricardo Neri
Reviewed-by: Vincent Guittot
> ---
> Changes since v4:
> * None
>
> Changes since v3:
> * Further rewording of the commit message. (Len)
>
> Changes since v2:
> * Reworded the commit message for
mar Eggemann
> Cc: Mel Gorman
> Cc: Quentin Perret
> Cc: Rafael J. Wysocki
> Cc: Srinivas Pandruvada
> Cc: Steven Rostedt
> Cc: Tim Chen
> Reviewed-by: Joel Fernandes (Google)
> Reviewed-by: Len Brown
> Originally-by: Peter Zijlstra (Intel)
> Signed-off-by: Peter Zijl
Tim Chen
> Reviewed-by: Joel Fernandes (Google)
> Reviewed-by: Len Brown
> Co-developed-by: Peter Zijlstra (Intel)
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Ricardo Neri
Reviewed-by: Vincent Guittot
> ---
> Changes since v4:
> * None
>
> Chan
On Fri, 17 Sept 2021 at 20:47, Peter Zijlstra wrote:
>
> On Fri, Sep 17, 2021 at 05:25:02PM +0200, Vincent Guittot wrote:
>
> > With the removal of the condition !sds->local_stat.sum_nr_running
> > which seems useless because dst_cpu is idle and not SMT, this patch
>
Hi Guillaume,
This patch, and the patchset which includes it, only impacts
systems with hyperthreading, which is not the case for rk3328-rock64
AFAICT. So there is no reason for this code to be used by the board. The
only impact should be an increase in binary size for this platform.
Could it be
On Mon, 21 Jun 2021 at 11:39, Odin Ugedal wrote:
>
> On Mon, 21 Jun 2021 at 08:33, Sachin Sant wrote:
> >
> > While running LTP tests (cfs_bandwidth01) against 5.13.0-rc7 kernel on a
> > powerpc box
> > following warning is seen
> >
> > [ 6611.331827] [ cut here ]
> > [ 66
On Monday, 21 June 2021 at 14:42:23 (+0200), Odin Ugedal wrote:
> Hi,
>
> Did some more research, and it looks like this is what happens:
>
> $ tree /sys/fs/cgroup/ltp/ -d --charset=ascii
> /sys/fs/cgroup/ltp/
> |-- drain
> `-- test-6851
> `-- level2
> |-- level3a
> | |-- wo
On Mon, 21 Jun 2021 at 18:45, Odin Ugedal wrote:
>
> On Mon, 21 Jun 2021 at 18:22, Vincent Guittot wrote:
> > I would prefer that we use the reason for adding the cfs to the list instead.
> >
> > Something like the below should also fix the problem. It is based on a
Hi Sachin,
On Mon, 21 Jun 2021 at 18:22, Vincent Guittot wrote:
>
> On Monday, 21 June 2021 at 14:42:23 (+0200), Odin Ugedal wrote:
> > Hi,
> >
> > Did some more research, and it looks like this is what happens:
> >
> > $ tree /sys/fs/cgroup/ltp/ -d --
On Mon, 21 Jun 2021 at 19:32, Sachin Sant wrote:
>
> >>> Any thoughts Vincent?
> >>
> >>
> >> I would prefer that we use the reason for adding the cfs to the list
> >> instead.
> >>
> >> Something like the below should also fix the problem. It is based on a
> >> proposal I made to Rik sometimes
Hi Sachin,
On Tue, 22 Jun 2021 at 09:39, Sachin Sant wrote:
>
> While booting 5.13.0-rc7-next-20210621 on a PowerVM LPAR following warning
> is seen
>
> [ 30.922154] [ cut here ]
> [ 30.922201] cfs_rq->avg.load_avg || cfs_rq->avg.util_avg ||
> cfs_rq->avg.runnable_avg
On Tuesday, 22 June 2021 at 09:49:31 (+0200), Vincent Guittot wrote:
> Hi Sachin,
>
> On Tue, 22 Jun 2021 at 09:39, Sachin Sant wrote:
> >
> > While booting 5.13.0-rc7-next-20210621 on a PowerVM LPAR following warning
> > is seen
> >
> >
Hi Sachin,
On Tuesday, 22 June 2021 at 21:29:36 (+0530), Sachin Sant wrote:
> >> On Tue, 22 Jun 2021 at 09:39, Sachin Sant wrote:
> >>>
> >>> While booting 5.13.0-rc7-next-20210621 on a PowerVM LPAR following warning
> >>> is seen
> >>>
> >>> [ 30.922154] [ cut here ]
On Wednesday, 23 June 2021 at 15:52:59 (+0530), Sachin Sant wrote:
>
>
> > On 23-Jun-2021, at 1:28 PM, Sachin Sant wrote:
> >
> >
> Could you try the patch below? I have been able to reproduce the
> problem locally and this fixes it on my system:
>
> >>> I can recreate t
On Wed, 23 Jun 2021 at 14:18, Odin Ugedal wrote:
>
> Hi,
>
> Wouldn't the attached diff below also help when load is removed,
> Vincent? Isn't there a theoretical chance that x_sum ends up at zero
> while x_load ends up as a positive value (without this patch)? Can
> post as a separate patch if it
On Wed, 23 Jun 2021 at 14:37, Odin Ugedal wrote:
>
> On Wed, 23 Jun 2021 at 14:22, Vincent Guittot wrote:
> >
> > In theory it should not, because _sum should always be larger than or
> > equal to _avg * divider. Otherwise, it means that we have something wrong
> &
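The invariant stated above can be written down directly. This is hedged, illustrative C, not the kernel's PELT code; `toy_avg` and `invariant_holds` are hypothetical names. If `_avg` is derived from `_sum` by truncating integer division, then `_sum >= _avg * divider` holds by construction:

```c
#include <assert.h>

/* Toy PELT-like relation (not kernel code): _avg is _sum scaled down by
 * a divider using truncating integer division. */
static unsigned long toy_avg(unsigned long sum, unsigned long divider)
{
    return sum / divider;
}

/* The invariant from the discussion: sum >= avg * divider. Truncation
 * can only lose the remainder, so the product never exceeds sum. */
static int invariant_holds(unsigned long sum, unsigned long divider)
{
    return sum >= toy_avg(sum, divider) * divider;
}
```

If the two values are ever updated independently, e.g. `_sum` decremented without recomputing `_avg`, the invariant can break, which is what this exchange is probing.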
On Wed, 23 Jun 2021 at 17:13, Odin Ugedal wrote:
>
> On Wed, 23 Jun 2021 at 15:56, Vincent Guittot wrote:
> >
> >
> > The pelt value of sched_entity is synced with cfs and its contrib
> > before being removed.
>
>
> Hmm. Not sure what you mean by sche
On Wed, 23 Jun 2021 at 18:46, Sachin Sant wrote:
>
>
> > Ok. This becomes even more weird. Could you share your config file and
> > more details about your setup?
> >
> > Have you applied the patch below ?
> > https://lore.kernel.org/lkml/20210621174330.11258-1-vincent.guit...@linaro.org/
> >
On Wed, 23 Jun 2021 at 18:55, Vincent Guittot wrote:
>
> On Wed, 23 Jun 2021 at 18:46, Sachin Sant wrote:
> >
> >
> > > Ok. This becomes even more weird. Could you share your config file and
> > > more details about your setup?
> > >
On Fri, 14 Jun 2024 at 11:28, Peter Zijlstra wrote:
>
> On Thu, Jun 13, 2024 at 06:15:59PM +, K Prateek Nayak wrote:
> > Effects of call_function_single_prep_ipi()
> > ==========================================
> >
> > To pull a TIF_POLLING thread out of idle to process an IPI, the sender
> >
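The TIF_POLLING fast path being discussed can be sketched as a toy model. This is hedged, illustrative C using C11 atomics, not the kernel's implementation; `TOY_NEED_RESCHED`, `TOY_POLLING`, and `toy_notify_idle` are hypothetical names. If the destination advertises that it is polling its thread flags, setting the reschedule bit is enough and the IPI can be elided:

```c
#include <assert.h>
#include <stdatomic.h>

#define TOY_NEED_RESCHED (1u << 0)
#define TOY_POLLING      (1u << 1)

/* Toy model (not kernel code): the sender sets NEED_RESCHED atomically.
 * If the old flags show the idle CPU was polling, it will notice the new
 * bit on its own, so no IPI is needed; otherwise an IPI must be sent. */
static int toy_notify_idle(atomic_uint *thread_flags)
{
    unsigned int old = atomic_fetch_or(thread_flags, TOY_NEED_RESCHED);
    return !(old & TOY_POLLING); /* 1 => send IPI, 0 => IPI elided */
}
```

The single `atomic_fetch_or` both publishes the reschedule request and samples whether the destination was polling, so there is no window where the sender skips the IPI while the destination has already stopped polling.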
On Sat, 15 Jun 2024 at 03:28, Peter Zijlstra wrote:
>
> On Fri, Jun 14, 2024 at 12:48:37PM +0200, Vincent Guittot wrote:
> > On Fri, 14 Jun 2024 at 11:28, Peter Zijlstra wrote:
>
> > > > Vincent [5] pointed out a case where the idle load kick will fail to
> > >